Digital Transformation of Enterprise Architecture [1 ed.] 1138553786, 9781138553781

"In this book, Vivek Kale makes an important contribution to the theory and practice of enterprise architecture … t

English | Pages: 556 [557] | Year: 2019

Table of contents:
Cover
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
Foreword
Preface
Acknowledgments
Author
Other Books by Vivek Kale
Prologue: Digital Storms
1: Agile Enterprise Architecture
1.1 Age of Transformation
1.1.1 VUCA Ecosystem
1.2 Four Industrial Generations
1.3 Trends
1.3.1 Social Trends
1.3.2 Organizational Trends
1.3.3 Business Model Trends
1.3.4 Technology Trends
1.4 From Products to Services to Experiences
1.5 Agile Enterprises
1.5.1 Stability versus Agility
1.5.2 Aspects of Agility
1.5.3 Principles of Built-for-Change Systems
1.5.4 Framework for Change Proficiency
1.6 Digital Transformation of EA Attributes
1.7 Summary
Section I: Genesis of Digital Transformation of Enterprise Architecture
2: Systems Theory
2.1 Systems Thinking
2.1.1 Systems Science
2.1.2 Principles of Systems Science
2.2 Systems Engineering
2.2.1 System Dynamics via Simulation Modeling
2.2.2 Changeable Systems
2.3 Systems Architecting
2.3.1 Systems Architecture
2.3.1.1 Functional Architectural Requirements
2.3.1.2 Nonfunctional Architectural Requirements
2.3.2 Enterprise Architecture
2.3.2.1 Business Architecture
2.3.2.2 Information Architecture
2.3.2.3 Application Architecture
2.3.2.4 Technical Architecture
2.4 Enterprise Processes
2.5 Summary
3: Digital Transformation of Enterprises
3.1 Digital Transformation of Enterprises
3.2 Business Model Innovation
3.2.1 Business Model
3.2.1.1 Osterwalder and Pigneur’s Business Model Canvas
3.2.1.2 Design Thinking for Business Model Design
3.2.1.3 Business Model Design and Enterprise Architecture
3.3 Enterprise Architecture
3.3.1 Architectural Element
3.3.2 System Structures
3.3.2.1 Attribute Tradeoffs
3.3.3 Candidate Architecture
3.3.4 Stakeholders
3.4 Viewpoints and Views
3.5 Perspectives
3.5.1 Change Perspective
3.5.2 Scalability Perspective
3.5.3 Availability Perspective
3.6 Enterprise Architecture Frameworks
3.7 Summary
4: Distributed Systems
4.1 Distributed Systems
4.1.1 Distributed Computing
4.1.1.1 System Architectural Styles
4.1.1.2 Software Architectural Styles
4.1.1.3 Technologies for Distributed Computing
4.2 Distributed Databases
4.2.1 Characteristics of Distributed Databases
4.2.1.1 Transparency
4.2.1.2 Availability and Reliability
4.2.1.3 Scalability and Partition Tolerance
4.2.1.4 Autonomy
4.2.2 Advantages and Disadvantages of Distributed Databases
4.2.2.1 Advantages of Distributed Databases
4.2.2.2 Disadvantages of Distributed Databases
4.2.3 Data Replication and Allocation
4.2.4 Concurrency Control and Recovery in Distributed Databases
4.2.4.1 Distributed Recovery
4.2.5 Rules for Distributed Databases
4.3 Summary
Section II: Road to Digital Transformation of Enterprise Architecture
5: Dependability
5.1 Introduction to Dependability
5.1.1 Faults, Errors, and Failures
5.1.1.1 Faults
5.1.1.2 Errors
5.1.1.3 Failures
5.2 Fault Tolerance
5.3 Summary
Appendix 5A: Reliability
6: Performability
6.1 Introduction to Performability
6.1.1 Managing the Resource Demand
6.1.2 Managing Resources
6.2 Performance Evaluation
6.2.1 Performance Evaluation Framework
6.2.1.1 Evaluation Methods
6.2.1.2 Evaluation Results
6.3 Summary
Appendix 6A: Queuing Systems
7: Interoperability
7.1 Introduction to Interoperability
7.1.1 Framework for Enterprise Interoperability (FEI)
7.1.2 Maturity Model for Enterprise Interoperability
7.1.3 Standards for Enterprise Interoperability
7.1.4 Basic Model of System Interaction
7.2 Model-Based Interoperability
7.2.1 Model Driven Architecture
7.2.2 Model Driven Interoperability (MDI)
7.3 Aspects of Interoperability for Internet of Things (IoT)
7.4 Summary
Appendix 7A: Integration
8: Scalability
8.1 Introduction to Scalability
8.1.1 Load Balancing for Scalability
8.2 Load Balancing Schemes
8.2.1 Static Load Balancing
8.2.2 Dynamic Load Balancing
8.2.2.1 Discrete Event Simulation
8.2.2.2 Stream Based Scenarios
8.2.2.3 Cloud Computing
8.3 Load Balancing in Peer-to-Peer Systems
8.3.1 Static Load Balancing
8.3.2 Dynamic Load Balancing
8.3.3 Performing Load Balancing
8.4 Summary
Appendix 8A: Virtualization
9: Availability
9.1 Introduction to Availability
9.1.1 Availability Maturity Model
9.1.2 Disaster Recovery and Business Continuity Planning (BCP)
9.2 Availability and Repair
9.2.1 Basics of Probability
9.2.2 Failure Rate
9.2.3 Mean Time to Failure
9.2.4 Mean Time to Repair
9.2.5 Mean Time Between Failures
9.3 Availability of Services
9.3.1 Service-Level Agreements (SLAs)
9.3.2 Quality of Service (QoS)
9.3.3 Metrics for Interfacing to Cloud Service Providers
9.4 Summary
Appendix 9A: Replication
10: Mobility
10.1 Introduction to Mobility
10.2 Mobility Management
10.2.1 Protocol Reference Model
10.2.1.1 Data Plane
10.2.1.2 Control Plane
10.2.1.3 Management Plane
10.2.2 Network Reference Model
10.3 Moving Objects
10.3.1 Aspects of MOD
10.4 Summary
Appendix 10A: Spatio-Temporal Databases
11: Ubiquity
11.1 Introduction to Ubiquity
11.2 Ubiquitous Computing Fundamentals
11.3 Ubiquitous Computing Core Properties
11.4 Smart Devices
11.4.1 Smart Dust
11.4.2 Skin
11.4.3 Clay
11.5 Smart Environment
11.5.1 Tagging
11.5.2 Sensing
11.5.3 Controlling
11.6 Smart Interaction
11.6.1 Basic Interaction
11.6.2 Smart Interaction
11.7 Summary
Appendix 11A: Embedded Systems
12: Security
12.1 Introduction to Security
12.1.1 Triad of Confidentiality, Integrity, and Availability
12.1.2 Types of Attacks
12.1.3 Threats, Vulnerabilities, and Risk
12.1.4 Controls
12.1.5 Defense in Depths
12.2 Identification
12.3 Authentication
12.4 Authorization
12.5 Access Control
12.5.1 Access Control Models
12.6 Accountability
12.7 Audit
12.8 Summary
Appendix 12A: Cryptography
13: Analyticity
13.1 Introduction to Analytics
13.1.1 Descriptive Analytics
13.1.2 Predictive Analytics
13.1.3 Prescriptive Analytics
13.2 Data Science Techniques
13.2.1 Database Systems
13.2.2 Statistical Inference
13.2.3 Regression and Classification
13.2.4 Data Mining and Machine Learning
13.2.5 Data Visualization
13.2.6 Text Analytics
13.2.7 Time Series and Market Research Models
13.3 Snapshot of Data Analysis Techniques and Tasks
13.4 Summary
Appendix 13A: Data Mining
14: Usability
14.1 Introduction to Usability
14.2 Concept of Usability
14.3 Usability Principles
14.4 Usability Specifications
14.5 Interactions and the Quality of Experience
14.5.1 Factors Influencing Quality of Experience
14.5.1.1 Human Influence Factors
14.5.1.2 System Influence Factors
14.5.1.3 Context Influence Factors
14.5.2 Features of Quality of Experience
14.5.2.1 Feature Levels
14.6 Usability of Interactive Systems
14.7 Summary
Appendix 14A: Interactive Interfaces
Section III: Digital Transformation of Enterprise Architecture
15: Service-Oriented Computing
15.1 Introduction to Service-Oriented Architecture
15.1.1 Defining SOA
15.1.1.1 Services
15.2 SOA Benefits
15.3 Characteristics of SOA
15.4 Advantages of SOA
15.5 SOA Applications
15.5.1 Rapid Application Integration
15.5.2 Multichannel Access
15.5.3 Business Process Management
15.6 SOA Ingredients
15.6.1 Objects, Services, and Resources
15.6.1.1 Objects
15.6.1.2 Services
15.6.1.3 Resources
15.6.2 SOA and Web Services
15.6.3 SOA and RESTful Services
15.7 Enterprise Service Bus
15.8 Summary
16: Cloud Computing
16.1 Introduction to Cloud Computing
16.2 Cloud Characteristics
16.3 Cloud Delivery Models
16.3.1 Infrastructure as a Service (IaaS)
16.3.2 Platform as a Service (PaaS)
16.3.3 Software as a Service (SaaS)
16.4 Cloud Deployment Models
16.4.1 Private Clouds
16.4.2 Public Clouds
16.4.3 Hybrid Clouds
16.4.4 Community Clouds
16.5 Cloud Benefits
16.6 Cloud Challenges
16.6.1 Scalability
16.6.2 Multi-Tenancy
16.6.3 Availability
16.7 Cloud Technologies
16.7.1 Virtualization
16.7.1.1 Characteristics of Virtualized Environment
16.7.2 Service-Oriented Computing
16.7.2.1 Layers in SOA
16.8 Summary
17: Big Data Computing
17.1 Introduction to Big Data
17.1.1 What Is Big Data?
17.1.1.1 Data Volume
17.1.1.2 Data Velocity
17.1.1.3 Data Variety
17.1.1.4 Data Veracity
17.1.2 Common Characteristics of Big Data Computing Systems
17.2 Tools and Techniques of Big Data
17.2.1 Processing Approach
17.2.2 Big Data System Architecture
17.2.2.1 BASE (Basically Available, Soft State, Eventual Consistency)
17.2.2.2 Functional Decomposition
17.2.2.3 Master–Slave Replication
17.2.3 Row Partitioning or Sharding
17.2.4 Row versus Column-Oriented Data Layouts
17.2.5 NoSQL Data Management
17.3 NoSQL Databases
17.3.1 Column-Oriented Stores or Databases
17.3.2 Key–Value Stores (K–V Store) or Databases
17.3.3 Document-Oriented Databases
17.3.4 Graph Stores or Databases
17.4 Aadhaar Project
17.5 Summary
18: Context-Aware Computing
18.1 Introduction to Context
18.2 Context-Aware Applications
18.2.1 Types of Context-Awareness
18.2.2 Types of Contexts
18.2.3 Context Acquisition
18.2.4 Context Models
18.2.5 Generic Context-Aware Application Architecture
18.2.6 Illustrative Context-Aware Applications
18.3 Location Based Systems
18.3.1 Sources of Location Data
18.3.1.1 Cellular Systems
18.3.1.2 Multireference Point Systems
18.3.1.3 Tagging
18.3.2 Mobility Data
18.3.2.1 Mobility Data Mining
18.4 Context-Aware Recommender Systems
18.5 Summary
19: Internet of Things (IoT) Computing
19.1 Introduction to Internet of Things
19.1.1 IoT Building Blocks
19.1.2 IoT Architecture
19.1.3 Widened Address Space with IPv6
19.2 RFID (Radio Frequency Identification)
19.3 Sensor Networks
19.3.1 Wireless Networks
19.3.2 Sensor
19.3.3 Wireless Sensor Networks
19.3.3.1 WSN Characteristics
19.4 Summary
20: Blockchain Computing
20.1 Peer-to-Peer Systems
20.1.1 Overlay Routing versus IP Routing
20.2 Blockchain
20.2.1 Blockchain Transaction
20.2.2 Blockchain Features
20.2.3 Blockchain Benefits
20.2.4 Blockchain Architecture
20.3 Blockchain Security
20.4 Blockchain Security Architecture for Internet of Things
20.4.1 Hierarchical Blockchain Architecture
20.5 Summary
21: Soft Computing
21.1 Artificial Neural Networks
21.1.1 The Biological Neuron
21.1.2 Model of an Artificial Neuron
21.1.2.1 Activation Function
21.1.3 Artificial Neural Network
21.1.3.1 Processing Units
21.1.3.2 ANN Processing
21.1.3.3 ANN Topologies
21.1.3.4 Training of ANN
21.1.3.5 ANN Applications
21.2 Fuzzy Systems
21.2.1 Fuzzy Controller
21.2.1.1 Example of a Fuzzy Logic Controller
21.3 Evolutionary Algorithms
21.3.1 Fundamentals of Evolutionary Algorithms
21.3.2 Genetic Algorithms
21.3.2.1 Benefits of GA
21.3.2.2 Description of a Simple GA
21.3.2.3 General Presentation of GA Using Binary Coding
21.4 Rough Sets
21.5 Summary
22: Interactive Computing
22.1 Business Processes and Human Interactions
22.1.1 Human Interaction Management
22.1.2 Human Interaction Management System
22.1.3 Comparing HIM and BPM
22.1.4 Humanedj Human Interaction Management Systems
22.2 Interaction Architectures
22.2.1 Presentation Abstraction Controller (PAC)
22.2.2 Model View Controller (MVC)
22.2.3 Data Context Interaction (DCI)
22.2.4 Micro-Service Architecture (MSA)
22.3 Customer Interaction Systems
22.3.1 Spoken Language Recognition
22.3.2 Spoken Language Understanding
22.3.3 Dialog Management
22.3.4 Natural Language Generation
22.3.5 Text-to-Speech Synthesis
22.4 Implementing Customer Interaction Systems
22.5 Summary
Epilogue: Blown to Bits
Bibliography
Index

Citation preview

Digital Transformation of Enterprise Architecture

Digital Transformation of Enterprise Architecture

Vivek Kale

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2020 by Vivek Kale
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-1-138-55378-1 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Kale, Vivek, author.
Title: Digital transformation of enterprise architecture / Vivek Kale.
Description: Boca Raton, Florida : CRC Press, [2020] | Includes bibliographical references and index.
Identifiers: LCCN 2019012860 | ISBN 9781138553781 (hardback : alk. paper) | ISBN 9781351029148 (ebk)
Subjects: LCSH: Business enterprises--Computer networks. | Computer network architectures--Technological innovations.
Classification: LCC HD30.37 .K34 2020 | DDC 658/.0557--dc23
LC record available at https://lccn.loc.gov/2019012860

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

To my younger daughter, Atmaja, the creative spark in our family


Foreword

Organizations conducting business, whether providing services or delivering products to their customers, require access to digital data and information in order to make informed decisions with respect to their business operations (Hester & Adams, 2017). Enterprise architectures ensure that the critical logic captured in the formal business processes is directly supported by the associated information technology systems and related infrastructure. Real-time and near real-time data and information are essential elements of decision making and must be a direct reflection of the company's operating model, the model that ensures business process standardization for delivering goods and services to customers. The business's enterprise-level operating model is directly supported by enterprise architectures. The enterprise architectures that underlie each company's operating model will require digital transformation in (1) the business model, (2) the business architecture, and (3) the business processes. Digital transformation refers to the enterprise architecture's capability to change exponentially through satisficing behaviors (Simon, 1996) in a number of critical non-functional properties (Adams, 2015). Vivek's text will expose the reader to critical non-functional attributes of enterprise architectures such as interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability. More importantly, each critical non-functional attribute will be explored using the appropriate primary digital transformation technology that will enable that transformation. The following sections provide the reader with a short overview of the genesis of enterprise architectures (EAs), their implementation in the government and commercial sectors of the economy, and how Vivek's book completely reframes EAs through both normative and transformative roles.

Genesis of Enterprise Architectures

The acknowledged grandfather of enterprise architectures is P. Duane "Dewey" Walker, the 1960s director of architecture for IBM's Information Systems Council and Planning Staff. Walker's early contribution was in the development and implementation of business systems planning methods. The major outgrowth of this work was the issuance of a formal information systems planning guide to establish an information systems plan that could satisfy both a business's near-term and long-term information needs (IBM, 1975, 1978). Zachman explains that Dewey knew that "it was NOT information systems planning, but a strategy-oriented method that produced and employed what I would now call the Framework, Row 1 models. That planning methodology was to be followed by an Architecture methodology because those of us that were practicing it knew that 'architecture' was the structural bridge that connected the strategy with the implementation" (Sessions, 2007, p. 3). This was the first foray into what is now termed enterprise architecture. Even more importantly, Walker's influence produced two noteworthy disciples, who may be labeled as the fathers of Enterprise Architecture Frameworks, John Zachman, and of Enterprise Architecture Planning (EAP), Stephen Spewak (1951–2004).



Enterprise Architecture Frameworks

The actual first issuance of a modern Enterprise Architecture Framework is attributed to the work accomplished by the Partnership for Research in Information Systems Management (PRISM), which was a research service of CSC Index Systems and Hammer and Company. Their 1986 report is the outgrowth of a research project sponsored by a group of 55 companies (including IBM's Information Systems Group) searching for optimal ways to describe an architecture of distributed systems. The information systems architecture component of the research focused upon "solving several persistent and difficult problems. Among the problems for which the concept of architecture offers solutions are the following:

• Rapid change in technology, application requirements, and organization that makes any solution quickly obsolete
• Technologies that are incapable of communicating with each other
• Lack of organizational consensus and shared understanding about technology and its management
• Excessive choice in the marketplace" (CSC, 1986, p. 1)

The enterprise architecture proposed by PRISM was a 16-cell, 4 × 4 matrix based upon four architecture domains (infrastructure, data, application, and organization) and four architecture types (inventory, principles, models, and standards). John Zachman (1987), who had been working on these issues since the early 1980s (Sessions, 2007), published a similar 15-cell, 5 × 3 matrix. Zachman's framework was based upon five unique architectural observations or perspectives (planner, owner, designer, builder, and subcontractor) and the questions that each perspective is interested in answering (what? how? and where?). Zachman's framework received wide acclaim, and he has been acknowledged as the single most influential person in the field of enterprise architectures.

Enterprise Architecture Planning

Spewak's most noteworthy contribution to EAs was his work in Enterprise Architecture Planning, where he formalized the "process for defining the top three layers of the Zachman Information Systems Framework" (Spewak & Hill, 1992, p. xxi). This work reinforced the concept that management is required to provide a credible business perspective in order to de-mystify the systems planning process. It emphasized three business-driven elements:

1. A stable business model (independent of organizational boundaries, systems, and procedures) is the foundation for the architectures,
2. Data is defined before applications, and
3. Data dependency determines the sequence for implementing application systems. (Spewak & Hill, 1992, p. xxi)

Spewak's work was cut short by his passing in March of 2004, but his focus on planning has resonated and been adopted in virtually all enterprise architecture efforts.



Enterprise Architectures in the Government Sector

The importance of enterprise architectures to the United States economy and its continuing global competitiveness was significant enough that Congress introduced legislation mandating their use throughout the federal government. Public-sector leadership in economic matters has typically lagged that of the private sector; however, in this case the public sector has taken a firm leadership position and is worthy of review. This section gives a high-level review of the legislation, directives, and analysis of enterprise architecture in the federal space, providing a framework for the importance of this book.

Government enterprise architecture efforts were founded upon the EA Framework of Zachman (1982, 1987, 1992) and the EA Planning of Spewak (1992). They took on added significance when the United States Congress passed the 1993 Government Performance and Results Act (GPRA) (103-62, 1993) and the follow-on 1996 Information Technology Management Reform Act (104-106, 1996), also known as the Clinger-Cohen Act, which legislatively mandated the use of Enterprise Architectures by all United States government departments and agencies. The Clinger-Cohen Act's legislative mandate for accountability in information technology resource management charged each agency's newly mandated chief information officer (CIO) with "developing, maintaining, and facilitating the implementation of a sound and integrated information technology architecture for the executive agency" (103-62, 1993, §5125(b)(3)). The law defined an information technology architecture as "an integrated framework for evolving or maintaining existing information technology and acquiring new information technology to achieve the agency's strategic goals and information resources management goals" (103-62, 1993, §5125(c)(3)(d)). In order to uniformly implement the legislative mandate, the executive branch's Office of Management and Budget (OMB) was designated as the agent "responsible for overseeing the development of enterprise architectures within and across federal agencies" (see 44 U.S.C. §§ 3601(4) and 3602(f)(14)).

Over the 26 years since the passage of the Clinger-Cohen legislation, the OMB has issued a series of increasingly detailed guidance documents on Enterprise Architectures (EAs) (OMB, 1996, 1997, 2000, 2007, 2009, 2012). In addition, Congress has asked the Government Accountability Office (GAO), the independent, nonpartisan government agency that works for the Legislative Branch, to both work with and evaluate federal department and agency compliance with law, statutes, regulation, and guidance on EA implementation. GAO has been involved in improvement efforts that focus on information technology investment since the early 1990s (GAO, 1992, 1994). GAO's involvement includes the pioneering work in Enterprise Architectures conducted by John Zachman (1987), the National Institute of Standards and Technology (Fong & Fong, 1989), and Stephen Spewak (1992). In an effort to directly support the Congress's legislative mandate for the use of EAs, the GAO has prepared and issued a framework for assessing and improving EA management (GAO, 2002, 2003a, 2010). GAO has also provided oversight and recommendations and supplied the Congress with a series of reports on Federal EA implementations since 2002 (GAO, 2002, 2003b, 2006, 2012).

Enterprise Architectures in the Commercial Sector

The commercial sector was involved in the development of enterprise architectures from the beginning. Fifty-five commercial entities were involved in the 1986 Partnership for Research in Information Systems Management (PRISM) report. Many organizations continued to invest in solutions, and in the mid-1990s The Open Group was founded in response to the federal legislation and to the enterprise architecture and systems standardization being implemented by the United States Department of Defense (DoD). The Open Group was founded in 1995 with the goal of enabling business organizations, worldwide, to achieve business objectives through the use of open, vendor-neutral standards. Today, The Open Group is a global consortium with a diverse membership of more than 625 organizations that includes customers, systems and solutions suppliers, tool vendors, integrators, academics, and consultants across multiple industries.

After its establishment in 1995, The Open Group inherited the DoD's first enterprise architecture effort, the Technical Architecture Framework for Information Management (TAFIM). The TAFIM materials were explicitly given to The Open Group, and this provided the basis for the creation of The Open Group Architecture Framework (TOGAF) standard. TOGAF is now in Version 9.2 (TOGAF, 2018) and has a multitude of supporting products, including training and certifications, that may be accessed by commercial entities.

Reframing the Role of Enterprise Architectures

Vivek's book completely reframes the EA perspective by ensuring that it includes both transformative and normative roles. The current volume addresses the transformative role, where the EA is liminal, or completely abstract, to the extent that it is the primary carrier of the burden of any changes that are brought on by the interaction between the constituent subsystems and the environment. A second companion volume is envisioned in which the normative role—where an EA serves as the underlying skeletal structure that ensures that its constituent systems, subsystems, and other components are built and maintained in accordance with the dictates established by the enterprise architecture—will be expanded upon.

Summary

Enterprise architectures are critical elements in ensuring that any organizational entity has sufficient structure in place to access relevant data and information in support of its business processes. The ability to access, manipulate, and analyze this data and information, in a digital format, is an essential element of developing and maintaining competitive advantage in the modern globally connected economy. You will find that Vivek's book fully supports the concepts and notions required to conceptualize, design, and implement enterprise architectures in support of the movement to a digital enterprise capable of supporting a business's position in the global economy. By utilizing this text, readers will be prepared to promote transformability within their organizations and ensure that their enterprise architectures are structured to support modern systems in a timely and cost-effective manner, thereby meeting the requirements of an ever-changing business landscape.



I hope that each of you enjoys this book as much as I have and is able to successfully apply its concepts and techniques. Good architecting.

Kevin MacG. Adams, Ph.D.
University of Maryland University College

References

Adams, K. M. (2015). Non-Functional Requirements in Systems Analysis and Design. New York: Springer.
CSC. (1986). PRISM: Dispersion and Interconnection: Approaches to Distributed Systems Architecture. Cambridge, MA: CSC Index and Hammer and Company.
Fong, E. N., & Fong, A. H. (1989). Information Management Directions: The Integration Challenge (NIST Special Publication 500-167). Gaithersburg, MD: National Institute of Standards and Technology.
GAO. (1992). Strategic Information Planning: Framework for Designing and Developing System Architectures (GAO/IMTEC-92-51). Washington, DC: United States General Accounting Office.
GAO. (1994). Executive Guide: Improving Mission Performance through Strategic Information Management and Technology (GAO/AIMD-94-115). Washington, DC: United States General Accounting Office.
GAO. (2002). Information Technology: Enterprise Architecture Use across the Federal Government Can Be Improved (GAO-02-06). Washington, DC: United States General Accounting Office.
GAO. (2003a). Information Technology: A Framework for Assessing and Improving Enterprise Architecture Management (Version 1.1) (GAO-03-584G). Washington, DC: United States General Accounting Office.
GAO. (2003b). Information Technology: Leadership Remains Key to Agencies Making Progress on Enterprise Architecture Efforts (GAO-04-40). Washington, DC: United States General Accounting Office.
GAO. (2006). Enterprise Architecture: Leadership Remains Key to Establishing and Leveraging Architectures for Organizational Transformation (GAO-06-831). Washington, DC: United States Government Accountability Office.
GAO. (2010). Organizational Transformation: A Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0) (GAO-10-846G). Washington, DC: United States Government Accountability Office.
GAO. (2012). Organizational Transformation: Enterprise Architecture Value Needs to Be Measured and Reported (GAO-12-791). Washington, DC: United States Government Accountability Office.
Government Performance and Results Act (GPRA), 31 U.S.C. 1115, Pub. L. No. 103-62 (1993).
Hester, P. T., & Adams, K. M. (2017). Systemic Decision Making: Fundamentals for Addressing Problems and Messes (2nd ed.). Cham, Switzerland: Springer International Publishing.
IBM. (1975). Business Systems Planning: Information Systems Planning Guide (GE20-0527-1) (1st ed.). White Plains, NY: IBM Corporation.
IBM. (1978). Business Systems Planning: Information Systems Planning Guide (GE20-0527-2) (2nd ed.). White Plains, NY: IBM Corporation.
Information Technology Management Reform Act (ITMRA), 40 U.S.C. 1401(3), Pub. L. No. 104-106 (1996).
OMB. (1996). OMB Memorandum 97-02: Funding Information Systems Investments. Washington, DC: Office of Management and Budget.
OMB. (1997). OMB Memorandum 97-16: Information Technology Architectures. Washington, DC: Office of Management and Budget.
OMB. (2000). OMB Circular A-130: Management of Federal Information Resources. Washington, DC: Office of Management and Budget.
OMB. (2007). FEA Practice Guidance. Washington, DC: Office of Management and Budget.
OMB. (2009). Improving Agency Performance Using Information and Information Technology (Enterprise Architecture Assessment Framework v3.1). Washington, DC: Office of Management and Budget.
OMB. (2012). The Common Approach to Federal Enterprise Architecture. Washington, DC: Office of Management and Budget.
Sessions, R. (2007). Exclusive Interview with John Zachman. Perspectives of the International Association of Software Architects (IASA), International Association of Software Architects, Austin, TX.
Simon, H. A. (1996). The Sciences of the Artificial (3rd ed.). Cambridge, MA: MIT Press.
Sowa, J. F., & Zachman, J. A. (1992). Extending and formalizing the framework for information systems architecture. IBM Systems Journal, 31(3), 590–616. doi:10.1147/sj.313.0590.
Spewak, S. H., & Hill, S. C. (1992). Enterprise Architecture Planning: Developing a Blueprint for Data, Applications and Technology. New York: John Wiley & Sons.
TOGAF. (2018). The Open Group Standard: The TOGAF® Standard, Version 9.2. San Francisco, CA: The Open Group.
Zachman, J. A. (1982). Business systems planning and business information control study: A comparison. IBM Systems Journal, 21(1), 31–53. doi:10.1147/sj.211.0031.
Zachman, J. A. (1987). A framework for information systems architecture. IBM Systems Journal, 26(3), 276–292.

Preface

We would never expect that someone would attempt to scale up the capacity of a suburban home to become a 30-floor office building (or an indoor sports complex or a multi-speciality hospital or a luxury hotel). The home doesn't have the architecture, materials and foundations for this to be even a remote possibility without being completely demolished and rebuilt in the process. Similarly, we shouldn't expect software systems that do not employ transformable architectures, techniques, and technologies to be quickly changeable to meet alterable capacity needs. The foundation for any transformation needs to be built in from the beginning, with the recognition that constituent components may transform over time. By employing design and development principles that promote transformability, enterprises are able to transform architectures and, hence, systems more rapidly and cheaply to meet rapidly altering demands, expectations or aspirations. Systems with such properties are essentially changeable or transformable systems because of the enterprise architecture (EA).

The present book focuses on the digital transformation of enterprise architecture. It proposes that it is the perennial quest for interoperability and portability, scalability, availability, etc., that has directed and driven the evolution of the IT/IS industry in the past 50 years. It is this very quest that has led to the emergence of service-orientation, cloud and big data computing.

It must be highlighted that any such digital transformation of enterprises will necessarily involve a prerequisite stage of virtualization (digital mirroring or digital twinning) of the business strategies, business architectures, and business processes, accompanied by a concomitant transformation (analog to digital) of the corresponding measures of performance. This entails a significant amount of effort, but, to maintain focus on the primary theme of the book, this virtualization effort is subsumed within the primary activity of the digital transformation efforts. Accordingly, the book will refer only to the post-virtualized business models, enterprise architectures, and enterprise processes.

It is important to specify what is meant by digital transformation in this book. Assuming the context of digital technologies, a transformation that effects exponential change (amplification or attenuation) in any aspect of enterprise performance—including enterprise architecture performance—is termed a digital transformation.

This book proposes that Enterprise Architecture (EA) is the most important element (after Business Models) for obtaining the digital transformation of enterprises. Digital transformation involves a three-part effort, viz. business models, enterprise architecture, and enterprise processes. Business model transformations played out during the early 2000s with the emergence of companies like Amazon, Google, Facebook, and WhatsApp at the forefront of that transformation wave. But none of these transformations would have emerged without corresponding transformations in enterprise architecture enabled by technologies like service-oriented architecture, cloud computing, and big data computing. Subsequently, additional EA transformations were enabled by technologies like context-aware, Internet of Things (IoT), blockchain, and soft and interactive computing. These played out with companies like Uber, Airbnb, Netflix, and so on.
If we want to anticipate future developments in the area of digital transformations of enterprises, it is vital to make the connection between digital transformations and enterprise architecture—this book attempts to identify and establish such a connection.



Attributes which are drivers of the digital transformation of enterprise architecture are listed below for quick reference:

1. Interoperability is the ability of two or more systems to interact so as to achieve some goal without having to know the uniqueness of the interacting systems.
2. Scalability is the ability to improve system performance by adding more resources to a single node or multiple nodes—such as the addition of CPUs, use of multicore systems instead of single-core, or adding additional memory.
3. Availability is the ability to guarantee non-loss of data and subsequent recovery of the system in a reasonable amount of time.
4. Mobility is the ability of the user or other mobile entity to communicate and access services, irrespective of any changes in the location or in the technical access environment.
5. Ubiquity is the ability to be anywhere and anytime, to anyone, where and when needed.
6. Security is the ability to protect an organization's information assets from accidental or malicious disclosure, modification, misuse and erasure.
7. Analyticity is the ability for continuous iterative exploration and investigation of past performance, based on data and statistical methods, to gain insight and drive planning for the future.
8. Usability is the ability to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the envisaged range of environmental scenarios.

This is an appropriate place to request the readers' indulgence in allowing the author to use terms like interoperability, performability, analyticity, etc., since they are similar sounding to the names of the other enlisted attributes of EA, even though they are either not completely permissible grammatically or really push the envelope on the same.

The author was motivated to write this book to bring EA in sync with the gigantic changes that have occurred in the IT/computing landscape in the last few decades. In an enterprise, EA plays multiple roles, like performative, normative, transformative, and so on. Performative is the routine role of EA: through its constituting attributes, EA delineates a spectrum of zones of performance across the enterprise fitness landscape; each zone of performance is determined by the composition of EA and the characteristic technologies of these components and inter-component interfaces. Normative is the reference role of EA: EA is strong like steel to the extent that the subsequent system/subsystems are built and maintained in absolute adherence to the dictates of EA. Transformative is the adaptive role of EA: EA is liminal, i.e., completely abstract, to the extent that it is the primary carrier of the burden of any changes that are wrought by the co-existence and interaction between the environment and the constituent subsystems. For exponential enterprises, this book completely reframes the transformative role of EA; the normative role will be impacted by the constitution and configuration of the enlarged transformative role envisaged in this book.

The author's journey of exploring the characteristics of digital transformation started with an earlier published book, Agile Network Businesses: Collaboration, Coordination, and Competitive Advantage, which is essentially a book on network and e-business business models. The following book, Creating Smart Enterprises: Leveraging Cloud, Big Data, Web, Social Media, Mobile and IoT Technologies, detailed several technologies that are relevant for a digital transformation initiative. The author's last book, Enterprise Process Management Systems: Engineering Process-Centric Enterprise Systems using BPMN 2.0, addresses aspects related to the digital transformation of processes.

What Makes This Book Different?

This book proposes that to withstand the disruptive digital storms of the future, enterprise architecture dictates the digital transformation of its constituent attributes. Thus, the book proposes that service-oriented, cloud, big data, context-aware, IoT, blockchain, and soft and interactive computing technologies primarily bring about digital transformation, i.e., exponential change (amplification or attenuation) in the performance measures of the respective EA attributes, viz. interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability. Like the traditional attributes (interoperability and portability, scalability, availability), mobility, ubiquity, security, analyticity, and usability are also non-functional attributes of EA, and an analogous quest for improvements in these additional aspects has led to digital transformation, i.e., exponential changeability (amplification or attenuation) in the performance measures of these newer EA attributes via technologies like context-aware, IoT, blockchain, and soft and interactive computing, respectively. Thus, this book provides a rationale for the emergence of a host of contemporary technologies.

Here are the characteristic features of this book:

1. Identifies the three-part effort of any digital transformation: Business Models, Enterprise Architectures, and Enterprise Business Processes.
2. Describes eight attributes of EA: interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability.
3. Explains the corresponding technologies of service-oriented, cloud, big data, context-aware, Internet of Things (IoT), blockchain, and soft and interactive computing.
4. Briefs on auxiliary technologies like integration, virtualization, replication, spatio-temporal databases, embedded systems, cryptography, data mining, and interactive interfaces that are essential prerequisites for effecting or obtaining the digital transformation of enterprises.
5. Introduces interactive interfaces like voice, gaze, gesture, and 3D interfaces.
6. Provides an overview of blockchain computing, soft computing, and customer interaction systems.

The surge of companies like Amazon, eBay, Schwab, etc., had brought Enterprise Architecture into focus, but it was relegated to the sidelines through the following decade. The author wanted to restore enterprise architecture to its earlier pre-eminent position. This book suggests that in the future an enterprise architect will be responsible for giving recommendations not only on traditional issues like integration, on-premise or hosted n-tier architecture, resources (HW, OS, SW, networking, etc.), and configuration, but additionally on:



• Cloud: cloud-based operating environment, selecting service provider(s), SLAs
• Big Data: NoSQL, Big Data processing and performance, responsiveness
• Mobility: GPS, GIS, location- and time-based systems, mobile analytics, recommender systems
• Ubiquity: sensors, sensing and measuring, data fusion
• Security: identity and access, single sign-on, encryption-decryption, digital signature
• Analyticity: predictive markets, voting, predictive analytics
• Usability: UX, experience environment, interaction mode, language/currency of interaction

In the final analysis, the advent of cloud and big data computing has blurred the boundaries between hardware, firmware, software, etc.: the operating environment is now divided between the core and the context. Functional application systems constitute the core, and all other systems are part of the context. The primary theme of this book should also be important for software product companies managing the design and development of products, because the evolution of successful products invariably shows a focus alternating between the core (functional requirements) and the context (non-functional requirements). This book identifies that all EA-related traditional aspects like interoperability, scalability, availability, etc., are part of the context. It proposes that many other aspects like mobility, ubiquity, security, analyticity, and usability are also part of EA and, hence, by implication, of the context. The author wanted to write a book presenting Enterprise Architecture from this novel perspective; the outcome is the book that you are reading now. Thank you!

How Is This Book Organized?

This book's architecture has a pattern, and understanding that pattern makes the book easier to comprehend. For instance, although cryptography is not related to enterprise security per se, Appendix 12A discusses cryptography because it is a placeholder in the book's architectural pattern and is mapped on to Chapter 20 on "Blockchain Computing."

Chapter 1 sets the context for the whole book by introducing the concept of the digital storms that lashed the business environment in the 2000s and that have returned sporadically every few years ever since, transformed but with the same devastating force.

Section I presents an overview of the genesis of the digital transformation of enterprise architecture. Chapters 2 through 4 introduce the concepts of systems theory, digital transformation and distributed systems respectively.

In Section II, Chapters 5 and 6 introduce the overarching concepts of dependability and performability that characterize the enterprise architecture as a whole. They set the reference for the chapters of this section on the attributes of EA. Chapters 7 through 14 present an overview of the eight attributes of EA: interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability. Appendices 7A through 14A brief on auxiliary technologies like integration, virtualization, replication, spatio-temporal databases, embedded systems, cryptography, data mining and interactive interfaces that are essential prerequisites for the technologies discussed in Section III that fundamentally effect or obtain the digital transformation of enterprises.



The complete mapping between the EA attributes and the corresponding primary technologies that obtain digital transformation is shown below.

EA Attribute (auxiliary-technology appendix)        Technology Obtaining Digital Transformation
7. Interoperability (7A Integration)                15. Service-oriented Computing
8. Scalability (8A Virtualization)                  16. Cloud Computing
9. Availability (9A Replication)                    17. Big Data Computing
10. Mobility (10A Spatio-Temporal Databases)        18. Context-aware Computing
11. Ubiquity (11A Embedded Systems)                 19. Internet of Things Computing
12. Security (12A Cryptography)                     20. Blockchain Computing
13. Analyticity (13A Data Mining)                   21. Soft Computing
14. Usability (14A Interactive Interfaces)          22. Interactive Computing

Section III Chapters 15 through 22 explain the corresponding technologies of service-oriented, cloud, big data, context-aware, Internet of Things, blockchain, and soft and interactive computing.

1. There is a certain degree of historicity to the sequence of these technologies, but that is too involved an issue to be tackled here and is out of scope for the objective of the book.
2. A technology chapter in Section III presumes inclusion (and availability) of all technologies explained in the sections prior to that chapter. For instance, Chapter 19 on "Internet of Things (IoT) Technologies" presumes inclusion (and availability) of SOA, cloud, big data, and context-aware technologies, which are explained in sections occurring before this chapter (see the chapters map above).
3. There is a trade-off between the various EA attributes. For instance, reliability and availability address two orthogonal issues: a server that is reliable but rarely available serves no purpose; similarly, a server that is available but frequently malfunctions or supplies incorrect or stale data is problematic for every user (a small numerical sketch of these two measures follows this list). In other words, aspects of an individual attribute are also complementarily or orthogonally affected by other attributes' aspects as well (Section III chapters refer to such cross-dependencies at appropriate places). But chapters in Section II of the book focus only on factors directly related to a particular attribute's aspects of the enterprise architecture.
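A minimal sketch of how these two measures answer different questions, using the standard relations A = MTBF / (MTBF + MTTR) for steady-state availability and R(t) = exp(−t/MTBF) for mission reliability under exponentially distributed failures; the figures below are purely illustrative and are not taken from the book's later chapters:

import math

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    # Steady-state availability: fraction of time the system is usable.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability(mission_hours: float, mtbf_hours: float) -> float:
    # Probability of completing the whole mission without any failure,
    # assuming exponentially distributed times between failures.
    return math.exp(-mission_hours / mtbf_hours)

# Illustrative (made-up) figures: a server that fails on average every 500 h
# and takes 2 h to repair is highly available ...
print(f"Availability:        {availability(500, 2):.4f}")   # about 0.996
# ... yet the probability that it survives a 720 h (30-day) run untouched is low.
print(f"30-day reliability:  {reliability(720, 500):.4f}")  # about 0.24

The same system can therefore score well on one measure and poorly on the other, which is exactly the orthogonality referred to in point 3 above.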



Who Should Read This Book?

In contemporary organizations, CIOs are decision makers on IT-related issues—but they don't battle alone; IT teams have technical members who vet what vendors and/or consultants recommend. This book is addressed not only to CIOs but also to their Business-IT and IT teams. All stakeholders of a digital transformation (DT) project can read this book. The approach adopted in this book will be useful to any professional who must present a case for realizing DT solutions or to those who are involved in a DT computing project. The book is targeted at the following categories of stakeholders:

• Executives, business and operational managers
• Business development, product management and commercial managers
• Functional and technical members
• Business analysts, enterprise architects and solution architects
• Project managers and module leaders
• Technical managers and professionals interested in digital transformation and EA
• Students of engineering, management, computer and technology courses interested in business and digital transformation
• General readers interested in the phenomenon of digital transformation and EA

Vivek Kale
Mumbai, India

You may want to read through the author's related books, especially Agile Network Businesses: Collaboration, Coordination, and Competitive Advantage; Creating Smart Enterprises: Leveraging Cloud, Big Data, Web, Social Media, Mobile and IoT Technologies; and Enterprise Process Management Systems: Engineering Process-Centric Enterprise Systems using BPMN 2.0. These three books focus respectively on three significant aspects of digital transformation, viz. business models, relevant technologies and business processes. Along with the current book, these four books provide the complete toolset for building your digitally transformed enterprise.

Acknowledgments

I would like to thank all those who have helped me with their clarifications, criticism, and valuable information during the writing of this book. I would like to thank Dr. Richard Hill, Prof. Dr. Robert Winter of the University of St. Gallen, and Dr. Samuel B. Holcman of Pinnacle Business Group for their comments and feedback on the book. I am thankful to Dr. Daniel Minoli for giving feedback on portions of the book. I am especially thankful to Dr. Kevin MacG. Adams of University of Maryland University College for his feedback and for writing the Foreword to the book, giving the historical context to the use of enterprise architecture in both the government and private sectors.

Thanks again to Aastha Sharma for making this book happen, and to Chris Todd, Monica Felomina, and the rest of the production team for guiding it through to completion.

I owe the largest debt of gratitude to my wife, Girija, and our two (now adult) daughters, Tanaya and Atmaja. None of them ever once suggested during the last 10 years that I should give up writing books, even though none of us knew (or knows) where this series of many small steps (books) might lead. The journey has been long and challenging at times, but the book portion of my life journey has been good because innumerable persons, especially readers, have made it so. To all those persons, both named and unnamed, I offer my sincere appreciation—thank you.

Vivek Kale
Mumbai, India


Author

Vivek Kale has more than two decades of professional IT experience during which he has handled and consulted on various aspects of enterprise-wide information modeling, enterprise architectures, business process re-design, and e-business architectures. He has been Group CIO of Essar Group, the steel/oil and gas major of India, as well as of Raymond Ltd., the textile and apparel major of India. He is a seasoned practitioner in enhancing business agility through the digital transformation of business models, enterprise architecture and business processes, and in enhancing IT-enabled enterprise intelligence (EQ). He has authored books on cloud computing and big data computing. He is also the author of Agile Network Businesses: Collaboration, Coordination, and Competitive Advantage (CRC Press, 2017) and Enterprise Process Management Systems: Engineering Process-Centric Enterprise Systems using BPMN 2.0 (CRC Press, 2018).


Other Books by Vivek Kale

Enterprise Process Management Systems: Engineering Process-Centric Enterprise Systems using BPMN 2.0 (CRC Press, 2018)
Creating Smart Enterprises: Leveraging Cloud, Big Data, Web, Social Media, Mobile and IoT Technologies (CRC Press, 2018)
Enterprise Performance Intelligence and Decision Patterns (CRC Press, 2018)
Agile Network Businesses: Collaboration, Coordination and Competitive Advantage (CRC Press, 2017)
Big Data Computing: A Guide for Business and Technology Managers (CRC Press, 2017)
Enhancing Enterprise Intelligence: Leveraging ERP, CRM, SCM, PLM, BPM, and BI (CRC Press, 2016)
Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications (CRC Press, 2015)
Inverting the Paradox of Excellence: How Companies Use Variations for Business Excellence and How Enterprise Variations Are Enabled by SAP (CRC Press, 2015)
Implementing SAP® CRM: The Guide for Business and Technology Managers (CRC Press, 2015)
A Guide to Implementing Oracle Siebel CRM 8.x (Tata McGraw-Hill, 2010)
Implementing SAP R/3: A Guide for Business and Technology Managers (Sams, 2000)


Prologue: Digital Storms

This is the first book to propose that Enterprise Architecture (EA) is the most important element (after Business Models) for the digital transformation of enterprises. Digital transformation involves a three-part effort, viz. business strategies, business architecture, and business processes. This book proposes that to withstand the disruptive digital storms of the future, enterprise architecture brings about digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of the constituent attributes of EA. In addition to the conventional attributes of EA like interoperability, scalability and availability, this book identifies the additional attributes of mobility, ubiquity, security, analyticity, and usability. Enterprise architecture effects the digital transformation of the interoperability, scalability, availability, mobility, ubiquity, security, analyticity and usability aspects respectively through service-orientation, cloud, big data, context-aware, Internet of Things (IoT), blockchain, and soft and interactive computing technologies. There is a trade-off between the various attributes. In other words, aspects of an individual attribute are also complementarily or orthogonally affected by other attributes' aspects as well, but chapters in Section II of the book focus on factors directly related to a particular attribute's aspects of the enterprise architecture.

As stated in the Preface, it must be highlighted that any such digital transformation of enterprises will necessarily involve a prerequisite stage of virtualization (digital mirroring or digital twinning) of the business strategies, business architectures, and business processes, accompanied by a concomitant transformation (analog to digital) of the corresponding measures of performance. This entails a significant amount of effort, but, for maintaining focus on the primary theme of the book, this virtualization effort is subsumed within the primary activity of the digital transformation efforts. Accordingly, the book will refer only to the post-virtualized business models, enterprise architectures, and enterprise processes.


1 Agile Enterprise Architecture

Agility is the ability to respond to (and ideally benefit from) unexpected change. We must distinguish between agility and flexibility: flexibility is scheduled or planned adaptation to unforeseen yet expected external circumstances, while agility is unplanned and unscheduled adaptation to unforeseen and unexpected external circumstances. The enterprises that can best respond to fast- and frequently changing markets have better competitive advantages than those that fail to sustain the pace dictated by the process of globalization. And this can be realized through enterprises acquiring better control and efficiency in their ability to manage the changes in their enterprise models, architectures and processes.

1.1 Age of Transformation

1. Agricultural economy: In the agricultural economy, people made a living by growing and selling crops and livestock at the market; they were largely dependent on the seasons and climate influences, and it kept man in touch with living nature. If agriculture is taken to be the production of food, then as long as the production of those goods was in line with consumer demand, farmers could make a good living. But this came to an end when the drive to increase productivity led to investments in machinery, and ultimately to overproduction. Prices dropped, and the necessary investments drove the costs higher. It became harder to make a good living with farming. With a shift to an urban economy, for many farmers it looked sensible to abandon their farms and move to the city in pursuit of jobs in industry.

2. Industrial economy: The industrial economy focused on the production of non-perishable goods and commodities such as textiles, coal and metal, and it meant larger volumes could be produced and kept in stock or be distributed over larger distances. Investments in technology such as steam power, together with the availability of cheap labor from farmers migrating to cities, made it possible to achieve economies of scale that allowed both greater quantities and higher quality at the same time. The revolution in transportation through the construction of canals and roads, as well as the railway system, increased the ease and speed with which raw materials and finished goods could be transported to and from the urban centers. The industrial age saw a vastly improved standard of living for many people, but there was a downside to this prosperity. The industrial revolution caused unprecedented changes in society, changing the whole way of life for many people in an unusually short period of time. Where earlier, extended families stayed in the same area for many generations, people suddenly became more mobile, and bonds with the extended family became less strong. Workers sold their labor in return for a wage or salary and were no longer free and self-sustaining. The factory working environment was often harsh: noisy and dangerous, with mind-numbing repetitive tasks.

3. Experience economy: For companies, the increasing competition from global companies caused tremendous pressure on the price and quality of their products. The availability of goods from countries with lower wages made companies in the West shift their manufacturing operations out of the Western world. Looking for alternatives where a sound business could be built, many shifted focus to create higher added value through services and experiences. Services are considered as economic goods consisting of labor, advice, managerial skills, etc. Services differ from the production of goods in a number of respects: their intangibility; the fact that customers are directly involved and participate in the delivery, and therefore production and consumption take place simultaneously; their perishability; and the differences in experience of different consumers. Service industries account for an ever-increasing proportion of the national income in nearly all developed countries. Where competition is strong, and customers can choose from a wide variety of offers, it becomes important to distinguish the offering. The distinguishing factor in services is the experience.

4. Knowledge economy: It includes economic activities concerned with the acquisition, manipulation and transmission of information. The knowledge-based economy generates information, rather than goods and services, and successfully creates new knowledge as the basis for new products and services. The rise of knowledge-based services is enabled by the widespread use of information technologies. Accurate, up-to-date information from professional services such as news agencies, weather stations, national statistics institutes, etc. is put online. The amount of data that is available online is growing very rapidly, and it can be used to provide knowledge-based services or enhance existing services with additional information. In the knowledge economy there are opportunities to build new knowledge-based services, innovate through the use of knowledge networks, and use supercomputers to make sense of the increasing amount of available data. Knowledge workers are much less dependent on their employers than they used to be in the industrial age. It is economically feasible for them to become self-employed and pick the projects they want to do.

5. Digital economy: In the transition from a traditional industrial economy to a digital economy, the whole process of value creation is entirely transformed. Information and knowledge play a crucial role both in the traditional and in the digital economy. However, in the industrial economy knowledge generation and application processes were essentially aimed at making production more efficient through cost reductions, while in the digital economy they are mainly directed at intercepting the customer's preferences and expanding his choice. Information, in the digital economy, is an essential source of value, and every business is an information business. The digital economy offers companies a variety of tools (e.g., web-based supply chain management systems, online commerce, interactive customer service) that enable the creation of value not only through the reduction of costs, but also, and above all, by making them more capable of responding to customer needs.
In fact, in a world where access to information is permanent, immediate and almost ubiquitous, value creation depends on the ability to locate, select, filter and communicate the information which the consumer perceives as actually useful (Table 1.1).



TABLE 1.1
Comparison between the Industrial and Digital Economy

                               Industrial Economy                                Digital Economy
Business process orientation   Guided by offer                                   Guided by demand
Economic focus                 Cost minimizing                                   Value maximizing
Product policy                 Offer standardization                             Offer personalization
Value chain configuration      Linear value chain                                Non-linear value network
Key inputs                     Raw materials, intermediate products              Digital information
Output                         Intermediate or finished products or services     Products or services with a high information/knowledge content
The role of information        A supporting and connecting element               A source of value
                               during the phases of production






Guided by demand Value maximizing Offer personalization Non-linear value network Digital information Products or services with a high information/knowledge content • A source of value

The Internet, however, proved to be a destructive force for many companies, completely transforming entire sectors, because the network is not only a tremendous source of access to data and information in a digital format, but it also constitutes a new channel of distribution and marketing. In fact, the advent of the Internet has challenged the traditional way of doing business, as it has, and still is, significantly transforming the traditional rules of competition, offer value propositions and business models: 1. The market is no longer just a physical place and geographically fragmented, but rather becomes a digital, open and transparent space. 2. The network has also intensified competition since the easier access to information and the reduction of variable costs stimulates competition on prices and therefore requires the maximization of operational efficiency. In  addition, it is easier for potential entrants to access distribution channels and reach new customers. 3. The ability of the network to get the manufacturer and the end user closer and in direct communication, drastically reducing the need for intermediaries in the sale of goods and services. In  this sense, the network can undoubtedly be counted among the most significant radical innovations, that is those innovations that have as a fundamental trait of a total discontinuity with previous technologies, whose technical skills they tend to displace (making them in some cases so obsolete to cause their withdrawal from the market), resulting in a drastic transformation of the productive processes of economic activities they touch and producing a different distribution of wealth compared to the situation before their introduction. The formulation and implementation of an appropriate business model are vital to meet the challenges of the digital economy, which requires a paradigm shift. Companies are called to deal with the Internet and the opportunities of electronic commerce, but, to be able to acquire the benefits, they must be able to identify the disruptive nature of these innovations in order to effectively reconfigure their distribution strategies or the entire business model.

4

Digital Transformation of Enterprise Architecture

A  number of attributes that make the Internet and electronic commerce disruptive innovations: • Open platform: The internet represents an open and public network which allows a constant flow of communication and collaboration in real time. • Network externalities: Network externalities exist when the value of a given product/ service to any single user becomes greater as the number of users of the same product/service increases. • Connectivity and interaction: E-commerce enables the company to establish new relations and ways of interacting with their customers, suppliers and partners. • Information sharing and exchange: The internet allows information to reach a large number of people without sacrificing the quality of the information content which it distributes. • Production and consumption convergence: The involvement of the consumer-user in the early stages of design and production of highly customized goods and services. • Digital resources: Information and data in a digital form, duly selected, organized and summarized, become sources of essential value that enables companies to formulate new value propositions. • Cost transparency: Users are able to quickly and easily access a vast amount of information regarding prices and characteristics of the products and services offered by the various competitors. • Industry extension: The  value creation made possible by the internet and new digital technologies allows companies to transcend the boundaries of traditional business. • Speed and frequency of changes: In the digital economy companies need to continually adapt to changes, which are extremely fast and frequent. • Virtual capacity: The recent progress in networking and storage technologies has led to the creation of an extensive market place available to users. 1.1.1 VUCA Ecosystem VUCA stands for Volatility, Uncertainty, Complexity and Ambiguity. It describes a situation confronted with in the digital economy: 1. Volatility: The term volatility is commonly used in statistics and financial theory. Volatility can be defined as a statistical measure, describing the amount of uncertainty about the size of changes. In statistics it can be quantified by the standard deviation or variance. Real life examples are increasing price fluctuations on global raw material markets or stock markets. You can see high volatility as significant jumps of values over time, which can be seen as an indicator of increasing pace of the environment. Volatility can be understood as an observable output of a complex system that cannot be easily interpreted any more. While complex systems which are in an equilibrium or which are oscillating between two or three different equilibria are easy to interpret, a system that runs in so called deterministic chaos has no obvious pattern to be easily observed.

Agile Enterprise Architecture

2. Uncertainty: With increased volatility of the environment, it is increasingly hard to predict the future. While in the past statistical regression models were able to predict the future, today it becomes more and more difficult to extrapolate future developments and link them with a probability distribution. Uncertainty can be also described as a lack of clarity to evaluate a situation properly to identify challenges and opportunities. Uncertainty is a problem that might arise in decision making if there is incomplete information, inadequate understanding of available information or equally attractive options. Across time, there develops an increasing belief in the capability to cope with uncertainty and even to overcome uncertainty through planning and control. Decisions in business are often made based on the assumption that with enough research, collection of information and preparation for decision making, uncertainty can be avoided completely. But this is not the case and in highly dynamic environments, the speed of change in the context is higher than the speed of learning. In such a situation, it is customary to build up a certain mutually acceptable perception that is used as a reference by all concerned. This  helps reduce the impact of uncertainty to a certain amount, and helps the people achieve some sense of assurance, security and stability. But this doesn’t mean that it reflects the real-world situation—there is an irreducible gap. 3. Complexity: In  an interconnected and networked environment, it becomes more and more difficult to connect cause and effect. The  idea of linear causality hits the limits. Complexity can be defined as a situation, where interconnectedness of parts and variables is so high, that the same external conditions and inputs can lead to very different outputs or reactions of the system. A real-life example are organizations or even more complex inter-organizational alliance networks where the same inputs can cause very different outputs at different points in time. From a systems perspective complexity can be understood as a specific property, defined as the result of the amount of system elements, their relationships and the dynamics between the elements and the relationships. The more states a system can take, the higher the variety of the system. The variety can be then used as a measure of complexity. In computer science for example the algorithmic complexity is measured by the size of the shortest possible computer program that can solve the problem or at least completely describe the problem. Generally speaking, complexity as two aspects: a. Complex structure is given by the high number of elements that are linked to each other in a non-trivial nonlinear way. In contrast, complicated structures are only characterized by a high amount of system elements and they are missing these intense internal structures of various relationships and dynamics between the elements. b. Complex behavior is characterized mainly by emergence, which can be described as “the action of the whole is more than the sum of the actions of the parts” (Holland). If something contains many interacting objects whose behavior is affected by memory or “feedback,” the interaction becomes non-linear. Complexity is also closely linked to organization, decomposability and nestedness of systems. 4. Ambiguity: Ambiguity is characterized by the fact that causal relationships are completely unclear and the meaning or interpretation of a situation cannot be

5

6

Digital Transformation of Enterprise Architecture

definitely resolved according to a rule or process consisting of a finite number of steps. In contrast to vagueness that characterizes a situation by a lack of clarity, in ambiguity specific and distinct interpretations are permitted. In real life, business decisions become more and more ambiguous, as there is often more than one possible solution to a problem and there is no analytical process to decide, which option should be chosen. If one asks different people for an evaluation of a specific situation and plans for action, one would get different answers which would be equally valid. Organizations cannot reduce the environment’s “degree of VUCA,” but a company’s capability to deal with VUCA can increase. In times of high dynamics and high interconnectedness, traditional simple mind models and decision making rules and heuristics are no longer effective. The traditional mechanistic worldview that worked well in times when the complex environment stayed in “pockets of order”; it was supported by the human need of simple, rational cause-and-effect mind models to be able to make decisions and to easily deal with the environment. Today, often the environment is in a state at the “edge of chaos” or in a “deterministic state of chaos.” In these situations, not only there is a need for different models and approaches in cognition, judgment and action in management, but performance on all of these aspects can be enhanced tremendously by augmenting with technology.

1.2 Four Industrial Generations

In the earliest stages of industrialization, the focus of the technological revolution was on mechanization and the use of water and steam power. Mechanized production that had previously been organized mainly along craft lines led to larger organizational units and a significantly higher output of goods. The second industrial revolution was characterized by the assembly line and the electrification of factories. The third industrial revolution brought with it a further automation of production by means of microelectronics and computerization. The fourth industrial revolution is characterized by the essential digitization of the processes of production and delivery.

1.3 Trends

1.3.1 Social Trends

1. Sharing and prosuming: The digital transformation blurs the traditional societal categories of production and consumption. This development has enormous consequences for role behavior, expectations and standards. Increasingly, consumers are becoming producers—and not only producers of digital content. A few years ago, prosuming still referred solely to content shared on social networks, generated by users and made available to the general public as, say, on Wikipedia.
Today this relationship is increasingly shifting in the direction of digital product design, user experience or participation in services—up to and including power generation in prosumer networks. Companies are increasingly involving customers directly in the design, functionalities and usability of new products and the development of customer-specific services. This consumer participation is a path taken by carmakers and manufacturers of sporting goods or household appliances in equal measure. So the borders between producers and consumers are growing less clear. Consumers and customers are taking on an extended workbench role. When we consider the future production of articles in daily use or, perhaps, of food on suitably equipped 3D printers, moving from a home office to a home factory is no longer such a major transition, and the Internet of Things will give this trend an additional boost.

2. Digital inclusion: Just as the meaning of societal categories such as consumers and producers changes, the borders between professional and private life are blurring too. Digital connection has triggered deep-seated societal changes that are perceptible in all areas of life. People are working via the Net on shared projects, maintaining digital friendships across continents and distinguishing less and less between professional and private life. Connecting has become a new form of communicating and living. People are organizing and engaging in networks and setting up vibrant ecosystems in order, for instance, to put their private or professional ideas into practice by means of crowdsourcing or crowdfunding. The central tool in this development is the smartphone. In 2016, around four billion people worldwide owned such a mobile computer. This smart mobile communication device is the epicenter of sociocultural changes. Although it is the tool used by digital natives who want to be permanently online, contactable and connected, it is by no means the endpoint of the technological development on which this societal trend of inclusion is based. Wearable devices and digital implants may well be the next steps in this development. For many working people the dynamics of everyday working life is moving to the Net. Chats, forums and wikis are taking the place of the personal exchange of opinions and experiences at the workplace. The good old telephone conversation is then often the last bastion for keeping up personal (working) relationships. "Always on" means for communications teams and other employees alike that they can work from a distance—for example from home—which leads to a shift from traditional office hours and workplaces to greater flexibility and independence.

3. More transparency: In most Western industrial countries, a majority of the population is now registered on social networks and nearly all 14- to 64-year-olds use the internet. They do so of their own free will because they see the advantages and do not want to forgo them, even though there are regular warnings about the omnipresent power of algorithms. Free internet services and payback or customer cards of any kind also serve specific business models. Furthermore, the user of internet services is not a customer in the classic meaning of the word. In many cases, the provider's business model is based on collecting personal data and marketing it in a targeted way. Yet that dissuades very few people from divulging their data, from using Facebook or from posting photos on Instagram. Intentionally or not, that creates a potentially dangerous transparency that is liable to manipulation and misuse—such as when algorithms are deliberately changed to deliver the results desired or when content is purely and simply falsified.

1.3.2 Organizational Trends In  the field of organization, we see another three basic trends influencing professional communication: the trend towards more connection, a growing demand for creativity and the quest for identity. 1. Connected work: Platforms form the basis of nearly all new business models, and the concept of a platform has legal, organizational and technical aspects. Processes are controlled, business is driven and communications are distributed via digital platforms. They are also becoming an indispensable collaboration tool in day-to-day work. They simplify closely enmeshed collaboration of teams that are increasingly often engaged in projects across corporate and geographical borders or are jointly taking developments forward. One of the consequences is that to provide specific services companies are less and less dependent on a fixed workforce. The transparency of skills and availabilities of highly qualified specialists brought about by connection leads to “hiring on demand.” The employment relationship is changing into an employment deployment. What is more, software increasingly determines work processes. That makes organizations more homogeneous and efficient, but in some areas perhaps less creative and system driven. 2. Increasing creativity: With the advent of smart software, learning AI systems and industrial robots, the role of people in production and work processes is changing. The individual’s role is changing from that of a provider of job performance to that of a monitor of machines who controls and only intervenes in an emergency. At the same time, new forms of interaction are taking shape between people and machines—up to and including their merger. No one can at present seriously predict what, in the final analysis, that may mean, especially in the production of non-material corporate achievements such as knowledge, transparency, software or decision-making. Kurzweil dealt with this aspect back in 2006. His thesis was that the exponential increase in technologies like computers, genetics, nanotechnology, robotics and artificial intelligence will lead to a technological singularity in the year 2045, a point where progress is so rapid that it outstrips humans’ “ability to comprehend it.” 3. Redefining identity: The employment relationship is changing into an employment deployment. The result is what counts, and no longer just being there. Teams will be set up from case to case, hierarchies will be sidelined and departments  will be discontinued. What is more, services provided in the digital environment will increasingly be delegated to highly specialized “virtual laborers” who will strengthen the company’s team from time to time. What that means is simply constant change. Employee management and control will become the major challenge that a “liquid” workforce poses. Compulsory attendance will in any case become an obsolete model, permanent control will no longer be possible and authoritarian managers will no longer stand a chance. Their place will be taken by openness and transparency, trust, motivation and recognition along with greater decision-making scope for the individual. Along the lines of the 2001 Manifesto for agile software development, the guiding principle will be preferably to try things out and rectify them if need be rather than to die striving for perfection. Conversely, expectations of the “core competences” of employees and co-workers are also changing. Specific skills are no longer the measure of all things. 
That measure is now creativity, the competence to communicate and to work in a team, and
the ability to work as flexibly as possible in changing teams. Other attributes are willingness to change, courage to have an opinion of your own and an unbending will to assume responsibility. In the overwhelming majority of cases, the right organization and legal framework conditions have yet to be established. This requires concerted action by the workforce, HR and the management just as much as it requires new agreements with trade unions and politicians. The  agenda includes flexible working time organization, adequate remuneration of performance and appropriate provision for old age. Especially in view of the growing number of digital working nomads, a collective provision must be made to ensure that binding rules are negotiated which are equally profitable for employees and employers. Digital nomads are only a challenge when it comes to retaining or creating in this highly volatile situation a central resource of every organization that stands apart from individual competences or skills: the ability of employees and managers to identify with the organization, be it a company, an association or an NGO. Only if at least a minimal consensus on vision, mission, values, objectives and strategy can be established and kept alive are peak performances and excellence to be expected.

1.3.3 Business Model Trends

1. Globalization: The digitally connected individual is the powerful player in globalization. About 2.7 billion people around the world are connected. The enormous success of Google, Amazon, Apple, Facebook or Twitter is precisely attributable to this development, as is the hitherto inexorable rise of new sharing platforms like Uber or Airbnb.

2. Industries redefined and reframed: New technology-driven companies shake up entire industries and embark on knocking established companies out of the running. In the music industry Apple is today one of the major providers, Facebook is pestering the banks, and Google is proving a problem for the carmakers. That too is a consequence of digitalization, and the Industrial Internet will further step up the pace. This is made possible by integrated, open, standardized platform solutions on which the most varied players can meet, share their ideas and know-how, and establish and manage joint value chains. Start-ups in the financial sector, for example, try to make use of the infrastructure of large retail chains in order to set up, at one fell swoop, a "branch network" of cash payment points of which the banks can but dream—a threat to the business model of not only banks but also ATM providers. WhatsApp or Skype use the partly still government-subsidized technical infrastructure of the large telephone companies so as to be able to offer their communications services globally, apparently free of charge. Smart "capture" of existing infrastructure can succeed on a global scale, even without major capital expenditure, if a convincing customer benefit is combined with digitally enabled user-friendliness.

3. Increasing supply chain integration: In the course of the platformization of the economy, value chains are not only better integrated but also redefined in many places. What is currently under discussion in many manufacturing companies is, for
example, the use of 3D printing. It is a technology with highly disruptive potential that in the years ahead will in all probability intervene massively in the traditional value chains of a wide range of manufacturers and industries. The keyword is co-creation. Carmakers might in future allow their dealers to 3D print certain plastic and metal parts themselves. It would then be possible to transfer parts of their production not only directly to the sales markets in question but also to adjust it to those markets and their specific requirements. Another example is the development of new services and operator models, with interesting options arising in, for example, the combination of services and leasing or rental models. They could range from coffeemakers via paint shops to turbines. Plant and equipment would remain the property of the manufacturer, with payment on the basis of use and availability—such as the coffee consumption, the number of car bodies painted or the number of flight hours logged. This approach may not be fundamentally new, but substantially improved direct access to operating data—in real time if required—offers enormous benefits with regard, say, to the timely delivery of fresh coffee beans or to the timely maintenance of machinery. The smart factory of the future will no longer be a group of industrial buildings with intelligent organization and technology; it will break the spatial, organizational and legal bounds of traditional manufacturing operations.

1.3.4 Technology Trends

1. Increasing connection: Improbable though it may sound, even after more than two decades of the internet we are still only at the beginning of comprehensive global interconnection. By 2020, the number of connected devices will increase to 50 billion from 25 billion today. That will mean an increase not only in the connection of computers in production and administration but above all permanent data communication between smart devices and sensors of all kinds, from machines to machines, from machines to people, and from goods and products on their way across the delivery, distribution and consumption chains: independently of specific devices, increasingly mobile and time independent. This is enabled by new technologies that make this interconnection possible in the first place: software that is capable of understanding and processing data from the widest range of sources, cloud computing that makes it possible to manage this data on an almost unlimited scale, high-powered data transmission networks, and mobile devices like smartphones and tablets that provide access to this kind of data anytime, anywhere.

2. Big data: Market researchers expect globally generated data to increase 800% in the next 5 years. It will include healthcare, energy, financial or production data. By 2020, this data mountain will increase to 44 zettabytes—an inconceivable amount, in that a single zettabyte roughly corresponds to the data capacity of 200 billion DVDs. In addition to machine and sensor data, it will include multimedia content and all kinds of natural verbal communication insofar as it can be recorded by digital systems. Anyone who can analyze live Twitter or Facebook streams contextually in real time with the aid of AI systems enjoys a clear advantage in certain communication
decisions. Anyone who in social media marketing learns from target group reactions at short intervals and optimizes campaigns accordingly can increase efficiency dramatically.

3. User-friendly technology: What began with the Windows graphical user interface and reached its provisional peak with smartphone apps that everyone can use is now gaining access to the control of complex technological applications in factories. For communicators, the relevance of this trend can be seen in the use of numerous technologies in their everyday professional life, such as analyzing target group data or using market research and opinion polling systems. Special algorithms can be used to evaluate the communication or shopping behavior of individual customers and subsequently to initiate, automatically, individualized communication and cross- and up-selling offerings in the personal digital campaign targeted at these customers.
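The automated cross- and up-selling logic just described can be sketched in a few lines. The following snippet is a deliberately crude, hypothetical illustration: it is rule-based rather than the statistical or AI-driven targeting a real marketing platform would use, and the product names, the engagement threshold and the Customer fields are invented for the example.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Customer:
    name: str
    purchases: List[str] = field(default_factory=list)
    site_visits_last_30d: int = 0

# Hypothetical cross-sell rules: "customers who bought X may also want Y."
CROSS_SELL: Dict[str, str] = {
    "coffee_maker": "coffee_bean_subscription",
    "smartphone": "wireless_earbuds",
}

def next_best_offer(customer: Customer) -> Optional[str]:
    """Trigger an individualized offer only for recently engaged customers
    whose purchase history matches a cross-sell rule; otherwise stay silent."""
    if customer.site_visits_last_30d < 3:
        return None                      # not engaged enough to target
    for item in customer.purchases:
        if item in CROSS_SELL:
            return CROSS_SELL[item]
    return None

alice = Customer("Alice", purchases=["coffee_maker"], site_visits_last_30d=5)
print(next_best_offer(alice))            # coffee_bean_subscription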

1.4 From Products to Services to Experiences

By the middle of the last century, products, goods, and property came increasingly to mean an individual's exclusive right to possess, use, and, most importantly, dispose of things as he or she wished. By the 1980s, the production of goods had been eclipsed by the performance of services. These are economic activities that are not products or construction, but are transitory, consumed at the time they are produced (and, thus, cannot be inventoried), and primarily provide intangible value. In a service economy, it is time that is being commodified, not prices, places, or things—this also leads to a transition from profit and loss to market cap as the base metric of success; what the customer is really purchasing is access for use rather than ownership of a material good. Since the 1990s, goods have become more information intensive and interactive, are continually upgraded, and are essentially evolving into services. Products are rapidly being treated as the cost of doing business rather than as sales items; they are becoming more in the nature of containers or platforms for all sorts of upgrades and value-added services. Giving away products is increasingly being used as a marketing strategy to capture the attention of potential customers. But with the advent of electronic commerce, feedback, and workflow mechanisms, services are being further transformed into multifaceted relationships between the service providers and customers, and technology is becoming more of a medium of relationships. In the servicized economy, defined by shortened product life cycles (PLCs) and an ever-expanding flow of competitive goods and services, it is customer attention rather than resources that is becoming scarce. The true significance of a customer's attention can be understood the moment one realizes that time is often used as a proxy for attention. Like time, attention is limited and cannot be inventoried or reused. In the current economy, attention is the real currency of business and, to be successful, enterprises must be adept at getting significant and sustained mindshare or attention from their prospects or customers. As with any other scarce and valuable resource, markets for attention exist both within and outside the enterprise. For extracting optimum value, the real-time and intelligent enterprises must impart optimal attention to the wealth of operational and management information

FIGURE 1.1 Spectrum of offerings (products/services) versus medium (container or form)/message (content/substance).

available within the enterprise. This fact alone should automatically put a bar on overzealous reengineering and downsizing efforts (although reengineering and other cost-cutting tactics are necessary, it is essential to ascertain if undertaking such tactics will contribute to the delivery of superior or at least on par value to the customers). One major result of this trend toward the importance of experience has been the blurring of lines between the content (the message) and container (the medium) in the market (Figure 1.1). Convergence describes the phenomenon in which two or more existing technologies, markets, producers, boundaries, or value chains combine to create a new force that is more powerful and more efficient than the sum of its constituting technologies. The value chain migration alludes to the development of real-term systems that integrate supply chain systems and customer-facing systems, resulting in a single and unified integrated process. This convergence occurs primarily because of three factors: 1. The  digitization of information to enable the preparation, processing, storage, and transmission of information regardless of its form (data, graphics, sound, and video or any combination of these). 2. The rapidly declining cost of computing that has enabled it to become ubiquitous and available with sufficient power. 3. The availability of broadband communications is critical to convergence because multimedia is both storage intensive and time sensitive.

1.5 Agile Enterprises

The difficult challenges facing businesses today require enterprises to be transitioned into flexible, agile structures that can respond to new market opportunities quickly with a minimum of new investment and risk. As enterprises have experienced the need to be simultaneously efficient, flexible, responsive, and adaptive, they have transitioned themselves into agile enterprises with small, autonomous teams that work concurrently and reconfigure quickly, and adopt highly decentralized management that recognizes its knowledge base and manages it effectively. Enterprise agility is the ability to be:

1. Responsive: Adaptability is enabled by the concept of loosely coupled interacting components reconfigurable within a unified framework. This is essential for ensuring opportunity management to sustain viability. The ability to be responsive involves the following aspects:
• An organizational structure that enables change is based on reusable elements that are reconfigurable in a scalable framework. Reusability and reconfigurability are generic concepts that are applicable to work procedures, manufacturing cells, production teams, or information automation systems.
• An organizational culture that facilitates change and focuses on change proficiency.

2. Intelligence intensive: The ability to manage and apply knowledge effectively, whether it is knowledge of a customer, a market opportunity, a competitor's threat, a production process, a business practice, a product technology, or an individual's competency. This is essential for ensuring innovation management to sustain leadership. The ability to be intelligence intensive involves the following aspects:
• Enterprise knowledge management
• Enterprise collaborative learning

When confronted with a competitive opportunity, a smaller company is able to act more quickly, whereas a larger company has access to more comprehensive knowledge (options, resources, etc.) and can decide to act sooner and more thoroughly. Agility is the ability to respond to (and ideally benefit from) unexpected change. Agility is unplanned and unscheduled adaptation to unforeseen and unexpected external circumstances. However, we must differentiate between agility and flexibility: flexibility is scheduled or planned adaptation to unforeseen yet expected external circumstances. One of the foremost abilities of an agile enterprise is its ability to quickly react to change and adapt to new opportunities. This ability to change works along two dimensions:

1. The number or "types of change" an enterprise is able to undergo
2. The "degree of change" an enterprise is able to undergo

The former is termed range, and the latter is termed response ability. The more response-able an enterprise is, the more radical a change it can gracefully address. Range refers to how large a domain is covered by the agile response system; in other words, how
far from the expected set of events one can go and still have the system respond well. However, given a specific range, how well the system responds is a measure of response or change ability. Enterprises primarily aim progressively for efficiency, flexibility, and innovation in that order. The Model Builder, Erector set, and LEGO kits are illustrations of enterprises targeting for efficiency, flexibility, and innovation (i.e., agility), respectively. Construction toys offer a useful metaphor because the enterprise systems we are concerned with must be configured and reconfigured constantly, precisely the objective of most construction toys. An enterprise system architecture and structure consisting of reusable components reconfigurable in a scalable framework can be an effective base model for creating variable (or built-for-change) systems. To achieve this, the nature of the framework appears to be a critical factor. We can introduce the framework/component concept, by looking at three types of construction toys and observe how they are used in practice, namely, Erector Set Kit, LEGO Kit, and Model Builder’s Kit. You can build virtually anything over and over again with any of these toys; but fundamental differences in their architecture give each system unique dynamic characteristics. All consist of a basic set of core construction components, and also have an architectural and structural framework that enables connecting the components into an unbounded variety of configurations. Nevertheless, the Model Builder is not as reusable in practice as the Erector Set, and the Erector Set is not as reusable or reconfigurable or scalable in practice as LEGO, and LEGO is more reusable, reconfigurable, and scalable than either of them. LEGO is the dominant construction toy of choice among preteen builders—who appear to value experimentation and innovation. The Model Builder’s kit can be used to construct one object like airplane of one intended size. A highly integrated system, this construction kit offers maximum aesthetic appeal for one-time construction use but the parts are not reusable, the construction cannot be reconfigured, and one intended size precludes any scalability. It will remain what it is for all time—there is zero variability here. Erector Set kits can be purchased for constructing specific models, such as a small airplane that can be assembled in many different configurations. With the Erector Set kit, the first built model is likely to remain as originally configured in any particular play session. Erector Set, for all its modular structure, is just not  as reconfigurable in practice as LEGO. The  Erector Set connectivity framework employs a special-purpose intermediate subsystem used solely to attach one part to another—a nut-and-bolt pair and a 90-degree elbow. The components in the system all have holes through which the bolts may pass to connect one component with another. When a nut is lost, a bolt is useless, and vice versa; when all the nuts and bolts remaining in a set have been used, any remaining construction components are useless, and vice versa. All the parts in a LEGO set can always be used and reused, but the Erector Set, for all its modularity, is not as reusable in practice as LEGO. LEGO offers similar kits, and both toys include a few necessary special parts, like wheels and cowlings, to augment the core construction components. 
Watch a child work with either and you will see the LEGO construction undergoes constant metamorphosis; the child may start with one of the pictured configurations, but then reconfigures the pieces into all manner of other imagined styles. LEGO components are plug-compatible with each other, containing the connectivity framework as an integral feature of the component. A standard grid of bumps and cavities on component surfaces allows them to snap together into a larger configuration—without limit.
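The LEGO-style architecture described above, reusable components that carry a standard interface and snap into a scalable framework, can be caricatured in software. The sketch below assumes nothing from the text beyond that idea: the Component protocol plays the role of the standard grid of bumps and cavities, while the component classes and the Framework are hypothetical names; a real enterprise platform would of course be far richer.

from typing import List, Protocol

class Component(Protocol):
    """The standard 'stud-and-cavity' interface every component carries."""
    def handle(self, request: str) -> str: ...

class Billing:
    def handle(self, request: str) -> str:
        return f"billing processed: {request}"

class Shipping:
    def handle(self, request: str) -> str:
        return f"shipping scheduled: {request}"

class Framework:
    """Snaps plug-compatible components together; it can grow without limit
    and be reconfigured by adding or removing components at run time."""
    def __init__(self) -> None:
        self.components: List[Component] = []

    def plug(self, component: Component) -> None:
        self.components.append(component)

    def unplug(self, component: Component) -> None:
        self.components.remove(component)

    def process(self, request: str) -> List[str]:
        return [c.handle(request) for c in self.components]

enterprise = Framework()
enterprise.plug(Billing())
enterprise.plug(Shipping())
print(enterprise.process("order-42"))
# Reconfigure by unplugging or swapping components, with no special
# connector parts and without touching the remaining components.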

The  Model Builder’s kit has a tight framework: A  precise construction sequence, no part interchange ability, and high integration. Erector Set has a loose framework that does not encourage interaction among parts and insufficiently discriminates among compatible parts. In contrast, each component in the LEGO system carries all it needs to interact with other components (the interaction framework rejects most unintended parts), and it can grow without end. 1.5.1 Stability versus Agility Most large-scale change efforts in established enterprises fail to meet the expectations because nearly all models of organization design, effectiveness, and change assume stability is not only desirable but also attainable. The theory and practice in an organization design explicitly encourages organizations to seek alignment, stability, and equilibrium. The  predominant logic of organizational effectiveness has been that an organization’s fit with its environment, its execution, and its predictability are the keys to its success. Organizations are encouraged to institutionalize best practices, freeze them into place, focus on execution, stick to their knitting, increase predictability, and get processes under control. These ideas establish stability as the key to performance. Stability of a distinctive competitive advantage is a strong driver for organization design because of its expected link to excellence and effectiveness. Leveraging an advantage requires commitments that focus attention, resources, and investments to the chosen alternatives. In other words, competitive advantage results when enterprises finely hone their operations to perform in a particular way. This  leads to large investments in operating technologies, structures, and ways of doing things. If such commitments are successful, they lead to a period of high performance and a considerable amount of positive reinforcement. Financial markets reward stable competitive advantages and predictable streams of earnings: A commitment to alignment reflects a commitment to stability. Consequently, enterprises are built to support stable strategies, organizational structures, and enduring value creations, not to vary. For example, the often-used strengths, weaknesses, opportunities and threats (SWOT) analysis encourages the firm to leverage opportunities while avoiding weaknesses and threats. This alignment among positive and negative forces is implicitly assumed to remain constant, and there is no built-in assumption of agility. When environments are stable or at least predictable, enterprises are characterized by rules, norms, and systems that limit experimentation, control variation, and reward consistent performance. There are many checks and balances in place to ensure that the organization operates in the prescribed manner. Thus, to get the high performance they want, enterprises put in place practices they see as a good fit, without considering whether they can be changed and whether they will support changes in future, that is, by aligning themselves to achieve high performance today, enterprises often make it difficult to vary, so that they can have high performance tomorrow. When the environment is changing slowly or predictably, these models are adequate. However, as the rate of change increases with increasing globalization, technological breakthroughs, associative alliances, and regulatory changes, enterprises have to look for greater agility, flexibility, and innovation from their companies. 
Instead of pursuing strategies, structures, and cultures that are designed to create long-term competitive advantages, companies must seek a string of temporary competitive advantages through an approach to organization design that assumes change is normal. With the advent of the internet and the accompanying extended “virtual” market spaces, enterprises are now competing based on intangible assets such as identity, intellectual property, ability to attract and stick to customers, and, their ability to organize, reorganize frequently or organize differently

in different areas depending on the need. Thus, the need for changes in management and organization is much more frequent, and, excellence is much more a function of possessing the ability to change. Enterprises need to be built around practices that encourage change, not thwart it. Instead of having to create change efforts, disrupt the status quo, or adapt to change, enterprises should be built-for-change. To meet the conflicting objectives of performing well against the current set of environmental demands and changing themselves to face future business environments, enterprises must engender two types of changes: The natural process of evolution, or what we will call strategic adjustments and strategic reorientations: 1. Strategic adjustments involve the day-to-day tactical changes required to bring in new customers, make incremental improvements in products and services, and comply with regulatory requirements. This type of change helps fine-tune current strategies and structures to achieve short-term results; it is steady, incremental, and natural. This basic capability to evolve is essential if an enterprise is to survive to thrive. 2. Strategic reorientation involves altering an existing strategy and, in some cases, adopting a new strategy. When the environment evolves or changes sufficiently, an enterprise must significantly adjust some elements of its strategy and the way it executes that strategy. More often than not, enterprises have to face a transformational change that involves not  just a new strategy but a transformation of the business model that leads to new products, services, and customers, and requires markedly new competencies and capabilities. However, operationally all these changes can be seen as manifestations of the basic changes only differing in degrees and multiple dimensions. Maintaining an agile enterprise is not a matter of searching for the strategy but continuously strategizing, not a matter of specifying an organization design but committing to a process of organizing, and not generating value but continuously improving the efficiency and effectiveness of the value generation process. It is a search for a series of temporary configurations that create short-term advantages. In turbulent environments, enterprises that string together a series of temporary but adequate competitive advantages will outperform enterprises that stick with one advantage for an extended period of time. The key issue for the built-for-change enterprise is orchestration, or coordinating the multiple changing subsystems to produce high levels of current enterprise performance. 1.5.2 Aspects of Agility This section addresses the analytical side of agility or change proficiency of the enterprise. It highlights the fundamental principles that underlie an enterprise’s ability to change, and indicate how to apply these principles in real situations. It illustrates what it is that makes a business and any of its constituting systems easy to change. Agility or change proficiency enables both efficiency programs (e.g., lean production) and transformation programs; if the enterprise is proficient at change, it can adapt to take advantage of an unpredictable opportunity, and can also counter the unpredictable threat. 
Agility can embrace semantics across the whole spectrum: It can capture cycle-time reduction with everything happening faster; it can build on lean production with high resource productivity; it can encompass mass customization with customer-responsive product variation; it can embrace virtual enterprise with streamlined supplier networks and opportunistic partnerships; it can echo reengineering with a process and transformation focus; it can demand a

learning organization with systemic training and education. Being agile means being proficient at change. Agility allows an enterprise to do anything it wants to do whenever it wants to—or has to—do it. Thus, an agile enterprise can employ business process reengineering as a core competency when transformation is called for; it can hasten its conversion to lean production when greater efficiencies are useful; it can continue to succeed when constant innovation becomes the dominant competitive strategy. Agility can be wielded overtly as a business strategy as well as inherently as a sustainable-existence competency. Agility derives from both the physical ability to act (change ability) and the intellectual ability to find appropriate things to act on (knowledge management). Agility can be expressed as the ability to manage and apply knowledge effectively, so that enterprise has the potential to thrive in a continuously changing and unpredictable business environment. Agility derives from two sources: An enterprise architecture that enables change and an organizational culture that facilitates change. The enterprise architecture that enables change is based on reusable elements that are reconfigurable in a scalable framework. Agility is a core fundamental requirement of all enterprises. It was not an area of interest when environmental change was relatively slow and predictable. Now there is virtually no choice; enterprises must develop a conscious competency. Practically, all enterprises now need some method to assess their agility and determine whether it is sufficient or needs improvement. This  section introduces techniques for characterizing, measuring, and comparing variability in all aspects of business and among different businesses. 1.5.3 Principles of Built-for-Change Systems Christopher Alexander introduced the concept of patterns in the late 1970s in the field of architecture. A pattern describes a commonly occurring solution that generates decidedly successful outcomes. A list of successful patterns for agile enterprises (and systems) in terms of their constituting elements or functions or components are as follows: 1. Reusable Agility Pattern 1 Self-Contained Units (Components): The components of agile enterprises are autonomous units cooperating toward a shared goal. Agility Pattern 2 Plug Compatibility: The components of agile enterprises are reusable and multiply replicable, that is, depending on requirements multiple instances of the same component can be invoked concurrently. Agility Pattern 3 Facilitated Reuse: The  components of agile enterprises share well-defined interaction and interface standards, and can be inserted, removed, and replaced easily and noninvasively. 2. Scalable Agility Pattern 8 Evolving Standards (Framework): The components of agile enterprises operate within predefined frameworks that standardize intercomponent communication and interaction, determine component compatibility, and evolve to accommodate old, current, and new components.

Agility Pattern 9 Redundancy and Diversity: The components of agile enterprises replicate components to provide the desired capacity, load balancing and performance, fault tolerance as well as variations on the basic component functionality and behavior. Agility Pattern 10 Elastic Capacity: The components of agile enterprises enable dynamic utilization of additional or a reduced number of resources depending on the requirements. 3. Reconfigurable Agility Pattern 4 Flat Interaction: The components of agile enterprises communicate, coordinate, and cooperate with other components concurrently and in real-term sharing of current, complete, and consistent information on interactions with individual customers. Agility Pattern 5 Deferred Commitment: The components of agile enterprises establish relationships with other components in the real term to enable deferment of customer commitment to as late a stage as possible within the sales cycle, coupled with the corresponding ability to postpone the point of product differentiation as close as possible to the point of purchase by the customer. Agility Pattern 6 Distributed Control and Information: The components of agile enterprises are defined declaratively rather than procedurally; the network of components display the defining characteristics of any “small worlds” network, namely, local robustness and global accessibility. 1.5.4 Framework for Change Proficiency How do we measure enterprise agility? This section establishes a metric framework for proficiency at change; an enterprise’s change proficiency may exist in one or more dimensions of change. And, these dimensions of change can form a structural framework for understanding current capabilities and setting strategic priorities for improvement: How does the agile enterprise know when it is improving its changeability, or losing ground? How does it know if it is less changeable than its competition? How does it set improvement targets? Thus, a practical measure of change proficiency is needed before we can talk meaningfully about getting more of it, or even getting some of it. It must be highlighted that measuring change competency is generally not unidimensional, nor  likely to result in an absolute and unequivocal comparative metric. Change proficiency has both reactive and proactive modes. Reactive change is opportunistic and responds to a situation that threatens viability. Proactive change is innovative and responds to a possibility for leadership. An enterprise sufficiently proficient at reactive change, when prodded should be able to use that competency proactively and let others do the reacting. Would it be proficient if a short-notice change was completed in the time required, but at a cost that eventually bankrupted the company? Or if the changed environment thereafter required the special wizardry and constant attention of a specific employee to keep it operational? Is it proficient if the change is virtually free and painless, but

out-of-sync with market opportunity timing? Is it proficient if it can readily accommodate a broad latitude of change that is no longer needed, or too narrow for the latest challenges thrown at it by the business environment? Are we change proficient if we can accommodate any change that comes our way as long as it is within a narrow 10% of where we already are? Therefore, change proficiency can be understood to be codetermined by four parameters: 1. Time: A measure of elapsed time to complete a change (fairly objective) 2. Cost: A measure of monetary cost incurred in a change (somewhat objective) 3. Quality: A measure of prediction quality in meeting change time, cost, and specification targets robustly (somewhat subjective) 4. Range: A measure of the latitude of possible change, typically defined and determined by mission or charter (fairly subjective)
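A rough way to compare change episodes on these four parameters is to fold them into a single score. The sketch below is only one possible scheme, invented for illustration: the normalization against assumed time and cost budgets, the equal weighting, and the 0 to 1 scales for the subjective quality and range assessments are assumptions, not prescriptions from the text.

from dataclasses import dataclass

@dataclass
class ChangeEpisode:
    time_days: float      # elapsed time to complete the change
    cost: float           # monetary cost incurred by the change
    quality: float        # 0..1: how robustly time/cost/spec targets were met
    range_score: float    # 0..1: how wide a latitude of change was covered

def proficiency(e: ChangeEpisode,
                time_budget: float,
                cost_budget: float) -> float:
    """Illustrative composite score in [0, 1]; higher means more proficient.
    Time and cost are normalized against budgets, then averaged with the
    (already subjective) quality and range assessments."""
    time_score = max(0.0, 1.0 - e.time_days / time_budget)
    cost_score = max(0.0, 1.0 - e.cost / cost_budget)
    return (time_score + cost_score + e.quality + e.range_score) / 4.0

upgrade = ChangeEpisode(time_days=20, cost=40_000, quality=0.8, range_score=0.6)
print(round(proficiency(upgrade, time_budget=30, cost_budget=100_000), 2))  # 0.58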

1.6 Digital Transformation of EA Attributes

A digital transformation effects exponential change (amplification or attenuation) in any (or multiple) aspect(s) of enterprise performance, including enterprise architecture performance. By employing design and development principles that promote transformability, enterprises are able to transform architectures and, hence, systems more rapidly and cheaply to meet rapidly altering demands, expectations or aspirations. Systems with such properties are essentially changeable or transformable systems because of the enterprise architecture. To withstand the disruptive digital storms of the future, enterprise architecture obtains agility through digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of the constituent attributes of EA:

1. Interoperability: Interoperability is the ability of interaction between two or more systems so as to achieve some goal without having to know the uniqueness of the interacting systems.
2. Scalability: Scalability is the ability to improve system performance by adding more resources to a single node or multiple nodes—such as addition of CPUs, use of multicore systems instead of single-core, or adding additional memory.
3. Availability: Availability can be defined as the ability to guarantee no loss of data and subsequent recovery of the system in a reasonable amount of time.
4. Mobility: Mobility refers to the ability of the user or other mobile entity to communicate and access services, irrespective of any changes in the location or in the technical access environment.
5. Ubiquity: Ubiquity is the ability to be available anywhere and anytime, to anyone, where and when needed.
6. Security: Security is the ability to protect an organization's information assets from accidental or malicious disclosure, modification, misuse and erasure.
7. Analyticity: Analyticity is the ability for continuous iterative exploration and investigation of past performance, based on data and statistical methods, to gain insight and drive planning for the future. 8. Usability: Usability is the ability to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the envisaged range of environmental scenarios. Technologies of service-oriented, cloud, big data, context-aware, Internet of Things (IoT), blockchain, soft and interactive computing bring about digital transformation of the respective EA  attribute, namely, interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability.
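Several of these attributes can be quantified. For the availability attribute, for instance, a standard measure from reliability engineering (not spelled out in the text, so the figures below are purely illustrative) is steady-state availability, the long-run fraction of time a system is up, computed from the mean time to failure (MTTF) and the mean time to repair (MTTR). The sketch also shows how the redundancy pattern mentioned in Section 1.5.3 raises availability.

def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
    """Long-run fraction of time the system is up: MTTF / (MTTF + MTTR).
    MTTR is the 'reasonable amount of time' needed to recover the system."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A component failing on average every 2,000 hours and recovered in 2 hours:
a = steady_state_availability(2000, 2)
print(f"{a:.4%}")                 # 99.9001%

# Two such components in series (both must be up) versus a redundant pair
# (at least one must be up), illustrating the value of redundancy:
print(f"{a * a:.4%}")             # series: lower availability
print(f"{1 - (1 - a) ** 2:.6%}")  # parallel: higher availability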

1.7 Summary

The chapter began by discussing the various economic ages, namely, agricultural, industrial, experience, knowledge and, the latest, digital, which has heralded the age of endless transformations. After exploring the VUCA (Volatility, Uncertainty, Complexity and Ambiguity) nature of the business ecosystem, the chapter looked at the characteristics and the significant social, organizational, business and technology trends in the market. The last part of the chapter discussed the requirements and characteristics of agile enterprises and proposed that agility can be achieved through the digital transformation of enterprises, which is in turn obtainable through the digital transformation of EA, in other words, through EA's attributes, namely, interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability.

Section I

Genesis of Digital Transformation of Enterprise Architecture

The conventional understanding of enterprise architecture has been presented in several publications, such as N. Rozanski and E. Woods (2012), L. Bass, P. Clements and R. Kazman (2013), I. Gorton (2011), M. W. Maier and E. Rechtin (2009), D. Minoli (2008), J. Garland and R. Anthony (2003), and J. Ramachandran (2002). This book primarily focuses on exploring enterprise architecture's potential to act, through its various attributes, as a framework for the digital transformation of enterprises. Digital transformation is a transformation that effects exponential change (amplification or attenuation) in any aspect of enterprise architecture performance. This book proposes Enterprise Architecture (EA) as the most important element (after Business Models) for the digital transformation of enterprises. Digital transformation involves a three-part effort, viz. business strategies, business architecture, and business processes. Section I sets the context for the whole book. Chapter 2 reviews the basic concepts of systems thinking, systems science, systems engineering, and systems architecting. Knowing this basic information about systems helps in understanding the origin of the significance of enterprise architecture and its constituting business architecture, information architecture, application architecture, and technical architecture. It also provides the context for the significance of business processes in contemporary enterprises. Chapter 3 describes the viewpoints, views, and perspectives that enable an enterprise architecture. Enterprise architecture (EA) is a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a holistic approach at all times, for the successful development and execution of enterprise strategy. Chapter 4 explains the basics of distributed systems and describes the characteristics of distributed databases, which are prerequisites for the chapters of Sections II and III.

2 Systems Theory

The attempt to apply reductionism (and the natural scientific method generally) to social and organizational problems has not been satisfactory and has yielded only limited success. Systems thinking can be seen as a reaction to the failure of natural science when confronted with complex, real-world problems set in social systems. Systems thinkers advocate using holism rather than reductionism in such situations. Holism does not seek to break down complex problem situations into their parts in order to study them and intervene in them. Rather, it respects the profound interconnectedness of the parts and concentrates on the relationships between them and on how these interrelated parts often give rise to surprising outcomes called emergent properties. Systems thinking uses models rather than laboratory experiments to try to learn about the behavior of the world, and even then it does not take for granted or impose any arbitrary boundary between the whole that is the subject of its attention in the model and the environment in which it is located. Instead, it reflects upon and questions where the boundary has been drawn and how this impacts the kind of improvements that can be made. Contemporary systems thinking also respects the different appreciative systems that individuals bring to bear in viewing the world and making value judgments about particular situations. In order to contribute to a holistic appreciation of the problem situation at hand, different perspectives on its nature and possible resolution should be encouraged.

2.1 Systems Thinking

A system is defined as a set of elements that have one or more relationships between them, and systems thinking is the process by which one seeks to understand those elements and relationships so as to be able to understand the behavior of the system as a whole. Systems are usually visualized graphically as a set of nodes with links or connections between them. A system is more than the sum of its parts, particularly if it is large or complex. It may exhibit unique behavior that cannot necessarily be inferred from the behavior of its individual elements or any subset of them. In systems theory, it is important to have perspective, as a very small observer of a single element may be oblivious to larger interactions that might be clear to a giant observer of the entire system. The relationships between system elements can be:
• strictly deterministic (i.e., controlled by physical or other laws),
• completely stochastic (subject to chance), or
• some more complex combination of the two.

Where human decision making is involved, some of the interactions can include behavioral effects as well. Modeling a system can involve both art and science when the modeler seeks to decide what is to be included in a system abstraction and what is to be excluded. The boundaries of an individual system depend on the nature of the relationships that link the nodes and drive change. The relationships between elements inside the system are different from the relationships those elements have with elements outside the system. If understanding a particular system requires that its interactions with other systems be understood as well, a modeler might create a kind of meta-system in which the elements of the system may be systems themselves. One of the most fundamental attributes about a system is its predictability: the degree to which we are able to forecast the system’s next state or states. This change in state might be • an inevitable response to endogenous vulnerabilities that are already present in the system • an unanticipated response to an exogenous shock A model without predictive ability has little value in managing extreme risk and can provide few insights that add value to the decision-making process. Conversely, a model with predictive ability may provide forecasts of possible future scenarios that are substantially different from those generated using other analytic methods and greatly aid in risk measurement and mitigation. 2.1.1 Systems Science When scientific knowledge advances to the extent that there is a discrepancy between theory and practice, there is a paradigm shift, according to the eminent scientific historian Thomas Kuhn. Such paradigm shifts have also occurred with systems thinking. The four paradigms of systems thinking described in this chapter are: 1. Hard systems thinking (HST), or functionalist approaches: Though there is wide diversity in the techniques embraced by HST, they all have certain common characteristics. First, they are essentially goal-seeking strategies using quantitative tools to achieve an optimal or near-optimal solution. Second, they need a clear definition of ends and the optimal means for achieving those ends. This  characteristic is a handicap when a messy and complex situation has to be dealt with, which is inevitable in nearly all engineering and planning projects. And third, they are best suited for tackling problems that don’t involve human activity systems. 2. Soft systems thinking (SST), or interpretive approaches: This is a form of systemic thinking that understands reality as the creative thinking of human beings. It takes into consideration social reality as the construction of peoples’ interpretation of their experiences and works with the aspirations of peoples’ views, interpretations, and intentions. Although there are quite a number of soft systems methodologies that have been employed since the 1970s, we list below four that have been extensively used. a. Ackoff’s Interpretive Planning b. Checkland’s Soft Systems Methodology (SSM) c. Senge’s Fifth Discipline d. Strategic Options Development and Analysis (SODA)

3. Critical systems thinking (CST) or emancipatory approaches: While many practitioners have hung on to and made good use of both HST and SST, it became obvious to practitioners that emancipatory interests for dealing with inequalities, such as power and economic differences in society, were not being adequately considered by SST. As a result, CST emerged in the 1990s to address these inequalities. Werner Ulrich, a Swiss planner inspired by Churchman, made a breakthrough by operationally addressing this problem. 4. Multimodal systems thinking (MST): The most recent addition to the family of systems thinking is MST, and it has recently been adopted in Europe. Developed by J. D. R. de Raadt and his colleagues in Sweden, MST uses as many as 15 performance indicators to question the validity of decisions made by planners and policy makers. Many of these performance indicators cover the issues of sustainability and environmental and ethical issues. 2.1.2 Principles of Systems Science The principles of systems science include the following: 1. Systemness: Bounded networks of relations among parts constitute a holistic unit. Systems interact with other systems, forming yet larger systems. The universe is composed of systems of systems. 2. Systems are processes organized in structural and functional hierarchies. 3. Systems are themselves, and can be represented abstractly as, networks of relations between components. 4. Systems are dynamic on multiple timescales. 5. Systems exhibit various kinds and levels of complexity. 6. Systems evolve. 7. Systems encode knowledge and receive and send information. 8. Systems have regulation subsystems to achieve stability. 9. Systems contain models of other systems (e.g., protocols for interaction with anticipatory models). 10. Sufficiently complex adaptive systems can contain models of themselves (e.g., brains and mental models). 11. Systems can be understood (a corollary of 9)—science. 12. Systems can be improved (a corollary of 6)—engineering. Within the boundary of a system we can find three kinds of properties: 1. Elements: Are the kinds of parts (things or substances) that make up a system. These parts may be atoms or molecules, or larger bodies of matter like sand grains, plants, animals, people etc. 2. Attributes: Are characteristics of the elements that may be perceived and measured. For example: quantity, size, color, volume and mass. 3. Relationships: Are the associations that occur between elements and attributes. These associations are based on cause and effect.

The state of the system can be defined by determining the value of its properties (the elements, attributes, and/or relationships). Systems can be classified as:

1. Isolated system: A system that has no interactions beyond its boundary layer. Many controlled laboratory experiments are this type of system.
2. Closed system: A system that transfers energy, but not matter, across its boundary to the surrounding environment. Our planet is often viewed as a closed system.
3. Open system: A system that transfers both matter and energy across its boundary to the surrounding environment. Most ecosystems are examples of open systems.

Systems theory suggests that knowledge of one type of system can be applied to many other types. By studying the interactions and connections between the pieces of a system, the knowledge gained can be useful when confronted with other problems.
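The vocabulary introduced above (elements, their measurable attributes, relationships based on cause and effect, and the state read off from the property values) can be mirrored in a small data structure. The sketch below is a toy illustration with hypothetical element names; it is not a modeling notation proposed by the text.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Element:
    name: str
    attributes: Dict[str, float] = field(default_factory=dict)  # measurable characteristics

@dataclass
class System:
    elements: Dict[str, Element] = field(default_factory=dict)
    # Relationships: (cause, effect) associations between elements.
    relationships: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, e: Element) -> None:
        self.elements[e.name] = e

    def relate(self, cause: str, effect: str) -> None:
        self.relationships.append((cause, effect))

    def state(self) -> Dict[str, Dict[str, float]]:
        """The state is the current value of the system's properties."""
        return {n: dict(e.attributes) for n, e in self.elements.items()}

pond = System()
pond.add(Element("algae", {"biomass_kg": 12.0}))
pond.add(Element("fish", {"population": 40}))
pond.relate("algae", "fish")   # more algae leads to more fish (cause and effect)
print(pond.state())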

2.2 Systems Engineering

The complexity of systems is increasing, and the challenges associated with bringing new systems into being are greater than ever. Requirements are constantly changing with the introduction of new technologies on a continuing and evolutionary basis; the life cycles of many systems are being extended, while at the same time, the life cycles of individual and specific technologies are becoming shorter; and systems are being viewed more in terms of interoperability requirements and within a system of systems (SOS) context. There is a greater degree of outsourcing and utilization of suppliers throughout the world, and international competition is increasing in a global environment. Available resources are dwindling worldwide, and many of the systems (products) in use today are not meeting the needs of the customer/user in terms of performance, reliability, supportability, quality, and overall cost-effectiveness.

The purpose of systems engineering is to support organizations that desire improved performance. This improvement is generally obtained through the definition, development, and deployment of technological products, services, or processes that support functional objectives and fulfill needs. Systems engineering is a comprehensive, iterative technical management process that includes translating operational requirements into configured operational systems—integrating the technical inputs of the entire design team, managing interfaces, characterizing and managing technical risk, transitioning technology from the technology base into program-specific efforts, and verifying that designs meet operational needs. It is a life cycle activity that demands a concurrent approach to both product and process development.

Systems engineering encourages the use of modeling and simulation to validate assumptions or theories about systems and the interactions within them. Methods that allow early detection of potential failures are integrated into the design process.


At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that decisions made today will still be valid when a system goes into service months, years, or even decades after it is first conceived, but there are techniques that enable anticipation of, and corrective measures for, the challenges envisaged in the future.

2.2.1 System Dynamics via Simulation Modeling

System dynamics is a computer-based simulation modeling methodology developed at the Massachusetts Institute of Technology (MIT) in the 1950s as a tool for managers to analyze complex problems. Using system dynamics simulations enables one to observe not only events, but also patterns of behavior over time. The behavior of a system often arises out of the structure of the system itself, and behavior usually changes over time. Understanding patterns of behavior, instead of focusing on day-to-day events, can offer a radical change in perspective. It shows how a system's structure is the cause of its successes and failures. System dynamics simulations are good at communicating not just what might happen, but also why. Providing managers with more and more information is not necessarily the correct solution, since too much detail or complexity might do more harm than good.

The behavior of a system is a consequence of its structure. Therefore, the only real changes we can make to a system are changes to the structure. Other changes to the system will soon be canceled out through the actions of negative feedback loops. This structure is represented by a series of causally linked relationships, implying that decisions made within an organization have envisaged or unenvisaged consequences; some of these consequences will be visible immediately, while others might not be revealed for several years.

The primary assumption of the system dynamics paradigm is that the persistent dynamic tendencies of any complex system arise from its internal causal structure—from the pattern of physical constraints and social goals, rewards, and pressures that cause people to behave the way they do and to generate cumulatively the dominant dynamic tendencies of the total system. A system dynamicist is likely to look for explanations of recurring long-term social problems within this internal structure rather than in external disturbances, small maladjustments, or random events.

The central concept that system dynamicists use to understand system structure is the idea of two-way causation or feedback. It is assumed that social or individual decisions are made on the basis of information about the state of the system or environment surrounding the decision-makers. The decisions lead to actions that are intended to change (or maintain) the state of the system. New information about the system state then produces further decisions and changes. Each such closed chain of causal relationships forms a feedback loop. System dynamics models are made up of many such loops linked together. They are basically closed-system representations; most of the variables occur in feedback relationships and are endogenous. When some factor is believed to influence the system from the outside without being influenced itself, however, it is represented as an exogenous variable in the model.
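To make the feedback-loop idea concrete, the following minimal sketch simulates a single stock (an inventory level) regulated by a negative feedback loop that compares it with a desired level. It is written in Python; the variable names and parameter values are invented for illustration and are not taken from any particular system dynamics tool.

# A single stock (inventory) regulated by a negative feedback loop that
# compares the stock with a desired level; names and values are illustrative.

def simulate(steps=20, dt=1.0):
    inventory = 100.0          # the stock, i.e., the state of the system
    desired_inventory = 150.0  # the goal pursued by the feedback loop
    adjustment_time = 4.0      # how quickly the gap is closed
    demand = 10.0              # exogenous outflow (customer demand)

    history = []
    for t in range(steps):
        gap = desired_inventory - inventory       # information about the system state
        orders = demand + gap / adjustment_time   # decision rule (negative feedback)
        inventory += (orders - demand) * dt       # accumulation: inflow minus outflow
        history.append((t, round(inventory, 1)))
    return history

for t, level in simulate():
    print("t=%2d  inventory=%.1f" % (t, level))

Running the sketch shows the inventory closing the gap toward the desired level ever more slowly; the behavior follows entirely from the loop structure rather than from any external event, which is precisely the point made above.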
2.2.2 Changeable Systems

There is an ever-increasing need to develop and produce systems that are robust, reliable, of high quality, supportable, and cost-effective from a total-life-cycle perspective, and that are responsive to the needs of the customer/user in a satisfactory manner. Further, future systems must be designed with an open-architecture approach in mind in order to facilitate the incorporation of quick configuration changes and new technology insertions, and to be able to respond to system interoperability requirements on an expedited basis. Several trends are driving these requirements:

1. Increasing complexity: Systems are more complex. Today's systems involve many science and engineering disciplines. New technologies (including information, biotechnology, and nanotechnology) create new opportunities and challenges. Interfaces are increasingly more complex and system integration is more difficult. To emphasize this complexity, new terms have been used, for example, systems of systems, system architectures, and enterprise systems. Table 2.1 illustrates the complexity of problems across the dimensions of systems decision problems.
2. More dynamic: Systems interact with their environment and the needs of stakeholders evolve in concert with this interaction. Rapid changes in the environment require systems to be dynamic to continue to provide value to consumers of products and services.

TABLE 2.1
Complexity of Problems across Dimensions of Systems Decision Problems
Each problem dimension is rated Low (technical problem), Medium (complex problem), or High (wicked problem).

Boundary type. Low: isolated, defined; similar to solved problems. Medium: interconnected, defined; several unique features and new constraints will occur over time. High: no defined boundary; unique or unprecedented.

Stakeholders. Low: few homogeneous stakeholders. Medium: multiple, with different and/or conflicting views and interests. High: hostile or alienated stakeholders with mutually exclusive interests.

Challenges. Low: technology application and natural environment requirements. Medium: new technology development, natural environment, adaptive adversaries. High: no known technology, hostile natural environment, constant threats.

Parameters. Low: stable and predictable. Medium: parameter prediction difficult or unknown. High: unstable or unpredictable.

Use of experiments. Low: multiple low-risk experiments possible. Medium: modeling and simulation can be used to perform experiments. High: multiple experiments not possible.

Alternative solutions. Low: limited set. Medium: large number are possible. High: no bounded set.

Solutions. Low: single optimal and testable solution. Medium: good solutions can be identified and evaluated objectively and subjectively. High: no optimal or objectively testable solution.

Resources. Low: reasonable and predictable. Medium: large and dynamic. High: not sustainable within existing constraints.

End state. Low: optimal solution clearly defined. Medium: good solutions can be implemented but additional needs arise from dynamic needs. High: no clear stopping point.


3. Increasing security concerns: Many systems face increasing security challenges due to threats from malicious adversaries ranging from hackers to terrorists. Information assurance, which is the activity of protecting data and its flow across communication networks, is a major concern of system developers and users. In a similar fashion, physical security is an important design criterion for many systems as well.
4. Increasing privacy concerns: As systems become more complex, more interconnected, and face more security challenges, the potential for privacy violations increases. The protection of personal information in systems is now a major system challenge.
5. Increasing interconnectedness: The Internet and advances in information technology have led to business-to-business collaboration and a global economy enabled by pervasive interconnectedness. Anyone can start a global business by establishing a website. Systems now have an international supply chain for electronic components; and, increasingly, hardware development, software development, component production, and services are being done globally.
6. Many stakeholders: The increasing complexity and interconnectedness contribute to the increase in the number of stakeholders involved in the system life cycle. In addition to considering the perspectives of scientists, engineers, and engineering managers, systems engineers must consider the perspectives of functional managers (production, sales, marketing, finance, etc.), regulators, professional organizations, legal, environmentalists, government, community groups, and international groups, to name just a few of the many stakeholders with vested interests in the system.

2.3 Systems Architecting

Despite ever more sophisticated integrated circuitry and single-chip processors, hardware is not usually the source of systemic complexity and cost. The cost of software surpassed the cost of hardware long ago. Often it is the software that is the source of design and development problems: it is difficult to design, construct, and maintain in a timely and cost-effective way. Systems engineering approaches and techniques have their origins in traditional hardware design methods and tend to abstract away systemic software design. Software design emerged in parallel with systems engineering, but focused on detailed code design concerns. However, what we see in terms of static structures like code is different from what we see when the code is compiled and is executing. This ethereal nature of software is why it is so difficult to design and construct software-intensive systems. Engineers and architects are constantly striving to find techniques, methods, and abstractions to design and analyze complex software-intensive systems, including the systemic software that enables the system to do what it is specified to do.


Traditional systems engineering approaches are viewed by many as inadequate for designing modern IT systems. Enterprise architecture merges many of the traditional systems engineering concepts with software design and engineering concepts. However, enterprise architectures tend to focus on business process engineering.

2.3.1 Systems Architecture

It is comparatively easy to build small stand-alone applications, with a few stakeholders and business concerns, with no architecture and very little detailed design; however, this approach does not scale very well. It is intellectually difficult to build large systems directly by focusing on detailed structures without an architectural design to provide a roadmap for detailed designers and implementers. Building large, complex software-intensive systems with many competing stakeholders and business concerns is a very different proposition, requiring layers of coordinated design abstraction to guide implementation. It is very difficult to reason about and satisfy broad systemic properties in systems with many developers, many customers, many users, and other stakeholders without architectural designs to bridge the gap between system requirements and detailed software designs.

Architecture design is the place where engineers turn the corner from the requirements space to the design space. In terms of software design artifacts, architecture typically refers to coarse-grained designs that describe the gross partitioning of a system into some collection of elements that can be code-oriented, runtime, or physical structures. Architecture provides a means to partition the system into elements that can later be designed in detail. The architecture can be scrutinized and studied to expose weaknesses before detailed element design and implementation. Finally, the architecture can be used to guide overall construction by serving as a prescription for how the elements can be assembled, resulting in a system with predictable properties and behavior.

Systemic properties cannot be achieved through detailed-level design because they require broad coordination across most or all system elements. In cases where detailed code structures (e.g., objects, classes, functions, etc.) are designed first, the resulting system structure is large and flat, with numerous dependencies between parts of the system that are not well understood. The overall systemic properties that emerge as a result of numerous software engineers designing small pieces of the system, without the framework of structure provided by an architecture design, will not be well understood until the system is implemented.

Architectural design differs from detailed software design in terms of the concerns addressed. Table 2.2 presents a comparison of architectural and detailed design. Designing the system's architecture is a critical first step in system construction because the architecture is used to ensure that the overall system objectives are being met. The system architecture frames the detailed design and construction of the parts that will make up the system and can help identify potential functional and quality attribute issues. Through the architecture design, these issues can be addressed early in the development process, minimizing downstream production cost and maximizing overall product quality. Systems built without deliberately designed systems architectures will possess emergent properties that will not be well understood because they were not designed into the system.
Properties such as performance, availability, scalability, modifiability, security, and so forth must be designed into the system to meet the needs of the stakeholders who will utilize, buy, and maintain the system. If not designed into the system, understanding and fixing systemic shortcomings in these properties is often problematic, and in some cases impossible to remedy. Architectural design is required to frame the detailed design. Many broad systemic properties such as these cannot be retrofitted into the system after implementation because they are broad, crosscutting concerns; these properties must be designed into the system from the very beginning.

TABLE 2.2
Comparison of Architectural and Detailed Design

1. Architectural design addresses the overall properties of a system and all of its elements, such as performance, availability, scalability, modifiability, security, and others, in addition to general functionality. Detailed designs are concerned with specific computational properties and functionality provided by individual elements.
2. Architectural designs address the partitioning of the system into parts or elements and the interaction among the elements. Architects focus on the external properties of elements, the ensemble of elements, and system structure. Detailed designs address the implementation details of the part. Detailed designers focus on the internals of elements, and the structures and algorithms utilized within an element.
3. Architectural design is declarative. Architects partition, design, and document system elements based largely on intuition and experience because an all-encompassing standardized, formal architectural specification language is not available. Architectural design principles guide the intuition of the architects and frame (or constrain) the work of detailed designers. Detailed designs are executable in nature in that they are meant to be translated directly into code.

Systems design is the process of defining the architecture, components, interfaces, and other characteristics of a system or component of a system. Here, component refers to a physical part or element that is a member of the system; interface refers to the boundary of a component and the point of connection between components; and other characteristics refers to the functional behavior of the system as well as the broad systemic properties it possesses, such as performance, availability, scalability, modifiability, and so forth. There are many specific systems engineering approaches for translating a customer's need arising from a specific mission or business objective into an architectural design. Systems architecture is concerned with partitioning the system into components and identifying their responsibilities and rules for interaction to ensure that functional and quality attribute requirements are met.
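To make these terms concrete, the sketch below shows one way a component and its interface could be expressed in code. It is an illustrative Python sketch with hypothetical names (OrderServiceInterface, OrderServiceComponent); the chapter itself does not prescribe any particular notation or language.

# Illustrative only: a component exposes an explicit interface (its boundary),
# and the architecture is the set of components plus their rules of interaction.
from abc import ABC, abstractmethod

class OrderServiceInterface(ABC):
    """The interface: the boundary and point of connection between components."""
    @abstractmethod
    def place_order(self, customer_id, items):
        ...

class OrderServiceComponent(OrderServiceInterface):
    """The component: a part of the system that realizes the interface."""
    def place_order(self, customer_id, items):
        # The algorithms and data structures used here are detailed-design
        # concerns hidden behind the component's boundary.
        return "order for %s with %d item(s) accepted" % (customer_id, len(items))

def checkout(service):
    # An architectural rule of interaction: clients depend only on the interface.
    print(service.place_order("C-001", ["book", "pen"]))

checkout(OrderServiceComponent())

The point of the sketch is only the separation of concerns: clients see the interface (the boundary), while the component's internals remain a matter of detailed design.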


Most modern system architectural design methodologies prescribe designing a system using hierarchical decomposition, by first decomposing the system into components. These methods generally focus on functionality and use functional requirements to guide the decomposition. This process is recursively repeated on each component until off-the-shelf or easily designed and constructed components are all that remain. Once the elementary components of a system are defined, the detailed interfaces for each component can be defined and the appropriate engineer (or organization) for each component can proceed with the detailed design, implementation, and test of the functional element. In principle, constructing the system is then accomplished by integrating the lowest-level components one level of abstraction at a time. Each level of decomposition becomes a level of construction and integration where the results of the previous level are verified and validated. Therefore, a key tenet of systems engineering is that bottom-up integration is only possible with sound top-down design that begins with a robust systems architecture.

Architectural design is not a state of being but of becoming—it is a process. In the construction of software-intensive systems there are many different kinds of stakeholders whose wants, needs, and expectations of the system will influence the design of the architecture. Focusing on users as the only stakeholder can cause an overemphasis on functionality as the prime driver of system structure, which may not be the best basis for systemic development.

Architectural designs can provide essential insight for engineers when designing software-intensive systems. Architectural designs identify the parts of the system, enterprise, or software, and how they will interact to render a service. Design representations can be used to analyze the overall behavioral (functional) and nonbehavioral (nonfunctional) properties of systems, enterprises, or software. Behavioral properties include the essential functional requirements, whereas nonbehavioral properties include performance, modifiability, interoperability, and security, among others. These nonbehavioral properties are also referred to as quality attributes. Designers of systems, enterprises, and software must continually make trade-offs between behavioral and quality attribute requirements to achieve a balance acceptable to the stakeholder community. For example, some architectural design decisions might promote data throughput (performance) but undermine the ability to change the system (modifiability). Architectures allow designers to identify and reason about these trade-offs early in the development so these properties can be designed into the system, rather than fixing, tuning, or refactoring after development.

Architectural requirements are not all of the requirements—they are those requirements that will influence the architecture most as it initially decomposes the system and selects the fundamental structures that will form it. Although these initial design choices are relatively easy to make, they are difficult to get right; they are binding decisions and come with long-term impact. Selecting the correct structures is a critical first step in establishing systemic designs that satisfy functionality and the broader nonfunctional properties required of a system. The architectural design forms the scaffolding for the downstream detailed designers and implementers.
Without clearly defining the architectural requirements before initial design begins, it is nearly impossible to get the architectural design right.

2.3.1.1 Functional Architectural Requirements

Functional architectural requirements have the least influence on design. High-level functional requirements are best described with use cases. Although too cumbersome for modeling detailed requirements, use cases are excellent for discovering, analyzing, and documenting the functional requirements necessary for designing the architecture.
Use cases are not models for functional decomposition, nor should designers use them to describe how the system provides services. Use case models describe what is needed in a system in terms of functional responses to given stimuli. A use case is initiated by an entity, and then goes on to describe a sequence of interactions between the entity (or entities) and the system that, when taken together, model systemic functional requirements. Use cases may also include variants of the normal operation that describe error occurrences, detection, handling and recovery, failure modes, or other alternative behaviors.

Traditional use cases describe functional requirements in terms of actors and their interactions with the system. Each use case defines a set of interactions between actors and the system we intend to design. Use cases model system interaction with actors at a coarse-grained level of abstraction. This can help in establishing the scope of the project and in guiding architects through the critical initial decomposition and structuring of the system. Because use cases were popularized by object-oriented analysis and design methods (specifically the Unified Modeling Language (UML)), they may be thought to be relevant only when using object-oriented methods and languages. However, use cases are not inherently an object-oriented modeling technique. In use cases, the system is treated as a black box and the use case describes the requirement—what is needed—not how the system delivers the services.

Use cases can also help establish system context or scope, that is, what is inside the design space and what is outside of the design space. While functionality has less influence on structure, establishing a clear context is an essential first step in design. Unclear or poorly defined context can lead to severe design problems. Use case models serve as a communication vehicle and encourage dialog between technical and nontechnical stakeholders.

2.3.1.2 Nonfunctional Architectural Requirements

Nonfunctional architectural requirements will have the most influence over the design. While functional requirements describe what the system must do, quality attribute requirements describe how the system must do it; however, both parts are required for a full understanding of the nonfunctional requirement. Any given nonfunctional requirement can be associated with any number of operational elements. Together they provide a fuller understanding of the requirement in terms of what must be done functionally and how it must be done in terms of a nonfunctional response. Thus, to guide design choices and measure the fitness of the design, quality attribute requirements must be described with respect to some operational context. To do this we will use nonfunctional scenarios to define more completely the quality attribute properties a system must possess. Nonfunctional scenarios describe a requirement in terms of:

• Stimulus: The stimulus is the condition affecting the architecture. This can be an event, a user request for data, the initiation of service, or a proposed change to the system.
• Source(s) of the stimulus: This is the entity (human, organizational, or technological) that is the source of the stimulus described above. There can be one or more sources.
• Relevant environmental conditions: These are the conditions present in the system's operational environment during the receipt of the stimulus. Relevant environmental conditions can be diverse and will depend upon the stimulus, but examples might include "during runtime execution," "during initial development," "after deployment," "during peak load," "while seven hundred users are logged in," and so forth.
• Architectural element(s): This is the element or elements of the architecture that are directly or indirectly affected by the stimulus. In early requirements gathering, when quality attribute requirements are initially developed, the artifact is probably not known. However, after architectural design has commenced and is successively refined, the architectural element information should be added to the nonfunctional requirements information.
• System response: A description of how the system's stakeholders would like the architecture/system to respond to the stimuli.
• Response measure: A measure of how the system responds. The kind of response measure listed will depend upon the stimuli. For change/modification stimuli, we might have response measures that measure the cost of change in terms of time, manpower, cost, and so forth. For a performance stimulus, we might have response measures in terms of throughput, response time, and so forth.
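The six parts of such a scenario can be recorded in a simple structure so that scenarios are captured uniformly and reviewed against the design. The following is a minimal Python sketch; the field values shown are invented examples of a performance scenario, not requirements drawn from the text.

# A minimal record for the six-part nonfunctional (quality attribute) scenario
# described above; the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    stimulus: str
    source: str                 # human, organizational, or technological source(s)
    environment: str            # relevant environmental conditions
    architectural_element: str  # may be unknown early in requirements gathering
    response: str
    response_measure: str

performance_scenario = QualityAttributeScenario(
    stimulus="1,000 concurrent order submissions",
    source="external customers via the web channel",
    environment="during peak load",
    architectural_element="order-processing service (to be refined during design)",
    response="all submissions are accepted and queued for processing",
    response_measure="95% of submissions acknowledged within 2 seconds",
)
print(performance_scenario)

A collection of such records, one per quality attribute concern, can then be walked through during architecture evaluation.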

The architectural design is critical to balancing these kinds of nonfunctional concerns before detailed design, implementation, or investing in upgrades to a software-intensive system. Compromises are made in design when balancing systemic nonfunctional properties. However, it is impossible to strike the optimal design balance if the quality attribute properties are poorly articulated, poorly understood, or remain unstated. If architects are not informed of the exact nonfunctional requirements, they will rely on intuition, experience, or simply guess when making architectural choices to promote or inhibit various quality attribute properties in the system.

2.3.2 Enterprise Architecture

Enterprise architecture concepts evolved from the systems engineering community but specialized to address the specific design concerns of very large, highly distributed IT systems and the organizations dependent upon them. Enterprise architecture frameworks are essentially design methodologies focusing on business modeling, business processes, application functionality, and the technological infrastructure that supports the enterprise. Many of the concepts embraced by the enterprise architecture community emerged from the commercial information systems development that was taking place at IBM in the 1970s for large, distributed, business-oriented applications. These roots have given enterprise architectures a decidedly business- and IT-centric flavor. John Zachman, an employee of IBM, was a key contributor to IBM's information planning methodology called business systems planning. He applied concepts from traditional building architectures to the design and construction of business enterprises and the computer systems that empowered them. Zachman coined the term enterprise architecture and created the Zachman Framework for defining enterprise architectures.

The term framework is used to describe a prescribed set of steps and artifacts that are created in the course of designing an enterprise system or system of systems. Enterprise architecture frameworks (EAFs) embody design strategies and provide step-by-step guidance, and even templates, for designing and documenting the enterprise. EAFs prescribe a set of artifacts for specific enterprise stakeholders. Using the EAF, the enterprise architect creates various artifacts intended to be views of the system from the perspective of the enterprise stakeholders.
Most enterprise architecture frameworks identify a number of concerns that will guide the design of enterprises, such as:

• Business requirements: The business needs of the organization.
• Stakeholders: Those who will interact with the enterprise in some way.
• Business processes: A series of activities leading to the achievement of a measurable business result.
• Environment: Those conditions in which the enterprise must operate.
• Software: The standard software suite that is closely coupled to the infrastructure, such as operating systems, drivers, database systems, and so forth.
• Data: High-level data designs that describe the structure of an enterprise's data needs in terms of entities and relationships between entities.
• Infrastructure: The enterprise's general IT assets, such as networks, hardware, computing systems, routers, and so forth.

Today there are many different EAFs for designing and constructing enterprise architectures:

1. The Zachman Enterprise Architecture Framework
2. Federal Enterprise Architecture Framework (FEAF)
3. Treasury Enterprise Architecture Framework (TEAF)
4. Popkin Enterprise Architecture Framework
5. Extended Enterprise Architecture
6. The Open Group Architecture Framework (TOGAF)
7. Department of Defense Architecture Framework (DoDAF)

Key objectives of an EAF are to model the enterprise's networks, databases, middleware, security, fault handling, transaction processing, and connectivity to the Internet so that customers can access services. The purpose of an EAF is to help enterprise architects manage the complexity of highly distributed systems of systems by providing techniques and methods to identify key stakeholders and their roles in the enterprise, discover relationships between the various entities of the enterprise, and in some cases map business processes to the IT infrastructure. Most EAFs provide comprehensive documentation guidelines, templates, and frameworks for documenting the enterprise architecture.

2.3.2.1 Business Architecture

The business architecture results from the implementation of business strategies and the definition of processes. This architecture dictates the functional requirements of the business processes that determine the information systems that will operationally support the business. The core concept within the business architecture is the business process. A business process is a set of value-adding activities that operates over input entities, producing output entities. These activities are either orchestrated by a central controlling entity or choreographed—the actual coordination mechanism is only relevant while detailing how the process is enacted.


Although an organization always comprises multiple sets of coordinated activities, each may or may not be classified as an actual business process. What distinguishes an arbitrary set of coordinated activities from a business process is the fact that the process necessarily adds value to a customer, whether internal or external to the organization.

An activity is performed during a specific period. As a precondition for its enactment, all of the business roles must be fulfilled by specific entities. These entities will be engaged in playing their roles for the duration of the activity. The activity postcondition is that all of the roles have been completed by the end of the specified period. An activity describes the business roles required of the organizational entities for its operation. These roles include:

1. Actor role: An activity requires one actor, or a combination or team of actors, to be executed. The actor represents a person, a machine or device, or an information system. An actor provides the services required for fulfilling the business role required by the activity.
2. Resource role: A resource is used as input or output of an activity during its operation. A resource is usually created, used, transformed, or consumed during the operation of the activity.
3. Observable state role: An observable state is a specific resource role that is used as a means to observe the status of an activity.

The major components for describing the business architecture are as follows:

• Business strategy: Key business requirements, processes, goals, strategies, key performance indicators (KPIs), business risks, and the business-operating model
• Business function: Key business services, processes, and capabilities that will be affected by the EA effort
• Business organization: The high-level nature of organizational structures, business roles (internal audiences, external customers and partners), the decision-making process, and organizational budget information
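To illustrate how the three business roles of an activity could be captured for analysis, here is a small Python sketch; the class layout and the invoice-approval example are hypothetical and are not part of any formal business architecture notation.

# Illustrative encoding of an activity with its actor, resource, and
# observable-state roles; names and values are assumptions for the example.
from dataclasses import dataclass
from typing import List

@dataclass
class Actor:
    name: str  # a person, a machine or device, or an information system

@dataclass
class Resource:
    name: str

@dataclass
class Activity:
    name: str
    actors: List[Actor]       # actor role
    inputs: List[Resource]    # resource role (inputs)
    outputs: List[Resource]   # resource role (outputs)
    observable_state: str = "not started"   # observable state role

approve_invoice = Activity(
    name="Approve supplier invoice",
    actors=[Actor("Accounts clerk"), Actor("ERP workflow engine")],
    inputs=[Resource("Supplier invoice")],
    outputs=[Resource("Approved invoice")],
)
approve_invoice.observable_state = "completed"
print(approve_invoice.name, "-", approve_invoice.observable_state)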

2.3.2.2 Information Architecture

The information architecture describes what the organization needs to know to run its processes and operations as described in the business architecture. It is an abstraction of the information requirements of the organization, and provides a high-level logical representation of all the key information elements that are used in the business, as well as the relationships between them. It defines a view on the business information that is independent of the application and technology architectures.

Business information is structured as a collection of informational entities. Entities describe various resources required by processes, including business, support, and management processes. An entity can result from the composition or specialization of other entities in the object-oriented sense. Entities have an identifier defined from a business perspective along with an associated set of roles, each with a related set of attributes; each role integrates its set of attributes into the entity. Thus, every entity has an overall set of attributes that results from the summation of the attributes derived from each role the entity is able to play.


The principal components for describing the information architecture are as follows:

• Information strategy: Information architecture principles, information governance and compliance requirements, canonical data models, industry data model support strategy, and dissemination patterns and reference models
• Information assets: A catalog of critical business data types and models (such as customer profile, purchase order, product data, and supply chain), relationships between such business data types, and all the processes and services that interact with these data

2.3.2.3 Application Architecture

The application architecture fulfills two major goals—supporting the business requirements, and allowing efficient management of the organization's entities. To satisfy these goals, the application architecture should be derived top-down from the analysis of the business and information architectures.

The application architecture defines the applications required to enable the business architecture. This includes identifying how the applications interact with each other, how they will interact with other business integration elements, and how the application and data will be distributed throughout the organization. It typically includes descriptions of the automated services that support the business processes, of the interactions and interdependencies between an organization's application systems, of plans for developing new applications, and of the revision of old applications based on the enterprise's objectives.

The architecture of a business process support system is described using a structure of information system (IS) blocks. An IS block is an organized collection of services, mechanisms, and operations designed to handle organization information. Each block may state several attributes, such as availability, scalability (ability to scale up performance), and profile-based access (ability to identify who does what). The application architecture defines the applications needed for data management and business support, regardless of the actual software used to implement systems. It functionally defines what application services are required to ensure that entities and processes are supported in acceptable time, format, and cost.

A service is the aggregation of a set of operations provided by an architectural block. It can be seen as a generalization of the concept of Web services. Services are of three types:

1. Business service: A set of operations provided by IS blocks supporting business processes.
2. IS service: A set of operations provided by an IS block to other IS blocks. This is used to aggregate multiple IS blocks.
3. IT service: A set of technological services provided by specific application platforms.

The principal components for describing the application architecture are as follows:


• Application strategy: The key application architecture principles (build vs. buy, hosted vs. in-house, open source vs. commercial grade, open standards vs. .NET, etc.), application governance and portfolio management, and a set of reference application architectures relevant to the customer
• Application processes: A series of application-specific processes that support the business processes in the business architecture
• Application services: An inventory of the key application services exposed to internal and external applications that support the business services
• Logical components: An inventory of relevant product-agnostic enterprise application systems that are relevant to the stated business objectives
• Physical components: Actual products that support the logical application components and their relationships to relevant components and services in the information and technology architectures

The granularity of abstraction required by an enterprise depends upon factors such as domain, scope, responsibilities, design and construction roles, and so on. Increasingly detailed abstractions, based on a frame of reference established by a higher abstraction, constrain the downstream architect: detailed designers and implementers would be maximally constrained at the finest granularity of abstraction.

2.3.2.4 Technical Architecture

The technological architecture represents the technologies behind application implementation as well as the infrastructure and environment required for the deployment of the business process support systems. The technological architecture addresses a large number of concepts since it must cope simultaneously with continuous technological evolution and the need to provide different specialized technological perspectives, such as those centered on security and hardware. These concepts are abstracted as an information technology (IT) block. An IT block is the infrastructure, application platform, and technological or software component that realizes or implements a set of IS blocks.

The principal components for describing the technology architecture are as follows:

• Technology strategy: It comprises technology architecture principles; technology asset governance methodology; portfolio management strategy; and technology standards, patterns, and reference architectures. These assets and artifacts go a long way in strengthening and sustaining technology-driven business solutions.
• Technology services: An inventory of specific technology services and their relationships, and the business services, application services, information assets, and logical or physical technology components that realize such services.
• Logical components: The product-agnostic components that exist at the technology infrastructure tier to support each technology service.
• Physical components: The set of technology products that exists behind each logical technology component to implement the technology service.


2.4 Enterprise Processes

Enterprise processes are the business structures that make up the enterprise. An enterprise might be composed of customer service, inventory, shipping, and production organizations. Business processes define how these entities interact, and identifying the enterprise business processes is the primary aim of most EAFs. A business process is a description of the dynamic interaction of stakeholders and the flow of information between the various entities that compose the enterprise. Business processes drive the analysis and design of the enterprise architecture and are used to identify organizations, pertinent stakeholders, systems, data, and other entities relevant to the enterprise. In most enterprise methodologies, business processes are directly implemented or supported by IT infrastructures and systems. Processes are a means for identifying, documenting, and analyzing complex networks of human interactions with organizations and the IT systems they use to provide services, communicate, and generally conduct business operations.

For instance, consider an organization whose business model is to sell products to other businesses via the Web, track inventory and shipping, manage customer relations, and so forth. This business model could be distilled into various business processes that describe the activities of the organization. Business processes might describe the dynamic aspects of

• how a customer's order is processed
• how the product is manufactured
• how the inventory is updated
• how quickly the product is delivered to the customer
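As a toy illustration of how such a process might be written down for analysis, the following Python sketch lists the steps of a hypothetical order-to-delivery process together with the enterprise entity responsible for each; the steps and entity names are invented and not prescribed by any particular EAF.

# Each step of a hypothetical order-to-delivery process, with the enterprise
# entity responsible for it; purely illustrative.
ORDER_PROCESS = [
    ("Receive and validate the customer order", "Customer service"),
    ("Check and reserve stock",                 "Inventory"),
    ("Manufacture the product if out of stock", "Production"),
    ("Update the inventory records",            "Inventory"),
    ("Pick, pack, and ship the order",          "Shipping"),
    ("Confirm delivery to the customer",        "Customer service"),
]

for step, (activity, owner) in enumerate(ORDER_PROCESS, start=1):
    print("%d. %s  [%s]" % (step, activity, owner))

Even this trivial listing makes visible which organizations, stakeholders, and systems the process touches, which is exactly what business processes are used for in enterprise analysis.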

2.5 Summary

Systems thinking views the enterprise as a whole and assesses the system properties to try to understand the system behavior. This chapter reviewed the basic concepts of systems thinking, systems science, systems engineering, and systems architecting. Knowing this basic information about systems helps in understanding the origin of the significance of enterprise architecture and its constituent business architecture, information architecture, application architecture, and technical architecture. This also provides the context for the significance of business processes in contemporary enterprises. Many of the current paradigms for enterprise improvement adhere to a systems view, including lean enterprise systems, total quality management, and supply chain management.

3 Digital Transformation of Enterprises

An enterprise is not only expected to be effective and efficient but should also be able to adapt to the frequent changes in the market environment driven by technology, regulation, and competition—in other words, it should be agile. Enterprise agility has become even more important in these times of globalization, particularly in periods of continuous organizational change that are often caused by an increasing pace of innovation, collaboration with other organizations, new challenges in the market, mergers and acquisitions, societal changes, and/or technology advancements. The enterprises that can best respond to the fast- and frequently changing markets will have better competitive advantages than those that fail to sustain the pace dictated by the process of globalization. This can be realized through enterprises acquiring better control and efficiency in their ability to manage the changes in their enterprise processes.

3.1 Digital Transformation of Enterprises

In the past few years, new firms have grown from start-ups to international multibillion-dollar companies, with much of their success credited to their innovative business models and smart use of the digital environment. Customers have begun to expect services to perform at a level comparable to the front-runners among digitalized companies. Firms increasingly need to maintain, on an ongoing basis, a variety of innovation efforts to operate more efficiently, as well as to continuously strive to deliver greater and greater value to their customers. To stay competitive, this has led to an increased focus on keeping the firm's business model current and in line with the leading digitalized companies, in other words, on the digital transformation of enterprises; this involves the digital transformation of business models, architectures, and processes.

Digitization refers to the process of converting or switching from analog to digital format. Digitalization refers to the waves of change that empower society through the increased utilization of digital technology. Digital transformation refers to the adaptations that firms make in order to thrive in an increasingly digital world; it incorporates the effects that digitalization has upon businesses. Digital transformation can be accomplished through the following:

1. Business model innovation along the dimensions of value propositions, customer segments, channels, customer relationships, key activities, key resources, key partners, cost structure, and revenue streams (see Section 3.2)
2. Enterprise architecture evolution along the dimensions of integration and interoperability, availability and scalability, performance and reliability, and access and security (see Section 3.3)


3. Enterprise process improvement programs, ranging from disruptive programs like business process reengineering to continuous improvement programs like Lean, Six Sigma, and the Theory of Constraints (see Chapter 15, Kale 2018)

3.2 Business Model Innovation

An enterprise's business model reflects the design of the value creation, delivery, and capture mechanisms that a firm deploys: it is about how a firm has organized itself to create and deliver value in a profitable manner. In other words, a firm's behavior reflects its management's hypothesis about what customers want, how they want it, and how an enterprise can organize to best meet those needs, get paid for doing so, and make a profit.

Two companies competing in the same industry may deploy two different business models. For example, 7-Eleven and Tesco Lotus have completely different business models. 7-Eleven makes profit from selling fewer items to customers with frequent visits. Its profit is driven by a smaller number of shelf items, each of which has a high inventory turn, and the products are easy to carry and consume. It chooses to spread its chain of stores throughout the city and sells franchises to building owners located in high-pedestrian-traffic areas. Tesco Lotus realizes its profit from selling a broad range of products with different inventory turns. The company builds large-scale malls to attract infrequent shoppers who spend more during each store visit. The two companies have different store designs, supply-chain configurations, and ownership structures. Because the two companies target different customer segments with different value propositions, they consequently deploy two distinct business models.

3.2.1 Business Model

As a conceptual tool, a business model is a simplified representation of how the business of any company works. For achieving this, it contains elements describing different parts of a business and their relationships to each other: it states the business logic of a firm. With the business model concept we can depict the following:

• which kind of value is offered to which customers
• which kinds of resources, activities, and partners are needed
• how the value offered to customers is captured and transferred back to the company through revenues

3.2.1.1 Osterwalder and Pigneur's Business Model Canvas

The business model canvas is an easy-to-use tool for business model development; it proposes a business model framework consisting of nine elements grouped in four pillars (Osterwalder 2004, Osterwalder and Pigneur 2010):

1. Product pillar describes the business of a company, its offerings, and the value propositions it offers to the market.
   a. Value proposition describes an overall view of a company's bundle of products and services that represent value for a specific customer segment, and is packaged and offered to fulfill customer needs. Moreover, it describes how a company differentiates itself from its competitors.


2. Customer interface pillar identifies who a company's target customers are, how it delivers products and services to them, and how it builds strong relationships with them; in particular, it specifies how a company goes to the market, reaches its customers, and interacts with them.
   b. Customer segments define the types of customers a company wants to offer value to. A company selects its target customers by segmenting the potential customer base according to different criteria, techniques, or approaches.
   c. Channels describe the means of getting in touch with the customers. They are the conduit or connections between a company's value proposition and its target customers. Channels include the communication, distribution, and sales channels delivering value propositions to customers. The sales channels can be further divided into company-owned direct and indirect sales channels, and partner-owned indirect sales channels.
   d. Customer relationships describe the kinds of links a company establishes between itself and its customers. All the interactions between a company and its customers determine the strength of the relationships. As interactions come with cost, a company carefully defines the kinds of relationships it wants to maintain and the kinds of customers with which it wants to establish these relationships.
3. Infrastructure management pillar addresses the question of how a company executes its business and creates value. It describes how a company efficiently handles infrastructural and logistical issues, with whom, and as what kind of network enterprise or business (Kale 2017). Thus, it defines what capabilities and capacities the company needs to provide its value proposition and maintain its customer interface. It specifies the capabilities and resources needed in a business model and the executors of each activity, as well as their relationships with each other.
   e. Key resources describe the arrangement of assets required to create value for the customer. They are inputs in the value-creation process and sources of the capabilities which a company needs to provide its value propositions. Resources can be categorized into physical, intellectual, human, and financial resources.
   f. Key activities describe the actions a company performs to realize the business model. The business model framework helps in configuring key activities both inside and outside of a company. This is done in accordance with the logic of the company's value chain or value network.
   g. Key partnerships describe the network of suppliers and partners needed to realize the business model. It is a result of cooperative agreements for outsourcing activities and acquiring resources from outside the company.
4. Financial aspects pillar defines the company's revenue model and cost structure, resulting in profitability. It defines a business model's economic sustainability in terms of the configuration of all the other elements.
   h. Revenue streams describe how a company makes money through a variety of revenue flows resulting from value propositions offered profitably to customers. Revenue streams result directly from the pricing models chosen by the company.
   i. Cost structure is the representation of the costs resulting from the operation of a business model, which is to create, market, and deliver value to customers. It monetizes the various elements of a business model, that is, it sets a price tag on the other elements of a business model.

Figure 3.1 presents the schematic of a business model canvas.
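Alongside the schematic of Figure 3.1, the nine building blocks can also be captured as a simple structure for workshops or tooling. The Python sketch below fills the canvas for a hypothetical convenience-store chain; every entry is invented for illustration and does not describe any real company.

# The nine building blocks of the business model canvas, filled in for a
# hypothetical convenience-store chain; every entry is invented.
business_model_canvas = {
    "value propositions":     ["Everyday items available nearby, around the clock"],
    "customer segments":      ["Urban commuters making small, frequent purchases"],
    "channels":               ["Franchised neighborhood stores", "Mobile ordering app"],
    "customer relationships": ["Self-service supported by a loyalty programme"],
    "key resources":          ["Store network", "Supply-chain and point-of-sale systems"],
    "key activities":         ["High-turn replenishment", "Franchise support"],
    "key partnerships":       ["Franchisees", "Regional distributors"],
    "revenue streams":        ["Product sales margin", "Franchise fees"],
    "cost structure":         ["Logistics", "Store operations", "IT platform"],
}

for block, entries in business_model_canvas.items():
    print("%s: %s" % (block, "; ".join(entries)))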


FIGURE 3.1 Business model canvas.

3.2.1.2 Design Thinking for Business Model Design

Scientific inquiry means reasoning, questioning, and considering alternatives, thus inducing the deliberative mindset; this is especially so when one considers the classic, philosophical, and truth-oriented perspective on the scientific enterprise. However, the pragmatic view on science reveals scientific inquiry as a paradigm oriented toward envisioning new possibilities resulting from scientific findings. In 1877, C. S. Peirce proposed a pragmatic view of scientific inquiry. Following his notion of human reasoning, he described inquiry as attempts to eliminate problems, doubts, and the dissatisfaction related to them, and to replace them with a satisfactory and calm belief that one can act upon (Hartshorne and Weiss 1931). He effectively criticized the view of science as the ultimate search for fundamental truth. Almost 100 years later, T. S. Kuhn proposed another pragmatic view of science; he stressed the fact that the primary activities of scientists focus on solving particular problems ("puzzle-solving") rather than testing fundamental theories (Kuhn 1970a, 1970b). Peirce's and Kuhn's perspectives on scientific inquiry highlight the problem-solving aspects of inquiry and as such they open pathways for design-oriented approaches to general inquiry, including Design Thinking.

Information Systems (IS) is a specific field of scientific research encompassing the development, use, and application of information systems by individuals, organizations, and society as a whole. IS research strategies are of two types (Hevner and Chatterjee 2010):

1. Behavioral research (BR) is rooted in psychological science and thereby has its roots in the natural sciences. BR seeks to develop and justify theories (i.e., principles and laws) that explain or predict organizational and human phenomena surrounding the analysis, design, implementation, management, and use of IS. Behavioral science starts with a hypothesis; researchers then collect data and either prove the hypothesis right or wrong, and eventually a theory is developed. The behaviorist approach underlies logical positivism, which would not accept a hypothesis as scientific knowledge unless it could be tested through observations (Popper 2002).


2. Design science research (DSR) is construction-oriented: a designer answers questions relevant to human problems via the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. An artifact is a solution made by humans with the intention to solve a problem. Unlike the natural sciences, DSR is fundamentally a problem-solving paradigm, whose final goal is to produce an artifact that must be built and then evaluated. The knowledge generated by this research informs us how a problem can be improved, why the developed artifact is better than existing solutions, and how it can more efficiently solve the problem being addressed.

Table 3.1 shows a comparison of behavioral and design science research strategies. Frameworks like design science research address the problems encountered in the domain of IS. DSR aims to solve a defined problem and meet identified needs through the artifacts produced, and addresses research relevance by the clear linkage of a solution to an identified problem.

TABLE 3.1
Comparison of Behavioral and Design Science Research Strategies

Goal. Behavioral research: description and declaration of the reality with the aid of theories (focus on reality). Design science research: changing the reality by developing and using artifacts (focus on benefits).

Perception of reality. Behavioral research: there exists an ontic reality that is responsible for perceiving a subject (realism). Design science research: there exists an ontic reality which is bound to a subject that creates distortions (relativism).

Knowledge evaluation. Behavioral research: differentiation between knowledge development and application; methodological principles and procedures guarantee knowledge quality (positivism). Design science research: a logical separation between knowledge development and knowledge application is either not possible or not desired; only a few methodological standards; the grade of knowledge is determined by the quality of the argumentation (pragmatism).

Knowledge structure. Behavioral research: it is assumed that socio-technical coherences are explicable by empirical data (describe, explain, and predict) (reductionism). Design science research: data form a basis for constructing an artifact but are not applicable for drawing conclusions within the overall context, called contextual knowledge about the artifact (emergence).

Knowledge development process. Behavioral research: inquiry, evaluation, interpretation, generalization (sequence). Design science research: problem analysis and formulation, development and adaptation of concepts, evaluation and recalibration, synthesis (iteration).

Interaction with the object of research. Behavioral research: actions that have an influence on the object of research should be omitted (observer). Design science research: opportunities for target-oriented modification of the environment are actively used (participant).


DSR aims to solve a defined problem and meet identified needs through the artifacts produced, and it addresses research relevance by clear linkage of a solution to an identified problem. Design science is a good fit where the research focus is on gaining both knowledge and understanding of a defined problem and its solution while creating and applying the artifacts identified (Hevner et al. 2004).

Hevner et al. (2004) provide seven guidelines for good design science research within the IS discipline, based on the fundamental principle that understanding and knowledge of a problem is acquired by building and applying artifacts. These guidelines include the requirements of:

1. Creation of an artifact
2. Usefulness of the artifact for a specified problem or problem domain
3. Ascertaining the usefulness of the artifact by performing a rigorous evaluation
4. Contribution of the artifact through a new and innovative solution to the problem domain
5. Building and evaluating the artifact based on the existing knowledge base in the problem domain
6. Design science being an iterative search process involving a cycle of designing, building, and testing of the solution
7. Communicating the research clearly to both technical and managerial audiences

The design science method (DSM) offers a framework to perform a design science research project in a structured way (Johannesson and Perjons 2014). It consists of five main activities:

1. Explicate Problem: the investigation and analysis of a practical problem. In this activity the problem should be formulated so as to highlight its significance and general interest, and even to explain its underlying causes.
2. Outline Artifact and Define Requirements: the solution to the defined problem is outlined as an artifact, and the problem is transformed into defined requirements for the proposed artifact.
3. Design and Develop Artifact: an artifact is designed and created to solve the explicated problem by fulfilling the requirements that were defined.
4. Demonstrate Artifact: the resulting artifact is used in a scenario to demonstrate or prove that it does solve the explicated problem.
5. Evaluate Artifact: determines how well the artifact fulfills the requirements and how it addresses or solves the practical problem that was identified in the first step (Peffers et al. 2008).

3.2.1.3 Business Model Design and Enterprise Architecture

The main drawbacks of the business model canvas framework relate to its inability to address the context, that is, the environment in which the business model is to be applied. The business environment as a background concept also allows taking into account the special characteristics that the product-service system context sets for business model development. With it, the overall orientation of the company's business logic can be addressed and the business model designed accordingly. Organizational structure and culture, which are closely related to the abovementioned aspects, can be addressed in order to build a better organizational fit for the business model.


Adaptability of business models, responsiveness to market changes, and increasing digitalization are acknowledged factors for competitiveness in globalized markets. Enterprises can be understood as complex and highly integrated systems comprising a set of elements such as goals, processes, organizational units, information, and supporting technologies, with multifaceted interdependencies across their boundaries. Enterprises become more complex in the context of strategic changes, e.g., digitalization of products and services, establishment of highly integrated supply chains with suppliers in global markets, mergers and acquisitions including global interoperability, or increased outsourcing. Economic success can only be achieved when an enterprise is able to handle upcoming challenges and complex changes as fast as possible, without generating new problems and challenges.

As discussed in Section 3.3, enterprise architecture (EA) enables enterprises to maintain the flexibility, cost-effectiveness, and transparency of their infrastructure, information systems, and business processes. EA enables, supports, and manages the alignment of business strategy and IT; it enables coordination of business and IT activities, thereby assisting strategic decisions, accompanying their implementation in collaboration with IT, and enabling informed strategy controlling. In particular, EA determines how IT has to be reconfigured, driven by the changing needs of the business, and, on the other hand, how the business has to reframe its strategy, driven by new opportunities enabled by newer technologies. An enhanced business model design relevant for undertaking digital transformation initiatives is shown in Figure 3.2 (Simon et al. 2014, WiBotzki 2016).

FIGURE 3.2 Enhanced business model design (panels: Business Motivation; Business Perspectives; Business, Information, Application, and Technical Architecture; Business Execution).

3.3 Enterprise Architecture

All computer systems are made up of the same three fundamental parts: hardware (e.g., processors, memory, disks, network cards); software (e.g., programs or libraries); and data, which may be either transient (in memory) or persistent (on disk or ROM). A computer system can be understood in terms of what its individual parts actually do, how they work together, and how they interact with the world around them; in other words, its architecture.

An enterprise architecture provides a high-level design of the entire enterprise that will guide all other enterprise projects. An architecture represents significant, broad design decisions for the enterprise, with which all other design decisions should be consistent. Architecturally significant decisions are those that the architect (or architecture team) needs to make in order to address the concerns of strategy, structure, and enterprise integration. Architectural decisions include deciding how to decompose the enterprise into views, how to decompose each view into different abstraction levels, policies on technology usage, decisions on business culture to guide organizational design, and decisions on what modeling conventions to use.

The Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh defines architecture as follows:

The architecture of a software-intensive system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.

Enterprise architecture (EA) has the following major goals:

1. EA, with its units, policies, processes, strategies, and technological infrastructure (e.g., IT systems), supports all stakeholders in achieving the short- and long-term business goals and objectives of the enterprise.
2. EA fosters the alignment of the technological systems developed and used by an enterprise with its business goals and strategic direction.
3. EA helps an enterprise to learn, grow, innovate, and respond to market demands and changing basic conditions.
4. EA fosters and maintains the learning capabilities of enterprises so that they may be sustainable.

Figure 3.3 shows a schematic of enterprise architecture (EA).

3.3.1 Architectural Element

An architectural element is a fundamental piece from which a system can be considered to be constructed. Depending on the system being built, programming libraries, subsystems, deployable software units (e.g., Enterprise JavaBeans and ActiveX controls), reusable software products (e.g., database management systems), or entire applications may form the architectural elements of an information system. An architectural element possesses the following key attributes:

• A clearly defined set of responsibilities
• A clearly defined boundary
• A set of clearly defined interfaces, which define the services that the element provides to the other architectural elements
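To make the bulleted attributes above concrete, the following minimal Java sketch expresses an architectural element as an interface whose published operations form its boundary. OrderCapture and the supporting types are hypothetical names chosen only for illustration; they are not drawn from the text.

// A minimal sketch (illustrative names only): an architectural element whose
// responsibilities and boundary are expressed through the interface it offers
// to other elements; callers depend on these services, not on its internals.
public interface OrderCapture {

    // Responsibility: accept a new order and return its identifier.
    OrderId submit(OrderRequest request);

    // Responsibility: report the status of a previously submitted order.
    OrderStatus statusOf(OrderId id);
}

// Small supporting types so the sketch is self-contained (Java 16+ records).
record OrderId(String value) { }
record OrderRequest(String customer, String product, int quantity) { }
enum OrderStatus { RECEIVED, FULFILLED, REJECTED }

Any concrete implementation, whether an in-memory stub during testing or a full order-management subsystem in production, can then be substituted behind the same interface without disturbing the other elements.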


FIGURE 3.3 Schematic of enterprise architecture (EA).

3.3.2 System Structures

System structures are of two kinds:

1. The static structures of a software system define its internal design-time elements and their arrangement; they tell you what the design-time form of the system is, what its elements are, and how they fit together. Internal design-time software elements might be modules, object-oriented classes or packages, database stored procedures, services, or any other self-contained code unit. Internal data elements include classes, relational database entities/tables, and data files. Internal hardware elements include computers or their constituent parts, such as disks or CPUs, and networking elements such as cables, routers, or hubs. The static arrangement of these elements defines the associations, relationships, or connectivity between them. For software modules, for example, there may be static relationships such as a hierarchy of elements or dependencies between elements. For classes, relational entities, or other data elements, relationships define how one data item is linked to another. For hardware, the relationships define the required physical interconnections between the various hardware elements of the system.


2. The dynamic structures of a software system define its runtime elements and their interactions; they show how the system actually works, what happens at runtime, and what the system does in response to external (or internal) stimuli. These internal interactions may be flows of information between elements, the parallel or sequential execution of internal tasks, or they may be expressed in terms of the effect they have on data.

External properties manifest in two forms:

1. The externally visible behavior of a software system defines the functional interactions between the system and its environment. The externally visible behavior of a system (what it does) is determined by the combined functional behavior of its internal elements. This includes flows of information in and out of the system, the way that the system responds to external stimuli, and the published "contract" or API that the architecture has with the outside world.
2. A quality property is an externally visible, nonfunctional property of a system such as performance, security, or scalability. Quality properties tell you how a system behaves from the viewpoint of an external observer (they are often referred to as its nonfunctional characteristics). The quality properties of a system, such as performance, scalability, and resilience (how it does it), arise from the quality properties of its internal elements. (Typically, a system's overall quality property is only as good as the property of its worst-behaving or weakest internal element.)

Quality attributes can be categorized into the following three groups:

1. Implementation attributes (not observable at runtime)
   a. Interoperability: Universal accessibility and the ability to exchange data among internal components and with the outside world. Interoperability requires loose dependency on infrastructure.
   b. Maintainability and extensibility: The ability to modify the system and conveniently extend it.
   c. Testability: The degree to which the system facilitates the establishment of test cases. Testability usually requires a complete set of documentation accompanying the system design and implementation.
   d. Portability: The system's level of independence from software and hardware platforms. Systems developed using high-level programming languages usually have good portability. One typical example is Java: most Java programs need only be compiled once and can run everywhere.
   e. Scalability: A system's ability to adapt to an increase in user requests. Scalability disfavors bottlenecks in system design.


   f. Flexibility: The ease of modifying the system to cater to environments or problems for which it was not originally designed. Systems developed using component-based or service-oriented architectures usually possess this attribute.

2. Runtime attributes (observable at runtime)
   a. Availability: A system's capability to be available 24/7. Availability can be achieved via replication and careful design to cope with failures of hardware, software, or the network.
   b. Security: A system's ability to cope with malicious attacks from outside or inside the system. Security can be improved by installing firewalls, establishing authentication and authorization processes, and using encryption.
   c. Performance: A system's efficiency with regard to response time, throughput, and resource utilization, attributes which usually conflict with each other.
   d. Usability: The level of human satisfaction from using the system. Usability includes matters of completeness, correctness, and compatibility, as well as a user-friendly interface, complete documentation, and technical support.
   e. Reliability: The failure frequency, the accuracy of output results, the mean time to failure (MTTF), the ability to recover from failure, and the failure predictability.
   f. Maintainability (extensibility, adaptability, serviceability, testability, compatibility, and configurability): The ease of changing the software system.

3. Business attributes
   a. Time to market: The time it takes from requirements analysis to the date a product is released.
   b. Cost: The expense of building, maintaining, and operating the system.
   c. Lifetime: The period of time that the product is "alive" before retirement.

3.3.2.1 Attribute Tradeoffs

In many cases, no single architecture style can meet all quality attributes simultaneously. Software architects often need to balance tradeoffs among attributes. Typical quality attribute tradeoff pairs include the following:

1. Tradeoff between space and time: For example, increasing the time efficiency of a hash table means decreasing its space efficiency.
2. Tradeoff between reliability and performance: For instance, Java programs are well protected against buffer overflow thanks to security measures such as boundary checks on arrays. Such reliability features come at the cost of time efficiency, compared with the simpler and faster C language, which provides "dangerous," yet efficient, pointers.
3. Tradeoff between scalability and performance: For example, one typical approach to increasing the scalability of a service is to replicate servers. To ensure consistency of all servers (e.g., to make sure that each server has the same logically consistent data), the performance of the whole service is compromised.
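The first of these tradeoffs can be illustrated with a short Java sketch (the capacities and key counts below are illustrative assumptions): java.util.HashMap exposes the space/time balance directly through its load factor.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the space/time tradeoff from item 1 above, using
// java.util.HashMap's load factor. A low load factor buys faster lookups
// (fewer collisions) at the cost of a sparser, larger bucket array; a high
// load factor saves space but tolerates longer collision chains per bucket.
public class HashTradeoffSketch {
    public static void main(String[] args) {
        // Favour time: resize early, keep buckets sparse.
        Map<String, Integer> fast = new HashMap<>(1024, 0.5f);
        // Favour space: pack entries densely, accept more collisions.
        Map<String, Integer> compact = new HashMap<>(1024, 0.95f);

        for (int i = 0; i < 10_000; i++) {
            fast.put("key-" + i, i);
            compact.put("key-" + i, i);
        }
        // Both maps hold the same data; they differ only in how the
        // underlying table trades memory for expected lookup cost.
        System.out.println(fast.size() + " entries in each map");
    }
}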


When an architecture style does not satisfy all the desired quality attributes, software architects work with system analysts and stakeholders to nail down the priority of the quality attributes. By enumerating alternative architecture designs and calculating a weighted evaluation of quality attributes, software architects can select the optimal design.

3.3.3 Candidate Architecture

A candidate architecture for a system is a particular arrangement of static and dynamic structures that has the potential to exhibit the system's required externally visible behaviors and quality properties. Although candidate architectures have different static and dynamic structures and share the same important externally visible behaviors (in this case, responses to booking transactions) and general quality properties (such as acceptable response time, throughput, availability, and time to repair), they are likely to differ in the specific set of quality properties that each exhibits (such as one being easier to maintain but more expensive to build than another).

An architecture style (also known as an "architecture pattern") abstracts the common properties of a family of similar designs. An architecture style contains a set of rules, constraints, and patterns of how to structure a system into a set of elements and connectors. It governs the overall structural design pattern of the constituent element types and their runtime interaction in terms of flow control and data transfer.

Theoretically, an architecture style is a viewpoint abstraction for a software structure that is domain independent. Typically, a system has its own application domain, such as image processing, Web portal, expert system, or mail server. Each domain may have its own reference model. For instance, the Model-View-Controller (MVC) is widely adopted by designers of interactive systems. Such a reference model partitions the functionalities of a system into subsystems or software components.

Architecture styles are important because each style promotes a set of quality attributes. By identifying the styles that a software architecture design supports, we can verify whether the architecture is consistent with the requirement specifications and identify which tactics can be used to better implement the architecture. In many cases, a system can adopt heterogeneous architectures, i.e., more than one architecture style can coexist in the same design. It is also true that an architecture style may be applied to many application domains.

The key components of an architecture style are:

• Elements that perform the functions required by a system
• Connectors that enable communication, coordination, and cooperation among elements
• Constraints that define how elements can be integrated to form the system
• Attributes that describe the advantages and disadvantages of the chosen structure

For example:

1. In the data-centric style, the data store plays a central role and is accessed frequently by other elements that modify data.
2. In the dataflow style, input data is transformed by a series of computational or manipulative elements (a minimal code sketch of this style is given at the end of Section 3.3).


3. In the call-and-return style, functions and procedures are the elements, organized in a control hierarchy with a main program invoking several subprograms.
4. In the object-oriented style, elements are represented as objects that encapsulate data and operations, and communication among them is by message passing.
5. In the layered style, each module or package completes tasks that progress in a framework from higher-level abstractions to lower-level implementations.

Multi-tier architecture is commonly used for distributed systems. It usually consists of three element types:

1. The client element is responsible for GUI presentation, accepting user requests, and rendering results.
2. The middleware element receives requests from the client element, processes them based on the business logic, and sends data requests to the back-end tier.
3. The data store server element manages data querying and updating.

All three types of elements are connected via a network (e.g., the Internet).

3.3.4 Stakeholders

A stakeholder in an architecture is a person, group, or entity with an interest in or concerns about the realization of the architecture. A concern about an architecture is a requirement, an objective, an intention, or an aspiration a stakeholder has for that architecture. A stakeholder often represents a class of person, such as user or developer, rather than an individual. This presents some problems because it may not be possible to capture and reconcile the needs of all members of the class.

The important attributes of a software development project are often interpreted as a triangle whose corners represent cost, quality, and time to market. For example, a high-quality system would take longer to build and, hence, would cost more to complete. Conversely, it may be possible to reduce the initial development time but, assuming costs are kept roughly constant, this would come at the expense of reducing the quality of the delivered software. One or more of these attributes is likely to be important to different stakeholders, and it is the architect's job to understand which of these attributes is important to whom and to reach an acceptable compromise when necessary.

A good architecture is one that successfully meets the objectives, goals, and needs of its stakeholders. Stakeholders (explicitly or implicitly) drive the whole shape and direction of the architecture, which is developed solely for their benefit and to serve their needs. Stakeholders ultimately make or direct the fundamental decisions about the scope, functionality, operational characteristics, and structure of the eventual product or system.

Describing architecture designs: The "4 + 1" view model, developed by P. B. Kruchten, is a way to show different views of a software system from the perspectives of different stakeholders. It is especially useful in describing a complete set of functional and nonfunctional requirements. An alternative choice is to use Architecture Description Languages (ADLs) to formally specify the structure and semantics of a software architecture.
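Before moving on to viewpoints and views, the dataflow (pipe-and-filter) style listed in Section 3.3.3 can be made concrete with a minimal Java sketch; the filters and sample data below are illustrative assumptions only, not taken from the text.

import java.util.List;
import java.util.function.Function;

// Minimal sketch of the dataflow (pipe-and-filter) style: each filter is an
// element that transforms its input, and the connector is plain function
// composition. The filter names and sample strings are illustrative only.
public class DataflowSketch {

    static final Function<String, String> trim = String::trim;
    static final Function<String, String> normalise = s -> s.toLowerCase();
    static final Function<String, String> redact = s -> s.replaceAll("\\d", "#");

    public static void main(String[] args) {
        // The pipeline is the composition of the filters, applied in order.
        Function<String, String> pipeline = trim.andThen(normalise).andThen(redact);

        List<String> input = List.of("  Order 42 for ACME  ", "Invoice 7 PAID ");
        input.stream().map(pipeline).forEach(System.out::println);
    }
}

Because the connector is simple composition, filters can be added, removed, or reordered without touching one another, which is precisely the kind of quality the dataflow style promotes.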


3.4 Viewpoints and Views

A view is a representation of one or more structural aspects of an architecture that illustrates how the architecture addresses one or more concerns held by one or more of its stakeholders. An architectural view is a way to portray those aspects or elements of the architecture that are relevant to the concerns this view addresses. An architectural description comprises a number of views; a view conforms to a viewpoint and so communicates the resolution of a number of concerns (and the resolution of a concern may be communicated in a number of views).

A viewpoint is a collection of patterns, templates, and conventions for constructing one type of view. It defines the stakeholders whose concerns are reflected in the viewpoint and the guidelines, principles, and template models for constructing its views.

A viewpoint defines the aims, intended audience, and content of a class of views and defines the concerns that views of this class will address. The objective of the viewpoint concept is to make available a library of templates and patterns that can be used off the shelf to guide the creation of an architectural view that can be inserted into an architectural description. Viewpoints provide a framework for capturing reusable architectural knowledge that can be used to guide the creation of a particular type of architectural description.

1. Viewpoint catalog
   A viewpoint is defined with details like:
   a. Information: Describes the way that the architecture stores, manipulates, manages, and distributes information. The ultimate purpose of virtually any computer system is to manipulate information in some form, and this viewpoint develops a complete but high-level view of static data structure and information flow. The objective of this analysis is to answer the big questions around content, structure, ownership, latency, references, and data migration.
   b. Functional: Describes the system's functional elements, their responsibilities, interfaces, and primary interactions. A functional view is the cornerstone of most architectural descriptions and is often the first part of the description that stakeholders try to read. It drives the shape of other system structures such as the information structure, concurrency structure, deployment structure, and so on. It also has a significant impact on the system's quality properties such as its ability to change, its ability to be secured, and its runtime performance.
   c. Concurrency: Describes the concurrency structure of the system and maps functional elements to concurrency units to clearly identify the parts of the system that can execute concurrently and how this is coordinated and controlled. This entails the creation of models that show the process and thread structures that the system will use and the interprocess communication mechanisms used to coordinate their operation.
   d. Development: Describes the architecture that supports the software development process. Development views communicate the aspects of the architecture of interest to those stakeholders involved in building, testing, maintaining, and enhancing the system.


   e. Deployment: Describes the environment into which the system will be deployed, including capturing the dependencies the system has on its runtime environment. This view captures the mapping of the software elements to the runtime environment that will execute them, the technical environment requirements for each element, and the hardware environment that your system needs (primarily the processing nodes, network interconnections, and disk storage facilities required).
   f. Operations: Describes how the system will be operated, administered, and supported when it is running in its production environment. For all but the simplest systems, installing, managing, and operating the system is a significant task that must be considered and planned at design time. The aim of the operational viewpoint is to identify system-wide strategies for addressing the operational concerns of the system's stakeholders and to identify solutions that address these.

2. Benefits
   a. Management of complexity: Dealing simultaneously with all of the aspects of a large system can result in overwhelming complexity that no one person can possibly handle. Treating each significant aspect of a system separately helps in conquering this complexity.
   b. Communication with stakeholder groups: The concerns of each stakeholder group are typically quite different (e.g., contrast the primary concerns of end users, security auditors, and help-desk staff), and communicating effectively with the various stakeholder groups is quite a challenge. The viewpoint-oriented approach can help considerably with this problem. Different stakeholder groups can be guided quickly to different parts of the architectural description based on their particular concerns, and each view can be presented using language and notation appropriate to the knowledge, expertise, and concerns of the stakeholders involved.
   c. Separation of concerns: Describing many aspects of the system via a single representation can cloud communication and may result in independent aspects of the system becoming intertwined in the model. Separating different models of a system into distinct (but related) descriptions helps the design, analysis, and communication processes by focusing on each aspect separately.
   d. Improved developer focus: The architectural description is the foundation of the system design. Building the right system is enabled by separating out into different views those aspects of the system that are particularly important to the development team.

3.5 Perspectives

Many architectural decisions address concerns that are common to many or all views. These concerns are normally driven by the need for the system to exhibit a certain quality property rather than to provide a particular function; such properties are sometimes referred to as non-functional properties. For instance, security is clearly a vital aspect of most architectures. It has always been important to be able to restrict access to data or functionality to appropriate classes of users, and in the age of the Internet, good external and internal security is even more important. There is an inherent need to consider quality properties such as security in each architectural view. Considering them in isolation does not make sense, and neither does using a viewpoint to guide the creation of another view for each quality property. Rather than defining another viewpoint and creating another view, we need some way to modify and enhance our existing views to ensure that our architecture exhibits the desired quality properties.

An architectural perspective is a collection of activities, tactics, and guidelines that are used to ensure that a system exhibits a particular set of related quality properties that require consideration across a number of the system's architectural views.

The most important perspectives for large information systems include:

• Performance and scalability (meeting the system's required performance profile and handling increasing workloads satisfactorily)
• Availability and resilience (ensuring system availability when required and coping with failures that could affect this)
• Security (ensuring controlled access to sensitive system resources)
• Evolution (ensuring that the system can cope with likely changes)

The advantages of perspectives are:

1. A perspective is a useful store of knowledge, helping you quickly review your architectural models for a particular quality property without having to absorb a large quantity of more detailed material.
2. A perspective acts as an effective guide when you are working in an area that is new to you and you are not familiar with its typical concerns, problems, and solutions.
3. A perspective is a useful memory aid when you are working in an area that you are more familiar with, to make sure that you don't forget anything important.

Perspectives should be employed as early as possible in the design of the architecture, because this helps prevent going down architectural blind alleys that lead to a model that is functionally correct but offers, for example, poor performance or availability.

1. Benefits
   Applying perspectives to a view benefits your architectural description in several ways.
   a. The perspective provides common conventions, measurements, or even a notation or language you can use to describe the system's qualities. For example, the Performance perspective defines standardized measures such as response time, throughput, latency, and so forth, as well as how they are specified and captured.
   b. The perspective defines concerns that guide architectural decision making to help ensure that the resulting architecture will exhibit the quality properties considered by the perspective. For example, the Performance perspective defines standard concerns such as response time, throughput, and predictability. Understanding and prioritizing the concerns that a perspective addresses helps you bring a firm set of priorities to later decision making.


   c. The perspective describes how you can validate the architecture to demonstrate that it meets its requirements across each of the views. For example, the Performance perspective describes how to construct mathematical models or simulations to predict expected performance under a given load, and techniques for prototyping and benchmarking.
   d. The perspective may offer recognized solutions to common problems, thus helping to share knowledge between architects. For example, the Performance perspective describes how hardware devices may be multiplexed to improve throughput.
   e. The perspective helps you work in a systematic way to ensure that its concerns are addressed by the system. This helps you organize the work and make sure that nothing is forgotten.

2. Perspectives catalog
   Table 3.2 charts views versus the applicable perspectives. Table 3.3 describes the perspectives in detail.

TABLE 3.2 Views versus the Applicable Perspectives

View          Security   Performance and Scalability   Availability and Resilience   Change
Functional    Medium     Medium                        Low                           High
Information   Medium     Medium                        Low                           High
Concurrency   Low        High                          Medium                        Medium
Development   Medium     Low                           Low                           High
Deployment    High       High                          High                          Low
Operational   Medium     Low                           Medium                        Low

TABLE 3.3 Perspectives Described in Detail

Security: The ability of the system to reliably control, monitor, and audit who can perform what actions on what resources and to detect and recover from failures in security mechanisms.

Performance and scalability: The ability of the system to predictably execute within its mandated performance profile and to handle increased processing volumes.

Availability and resilience: The ability of the system to be fully or partly operational as and when required and to effectively handle failures that could affect system availability.

Evolution: The ability of the system to be flexible in the face of the inevitable change that all systems experience after deployment, balanced against the costs of providing such flexibility.


Perspectives are defined with details like:

• Applicability: This section explains which of your views are most likely to be affected by applying the perspective. For example, applying the Evolution perspective might affect your Functional view more than your Operational view.
• Concerns: This information defines the quality properties that the perspective addresses.
• Activities: In this section, we explain the steps for applying the perspective to your views: identifying the important quality properties, analyzing the views against these properties, and then making architectural design decisions that modify and improve the views.
• Architectural tactics: An architectural tactic is an established and proven approach you can use to help achieve a particular quality property (e.g., defining different processing priorities for different parts of the system's workload and managing this by using a priority-based process scheduler to achieve satisfactory overall system performance). Each perspective identifies and describes the most important tactics for achieving its quality properties.
• Problems and pitfalls: This section explains the most common things that can go wrong and gives guidance on how to recognize and avoid them.

3.5.1 Change Perspective

A snapshot of this perspective is:

• Desired quality: The ability of the system to be flexible in the face of the inevitable change that all systems experience after deployment, balanced against the costs of providing such flexibility
• Applicability: Generally important for all systems; more important for longer-lived and more widely used systems
• Concerns: Magnitude of change, dimensions of change, likelihood of change, timescale for change, when to pay for change, development complexity, preservation of knowledge, and reliability of change
• Activities: Characterize the change needs, assess the current ease of change, consider the change tradeoffs, and rework the architecture
• Architectural tactics: Contain change, create flexible interfaces, apply change-oriented architectural styles, build variation points into the software, use standard extension points, achieve reliable change, and preserve development environments
• Problems and pitfalls: Prioritization of the wrong dimensions, changes that never happen, impacts of change on critical quality properties, lost development environments, and ad hoc release management

1. Applicability of change perspective
   The applicability of the change perspective to the six views is summarized below:
   a. Functional view may be informed by change perspective by enabling the functional structure to reflect the required change.
   b. Information view may be informed by change perspective by mandating a flexible information model.


   c. Concurrency view may be informed by change perspective by dictating a particular element packaging or some constraints on the concurrency structure (e.g., that it must be very simple).
   d. Development view may be informed by change perspective by determining the impact on the development environment (e.g., enforcing portability guidelines).
   e. Deployment view may be informed by change perspective by defining the impact on the Deployment view, because system change usually affects structures described in other views.
   f. Operational view may be informed by change perspective to the extent that it impacts the operational view.

2. Tactics for enhancement
   a. Contain change: Design the system structure so that the required changes are as contained as possible. A change starts to be a problem when its effects ripple through a number of different parts of the system simultaneously. For the interface dimension, for example, if an interface change requires a change to internal system interfaces, this is a much more serious problem than changing the single software module that provides the external interface. General design principles that can help in localizing the effects of change are:
      • Encapsulation
      • Separation of concerns
      • Single point of definition
   b. Create flexible interfaces: The specifics of creating flexible interfaces depend on the technology environment and problem domain of the system concerned. However, considering the degree of flexibility required of the important interfaces, and how to achieve it, is an important aspect of addressing the change concerns of the system. When designing the main internal interfaces of the system (and those between the system and its environment), make them as flexible as possible. Some interface designs are significantly more resilient to change than others. For example, using a self-describing message technology such as XML to define message formats, and allowing new message elements to be optional, allows messages to be extended with little or no impact on system elements that do not need to use the extended form of the interface. This approach is particularly useful for external interfaces if environment change is likely to be important for the system. Such approaches are not without their costs, and they need to be evaluated in the context of the change requirements. Taking interface flexibility to its logical extreme would involve dropping static typing of your interfaces altogether and establishing the types of all request parameters at runtime. However, such a flexible approach can be more difficult to understand and test and may be less efficient at runtime.
   c. Build variation points into the system: This approach involves identifying system variation points, locations where supporting a certain type of change is critically important, and specifying a mechanism to achieve the change required. A large number of specific software design patterns have been published that attempt to introduce some form of variation point (Façade, Chain of Responsibility, Bridge, and so on).


      Some general approaches which can be useful are:
      • Make elements replaceable
      • Parameterize operation with configuration parameters
      • Use self-describing data and generic processing
      • Separate physical and logical processing
   d. Use standard extension points: Many mainstream information systems technologies provide standard extension points (such as the J2EE platform's ability to easily add support for new types of databases, via the JDBC interface, and external systems, via the JCA interface). When using such standard technology solutions, a number of flexible extension points are available that can be adapted to meet the change requirements. For example, custom adaptors can be written to use standard application integration facilities to connect to in-house systems as well as the packaged applications they are normally used for. This avoids designing and building a custom mechanism for future integration with in-house systems.
   e. Implement reliable changes: Since the effects of implementing a change cannot be assessed easily, the system must be exhaustively tested to confirm assumptions regarding post-change behavior. Changes can be made reliable by:
      • Software configuration management
      • Automated build and release processes
      • Environment configuration management
      • Automated testing
      • Continuous integration
   f. Apply change-oriented architectural styles: If there are significant requirements for system change, it may be worth considering the adoption of an overall architectural style that is particularly focused on supporting change. Metamodel-based systems provide a very high degree of flexibility in some problem domains (particularly database systems requiring significant schema evolution). Such metamodel-based systems can support very rapid change because, instead of modifying the system implementation to change its behavior, the system can just be reconfigured, usually a much faster process. However, metamodel-based systems are much more complex to develop and test than systems based on more static architectural styles. They are also inherently less efficient in terms of runtime performance, which can limit their applicability in environments where performance is a major concern.

3.5.2 Scalability Perspective

The scalability property of a system is closely related to performance, but scalability focuses mainly on the predictability of the system's performance as the workload increases. The performance of a computer system depends on much more than the raw processing power of its hardware; it also depends on the way that hardware is configured, the way resources are allocated and managed, and the way the software is written.


A snapshot of this perspective is:

• Desired quality: The ability of the system to predictably execute within its mandated performance profile and to handle increased processing volumes
• Applicability: Any system with complex, unclear, or ambitious performance requirements; systems whose architecture includes elements whose performance is unknown; and systems where future expansion is likely to be significant
• Concerns: Response time, throughput, scalability, predictability, hardware resource requirements, and peak load behavior
• Activities: Capture the performance requirements, create the performance models, analyze the performance models, conduct practical testing, assess against the requirements, and rework the architecture
• Architectural tactics: Optimize repeated processing, reduce contention via replication, prioritize processing, consolidate related workloads, distribute processing over time, minimize the use of shared resources, partition and parallelize, use asynchronous processing, and make design compromises
• Problems and pitfalls: Imprecise performance and scalability goals, unrealistic models, use of simple measures for complex cases, inappropriate partitioning, invalid environment and platform assumptions, too much indirection, concurrency-related contention, careless allocation of resources, and disregard for the differences between network and in-process invocation

1. Applicability of scalability perspective
   The applicability of the scalability perspective to the six views is summarized as follows:
   a. Functional view may be informed by scalability perspective by revealing the need for changes and compromises to your ideal functional structure to achieve the system's performance requirements (e.g., by consolidating system elements to avoid communication overhead). The models from this view also provide input to the creation of performance models.
   b. Information view may be informed by scalability perspective by providing useful input to performance models, identifying shared resources and the transactional requirements of each. As you apply this perspective, you may identify aspects of the Information view as obstacles to performance or scalability. In addition, considering scalability may suggest elements of the Information view that could be replicated or distributed in support of this goal.
   c. Concurrency view may be informed by scalability perspective by prompting changes to the concurrency design when problems such as excessive contention on key resources are identified. Alternatively, considering performance and scalability may result in concurrency becoming a more important design element in meeting these requirements. Elements of concurrency views (such as interprocess communication mechanisms) can also provide calibration metrics for performance models.
   d. Development view may be informed by scalability perspective through a set of guidelines related to performance and scalability that should be followed during software development. These guidelines will probably take the form of dos and don'ts (e.g., patterns and antipatterns) that must be followed as the software is developed in order to avoid performance and scalability problems later when it is deployed. You will capture this information in the Development view.


   e. Deployment view may be informed by scalability perspective through crucial inputs to the process of considering performance and scalability. Many parts of the system's performance models are derived from the contents of this view, which also provides a number of critical calibration metrics. In turn, applying this perspective will often suggest changes and refinements to the deployment environment, to allow it to support the performance and scalability needs of the system.
   f. Operational view may be informed by scalability perspective by highlighting the need for performance monitoring and management capabilities.

2. Tactics for enhancement
   a. Optimize repeated processing: Most systems have a small number of common operations that the system spends most of its time performing. The resulting performance implication is that you should focus your performance efforts on that core 20% of your system. The goal of the performance engineering process is to minimize the overall system workload:

         System workload = Σ (i = 1 to M) Operation Cost(i)

      that is,

         System workload = Σ (i = 1 to M) [Cost of one invocation of operation i × Frequency of invocation of operation i]

      The total workload for the system, per unit time, is the sum of the total operation costs over that unit time (where the system has M possible operations). The system's operations are ranked by this total cost metric, and the performance engineering effort is focused on optimizing the operations at the top of the list first.
   b. Reduce contention through replication: Whenever there are concurrent operations in the system, there is a potential for contention. This contention is often a major source of performance problems, causing reduced throughput and wasted resources. Eliminating contention can be a difficult process, particularly when the contention involved is actually within the system's underlying infrastructure (e.g., an application server) rather than in software over which there is direct control. A possible solution for some contention problems is to replicate system elements: hardware, software, or data. This approach works only in certain situations, and a limiting factor is that the system often needs to be designed to take advantage of replication from the outset.
   c. Prioritize processing: A problem in many otherwise well-built systems can emerge when the overall performance of the system is within target, but some important tasks still take too long to execute. To avoid this situation, partition the system's workload into priority groups (or classes), and add the ability to prioritize the workload to be processed. This allows you to ensure that your system's resources will be applied to the right element of workload at any point in time, so the perception of performance problems is much less likely.


      A low-level example of this approach is the priority class-based thread and process scheduling built into most modern operating systems. When dealing with large information systems, this prioritizing usually involves identifying the business criticality of each class of workload. Those types of workload that have to complete in a timely manner for the business to continue (such as order processing) are naturally prioritized over workloads that, while important, will not immediately impact business operation (such as management reporting).
   d. Minimize use of shared resources: As systems get busier and contention for shared resources increases, waiting time takes a proportionally larger share of the total elapsed time for tasks and contributes proportionally more to the overall response time. At any instant, a nonidle task running on a system is in one of two states:
      • Waiting for a resource, either because it is busy (e.g., being used by another task) or not in a ready state (e.g., waiting for a head to move over a particular track on a disk or for a software service to initialize)
      • Making use of a resource (e.g., a hardware resource such as processor, memory, disk, or network, or a software resource such as a message service)
      Other than increasing the performance of the resources that the task uses, a way to alleviate this situation is to minimize the use of shared resources by:
      • Using techniques such as hardware multiplexing to eliminate hardware hot spots in your architecture
      • Favoring short, simple transactions over long, complex ones where possible (because transactions tend to lock up resources for extended periods)
      • Not locking resources in human time (e.g., while waiting for a user to press a key)
      • Trying to access shared resources nonexclusively wherever possible
   e. Partition and parallelize: If a system involves large, lengthy processes, a possible way to reduce their response time is to partition them into a number of smaller processes, execute these subprocesses in parallel, and, when the last subprocess is complete, consolidate all of the subprocess output into a single result. Naturally, situations involving lengthy and expensive consolidation of subprocess output are suitable for this technique only if the consolidation cost is still small relative to that of the longer response time of the original process. Whether this approach is likely to be effective in a particular situation depends on:
      • Whether the overall process can be quickly and efficiently partitioned into subprocesses
      • Whether the resulting subprocesses can be executed independently to allow effective parallel processing
      • How much time it will take to consolidate the output of the subprocesses into a single result
      • Whether enough processing capacity is available to process the subprocesses in parallel faster than handling the same workload in a single process
      This approach is less efficient than using a single linear process (due to the partitioning and consolidation overheads) and achieves a reduction in response time at the price of requiring more processing resources. If spare processing resources are not available, parallelization is unlikely to be effective because the subprocesses will be executed one after the other and will be slower than the original design, due to the partitioning and consolidation overheads.


   f. Make design compromises: Many of the techniques of good architecture definition described in this chapter can themselves cause performance problems in extreme situations. A loosely coupled, highly modular, and coherent system tends to spend more time communicating between its modules than a tightly coupled, monolithic one does. Where performance is a critical concern, consider, as a last resort, compromises to the ideal structure of the design:
      • Moving to a more tightly coupled, monolithic design to minimize internal communication delays and contention
      • Denormalizing parts of your data model to optimize accesses to data
      • Using very coarse-grained operations between distributed system elements
      • Duplicating data or processing locally to minimize traffic over slow communication channels
      Making such a change may improve performance but is likely to have costs in terms of maintainability and possibly even ease of use. The desirability of these tradeoffs should be assessed carefully.

3.5.3 Availability Perspective

In recent years there has been a significant change in the way that most companies carry out their business, driven to a large extent by the Internet and the global operations of large organizations. Today's requirement for many systems is to be available for much of the twenty-four-hour cycle. With the improved reliability of hardware and software, many expect that failures will be few and far between and that, upon failure, recovery will be prompt, effective, and largely automated.

A snapshot of this perspective is:

• Desired quality: The ability of the system to be fully or partly operational as and when required and to effectively handle failures that could affect system availability
• Applicability: Any system that has complex or extended availability requirements, complex recovery processes, or a high visibility profile (e.g., is visible to the public)
• Concerns: Classes of service, planned downtime, unplanned downtime, time to repair, and disaster recovery
• Activities: Capture the availability requirements, produce the availability schedule, estimate platform availability, estimate functional availability, assess against the requirements, and rework the architecture
• Architectural tactics: Select fault-tolerant hardware, use hardware clustering and load balancing, log transactions, apply software availability solutions, select or create fault-tolerant software, and identify backup and disaster recovery solutions
• Problems and pitfalls: Single point of failure, overambitious availability requirements, ineffective error detection, overlooked global availability requirements, and incompatible technologies


1. Applicability of availability perspective
   The applicability of the availability perspective to the six views is summarized below:
   a. Functional view may be informed by availability perspective by enabling the business's ability to operate effectively. Functional changes may sometimes be needed to support availability requirements, such as the ability to operate in an offline mode when a communications network is unavailable.
   b. Information view may be informed by availability perspective by considering the set of processes and systems for backup and recovery. Systems must be backed up in such a way that they can be recovered in a reasonable amount of time if a disaster occurs. Backups should not impact online availability, or, if they do, they may need to be scheduled to occur outside the online day.
   c. Concurrency view may be informed by availability perspective by incorporating features such as hardware replication and failover in the system, which may entail changes or enhancements to your concurrency model.
   d. Development view may be informed by availability perspective by imposing design constraints on the software modules. For example, all subsystems may have to support start, stop, pause, and restart commands to align with the system failover strategy.
   e. Deployment view may be informed by availability perspective by mandating a fault-tolerant production environment (i.e., one in which each hardware component is duplicated and failover is automatic) or a separate disaster recovery site that can be quickly activated if the production site goes down. This view may also dictate special software to support hardware redundancy or clustering.
   f. Operational view may be informed by availability perspective to allow the identification and recovery of problems in the production environment. There may also be a need for geographically separate disaster recovery facilities. Processes for main site failover, network failover, and data recovery must be designed, tested, and implemented. If the standby site is physically remote from the production site, processes are also required to move production staff from one location to the other or to deploy suitably trained staff at the standby site.

2. Tactics for enhancement
   a. Select fault-tolerant hardware: Fault-tolerant computing platforms can continue to operate without interruption even if a hardware component fails. These are typically implemented by means of redundant or duplicated hardware: each component (CPU, memory, disk, I/O port, and so on) is deployed in pairs, and if one of the pair fails, the other continues to operate while the fault is analyzed and the faulty part repaired or replaced. Such platforms, although expensive, deliver a very high level of system availability and often allow hardware upgrades to be performed while the system is online. Redundant disk architectures, such as a redundant array of inexpensive disks (RAID) or mirrored disks, are a particularly common example.
   b. Use hardware clustering and load balancing: High-availability clustering is a technique for protecting against failures by mirroring the whole computer rather than just a disk. In a clustered configuration, two or more identical computers (referred to as nodes) are deployed in parallel and share the total workload between them. If any one of the nodes fails, one of the remaining nodes takes over its workload and processing can continue (although the transactions in progress at the failed node may have to be resubmitted). This process is known as failover. Special software is required to manage the cluster. Some types of clusters (scalable clusters) can also be used to enhance performance due to their ability to reduce contention via replication. A variety of different clustering configurations are available, depending on how the nodes are connected to shared resources, such as memory, disk, and the communications network, and on what failover scenarios are supported. Whatever approach you choose, incoming transactions must be allocated to one of the nodes. A technique called load balancing performs this allocation and helps ensure that the nodes are used to their fullest possible extent. Load balancing can be provided via hardware or (less commonly) software.

computers (referred to as nodes) are deployed in parallel and share the total workload between them. If any one of the nodes fails, one of the remaining nodes takes over its workload and processing can continue (although the transactions in progress at the failed node may have to be resubmitted). This process is known as failover. Special software is required to manage the cluster. Some types of clusters (scalable clusters) can also be used to enhance performance due to their ability to reduce contention via replication. A variety of different clustering configurations are available, depending on how the nodes are connected to shared resources, such as memory, disk, and the communications network, and what failover scenarios are supported. Whatever approach you choose, incoming transactions must be allocated to one of the nodes. A technique called load balancing performs this allocation and helps ensure that the nodes are used to their fullest possible extent. Load balancing can be provided via hardware or (less commonly) software.
c. Log transactions: For a number of reasons, backups may not bring data (or a database) back to the state it was in at the point of failure. It is extremely useful to have a separate transaction log that can be replayed into the recovered database to bring it completely up-to-date. Such a capability may be provided by the storage mechanism, by front-end applications, or by underlying transaction management software. An added benefit of such transaction logging is that it can provide an audit trail; a minimal code sketch of such a log follows after this list.
d. Implement fault-tolerant software: Software can also be written to reconfigure itself in response to changing conditions, for example, by allocating itself more resources (such as shared memory) when under load or by automatically disabling certain features if they malfunction and offering only partial service until the problem is rectified. This requirement is being addressed by cloud computing solutions.
e. Implement software availability solution: Develop effective strategies to ensure the reliability and recoverability of the system, including:
• A robust strategy for data validation and dealing with rejected data
• A common strategy for detecting, logging, and responding to software errors
• A service to which error messages are logged and from which alerts and errors can be trapped
• The recording of full diagnostic information to help in subsequent analysis and repair
f. Implement backup and disaster recovery solutions: Every system that manages persistent information (i.e., information that must be stored on stable storage and available across system restarts) must include mechanisms for backing up this information by writing it to a separate storage medium from which the information can be recovered in the case of system failure (particularly, disk failure). While the first mirroring architectures placed the disks physically near one another, you can now mirror onto disks at another location, possibly many miles away, by means of high-bandwidth, highly reliable fiber-optic networks. Such a distributed mirroring solution, while expensive, can also form part of your disaster recovery architecture. Any backup solution must maintain the transactional integrity of the data so that, when the data is restored, it is left in a transactionally consistent state.
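The transaction-logging tactic (item c above) can be made concrete with a short Java sketch. The class name, log file, and SQL strings below are illustrative assumptions rather than part of any particular product; the sketch only shows the shape of the mechanism: append every committed operation to a durable log, then replay that log into a database restored from the most recent backup.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.time.Instant;
    import java.util.List;
    import java.util.function.Consumer;

    // Minimal transaction log: committed changes are appended to stable storage so
    // that a database restored from backup can be rolled forward to the failure point.
    public class TransactionLog {
        private final Path logFile;

        public TransactionLog(Path logFile) {
            this.logFile = logFile;
        }

        // Record one committed operation (called after the change is applied).
        public void append(String operation) throws IOException {
            String entry = Instant.now() + "|" + operation + System.lineSeparator();
            Files.writeString(logFile, entry, StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Replay every logged operation, oldest first, into the recovered database.
        public void replay(Consumer<String> applyToDatabase) throws IOException {
            if (!Files.exists(logFile)) {
                return; // nothing to roll forward
            }
            List<String> entries = Files.readAllLines(logFile, StandardCharsets.UTF_8);
            for (String entry : entries) {
                applyToDatabase.accept(entry.substring(entry.indexOf('|') + 1));
            }
        }

        public static void main(String[] args) throws IOException {
            TransactionLog log = new TransactionLog(Path.of("tx.log"));
            log.append("UPDATE account SET balance = balance - 100 WHERE id = 42");
            // After restoring last night's backup, roll the database forward:
            log.replay(sql -> System.out.println("re-applying: " + sql));
        }
    }

In practice this role is played by the DBMS redo log or by the transaction management software mentioned above; the sketch also hints at the audit-trail benefit, since the log is a readable history of applied changes.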

3.6 Enterprise Architecture Frameworks
A reference architecture is a meta-model, in that it is a model of how to model enterprises. It describes a structured set of models that collectively represent the building blocks of an enterprise system. Reference architectures embody the knowledge gathered, on a large scale, from a multitude of enterprise engineering projects. Creating and describing a systems architecture from scratch can be a daunting task. Architecture frameworks simplify the process and guide an architect through all areas of architecture development; they provide a set of conventions, principles, and practices that can be used by systems engineers to create such descriptions. There are a variety of different architecture frameworks in existence; they differ mainly with respect to their field of application. An architecture framework has several benefits:
• It helps stakeholders make decisions about enterprise design and operation.
• It provides users with some confidence that use of the reference architecture will be successful in the current project.
• It facilitates communication of enterprise design.
• It is applicable to a wide range of enterprise systems and scenarios.
• It establishes a common means to organize, interpret, and analyze architectural descriptions.
• It identifies architectural concerns, generic stakeholders, viewpoints, and abstraction levels.
• It encourages reuse.
• It provides a unified, unambiguous definition of terminology.
There are several useful and not mutually exclusive principles of knowledge reuse, such as:
• Best practices: A concept used in knowledge management in order to capture and share good solutions and lessons learned which are of value to the organization.
• Frameworks: A concept used in software development for providing building blocks for implementing generic functionality of a software system. A framework usually contains working code that needs to be changed and/or extended in order to fit specific application requirements. Examples of software frameworks


are code libraries, tool sets, and application programming interfaces. The framework principle is also used in organizational settings as Enterprise Architecture Frameworks. These frameworks usually offer generic guidance for creating and managing enterprise architecture as well as for the form in which an enterprise architecture should be described. Examples of EA frameworks are the Zachman Framework and TOGAF.
• Reference models: A system of models used to define concepts in a certain domain. The aim of reference models typically is to unify and integrate the body of knowledge in a certain area. Reference models are not directly seen as standards, but they often have a significant role in developing them. Reference models are frequently used for managing an established set of best practices and commonly available solutions.
• Patterns: A concept for capturing and presenting proven reusable solutions to recurring problems, initially used in architecture and later adopted in information systems analysis, design, and programming, as well as organizational design.
Figure 3.4 shows the development of EA frameworks. A list of standard architectural frameworks is given below:
1. Zachman Framework™
2. The Open Group Architecture Framework (TOGAF®)
3. Federal Enterprise Architecture Framework (FEAF)
4. Department of Defense Architecture Framework (DoDAF)
5. Ministry of Defense Architecture Framework (MODAF)

For a detailed description of the above frameworks, please refer to Kale (2018).

FIGURE 3.4 Development of EA frameworks.


3.7 Summary
Enterprise architecture is a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a holistic approach at all times, for the successful development and execution of enterprise strategy. Enterprise architecture applies architecture principles and practices to guide enterprises through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, enable, and achieve these changes. This chapter described the viewpoints, views, and perspectives that enable an enterprise architecture. Creating and describing a systems architecture from scratch can be a daunting task. Enterprise reference architectures or architecture frameworks simplify the process and guide an architect through all areas of architecture development; they provide a set of conventions, principles, and practices that can be used by systems engineers to create such descriptions.

4 Distributed Systems Several distributed database prototype systems were developed in the 1980s and 1990s to address the issues of data distribution, data replication, distributed query and transaction processing, distributed database metadata management, and other topics. More recently, many new technologies have emerged that combine database and distributed technologies. These technologies and systems are being developed for dealing with the storage, analysis, and mining of the vast amounts of data that are being produced and collected, and they are referred to generally as big data technologies (see Chapter 17). Decentralized peer-to-peer systems are discussed in Chapter 20 on blockchain computing. The centralized approach to processing data, in which users access a central database on a central computer through personal computers (PCs) and workstations, dominated organizations from the late 1960s through the mid-1980s because there was no alternative approach to compete with it. The introduction of reasonably priced PCs during the 1980s, however, facilitated the placement of computers at various locations within an organization; users could access a database directly at those locations. Networks connected these computers, so users could access not only data located on their local computers but also data located anywhere across the entire network. This  chapter addresses issues involved in distributed databases where a database is stored on more than one computer.

4.1 Distributed Systems
Distributed systems consist of a collection of heterogeneous but fully autonomous components that can execute on different computers. Whereas each of these components has full control over its constituent sub-parts, there is no master component that possesses control over all the components of a distributed system. Thus, for each of these systems to appear as a single and integrated whole, the various components need to be able to interact with each other via pre-defined interfaces through a computer network. The characteristic global features of a successful distributed system are as follows:
• Distributed systems are heterogeneous, arising from the need to integrate, say, components on a legacy IBM mainframe with components newly created to operate on a UNIX workstation or Windows NT machine.
• Distributed systems are scalable in that when a component becomes overloaded with too many requests or users, another replica of the same component can be instantiated and added to the distributed system to share the load amongst them. Moreover, these instantiated components can be located closer to the local users and other interacting components to improve the performance of the overall distributed system.


• Distributed systems execute components concurrently in a multi-threaded mode via multiply invoked components corresponding to the number of simultaneously invoked processes. • Distributed systems are fault-tolerant in that they duplicate components on different computers so that if one computer fails another can take over without affecting the availability of the overall system. • Distributed systems are more resilient in that whereas distributed systems have multiple points of failure, the unaffected components are fully operational even though some of the components are not functional or are malfunctioning. Moreover, the distributed system could invoke another instance of the failed components along with the corresponding state of the process (characterized by the program counter, the register variable contents and the state of the virtual memory used by the process) to continue with the process. • Distributed systems demonstrate invariance or transparency with reference to characteristics like: • Accessibility, either locally or across networks to the components • Physical location of the components • Migration of components from one host to another • Replication of components including their states • Concurrency of components requesting services from shared components • Scalability in terms of the actual number of requests or users at any instance • Performance in terms of the number and type of available resources • Points of failure be it a failure of the component, network or response The terms parallel systems and distributed are often used interchangeably; however, the term distributed refers to a wider class of systems, whereas the term parallel implies a subclass of tightly coupled systems. Distributed systems encompasses any architecture or system that allows the computation to be broken down into units and executed concurrently on different computing elements, whether these are processors on different nodes, processors on the same computer, or cores within the same processor. Distributed often implies that the locations of all the constituting computing elements are not the same and such elements might be heterogeneous in terms of hardware and software features. Classic examples of distributed computing systems are computing grids or Internet computing systems, which combine together the biggest variety of architectures, systems, and applications in the world. Parallel systems refers to a model in which the computation is divided among several processors sharing the same memory. The architecture of a parallel computing system is often characterized by the homogeneity of components: each processor is of the same type and it has the same capability as the others. The shared memory has a single address space, which is accessible to all the processors. Parallel programs are then broken down into several units of execution that can be allocated to different processors and can communicate with each other by means of the shared memory. Originally, parallel systems used to include only those architectures that featured multiple processors sharing the same physical memory and computer. However, by now parallel systems are considered to include all those architectures that are based


on the concept of shared memory regardless of whether this is physically colocated or created with the support of libraries, specific hardware, and a highly efficient networking infrastructure. For  example, a cluster of which the nodes are connected through an InfiniBand network and configured with a distributed shared memory system can also be considered a parallel system. 4.1.1 Distributed Computing Distributed computing studies the models, architectures, and algorithms used for building and managing distributed systems. A  distributed system is a collection of independent computers that appears to its users as a single coherent system. Distributed systems are primarily focused on aggregation of distributed resources and unified usage. This is accomplished by communicating with other computers only by exchanging messages. A distributed system is the result of the interaction of several components across the entire computing stack from hardware to software. The  hardware and operating system layers make up the bare-bone infrastructure of one or more data centers, where racks of servers are deployed and connected together through high-speed connectivity. This infrastructure is managed by the operating system, which provides the basic capability of machine and network management. The core logic is then implemented in the middleware that manages the virtualization layer, which is deployed on the physical infrastructure in order to maximize its utilization and provide a customizable runtime environment for applications. The middleware provides different facilities and functionalities per the requirement of the end customers and users. These facilities range from virtual infrastructure building and deployment to application development and runtime environments. The different layers constituting the distributed system stack are as follows: 1. Hardware layer: At the very bottom layer, computer and network hardware constitute the physical infrastructure; these components are directly managed by the operating system, which provides the basic services for interprocess communication (IPC), process scheduling and management, and resource management in terms of file system and local devices. Taken together these two layers become the platform on top of which specialized software is deployed to turn a set of networked computers into a distributed system. 2. Operating system layer: The use of well-known standards at the operating system level and even more at the hardware and network levels allows easy harnessing of heterogeneous components and their organization into a coherent and uniform system. For example, network connectivity between different devices is controlled by standards, which allow them to interact seamlessly. At the operating system level, IPC services are implemented on top of standardized communication protocols such Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP) or others. 3. Middleware layer: The  middleware layer leverages such services to build a uniform environment for the development and deployment of distributed applications. This layer supports the programming paradigms for distributed systems, which we will discuss in Chapters 5 through 7 of this book. By relying on the services offered by the operating system, the middleware develops its own protocols, data formats, and programming language or frameworks for the development of


distributed applications. All of them constitute a uniform interface to distributed application developers that is completely independent from the underlying operating system and hides all the heterogeneities of the bottom layers. 4. Application layer: The topmost layer is represented by the applications and services designed and developed to use the middleware. These can serve several purposes and often expose their features in the form of graphical user interfaces (GUIs) accessible locally or through the Internet via a web browser. Architectural styles for distributed systems are helpful in understanding the different roles of components in the system and how they are distributed across multiple computers. Architectural styles are of two types: • System architectural style • Software architectural style 4.1.1.1 System Architectural Styles System architectural styles cover the physical organization of components and processes over a distributed infrastructure. They provide a set of reference models for the deployment of such systems and help engineers not only have a common vocabulary in describing the physical layout of systems but also quickly identify the major advantages and drawbacks of a given deployment and whether it is applicable for a specific class of applications. These reference models can be further enhanced or diversified according to the specific needs of the application to be designed and implemented. 1. N-Tier architecture: In  the 1980s, the prior monolithic architecture began to be replaced by the client/server architecture, which split applications into two pieces in an attempt to leverage new inexpensive desktop machines. Distributing the processing loads across many inexpensive clients allowed client/server applications to scale more linearly than single host/single process applications could, and the use of off-the-shelf software like relational database management systems (RDBMS) greatly reduced application development time. While the client could handle the user interface and data display tasks of the application, and the server could handle all the data management tasks, there was no clear solution for storing the logic corresponding to the business processes being automated. Consequently, the business logic tended to split between the client and the server; typically, the rules for displaying data became embedded inside the user interface, and the rules for integrating several different data sources became stored procedures inside the database. Whereas this division of logic made it difficult to reuse the user interface code with a different data source, it also made it equally difficult to use the logic stored in the database with a different front-end user interface (like ATM and mobile) without being required to redevelop the logic implemented in the earlier interface. Thus, a customer service system developed for a particular client system (like a 3270 terminal, a PC, or a workstation) would have great difficulty in providing telephony and Internet interfaces with the same business functionality.


The client/server architecture failed to recognize the importance of managing the business rules applicable to an enterprise independent of both the user interface and the storage and management of enterprise data. The three-tiered application architecture of the 1990s resolved this problem by subdividing the application into three distinct layers: a. Data management, which stores and manages data independent of how they are processed and displayed by the other layers b. Business logic, which implements the business logic to process data independent of how they are stored or displayed by the other two layers c. Presentation, which formats and displays the data independent of the way they are interpreted/processed and stored by the other two layers With the advent of the Internet in the past few years, the three tiers were split even further to accommodate the heterogeneity in terms of the user interfaces, processing systems, or databases existing in various parts of an enterprise. The power of the n-tier architecture derives from the fact that instead of treating components as integral parts of systems, components are treated as standalone entities, which can provide services for applications. Applications exist only as cooperating constellation of components, and each component in turn can simultaneously be part of many different applications. 2. Peer-to-peer: The peer-to-peer model introduces a symmetric architecture in which all the components, called peers, play the same role and incorporate both client and server capabilities of the client/server model. More precisely, each peer acts as a server when it processes requests from other peers and as a client when it issues requests to other peers. With respect to the client/server model that partitions the responsibilities of the IPC between server and clients, the peer-to-peer model attributes the same responsibilities to each component. Therefore, this model is quite suitable for highly decentralized architecture, which can scale better along the dimension of the number of peers (see Chapter 20, Section 20.1 Peer-to-Peer Systems). The disadvantage of this approach is that the management of the implementation of algorithms is more complex than in the client/server model.
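To make the three-layer split concrete, the following Java sketch keeps presentation, business logic, and data management behind separate types; every class and method name is an illustrative assumption rather than part of any framework named in this chapter.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative three-tier decomposition: each tier depends only on the tier below it.
    interface DataStore {                          // data management tier
        String findCustomerName(int id);
    }

    class InMemoryDataStore implements DataStore {
        private final Map<Integer, String> rows =
                new ConcurrentHashMap<>(Map.of(1, "Acme Corp"));
        public String findCustomerName(int id) { return rows.get(id); }
    }

    class CustomerService {                        // business logic tier
        private final DataStore store;
        CustomerService(DataStore store) { this.store = store; }

        String greetingFor(int customerId) {
            String name = store.findCustomerName(customerId);
            return name == null ? "Unknown customer" : "Welcome back, " + name;
        }
    }

    public class ConsoleFrontEnd {                 // presentation tier
        public static void main(String[] args) {
            CustomerService service = new CustomerService(new InMemoryDataStore());
            System.out.println(service.greetingFor(1));
        }
    }

Because the business rule lives in CustomerService rather than in the screen or in the database, the same service could sit behind a web, telephony, or ATM front end, which is precisely the motivation for separating the tiers.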

4.1.1.2 Software Architectural Styles Software architectural styles are based on the logical arrangement of software components. They are helpful because they provide an intuitive view of the whole system, despite its physical deployment. 4.1.1.2.1 Data-Centered Architectures The repository architectural style is the most relevant reference model in this category. It is characterized by two main components: the central data structure, which represents the current state of the system, and a collection of independent components, which operate on the central data. The ways in which the independent components interact with the central data structure can be very heterogeneous. In  particular, repository-based architectures differentiate and specialize further into subcategories according to the choice of control discipline to apply for the shared data structure.


Data-centered architectures are of two types:
1. Database systems: In this case, the dynamic of the system is controlled by the independent components, which, by issuing an operation on the central repository, trigger the selection of specific processes that operate on data.
2. Blackboard systems: In this case, the central data structure itself is the main trigger for selecting the processes to execute; knowledge sources representing the intelligent agents sharing the blackboard react opportunistically to changes in the knowledge base, almost in the same way that a group of specialists brainstorm in a room in front of a blackboard. The blackboard architectural style is characterized by three main components:
• Knowledge sources: These are the entities that update the knowledge base that is maintained in the blackboard.
• Blackboard: This represents the data structure that is shared among the knowledge sources and stores the knowledge base of the application.
• Control: The control is the collection of triggers and procedures that govern the interaction with the blackboard and update the status of the knowledge base.
Blackboard models have become popular and widely used for artificial intelligence applications in which the blackboard maintains the knowledge about a domain in the form of assertions and rules, which are entered by domain experts. These operate through a control shell that controls the problem-solving activity of the system. Particular and successful applications of this model can be found in the domains of speech recognition and signal processing.
4.1.1.2.2 Data Flow Architectures
In the case of data-flow architectures, it is the availability of data that controls the computation. With respect to the data-centered styles, in which the access to data is the core feature, data-flow styles explicitly incorporate the pattern of data flow, since their design is determined by an orderly motion of data from component to component, which is the form of communication between them. Styles within this category differ in one of the following ways: how the control is exerted, the degree of concurrency among components, and the topology that describes the flow of data. Data-flow architectures are optimal when the system to be designed embodies a multistage process that can be clearly decomposed into a collection of separate components that need to be orchestrated together. Within this reference scenario, components have well-defined interfaces exposing input and output ports, and the connectors are represented by the data streams between these ports.
1. Batch sequential style: The batch sequential style is characterized by an ordered sequence of separate programs executing one after the other. These programs are chained together by providing as input for the next program the output generated by the last program after its completion, which is most likely in the form of a file. This design was very popular in the mainframe era of computing and still finds applications today. For example, many distributed applications for scientific computing are defined by jobs expressed as sequences of programs that, for example, pre-filter, analyze, and post-process data. It is very common to compose these phases using the batch-sequential style.


2. Pipe-and-filter style: The  pipe-and-filter style is a variation of the previous style for expressing the activity of a software system as sequence of data transformations. Each component of the processing chain is called a filter, and the connection between one filter and the next is represented by a data stream. With respect to the batch sequential style, data is processed incrementally and each filter processes the data as soon as it is available on the input stream. As soon as one filter produces a consumable amount of data, the next filter can start its processing. Filters generally do not have state, know the identity of neither the previous nor the next filter, and they are connected with in-memory data structures such as first-in/ first-out (FIFO) buffers or other structures. This  particular sequencing is called pipelining and introduces concurrency in the execution of the filters. A  classic example of this architecture is the microprocessor pipeline, whereby multiple instructions are executed at the same time by completing a different phase of each of them. We can identify the phases of the instructions as the filters, whereas the data streams are represented by the registries that are shared within the processors. Another example are the Unix shell pipes (i.e., cat | grep| wc -l), where the filters are the single shell programs composed together and the connections are their input and output streams that are chained together. Applications of this architecture can also be found in the compiler design (e.g., the lex/yacc model is based on a pipe of the following phases | scanning | parsing | semantic analysis | code generation), image and signal processing, and voice and video streaming. 4.1.1.2.3 Call and Return Architectures This category identifies all systems that are organized into components mostly connected together by method calls. The activity of systems modeled in this way is characterized by a chain of method calls whose overall execution and composition identify the execution of one or more operations. The internal organization of components and their connections may vary. Nonetheless, it is possible to identify three major subcategories, which differentiate by the way the system is structured and how methods are invoked: top-down style, object-oriented style, and layered style. 1. Top-down style: This architectural style is quite representative of systems developed with imperative programming, which leads to a divide-and-conquer approach to problem resolution. Systems developed according to this style are composed of one large main program that accomplishes its tasks by invoking subprograms or procedures. The components in this style are procedures and subprograms, and connections are method calls or invocation. The calling program passes information with parameters and receives data from return values or parameters. Method calls can also extend beyond the boundary of a single process by leveraging techniques for remote method invocation, such as remote procedure call (RPC) and all its descendants. The overall structure of the program execution at any point in time is characterized by a tree, the root of which constitutes the main function of the principal program. This architectural style is quite intuitive from a design point of view but hard to maintain and manage in large systems. 2. Layered style: The layered system style allows the design and implementation of software systems in terms of layers, which provide a different level of abstraction of the system. 
Each layer generally operates with at most two layers: the one that provides a lower abstraction level and the one that provides a


higher abstraction layer. Specific protocols and interfaces define how adjacent layers interact. It is possible to model such systems as a stack of layers, one for each level of abstraction. Therefore, the components are the layers and the connectors are the interfaces and protocols used between adjacent layers. A  user or client generally interacts with the layer at the highest abstraction, which, in order to carry its activity, interacts and uses the services of the lower layer. This process is repeated (if necessary) until the lowest layer is reached. It is also possible to have the opposite behavior: events and callbacks from the lower layers can trigger the activity of the higher layer and propagate information up through the stack. The advantages of the layered style are that it: • supports a modular design of systems and allows us to decompose the system according to different levels of abstractions by encapsulating together all the operations that belong to a specific level. • enables layers to be replaced as long as they are compliant with the expected protocols and interfaces, thus making the system flexible. The main disadvantage is constituted by the lack of extensibility, since it is not possible to add layers without changing the protocols and the interfaces between layers. This also makes it complex to add operations. Examples of layered architectures are the modern operating system kernels and the International Standards Organization/Open Systems Interconnection (ISO/OSI) or the TCP/IP stack. 3. Object-oriented style: This  architectural style encompasses a wide range of systems that have been designed and implemented by leveraging the abstractions of object-oriented programming (OOP). Systems are specified in terms of classes and implemented in terms of objects. Classes define the type of components by specifying the data that represent their state and the operations that can be done over these data. One of the main advantages over the top-down style is that there is a coupling between data and operations used to manipulate them. Object instances become responsible for hiding their internal state representation and for protecting its integrity while providing operations to other components. This leads to a better decomposition process and more manageable systems. Disadvantages of this style are mainly two: each object needs to know the identity of an object if it wants to invoke operations on it, and shared objects need to be carefully designed in order to ensure the consistency of their state. 4.1.1.2.4 Virtual Architectures The virtual machine class of architectural styles is characterized by the presence of an abstract execution environment (generally referred as a virtual machine) that simulates features that are not available in the hardware or software. Applications and systems are implemented on top of this layer and become portable over different hardware and software environments as long as there is an implementation of the virtual machine they interface with. The general interaction flow for systems implementing this pattern is the following: the program (or the application) defines its operations and state in an abstract format, which is interpreted by the virtual machine engine. The interpretation of a program constitutes its execution. It is quite common in this scenario that the engine maintains an internal representation of the program state. Very popular examples within this category are rule-based systems, interpreters, and command-language processors.


Virtual machine architectural styles are characterized by an indirection layer between applications and the hosting environment. This design has the major advantage of decoupling applications from the underlying hardware and software environment, but at the same time it introduces some disadvantages, such as a slowdown in performance. Other issues might be related to the fact that, by providing a virtual execution environment, specific features of the underlying system might not be accessible. 1. Rule-based style: This  architecture is characterized by representing the abstract execution environment as an inference engine. Programs are expressed in the form of rules or predicates that hold true. The input data for applications is generally represented by a set of assertions or facts that the inference engine uses to activate rules or to apply predicates, thus transforming data. The output can either be the product of the rule activation or a set of assertions that holds true for the given input data. The set of rules or predicates identifies the knowledge base that can be queried to infer properties about the system. This approach is quite peculiar, since it allows expressing a system or a domain in terms of its behavior rather than in terms of the components. Rule-based systems are very popular in the field of artificial intelligence. Practical applications can be found in the field of process control, where rule-based systems are used to monitor the status of physical devices by being fed from the sensory data collected and processed by PLCs1 and by activating alarms when specific conditions on the sensory data apply. Another interesting use of rule-based systems can be found in the networking domain: network intrusion detection systems (NIDS) often rely on a set of rules to identify abnormal behaviors connected to possible intrusions in computing systems. 2. Interpreter style: The  core feature of the interpreter style is the presence of an engine that is used to interpret a pseudo-program expressed in a format acceptable for the interpreter. The interpretation of the pseudo-program constitutes the execution of the program itself. Systems modeled according to this style exhibit four main components: the interpretation engine that executes the core activity of this style, an internal memory that contains the pseudo-code to be interpreted, a representation of the current state of the engine, and a representation of the current state of the program being executed. This model is quite useful in designing virtual machines for high-level programming (Java, C#) and scripting languages (Awk, PERL, and so on). Within this scenario, the virtual machine closes the gap between the end-user abstractions and the software/hardware environment in which such abstractions are executed. 4.1.1.2.5 Independent Components This  class of architectural style models systems in terms of independent components that have their own life cycles, which interact with each other to perform their activities. There  are two major categories within this class—communicating processes and event systems—which differentiate in the way the interaction among components is managed. 1. Communicating processes: In this architectural style, components are represented by independent processes that leverage IPC facilities for coordination management. 
This  is an abstraction that is quite suitable to modeling distributed systems that, being distributed over a network of computing nodes, are necessarily composed of several concurrent processes. Each of the processes provides other processes with services and can leverage the services exposed by the other processes. The conceptual


organization of these processes and the way in which the communication happens vary according to the specific model used, either peer-to-peer or client/server. Connectors are identified by IPC facilities used by these processes to communicate. 2. Event systems: In this architectural style, the components of the system are loosely coupled and connected. In  addition to exposing operations for data and state manipulation, each component also publishes (or announces) a collection of events with which other components can register. In general, other components provide a callback that will be executed when the event is activated. During the activity of a component, a specific runtime condition can activate one of the exposed events, thus triggering the execution of the callbacks registered with it. Event activation may be accompanied by contextual information that can be used in the callback to handle the event. This information can be passed as an argument to the callback or by using some shared repository between components. Event-based systems have become quite popular, and support for their implementation is provided either at the API level or the programming language level. The main advantage of such an architectural style is that it fosters the development of open systems: new modules can be added and easily integrated into the system as long as they have compliant interfaces for registering to the events. This architectural style solves some of the limitations observed for the top-down and object-oriented styles: • The invocation pattern is implicit, and the connection between the caller and the callee is not  hard-coded; this gives a lot of flexibility since addition or removal of a handler to events can be done without changes in the source code of applications. • The event source does not need to know the identity of the event handler in order to invoke the callback. The disadvantage of such a style is that it relinquishes control over system computation. When a component triggers an event, it does not  know how many event handlers will be invoked and whether there are any registered handlers. This  information is available only at runtime and, from a static design point of view, becomes more complex to identify the connections among components and to reason about the correctness of the interactions. 4.1.1.3 Technologies for Distributed Computing Middleware is system services software that executes between the operating system layer and the application layer and provides services. It connects two or more applications, thus providing connectivity and interoperability to the applications. Middleware is not  a silver bullet that will solve all integration problems. Due to overhyping in the 1980s and early 1990s, the term middleware has lost popularity but is coming back in the last few years. The  middleware concept, however, is today even more important for integration, and all integration projects will have to use one or many different middleware solutions. Middleware is mainly used to denote products that provide glue between applications, which is distinct from simple data import and export functions that might be built into the applications themselves. All forms of middleware are helpful in easing the communication between different software applications. The  selection of middleware influences the application architecture, because middleware centralizes the software infrastructure and its deployment.
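The event-based style described in Section 4.1.1.2.5 above can be reduced to a few lines of Java. The EventBus class and its register/publish methods are illustrative assumptions, not an existing middleware API; the sketch shows how a publisher can trigger callbacks without knowing the identity, or even the number, of the registered handlers.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Minimal event system: components register callbacks against named events and
    // are invoked whenever those events are published.
    public class EventBus {
        private final Map<String, List<Consumer<String>>> handlers = new ConcurrentHashMap<>();

        public void register(String event, Consumer<String> callback) {
            handlers.computeIfAbsent(event, k -> new CopyOnWriteArrayList<>()).add(callback);
        }

        public void publish(String event, String payload) {
            // Contextual information is passed to each callback as an argument.
            for (Consumer<String> callback : handlers.getOrDefault(event, List.of())) {
                callback.accept(payload);
            }
        }

        public static void main(String[] args) {
            EventBus bus = new EventBus();
            bus.register("order.created", payload -> System.out.println("billing saw " + payload));
            bus.register("order.created", payload -> System.out.println("shipping saw " + payload));
            bus.publish("order.created", "order-1001");
        }
    }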


Middleware introduces an abstraction layer in the system architecture and thus reduces the complexity considerably. On the other hand, each middleware product introduces a certain communication overhead into the system, which can influence performance, scalability, throughput, and other efficiency factors. This  is important to consider when designing the integration architecture, particularly if our systems are mission critical and are used by a large number of concurrent clients. Middleware is connectivity software that is designed to help manage the complexity and heterogeneity inherent in distributed systems by building a bridge between different systems, thereby enabling communication and transfer of data. Middleware could be defined as a layer of enabling software services that allow application elements to interoperate across network links, despite differences in underlying communications protocols, system architectures, operating systems, databases, and other application services. The role of middleware is to ease the task of designing, programming, and managing distributed applications by providing a simple, consistent, and integrated distributed programming environment. Essentially, middleware is a distributed software layer, or platform, that lives above the operating system and abstracts over the complexity and heterogeneity of the underlying distributed environment with its multitude of network technologies, machine architectures, operating systems, and programming languages. The middleware layers are interposed between applications and Internet transport protocols. The middleware abstraction comprises two layers. The bottom layer is concerned with the characteristics of protocols for communicating between processes in a distributed system and how the data objects, for example, a customer order, and data structures used in application programs can be translated into a suitable form for sending messages over a communications network, taking into account that different computers may rely on heterogeneous representations for simple data items. The  layer above is concerned with  interprocess communication mechanisms, while the layer above that is concerned with non-message- and message-based forms of middleware. Message-based forms of middleware provide asynchronous messaging and event notification mechanisms to exchange messages or react to events over electronic networks. Non-message-based forms of middleware provide synchronous communication mechanisms designed to support client–server communication. Middleware uses two basic modes of message communication: 1. Synchronous or time dependent: The defining characteristic of a synchronous form of execution is that message communication is synchronized between two communicating application systems, which must both be up and running, and that execution flow at the client’s side is interrupted to execute the call. Both sending and receiving applications must be ready to communicate with each other at all times. A sending application initiates a request (sends a message) to a receiving application. The sending application then blocks its processing until it receives a response from the receiving application. The receiving application continues its processing after it receives the response. 2. Asynchronous or time independent: With asynchronous communication, an application sends (requestor or sender) a request to another while it continues its own processing activities. 
The sending application does not have to wait for the receiving application to complete and for its reply to come back. Instead, it can continue processing other requests. Unlike the synchronous mode, both application systems (sender and receiver) do not have to be active at the same time for processing to occur.
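A minimal sketch of the asynchronous, time-independent mode is shown below, using a standard Java blocking queue as a stand-in for a message-oriented middleware queue; the message contents are invented for the example.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Asynchronous messaging: the sender enqueues requests and continues working;
    // the receiver drains the queue at its own pace, and neither blocks on the other.
    public class AsyncMessagingDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // stand-in for a middleware queue

            Thread sender = new Thread(() -> {
                for (int i = 1; i <= 3; i++) {
                    queue.offer("request-" + i);          // does not wait for a reply
                    System.out.println("sent request-" + i + ", continuing other work");
                }
            });

            Thread receiver = new Thread(() -> {
                try {
                    for (int i = 1; i <= 3; i++) {
                        String message = queue.take();    // processed when the receiver is ready
                        System.out.println("processed " + message);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            receiver.start();
            sender.start();
            sender.join();
            receiver.join();
        }
    }

In the synchronous mode, by contrast, the sender would block on every request until the receiver returned its response.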


The basic messaging processes inherently utilize asynchronous communication. There are several benefits to asynchronous messaging: 1. Asynchronous messaging clients can proceed with application processing independently of other applications. Loose coupling of senders and receivers optimizes system processing by not having to block sending client processing while waiting for the receiving client to complete the request. 2. Asynchronous messaging allows batch and parallel processing of messages. The sending client can send as many messages to receiving clients without having to wait for the receiving clients to process previously sent messages. On the receiving end, different receiving clients can process the messages at their own speed and timing. 3. There is less demand on the communication network because the messaging clients do not have to be connected to each other or the MOM while messages are processed. Connections are active only to put messages to the MOM and get messages from the MOM. 4. The network does not have to be available at all times because of timing independence of client processing. Messages can wait in the queue of the receiving client if the network is not  available. MOM implements asynchronous message queues at its core. It can concurrently service many sending and receiving applications. Despite the performance drawbacks, synchronous messaging has several benefits over asynchronous messaging. The tightly coupled nature of synchronous messaging means the sending client can better handle application errors in the receiving client. If an error occurs in the receiving client, the sending client can try to compensate for the error. This is especially important when the sending client requests a transaction to be performed in the receiving client. The better error handling ability of synchronous messaging means it is easier for programmers to develop synchronous messaging solutions. Since both the sending and receiving clients are online and connected, it is easier for programmers to debug errors that might occur during the development stage. Since most developers are also more familiar with programming using synchronous processing, this also facilities the development of synchronous messaging solutions over asynchronous messaging solutions. When speaking of middleware products, we encompass a large variety of technologies. The most common forms of middleware are as follows: 1. Database access technologies • Microsoft Open Database Connectivity (ODBC) • Java Database Connectivity (JDBC) 2. Asynchronous Middleware • Store and Forward Messaging • Publish/Subscribe Messaging • Point-to-Point Queuing • Event-driven Processing Mechanism


3. Synchronous Middleware • Remote Procedure Call (RPC) • Remote Method Invocation (RMI) 4. Message-oriented Middleware (MOM) • Integration Brokers • Java Messaging Service (JMS) 5. Request/Reply Messaging Middleware 6. Transaction Processing Monitors (TPMs) 7. Object Request Brokers (ORBs) • OMG CORBA ORB compliant • Java RMI and RMI-IIOP (Internet Inter-ORB Protocol) • Microsoft COM/DCOM/COM+/.NET Remoting/WCF 8. Application Servers 9. Service-Oriented Architecture • Web Services using WSDL, UDDI, SOAP and XML • RESTful Services 10. Enterprise Service Buses (ESBs) 11. Enterprise Systems For  a detailed discussion on the above technologies, refer to Chapter 5 Integration Technologies of the companion volume Guide to Cloud Computing for Business and Technology Managers (Kale 2015).
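Of the middleware families listed above, database access technologies are the most compact to illustrate. In the JDBC sketch below, the connection URL, credentials, table, and column names are invented for the example, and a suitable vendor driver is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // JDBC as database access middleware: the same API is used regardless of vendor;
    // only the driver, URL, and credentials change.
    public class JdbcExample {
        public static void main(String[] args) {
            String url = "jdbc:postgresql://db-host:5432/sales";   // hypothetical database
            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT name FROM customer WHERE region = ?")) {
                stmt.setString(1, "EMEA");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }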

4.2 Distributed Databases A distributed database is a logically interrelated collection of shared data (and a description of this data), physically distributed over a computer network. The distributed database management system (DDBMS) is the software that transparently manages the distributed database. A DDBMS is distinct from distributed processing, where a centralized DBMS is accessed over a network. It is also distinct from a parallel DBMS, which is a DBMS running across multiple processors and disks and which has been designed to evaluate operations in parallel, whenever possible, in order to improve performance. The advantages of a DDBMS are that it is enabled to reflect the organizational structure; it makes remote data more shareable; it improves reliability, availability, and performance; it may be more economical; it provides for modular growth, facilitates integration, and helps organizations remain competitive. The major disadvantages are cost, complexity, lack of standards, and experience. A  DDBMS may be classified as homogeneous or heterogeneous. In  a homogeneous system, all sites use the same DBMS product. In a heterogeneous system, sites may run different DBMS products, which need not be based on the same underlying data model, and so the system may be composed of relational, network, hierarchical, and object-oriented


DBMSs. A multidatabase system (MDBS) is a distributed DBMS in which each site maintains complete autonomy. An MDBS resides transparently on top of existing database and file systems and presents a single database to its users. It maintains a global schema against which users issue queries and updates; an MDBS maintains only the global schema and the local DBMSs themselves maintain all user data. Communication takes place over a network, which may be a local area network (LAN) or a wide area network (WAN). LANs are intended for short distances and provide faster communication than WANs. A  special case of the WAN is a metropolitan area network (MAN), which generally covers a city or suburb. As well as having the standard functionality expected of a centralized DBMS, a DDBMS needs extended communication services, extended system catalog, distributed query processing, and extended security, concurrency, and recovery services. Network design and performance issues are critical for an efficient operation of a distributed database management system (DDBS) and are an integral part of the overall solution. The nodes may all be located in the same campus and connected via a local area network, or they may be geographically distributed over large distances and connected via a long-haul or wide area network. Local area networks typically use wireless hubs or cables, whereas long-haul networks use telephone lines, cables, wireless communication infrastructures, or satellites. The DDBMS appears like a centralized DBMS by providing a series of transparencies. With distribution transparency, users should not know that the data has been fragmented/ replicated. With transaction transparency, the consistency of the global database should be maintained when multiple users are accessing the database concurrently and when failures occur. With performance transparency, the system should be able to efficiently handle queries that reference data at more than one site. A  relation may be divided into a number of subrelations called fragments, which are allocated to one or more sites. Fragments may be replicated to provide improved availability and performance. There are two main types of fragmentation: horizontal and vertical. Horizontal fragments are subsets of tuples and vertical fragments are subsets of attributes. The definition and allocation of fragments are carried out strategically to achieve locality of reference, improved reliability and availability, acceptable performance, balanced storage capacities and costs, and minimal communication costs. 4.2.1 Characteristics of Distributed Databases 4.2.1.1 Transparency A highly transparent system offers a lot of flexibility to the end user and application developer since it implies being oblivious of the underlying details on their part. The concept of transparency extends the general idea of hiding implementation details from end users. In the case of a traditional centralized database, transparency simply pertains to logical and physical data independence for application developers. However, in a DDB scenario, the data and software are distributed over multiple nodes connected by a computer network, so additional types of transparencies are introduced. 1. Distribution or network transparency: This  refers to freedom for the user from the operational details of the network and the placement of the data in the distributed system.


Distribution transparency can be of two types: a. Naming transparency implies that once a name is associated with an object, the named objects can be accessed unambiguously without additional specification as to where the data is located. b. Location transparency refers to the fact that the command used to perform a task is independent of the location of the data and the location of the node where the command was issued. 2. Fragmentation transparency: Fragmentation transparency can be of two types: a. Horizontal fragmentation distributes a relation (table) into subrelations that are subsets of the tuples (rows) in the original relation which is also known as sharding in the newer big data and cloud computing systems. b. Vertical fragmentation distributes a relation into subrelations where each subrelation is defined by a subset of the columns of the original relation. Fragmentation transparency makes the user unaware of the existence of fragments. 3. Replication transparency implies copies of the same data objects may be stored at multiple sites for better availability, performance, and reliability. Replication transparency makes the user unaware of the existence of these copies. 4.2.1.2 Availability and Reliability Reliability and availability are two of the most common potential advantages cited for distributed databases. Reliability is broadly defined as the probability that a system is running (not down) at a certain time point, whereas availability is the probability that the system is continuously available during a time interval. Reliability and availability of the database are directly related to the faults, errors, and failures associated with it. Fault is the cause of an error. Errors constitute that subset of system states that causes the failure. A failure can be described as a deviation of a system’s behavior from that which is specified in order to ensure correct execution of operations. A  reliable DDBMS tolerates failures of underlying components, and it processes user requests as long as database consistency is not violated. A  DDBMS system can adopt several strategies to achieve the objective of system reliability: 1. A  cure-oriented fault tolerance strategy recognizes that faults will occur, and it designs mechanisms that can detect and remove faults before they can result in a system failure. 2. A  prevention-oriented strategy attempts to ensure that the final system does not contain any faults. This is achieved through an exhaustive design process followed by extensive quality control and testing. A DDBMS recovery manager deals with failures arising from transactions, hardware, and communication networks. Hardware failures can either be those that result in loss of main memory contents or loss of secondary storage contents. Network failures occur due to errors associated with messages and line failures. Message errors can include their loss, corruption, or out-of-order arrival at destination.
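Horizontal fragmentation (sharding), introduced in the transparency discussion above, can be sketched with a simple router that sends each row to the site holding its fragment; the site names and the region-based fragmentation rule are illustrative assumptions.

    import java.util.List;
    import java.util.Map;

    // Horizontal fragmentation in miniature: each CUSTOMER row is stored at the site
    // responsible for its region, so queries local to a region stay local.
    public class FragmentRouter {
        // Illustrative allocation of fragments to sites.
        private static final Map<String, String> SITE_FOR_REGION = Map.of(
                "EMEA", "site-london",
                "APAC", "site-singapore",
                "AMER", "site-newyork");

        static String siteFor(String region) {
            return SITE_FOR_REGION.getOrDefault(region, "site-newyork");
        }

        public static void main(String[] args) {
            List<String[]> customers = List.of(
                    new String[] {"Acme", "EMEA"},
                    new String[] {"Globex", "APAC"});
            for (String[] customer : customers) {
                // A vertical fragment would instead split columns, e.g. keeping
                // (id, name) at one site and (id, credit_limit) at another.
                System.out.printf("store row %s at %s%n", customer[0], siteFor(customer[1]));
            }
        }
    }

Fragmentation transparency means that application code would not contain routing logic like this at all; the DDBMS performs the equivalent mapping internally.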


4.2.1.3 Scalability and Partition Tolerance Scalability determines the extent to which the system can expand its capacity while continuing to operate without interruption. Scalability can be of two types: 1. Horizontal scalability: This refers to expanding the number of nodes in the distributed system. As nodes are added to the system, some of the data and processing loads are distributed from existing nodes to the new nodes. 2. Vertical scalability: This refers to expanding the capacity of the individual nodes in the system, such as expanding the storage capacity or the processing power of a node. The concept of partition tolerance states that the system should have the capacity to continue operating while the network is partitioned. As the system expands its number of nodes, it is possible that the network, which connects the nodes, may have faults that cause the nodes to be partitioned into groups of nodes. The nodes within each partition are still connected by a subnet-work, but communication among the partitions is lost. 4.2.1.4 Autonomy Autonomy determines the extent to which individual nodes or databases in a connected DDB can operate independently. A high degree of autonomy is desirable for increased flexibility and customized maintenance of an individual node. Autonomy can be of three types: 1. Design autonomy refers to independence of data model usage and transaction management techniques among nodes. 2. Communication autonomy determines the extent to which each node can decide on sharing of information with other nodes. 3. Execution autonomy refers to independence of users to act as they please. 4.2.2 Advantages and Disadvantages of Distributed Databases 4.2.2.1 Advantages of Distributed Databases 1. Improved ease and flexibility of application development: Developing and maintaining applications at geographically distributed sites of an organization is facilitated due to transparency of data distribution and control. 2. Increased availability: This is achieved by the isolation of faults to their site of origin without affecting the other database nodes connected to the network. When the data and DDBMS software are distributed over many sites, one site may fail while other sites continue to operate. Only the data and software that exist at the failed site cannot be accessed. Further improvement is achieved by judiciously replicating data and software at more than one site. In a centralized system, failure at a single site makes the whole system unavailable to all users. In  a distributed database, some of the data may be unreachable, but users may still be able to access other parts of the database. If the data in the failed site has been replicated at another site prior to the failure, then the user will not be affected at all. The ability of the system to survive network partitioning also contributes to high availability.


3. Improved performance: A  distributed DBMS fragments the database by keeping the data closer to where it is needed most. Data localization reduces the contention for CPU and I/O services and simultaneously reduces access delays involved in wide area networks. When a large database is distributed over multiple sites, smaller databases exist at each site. As a result, local queries and transactions accessing data at a single site have better performance because of the smaller local data-bases. In addition, each site has a smaller number of transactions executing than if all transactions are submitted to a single centralized database. Moreover, inter-query and intra-query parallelism can be achieved by executing multiple queries at different sites, or by breaking up a query into a number of subqueries that execute in parallel. This contributes to improved performance. 4. Easier expansion via scalability: In  a distributed environment, expansion of the system in terms of adding more data, increasing database sizes, or adding more nodes is much easier than in centralized (non-distributed) systems. 4.2.2.2 Disadvantages of Distributed Databases 4.2.2.2.1 More Complex Treatment of Concurrent Update Concurrent update in a distributed database is treated basically the same way it is treated in nondistributed databases. A  user transaction acquires locks, and the locking is twophase. (Locks are acquired in a growing phase, during which time no locks are released and the DDBMS applies the updates. All locks are released during the shrinking phase.) The DDBMS detects and breaks deadlocks, and then the DDBMS rolls back interrupted transactions. The primary distinction lies not in the kinds of activities that take place but in the additional level of complexity that gets created by the very nature of a distributed database. If all the records to be updated by a particular transaction occur at one site, the problem is essentially the same as in a nondistributed database. However, the records in a distributed data-base might be stored at many different sites. Furthermore, if the data is replicated, each occurrence might be stored at several sites, each requiring the same update to be performed. Assuming each record occurrence has replicas at three different sites, an update that would affect 5 record occurrences in a nondistributed system might affect 20 different record occurrences in a distributed system (each record occurrence together with its three replica occurrences). Having more record occurrences to update is only part of the problem. Assuming each site keeps its own locks, the DDBMS must send many messages for each record to be updated: a request for a lock, a message indicating that the record is already locked by another user or that the lock has been granted, a message directing that the update be performed, an acknowledgment of the update, and, finally, a message indicating that the record is to be unlocked. Because all those messages must be sent for each record and its occurrences, the total time for an update can be substantially longer in a distributed database. A partial solution to minimize the number of messages involves the use of the primary copy mentioned earlier. Recall that one of the replicas of a given record occurrence is designated as the primary copy. Locking the primary copy, rather than all copies, is sufficient and will reduce the number of messages required to lock and unlock records. 
The number of messages might still be large, however, and the unavailability of the primary copy can cause an entire transaction to fail. Thus, even this partial solution presents problems.
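The message overhead described above can be estimated with a small sketch. The per-record message count of five and the three replicas per record follow the example in the text, but the totals are illustrative only, since the exact figures depend on the locking protocol actually used.

```python
# Rough estimate of update-related messages under two locking strategies.
# The per-copy message count (5) and replica count (3) mirror the example
# in the text and are assumptions, not properties of any particular DDBMS.

MESSAGES_PER_LOCKED_COPY = 5   # lock request, lock reply, update, ack, unlock
REPLICAS_PER_RECORD = 3        # replica copies in addition to the record itself

def messages_all_copies(records: int, replicas: int = REPLICAS_PER_RECORD) -> int:
    """Every copy of every record is locked and updated individually."""
    occurrences = records * (1 + replicas)        # each record plus its replicas
    return occurrences * MESSAGES_PER_LOCKED_COPY

def messages_primary_copy(records: int, replicas: int = REPLICAS_PER_RECORD) -> int:
    """Only the primary copy is locked; replicas still receive the update and ack."""
    locking = records * MESSAGES_PER_LOCKED_COPY   # lock traffic for primaries only
    propagation = records * replicas * 2           # update + ack per non-primary copy
    return locking + propagation

print(messages_all_copies(5))     # 5 records -> 20 occurrences -> 100 messages
print(messages_primary_copy(5))   # 25 locking messages + 30 propagation messages = 55
```

The comparison shows why primary-copy locking helps, and also why it is only a partial solution: the propagation traffic remains, and the primary copy becomes a single point of failure.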


Just as in a nondistributed database, deadlock is a possibility in a distributed database. In a distributed database, however, deadlock is more complicated because two types of deadlock are possible: 1. Local deadlock is deadlock that occurs at a single site in a distributed database. If each of two transactions is waiting for a record held by the other at the same site, the local DBMS can detect and resolve the deadlock with a minimum number of messages needed to communicate the situation to the other DBMSs in the distributed system. 2. Global deadlock involves one transaction that requires a record held by a second transaction at one site, while the second transaction requires a record held by the first transaction at a different site. In this case, neither site has information individually to allow this deadlock to be detected; this is a global deadlock, and it can be detected and resolved only by sending a large number of messages between the DBMSs at the two sites. 4.2.2.2.2 Update of Replicated Data Replicating data can improve processing speed and ensure that the overall system remains available even when the database at one site is unavailable. However, replication can cause update problems, most obviously in terms of the extra time needed to update all the copies. Instead of updating a single copy of the data, the DBMS must update several copies. Because most of these copies are at sites other than the site initiating the update, each update transaction requires extra time to update each copy and extra time to communicate all the update messages over the network. Often a DDBMS designates one copy of the data to be the primary copy. As long as the primary copy is updated, the DDBMS considers the update to be complete. The primary site and the DDBMS must ensure that all the other copies are in sync. The primary site sends update transactions to the other sites and notes whether any sites are currently unavailable. If a site is unavailable, the primary site must try to send the update again at some later time and continue trying until it succeeds. This strategy overcomes the basic problem, but it obviously uses more time. Further, if the primary site is unavailable, the problem would remain unresolved. 4.2.2.2.3 More Complex Query Processing Processing queries is more complex in a distributed database. The  complexity occurs because of the difference in the time it takes to send messages between sites and the time it takes to access a disk. Systems that are record-at-a-time-oriented can create severe performance problems in distributed systems. If the only choice is to transmit every record from one site to another site as a message and then examine it at the other site, the communication time required can become unacceptably high. DDBMSs that permit a request for a set of records, as opposed to an individual record, outperform record-at-a-time systems. 4.2.2.2.4 More Complex Recovery Measures Although the basic recovery process for a distributed data-base is the same as for centralized database, there is an additional potential problem. To make sure that the database remains consistent, each database update should be made permanent or aborted and undone, in which case, none of its changes will be made. In a distributed database,


with an individual transaction updating several local databases, it is possible—because of problems affecting individual sites—for local DBMSs to commit the updates at some sites and undo the updates at other sites, thereby creating an inconsistent state in the distributed database. The DDBMS must not allow this inconsistency to occur. A DDBMS usually prevents this potential inconsistency through the use of two-phase commit. The basic idea of two-phase commit is that one site, often the site initiating the update, acts as coordinator. In the first phase, the coordinator sends messages to all other sites requesting that they prepare to update the database; in other words, each site acquires all necessary locks. The sites do not update at this point, however, but they do send messages to the coordinator stating that they are ready to update. If for any reason any site cannot secure the necessary locks or if any site must abort its updates, the site sends a message to the coordinator that all sites must abort the transaction. The coordinator waits for replies from all sites involved before determining whether to commit the update: 1. If all replies are positive, the coordinator sends a message to each site to commit the update. At this point, each site must proceed with the commit process. 2. If any reply is negative, the coordinator sends a message to each site to abort the update, and each site must follow this instruction. In this way, the DDBMS guarantees consistency. While a process similar to two-phase commit is essential to the consistency of the database, two problems are associated with it. For one thing, many messages are sent during the process. For another, during the second phase, each site must follow the instructions from the coordinator; otherwise, the process will not accomplish its intended result. This process means that the sites are not as independent as you would like them to be. 4.2.2.2.5 More Complicated Security and Backup Requirements With a single central database, you need to secure the central physical site, the central database, and the network connecting users to the database at the central site. The security requirements for a distributed database are more demanding, requiring you to secure every physical site and every database, in addition to securing the network. Backing up a distributed database is also more complicated and is best initiated and controlled from a single site. 4.2.2.2.6 More Difficult Management of the Data Dictionary A distributed database introduces further complexity to the management of the data dictionary or catalog. The choice of storing the data dictionary can have three options: 1. Choose one site and store the complete data dictionary at this site and this site alone 2. Store a complete copy of the data dictionary at each site 3. Distribute, possibly with replication, the data dictionary entries among the various sites Although storing the complete data dictionary at a single site is a relatively simple approach to administer, retrieving information in the data dictionary from any other site is more time-consuming because of the communication involved. Storing a complete copy


of the data dictionary at every site solves the retrieval problem because a local DBMS can handle any retrieval locally. Because this second approach involves total replication (every data dictionary occurrence is replicated at every site), updates to the data dictionary are more time-consuming. If the data dictionary is updated with any frequency, the extra time needed to update all copies of the data dictionary might be unacceptable. One intermediate strategy is to partition the data by storing data dictionary entries at the site at which the data they describe are located. Interestingly, this approach also suffers from a problem. If a user queries the data dictionary to access an entry not stored at the user’s site, the system has no way of knowing the entry’s location. Satisfying this user’s query might involve sending a message to every other site, which involves a considerable amount of network and DDBMS overhead. 4.2.2.2.7 More Complex Database Design A distributed database adds another level of complexity to database design. Distributing data does not  affect the information-level design. During the physical-level design in a nondistributed database, disk activity—both the number of disk accesses and the volumes of data to be transported—is one of the principal concerns. Although disk activity is also a factor in a distributed database, communication activity becomes another concern during the physical-level design. Because transmitting data from one site to another is much slower than transferring data to and from disk, in many situations, communication activity is the most important physical-level design factor. In addition, you must consider possible fragmentation and replication during the physical-level design. 4.2.3 Data Replication and Allocation Replication is useful in improving the availability of data. The most extreme case is replication of the whole database at every site in the distributed system, thus creating a fully replicated distributed database. This can improve availability remarkably because the system can continue to operate as long as at least one site is up. It also improves performance of retrieval (read performance) for global queries because the results of such queries can be obtained locally from any one site; hence, a retrieval query can be processed at the local site where it is submitted, if that site includes a server module. The disadvantage of full replication is that it can slow down update operations (write performance) drastically, since a single logical update must be performed on every copy of the database to keep the copies consistent. This is especially true if many copies of the database exist. Full replication makes the concurrency control and recovery techniques more expensive than they would be if there was no replication. The  other extreme from full replication involves having no replication—that is, each fragment is stored at exactly one site. In this case, all fragments must be disjoint, except for the repetition of primary keys among vertical (or mixed) fragments. This is also called nonredundant allocation. Between these two extremes, we have a wide spectrum of partial replication of the data—that is, some fragments of the database may be replicated whereas others may not. The  number of copies of each fragment can range from one up to the total number of sites in the distributed system. 
A special case of partial replication is occurring heavily in applications where mobile workers—such as sales forces, financial planners, and claims adjustors—carry partially replicated databases with them on laptops and PDAs and synchronize them periodically with the server database. A description of the replication of fragments is sometimes called a replication schema.
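A replication schema of the kind just described can be written down very compactly. The fragments, sites, and degree of replication below are invented purely to illustrate how partial replication sits between the two extremes of full and no replication.

```python
# A toy replication schema: each fragment is mapped to the set of sites
# holding a copy. Fragment and site names are invented for illustration.

replication_schema = {
    "CUSTOMER_EU":   {"paris", "frankfurt"},                # partially replicated
    "CUSTOMER_APAC": {"singapore"},                         # nonredundant allocation
    "PRICE_LIST":    {"paris", "frankfurt", "singapore"},   # fully replicated
}

def sites_for(fragment: str) -> set[str]:
    """Sites at which a query on this fragment can be answered locally."""
    return replication_schema[fragment]

def write_fanout(fragment: str) -> int:
    """Number of copies a single logical update must reach."""
    return len(replication_schema[fragment])

# Reads on PRICE_LIST can be served at any of the three sites,
# but every update to it must be applied three times.
assert sites_for("CUSTOMER_APAC") == {"singapore"}
assert write_fanout("PRICE_LIST") == 3
```

The trade-off discussed in the text is visible directly in this mapping: widening a fragment's site set improves read locality and availability while increasing its write fan-out.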


Each fragment—or each copy of a fragment—must be assigned to a particular site in the distributed system. This process is called data distribution (or data allocation). The choice of sites and the degree of replication depend on the performance and availability goals of the system and on the types and frequencies of transactions submitted at each site. For example, if high availability is required, transactions can be submitted at any site, and most transactions are retrieval only, a fully replicated database is a good choice. However, if certain transactions that access particular parts of the database are mostly submitted at a particular site, the corresponding set of fragments can be allocated at that site only. Data that is accessed at multiple sites can be replicated at those sites. If many updates are performed, it may be useful to limit replication. Finding an optimal or even a good solution to distributed data allocation is a complex optimization problem. 4.2.4 Concurrency Control and Recovery in Distributed Databases For concurrency control and recovery purposes, numerous problems arise in a distributed DBMS environment that are not encountered in a centralized DBMS environment. These include the following: • Dealing with multiple copies of the data items: The  concurrency control method is responsible for maintaining consistency among these copies. The recovery method is responsible for making a copy consistent with other copies if the site on which the copy is stored fails and recovers later. • Failure of individual sites: The DDBMS should continue to operate with its running sites, if possible, when one or more individual sites fail. When a site recovers, its local database must be brought up-to-date with the rest of the sites before it rejoins the system. • Failure of communication links: The system must be able to deal with the failure of one or more of the communication links that connect the sites. An extreme case of this problem is that network partitioning may occur. This breaks up the sites into two or more partitions, where the sites within each partition can communicate only with one another and not with sites in other partitions. • Distributed commit: Problems can arise with committing a transaction that is accessing databases stored on multiple sites if some sites fail during the commit process. The  two-phase commit protocol is often used to deal with this problem. • Distributed deadlock: Deadlock may occur among several sites, so techniques for dealing with deadlocks must be extended to take this into account. Distributed concurrency control and recovery techniques must deal with these and other problems. In the following subsections, we review some of the techniques that have been suggested to deal with recovery and concurrency control in DDBMSs. To deal with replicated data items in a distributed database, a number of concurrency control methods have been proposed that extend the concurrency control techniques that are used in centralized databases. We discuss these techniques in the context of extending centralized locking. Similar extensions apply to other concurrency control techniques. The  idea is to designate a particular copy of each data item as a distinguished copy. The locks for this data item are associated with the distinguished copy, and all locking and unlocking requests are sent to the site that contains that copy.
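A minimal sketch of the distinguished-copy idea follows. The site names, item names, and in-memory lock tables are assumptions made for illustration; in a real DDBMS these requests travel as messages to the site holding the distinguished copy.

```python
# Distinguished-copy locking: every data item has one copy designated as the
# distinguished (primary) copy, and all lock/unlock requests for that item
# are routed to the site holding that copy.

distinguished_copy_site = {"account_42": "site_A", "order_17": "site_B"}

# One lock table per site: item -> transaction currently holding the lock.
lock_tables: dict[str, dict[str, str]] = {"site_A": {}, "site_B": {}}

def request_lock(item: str, txn: str) -> bool:
    """Route the lock request to the site of the item's distinguished copy."""
    site = distinguished_copy_site[item]
    table = lock_tables[site]
    if item in table and table[item] != txn:
        return False               # already locked by another transaction
    table[item] = txn
    return True

def release_lock(item: str, txn: str) -> None:
    site = distinguished_copy_site[item]
    if lock_tables[site].get(item) == txn:
        del lock_tables[site][item]

assert request_lock("account_42", "T1") is True
assert request_lock("account_42", "T2") is False   # T1 holds the distinguished copy's lock
release_lock("account_42", "T1")
assert request_lock("account_42", "T2") is True
```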


A number of different methods are based on this idea, but they differ in their method of choosing the distinguished copies. In the primary site technique, all distinguished copies are kept at the same site. A modification of this approach is the primary site with a backup site. Another approach is the primary copy method, where the distinguished copies of the various data items can be stored in different sites. A site that includes a distinguished copy of a data item basically acts as the coordinator site for concurrency control on that item. We discuss these techniques next. 4.2.4.1 Distributed Recovery The recovery process in distributed databases is quite involved. We give only a very brief idea of some of the issues here. In some cases it is difficult even to determine whether a site is down without exchanging numerous messages with other sites. For example, suppose that site X sends a message to site Y and expects a response from Y but does not receive it. There are several possible explanations: • The message was not delivered to Y because of communication failure. • Site Y is down and could not respond. • Site Y is running and sent a response, but the response was not delivered. Without additional information or the sending of additional messages, it is difficult to determine what actually happened. Another problem with distributed recovery is distributed commit. When a transaction is updating data at several sites, it cannot commit until it is sure that the effect of the transaction on every site cannot be lost. This means that every site must first have recorded the local effects of the transactions permanently in the local site log on disk. The two-phase commit protocol is often used to ensure the correctness of distributed commit. 4.2.5 Rules for Distributed Databases C.J. Date (Date, C.J. “Twelve Rules for a Distributed Database.” ComputerWorld 21.23, June 8, 1987) formulated 12 rules that distributed databases should follow. The basic goal is that a distributed database should feel like a nondistributed database to users; that is, users should not be aware that the database is distributed. The 12 rules serve as a benchmark against which you can measure DDBMSs. The 12 rules are as follows: 1. Local autonomy: No site should depend on another site to perform its database functions. 2. No reliance on a central site: The  DDBMS should not  rely on a single central site to control specific types of operations. These operations include data dictionary management, query processing, update management, database recovery, and concurrent update. 3. Continuous operation: Performing functions such as adding sites, changing versions of DBMSs, creating backups, and modifying hardware should not require planned shutdowns of the entire distributed database. 4. Location transparency: Users should not be concerned with the location of any specific data in the database. Users should feel as if the entire database is stored at their location.


5. Fragmentation transparency: Users should not be aware of any data fragmentation that has occurred in the database. Users should feel as if they are using a single central database. 6. Replication transparency: Users should not  be aware of any data replication. The  DDBMS should perform all the work required to keep the replicas consistent; users should be unaware of the data synchronization work carried out by the DDBMS. 7. Distributed query processing: You already learned about the complexities of query processing in a distributed database. The DDBMS must process queries as rapidly as possible. 8. Distributed transaction management: You already learned about the complexities of update management in a distributed database and the need for the two-phase commit strategy. The  DDBMS must effectively manage transaction updates at multiple sites. 9. Hardware independence: Organizations usually have many different types of hardware, and a DDBMS must be able to run on this hardware. Without this capability, users are restricted to accessing data stored only on similar computers, disks, and so on. 10. Operating system independence: Even if an organization uses similar hardware, different operating systems might be used within the organization. For  the same reason that it is desirable for a DDBMS to support different types of hardware, a DDBMS must be able to run on different operating systems. 11. Network independence: Because different sites within an organization might use different communications networks, a DDBMS must run on different types of networks and not be restricted to a single type of network. 12. DBMS independence: Another way of stating this requirement is that a DDBMS should be heterogeneous; that is, a DDBMS must support different local DBMSs. Supporting heterogeneous DBMSs is a difficult task. In practice, each local DBMS must “speak” a common language; this common language most likely is SQL.
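Rule 8 relies on the two-phase commit strategy described in Section 4.2.2.2.4. The following sketch is a deliberately simplified, in-memory rendition of the coordinator's two phases; participant names are invented, and the write-ahead logging and timeout handling that a real protocol needs are omitted.

```python
# Simplified two-phase commit. Participants are modeled as objects that vote
# in the prepare phase and then commit or abort; everything is in-memory.

class Participant:
    def __init__(self, name: str, can_commit: bool = True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self) -> bool:          # phase 1: acquire locks, cast a vote
        self.state = "prepared" if self.can_commit else "abort-voted"
        return self.can_commit

    def commit(self) -> None:           # phase 2a
        self.state = "committed"

    def abort(self) -> None:            # phase 2b
        self.state = "aborted"

def two_phase_commit(participants: list[Participant]) -> str:
    # Phase 1: the coordinator asks every site to prepare and collects votes.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only if every vote was positive; otherwise abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

sites = [Participant("paris"), Participant("singapore", can_commit=False)]
print(two_phase_commit(sites))          # -> "aborted", because one site voted no
```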

4.3 Summary

In this chapter, we provided an introduction to distributed computing as a foundation for a better understanding of cloud-enabled business processes. Distributed systems constitute a large umbrella under which several different software systems are classified. The unification of parallel and distributed computing allows one to harness a set of networked and heterogeneous computers and present them as a unified resource. Architectural styles help categorize and provide reference models for distributed systems. More precisely, system architectural styles are concerned with the physical deployment of such systems, whereas software architectural styles define the logical organization of components and their roles. These two styles are the fundamental building blocks of any distributed system. Message-based communication is the most relevant abstraction for interprocess communication (IPC) and forms the basis for several IPC techniques. IPC is a fundamental


element in distributed systems: it ties together separate processes and allows them to be seen as a unified whole. The latter half of the chapter introduced the concepts, characteristics, and issues related to distributed databases, including the use of data replication (to improve system reliability and availability) and concurrency control and recovery techniques. The chapter ends with C.J. Date's twelve rules for distributed databases, which were devised along the lines of Codd's rules for relational database systems.

Section II

Road to Digital Transformation of Enterprise Architecture

This section begins with Chapters 5 and 6, which introduce the overarching concepts of dependability and performability that characterize the enterprise architecture as a whole. In the following eight chapters, it describes the eight attributes of EA: interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability. A parallel track of appendices briefs on auxiliary technologies like integration, virtualization, replication, spatio-temporal databases, embedded systems, cryptography, data mining, and interactive interfaces that are essential for the digital transformation of the respective EA attribute.

Chapter 5 introduces the concept and characteristics of dependability. Dependability is defined as the ability of a system to deliver a service while avoiding service failures that are more frequent and more severe than what is acceptable. In the parallel track, Appendix 5A briefs on reliability aspects.

Chapter 6 introduces the concept and characteristics of performability. Performability is the ability of a system to meet timing requirements. In the parallel track, Appendix 6A briefs on queuing systems aspects.

Chapter 7 introduces the concept and characteristics of interoperability. Interoperability is the ability of interaction between two or more systems so as to achieve some goal without having to know the uniqueness of the interacting systems. In the parallel track, Appendix 7A briefs on integration aspects.

Chapter 8 introduces the concept and characteristics of scalability. Scalability is the ability to improve system performance by adding more resources to a single node or multiple nodes, such as the addition of CPUs, the use of multicore systems instead of single-core ones, or the addition of memory. In the parallel track, Appendix 8A briefs on virtualization aspects.

Chapter 9 introduces the concept and characteristics of availability. Availability can be defined as the ability to guarantee no loss of data and subsequent recovery of the system in a reasonable amount of time. In the parallel track, Appendix 9A briefs on replication aspects.


Chapter 10 introduces the concept and characteristics of mobility. Mobility refers to the ability of the user or other mobile entity to communicate and access services, irrespective of any changes in the location or in the technical access environment. In the parallel track, Appendix 10A briefs on spatio-temporal databases aspects.

Chapter 11 introduces the concept and characteristics of ubiquity. Ubiquity is the ability to be anywhere and anytime, to anyone, where and when needed. In the parallel track, Appendix 11A briefs on embedded systems aspects.

Chapter 12 introduces the concept and characteristics of security. Security is the ability to protect an organization's information assets from accidental or malicious disclosure, modification, misuse, and erasure. In the parallel track, Appendix 12A briefs on cryptography aspects.

Chapter 13 introduces the concept and characteristics of analyticity. Analyticity is the ability for continuous iterative exploration and investigation of past performance, based on data and statistical methods, to gain insight and drive planning for the future. In the parallel track, Appendix 13A briefs on data mining aspects.

Chapter 14 introduces the concept and characteristics of usability. Usability is the ability to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the envisaged range of environmental scenarios. In the parallel track, Appendix 14A briefs on interactive interfaces aspects.

5 Dependability

To withstand the disruptive digital storms of the future, enterprise architecture affects the digital transformation of the overarching dependability aspects through all the constituting attributes of enterprise architecture, namely interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability.

In all engineering disciplines, reliability is the ability of a system to perform its required functions under stated conditions for a specified period of time. In software, for application reliability, this becomes the ability of a software application and all the components it depends on (operating system, hypervisor, servers, disks, network connections, power supplies, etc.) to execute without faults or halts all the way to completion, where completion is defined by the application designer. Even with perfectly written software and no detected bugs in any of the underlying software systems, applications that begin to use thousands of servers will run into the "mean time to failure" of some piece of hardware, and some number of those instances will fail. Therefore, the application depending on those instances will also fail.

In 2008, Google had to solve the massive search problem across all content on the web, which was bordering on one trillion unique URLs. They ended up employing loosely coupled distributed computing on a massive scale: clusters of commodity (cheap) computers working in parallel on large data sets. Even with individual servers having excellent reliability statistics, with hundreds of thousands of servers there were still multiple failures per day as one machine or another reached its mean time to failure. Google had no choice but to give up on the reliability of the hardware and achieve the same end through the reliability of the software. The only way to build a reliable system across a large group of unreliable computers is to employ suitable software to address those failures. MapReduce was the software framework invented by Google to address this issue; the name MapReduce was inspired by the map and reduce functions of the functional programming language Lisp. Parallel programming on a massive scale has the potential not only to address the issue of reliability but also to deliver a huge boost in performance. This is opportune because, for the large data sets of the web, the processing itself, let alone reliable processing, may not be achievable without massive parallelism.

Instituting strategies and mechanisms that accommodate, instead of avoid, the possibility of routine faults and failures is not new. TCP/IP, a grouping of the transmission control protocol (TCP) and the Internet protocol (IP), comprises two of the core technology standards on which the Internet is based. TCP/IP is a low-level protocol suite that ensures that signals can be moved from one place to another. IP moves packets of data from one point to another, with routers helping those packets find their way across networks; this transmission of packets is unreliable. TCP builds reliable connections on top of IP, accepting that not all packets will complete their journeys and resending them as necessary.
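The map and reduce functions borrowed from Lisp can be illustrated by a single-process word count. This is only the functional pattern that inspired MapReduce, not Google's distributed implementation; the framework's value comes from running these phases across thousands of machines and transparently rescheduling work from failed ones.

```python
# Single-process illustration of the map/reduce pattern: map emits
# (key, value) pairs, a shuffle groups them by key, reduce aggregates
# each group. A real MapReduce framework distributes these phases and
# reschedules work from failed machines.

from collections import defaultdict

def map_phase(document: str):
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["the web is large", "the web is unreliable"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(shuffle(pairs)))
# {'the': 2, 'web': 2, 'is': 2, 'large': 1, 'unreliable': 1}
```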


5.1 Introduction to Dependability

The term dependability is defined as the ability of a system to deliver a service while avoiding service failures that are more frequent and more severe than what is acceptable. Dependability is a concept that integrates several attributes of a system:

1. Availability is the readiness for provisioning of correct service. It can also be defined as the probability that a system is able to deliver its service correctly at any given time. Relatively few systems are designed to operate continuously without interruption and without maintenance of any kind. In many cases, we are interested not only in the probability of failure, but also in the number of failures and, in particular, in the time required to make repairs. For such applications, the attribute that we would like to maximize is the fraction of time that the system is in the operational state, expressed by availability. Availability A(t) of a system at time t is the probability that the system is functioning correctly at the instant of time t. The steady-state availability is defined by

A(∞) = lim_{T→∞} A(T) = lim_{T→∞} (1/T) ∫_0^T A(t) dt

where, A(T) is the value of the point availability averaged over some interval of time T. This interval might be the life-time of a system or the time to accomplish some particular task. If a system cannot be repaired, the point availability A(t) is equal to the system’s reliability, i.e., the probability that the system has not failed between 0 and t. Thus, as T goes to infinity, the steady-state availability of a nonrepairable system goes to zero A(∞) = 0. Steady-state availability is often specified in terms of downtime per year. Table 5.1 shows the values for the availability and the corresponding downtime.

TABLE 5.1
Availability and the Corresponding Downtime per Year

Availability (%)    Downtime
90                  36.5 days/year
99                  3.65 days/year
99.9                8.76 h/year
99.99               52 min/year
99.999              5 min/year
99.9999             31 s/year


FIGURE 5.1 Relationship between MTTF, MTBF and MTTR.

Availability is typically used as a measure of dependability for systems where short interruptions can be tolerated; networked systems, such as telephone switching, fall into this category. In systems with constant failure and repair rates, the reliability (MTTF), maintainability (MTTR), and availability (A) measures are related by

A = MTTF / (MTTF + MTTR)

The sum MTTF + MTTR is sometimes called the Mean Time Between Failures (MTBF). Figure 5.1 shows the relationship between MTTF, MTTR, and MTBF. High availability can be achieved either by a long MTTF or by a short MTTR, so in the construction of a high-availability system the designer has some freedom in the selection of the approach.

2. Reliability is the capability of providing the continuity of a correct service. It can also be defined as the probability that a system functions correctly over a given period of time under a given set of operating conditions. Reliability is a measure of the continuous delivery of correct service. High reliability is required in situations when a system is expected to operate without interruptions, as in the case of a heart pacemaker. Reliability is a function of time, and the way in which time is specified varies substantially depending on the nature of the system under consideration. Reliability R(t) of a system at time t is the probability that the system operates without a failure in the interval [0, t], given that the system was performing correctly at time 0. Reliability expresses the probability of success. Alternatively, we can define unreliability Q(t) of a system at time t as the probability that the system fails in the interval [0, t], given that it was performing correctly at time 0. Unreliability expresses the probability of failure. The reliability and the unreliability are related as Q(t) = 1 − R(t). The probability that a system will fail in a given interval of time is expressed by the failure rate, measured in FITs (Failures In Time). A failure rate of 1 FIT means that the mean time to failure (MTTF) of a device is 10^9 h, i.e., one failure occurs in about 115,000 years. If a system has a constant failure rate of λ failures/h, then the reliability at time t is given by R(t) = exp(−λ(t − t0)), where t − t0 is given in hours. The inverse of the failure rate, 1/λ = MTTF, is called the Mean Time To Failure (in hours).


3. Safety is the capability of avoiding catastrophic consequences on the users or the environment. Safety can be considered as an extension of reliability, namely reliability with respect to failures that may create safety hazards. Safety S(t) of a system at time t is the probability that the system either performs its function correctly or discontinues its operation in a fail-safe manner in the interval [0, t], given that the system was operating correctly at time 0. From the reliability point of view, all failures are equal. For  safety considerations, failures are partitioned into fail-safe and fail-unsafe ones. For  example, the alarm may either fail to function correctly even though a danger exists, or it may give a false alarm when no danger is present. The former is classified as a fail-unsafe failure. The latter is considered a fail-safe one. 4. Integrity is the capability of avoiding improper alterations. Integrity can be of two type: • System integrity defined as the ability of a system to detect faults in its own operations and to inform a human operator. • Data integrity defined as the ability of a system to prevent damage to its own database and to detect, and possibly correct, errors that do occur as consequence of faults. 5. Maintainability is the capability of undergoing modifications and repairs. Alternatively, we can define the term maintenance as the action taken to retain a system in, or return a system to, its designer operating condition, and the maintainability as the ability of a system to be maintained. Maintainability is a measure of the time interval required to repair a system after the occurrence of a benign failure. Maintainability is measured by the probability M (d) that the system is restored within a time interval d after the failure. In keeping with the reliability formalism, a constant repair rate m (repairs per hour) and a Mean Time to Repair (MTTR) are introduced to define a quantitative maintainability measure. Sometimes fault tolerance is stated as a requirement for a system to be built or as a property of an existing system. Fault tolerance is neither a system requirement nor a system property, because fault tolerance is a mechanism for achieving dependability, not an attribute of dependability. Requiring that a system be fault tolerant would not tell us anything useful about the system’s overall dependability performance, nor  would such a requirement provide a useful goal for developers unless the specific faults of interest were specified in detail. 5.1.1 Faults, Errors, and Failures Faults are reasons for errors, and errors are reasons for failures. A  fault is a physical defect, imperfection, or flaw that occurs in some hardware or software component. Examples are a short circuit between two adjacent interconnects, a broken pin, or a software bug. An error is a deviation from correctness or accuracy in computation, which occurs as a result of a fault. Errors are usually associated with incorrect values in the system state. For example, a circuit or a program computed an incorrect value, or incorrect information was received while transmitting data. A  failure is a nonperformance of some action which is due or expected. A system is said to have a failure if the service it delivers to the user deviates from compliance with the system specification for


a specified period of time. A  system may fail either because it does not  act in accordance with the specification, or because the specification did not adequately describe its function. A system is a physical entity that implements the functionalities needed by one or more users: typically it is a processor-based system, where software and hardware entities cooperate to implement the needed functionalities. A user is the entity that interacts with the system by providing input stimuli to the system, and by capturing and using the system’s output responses: typically the user may be a human being, i.e., the user of a personal computer or a physical entity, i.e., a processing module (the user) that reads and processes the data sampled by an acquisition module (the system). The environment is a set of external entities that may alter the system’s behavior without acting on its inputs directly (Figure 5.2a). Due to the alterations induced by the environment (thermal stress, impact with ionizing radiations, etc.) a correct system may stop working correctly (either permanently, or by a period of time only). Failure indicates the non-performance or inability of a system or a system’s component to perform its intended function for a specified time under specified environmental conditions. A fault can be defined as the cause of a failure. A fault can be dormant or passive, when it is present in a system without affecting the system, or it can be active when it has an effect on the system functioning. An active fault is termed as an error that creates a failure as soon as it is propagated from the inside of the system to the system’s outputs. Once a fault has been activated as an error within a system module, several degradation mechanisms can propagate this error through the system’s structure until the error reaches the system’s outputs, thus producing a failure. The  propagation process is conducted through error propagation paths, which depend on: • The system module where the fault originates • The structure of the system • The input sequence that is applied to the system Figure 5.2b shows a central processing unit (CPU) whose Input is the program the CPU is executing, and its Output is the result of the executed instructions. The fault originates within one register in the Register file module. As a consequence of the program execution, the fault remains passive for some clock cycles, after which it propagates through the

FIGURE 5.2 (a) System, user and the environment. (b) Example of error propagation.


Register file and finally becomes active as an error when it exits the Register file and enters the Pipeline module. After propagating through the different stages of the Pipeline, the error affects the Data cache, and finally the CPU’s Output, thus becoming a failure. 5.1.1.1 Faults Figure 5.3 depicts a classification of faults. Fault containment refers to design and engineering efforts that ensure that the immediate consequences of a fault are limited to a single Fault Containment Unit (FCU). Many reliability models make the tacit assumption that FCUs fail independently, i.e., there is no single fault that can affect more than one FCU. This FCU independence assumption must be justified by the design of the system. 1. Fault-space: It is important to distinguish faults that are related to a deficiency internal to the FCU or to some adverse phenomena occurring external to the FCU. An internal fault of a component, i.e., a fault within the FCU can be a physical fault, such as the random break of a wire, or a design fault either in the software (a program error) or in the hardware (an erratum). An external fault can be a physical disturbance, e.g., a lightning stroke causing spikes in the power supply or the impact of a cosmic particle. The provision of incorrect input data is another class of an external fault. 2. Fault time: In the temporal domain a fault can be transient or permanent. Whereas physical faults can be transient or permanent, design faults (e.g., software errors) are always permanent. A transient fault appears for a short interval at the end of which it disappears without requiring any explicit repair action. A transient fault can lead to an error, i.e., the corruption of the state of an FCU, but leaves the physical hardware undamaged (by definition). We call a transient external physical fault a transitory fault. An example for a transitory fault is the impact of a cosmic particle that corrupts the state of an FCU. We call a transient internal physical fault an intermittent fault. Examples for intermittent faults are oxide defects, corrosion or other fault mechanisms that have not  yet developed to a stage where the hardware fails permanently. A permanent fault is a fault that remains in the system until an explicit repair action has taken place that removes the fault.

FIGURE 5.3 Classification of faults.


An example for a permanent external fault is a lasting breakdown of the power supply. A permanent internal fault can be in the physical embodiment of the hardware (e.g., a break of an internal wire) or in the design of the software or hardware. The mean time it takes to repair a system after the occurrence of a permanent fault is called MTTR (mean time to repair). A substantial number of the transient faults observed in the field are intermittent faults. Whereas the failure rate of transitory faults is constant, the failure rate for intermittent faults increases as a function of time indicating aging or wear-out. There are four approaches to deal with faults: 1. Fault prevention: Always attempt this approach first. From a software perspective, fault prevention is the first and best approach to dealing with faults, because the alternatives are not guaranteed to work and can have substantial cost. Fault prevention is a set of techniques attempting to prevent the introduction or occurrence of faults in the system in the first place. Fault prevention is achieved by quality control techniques during the specification, implementation, and fabrication stages of the design process: • For hardware, this includes design reviews, component screening, and testing. • For software, this includes structural programming, modularization, and formal verification techniques. A rigorous design review may eliminate many specification faults. If a design is efficiently tested, many of its faults and component defects can be avoided. Faults introduced by external disturbances such as lightning or radiation are prevented by shielding, radiation hardening, etc. User and operation faults are avoided by training and regular procedures for maintenance. Deliberate malicious faults caused by viruses or hackers are reduced by firewalls or similar security means. Degradation fault avoidance might seem like an unachievable goal, but it is not. There are many techniques that allow the rate of degradation faults to be reduced to a level at which they occur so infrequently in practice that they can be ignored. All of the techniques that can help us to achieve fault avoidance have limited capability, applicability, and performance, and so we cannot use them exclusively. 2. Fault elimination: If faults cannot be avoided, then the next best thing is to eliminate them from the system in which they are located before the system is put into use. Although not as desirable as fault avoidance, fault elimination is still highly beneficial. A  simple example of design fault elimination is software testing. By testing a piece of software, developers are looking for faults in the software so that they can be removed before the software is deployed. Fault elimination is a set of techniques targeting the reduction of the number of faults which are present in the system. Fault elimination is performed: a. During the development phase, fault removal involves three steps: verification, diagnosis, and correction. Verification is the process of checking whether the system meets a set of given conditions. Subsequently, the fault that prevents the conditions from being fulfilled is diagnosed and the necessary corrections are performed.


b. During the operational life of the system fault elimination consists of • preventive maintenance, in which parts are replaced or adjustments are made before failure occurs. The objective is to increase the dependability of the system over the long term by staving off the ageing effects of wear-out. • corrective maintenance is performed after the failure has occurred in order to return the system to service as soon as possible. 3. Fault forecasting: If a fault cannot be dealt with by avoidance or elimination, the untreated faults will be present during any operation, and attempt must be made to forecast the effects of the fault. Fault forecasting is the process during which • the number of any residual faults in the system is estimated. • their impact is analyzed. However, in systems where the runtime execution can be affected by perturbations in the environment, such as embedded systems, the impact of environmental problems needs to be also assessed. By predicting the effect of the fault on the system, a decision can be made whether the system will meet its dependability requirements. Fault forecasting is a set of techniques aiming to estimate how many faults are present in the system, possible future occurrences of faults, and the consequences of faults. Fault forecasting is done by performing an evaluation of the system behavior with respect to fault occurrences or activation. The evaluation can be a. qualitative, which aims to rank the failure modes or event combinations that lead to system failure. b. quantitative, which aims to evaluate in terms of probabilities the extent to which some attributes of dependability are satisfied. Simplistic estimates merely measure redundancy by accounting for the number of redundant success paths in a system. More sophisticated estimates account for the fact that each fault potentially alters a system’s ability to resist further faults. 4. Fault tolerance: Fault tolerance is an approach that is used in systems for which: • Degradation faults are expected to occur at a rate that would cause an unacceptable rate of service failure. • Complete avoidance, elimination or forecasting of design faults was not possible and design faults were expected to remain in a system. Fault tolerance targets the development of systems which function correctly in the presence of faults. Redundancy is necessary, but not sufficient for fault tolerance. For example, two duplicated components connected in parallel do not  make a system faulttolerant, unless some form of monitoring is provided, which analyzes the results and selects the correct one. Fault tolerance is achieved by using some kind of redundancy which allows a fault either to be masked, or detected, followed by location, containment, and recovery: a. Fault masking is the process of ensuring that only correct values get passed to the system output in spite of the presence of a fault. This is done by preventing the system from being affected by errors by either correcting the error, or compensating for it in some fashion. Since the system does not show the impact of the fault, the

existence of the fault is invisible to the user/operator. For example, a memory protected by an error-correcting code corrects the faulty bits before the system uses the data. Another example of fault masking is triple modular redundancy with majority voting (a minimal voter sketch follows this list).
b. Fault detection is the process of determining that a fault has occurred within a system. Examples of techniques for fault detection are acceptance tests and comparison. An acceptance test is a fault-detecting mechanism that can be used for systems having no replicated components; acceptance tests are common in software systems. The result of a program is subjected to a test: if the result passes the test, the program continues execution, whereas a failed acceptance test implies a fault. Comparison is an alternative technique for detecting faults, used for systems with duplicated components. The output results of two components are compared; a disagreement in the results indicates a fault.
c. Fault location is the process of determining where a fault has occurred. A failed acceptance test cannot generally be used to locate a fault; it can only tell that something has gone wrong. Similarly, when a disagreement occurs during the comparison of two modules, it is not possible to tell which of the two has failed.
d. Fault containment is the process of isolating a fault and preventing the propagation of its effect throughout the system. This is typically achieved by frequent fault detection, by multiple request/confirmation protocols, and by performing consistency checks between modules.
e. Fault recovery is the process of detecting a fault and then having the system reconfigure itself to isolate the faulty component from the rest of the system and regain operational status. This might be accomplished by having the faulty component replaced by a redundant backup component. Alternatively, the system could switch the faulty component off and continue operation with a degraded capability; this is known as graceful degradation.
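As noted in item (a), fault masking by triple modular redundancy can be illustrated with a small majority voter. The sketch below uses plain Python values for the three module outputs; real TMR voters are hardware circuits or replicated processes, so this shows only the voting logic, not a complete design.

```python
# Triple modular redundancy (TMR): three modules compute the same result and
# a majority voter masks a single faulty module. Inputs here are plain values;
# in practice the three "modules" are replicated hardware units or processes.

from collections import Counter

def majority_vote(outputs):
    """Return the value produced by at least two of the three modules.
    Raise if no majority exists (e.g., more than one module failed)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: fault not maskable by TMR")
    return value

# One module delivers a corrupted result; the voter masks it.
assert majority_vote([42, 42, 7]) == 42

# Two simultaneous faults exceed what TMR can mask.
try:
    majority_vote([42, 7, 13])
except RuntimeError as err:
    print(err)
```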

5.1.1.2 Errors

The immediate consequence of a fault is an incorrect state in a component. We call such an incorrect state, i.e., a wrong data element in the memory, a register, or a flip-flop circuit of a CPU, an error. As time progresses, an error is activated by a computation, detected by some error detection mechanism, or wiped out. An error is activated if a computation accesses the error; from this instant onwards, the computation itself becomes incorrect. If a fault impacts the contents of a memory cell or a register, the consequent error will be activated when this memory cell is accessed by a computation. There can be a long time interval between error occurrence and error activation (the dormancy of an error) if a memory cell is involved. If a fault impacts the circuitry of the CPU, an immediate activation of the fault may occur and the current computation will be corrupted. As soon as an incorrect computation writes data into the memory, this part of memory becomes erroneous as well.

A Bohrbug is a class of bug that always produces a failure on retrying the operation which caused the failure. The Bohrbug was named after the Bohr atom because it represents a solid and easily detectable bug that can be isolated by standard debugging techniques.


A Heisenbug is a class of bug that disappears or alters its behavior when an attempt to isolate it is made. Due to the unpredictable nature of a Heisenbug, when trying to recreate the bug or using a debugger, the error may change or even vanish on a retry. The name Heisenbug is derived from Heisenberg's uncertainty principle which states, "it is fundamentally impossible to predict the position and momentum of a particle at the same time." One common example of a Heisenbug is a bug that appears when the program is compiled with an optimizing compiler, but does not appear when the same program is compiled without optimization. A mature piece of software in the operational phase, released after its development and testing stage, is more likely to experience failures caused by Heisenbugs than by Bohrbugs. Most recent studies on failure data have reported that a large proportion of software failures are transient in nature, caused by phenomena such as overloads or timing and exception errors.

An error is wiped out if a computation overwrites the error with a new value before the error has been activated or detected. An error that has neither been activated, detected, nor wiped out is called a latent error. A latent error in the state of a component results in silent data corruption (SDC), which can lead to serious consequences.

5.1.1.3 Failures

A failure is an event that denotes a deviation between the actual behavior and the intended behavior (the service) of a component, occurring at a particular instant. Since, in our model, the behavior of a component denotes the sequence of messages produced by the component, a failure manifests itself by the production of an unintended message (or the absence of an intended one). Figure 5.4 classifies the failures of a component:

1. Domain: A failure can occur in the value domain or in the temporal domain:
• A value failure means that an incorrect value is presented at the component-user interface (since the user can also be another system).
• A temporal failure means that a value is presented outside the intended interval of real time. Temporal failures only exist if the system specification contains information about the intended temporal behavior of the system. Temporal failures can be subdivided into early temporal failures and late temporal failures. A component that contains internal error detection mechanisms in order to detect any error and suppresses a result that contains a value error or an early temporal failure will only exhibit a late temporal failure, i.e., an omission, at the interface to its users. We call such a failure an omission failure.

FIGURE 5.4 Classification of failures.


2. Severity: Failures can be of two types, depending on the effect a failure has on its environment:
   • A benign failure, where the cost of the failure is of the same order of magnitude as the loss of the normal utility of the system.
   • A malign failure, where the failure costs are orders of magnitude higher than the normal utility of the system; a malign failure can, for example, cause a catastrophe such as an accident. We call applications where malign failures can occur safety-critical applications.
   The characteristics of the application determine whether a failure is benign or malign. Between these two extremes, a severity class can be assigned to a failure, e.g., based on its monetary impact or its impact on the user experience.

3. Frequency: Within a given time interval, a failure can occur only once or repeatedly. If it occurs only once, it is called a single failure. If the system continues to operate after the failure, we call the failure a transient failure. A frequently occurring transient failure is called a repeated failure. A special case of a single failure is a permanent failure, i.e., one after which the system ceases to provide a service until an explicit repair action eliminates the cause of the failure.

4. View: If more than one user looks at a failing component, two cases can be distinguished: either all users see the same failing behavior, which we call a consistent failure, or different users see different behaviors, which we call an inconsistent, malicious, or Byzantine failure.

5.2 Fault Tolerance

Fault tolerance is the ability of a system to continue performing its intended functions in the presence of faults. In a broad sense, fault tolerance is associated with reliability, with successful operation, and with the absence of breakdowns. A fault-tolerant system should be able to handle faults in individual hardware or software components, power failures, or other kinds of unexpected problems and still meet its specification.

Fault tolerance is necessary because it is practically impossible to build a perfect system. The fundamental problem is that, as the complexity of a system grows, its reliability drops sharply unless compensatory measures are taken. For example, if the reliability of individual components is 99.99%, then the reliability of a system consisting of 100 nonredundant components is about 99%, whereas the reliability of a system consisting of 10,000 nonredundant components is just 36.79%. Such a low reliability is unacceptable in most applications. If 99% reliability is required for a 10,000-component system, individual components with a reliability of at least 99.9999% are needed, implying a sharp increase in cost.

A system is said to fail if it has ceased to perform its intended functions; failure can be a total cessation of function or the performance of some function at subnormal quality or quantity, such as deterioration or instability of operation. System is used in this book in the generic sense of a group of independent but interrelated elements comprising a unified whole; the techniques presented are therefore applicable to a variety of products, devices, and subsystems.
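The arithmetic behind these figures is simply a product of component reliabilities; a minimal Python sketch (using the 99.99% component reliability from the example above) is:

```python
def series_reliability(component_reliability: float, n_components: int) -> float:
    """Reliability of a system of n nonredundant components that must all
    work: the product of the individual component reliabilities."""
    return component_reliability ** n_components

# Component reliability of 99.99%, as in the example above.
print(round(series_reliability(0.9999, 100), 4))     # ~0.99   (100 components)
print(round(series_reliability(0.9999, 10_000), 4))  # ~0.3679 (10,000 components)
```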


The aim of fault-tolerant design is to minimize the probability of failures, whether those failures simply annoy the users or result in lost fortunes, human injury, or environmental disaster. Another problem is that, although designers do their best to eliminate all hardware defects and software bugs before a system goes to market, history shows that this goal is not attainable. It is inevitable that some environmental factor is overlooked or emerges unexpectedly later, or that some potential user mistake is not foreseen. Thus, even in the unlikely case that a system is designed and implemented perfectly, faults are likely to be caused by situations outside the control of the designers.

As semiconductor technology progressed, hardware components became intrinsically more reliable and the need for tolerating component defects diminished for general-purpose applications. Nevertheless, fault tolerance remained an essential attribute for:

• Safety-critical applications, where loss of life or environmental disaster must be avoided. Examples are nuclear power plant control systems, computer-controlled radiation therapy machines, heart pacemakers, and flight control systems.
• Mission-critical applications, which stress mission completion, as in the case of a spacecraft or a satellite.
• Business-critical applications, in which keeping a business operating continuously is the issue. Examples are bank and stock exchange automated trading systems, web servers, and e-commerce.

During the mid-1990s, interest in fault tolerance resurged considerably. On the one hand, noise margins were reduced to a critical level as the semiconductor manufacturing process size continued to shrink, the power supply voltage was lowered, and operating speed increased. This made integrated circuits (ICs) highly sensitive to transient faults caused by crosstalk and by environmental upsets such as atmospheric neutrons and alpha particles, and it became mandatory to design ICs that tolerate these faults in order to maintain an acceptable level of reliability. On the other hand, the rapid development of real-time computing applications that started around the mid-1990s, especially the demand for software-embedded intelligent devices, made software fault tolerance a pressing issue. Software systems offer a compact design and rich functionality at a competitive cost. Instead of implementing a given functionality in hardware, a set of instructions accomplishing the desired tasks is written and loaded into a processor; if changes in the functionality are required, the instructions can be modified instead of building a different physical device. Software also eliminates many of the physical constraints of hardware; for example, it does not suffer from random fabrication defects and does not wear out.

There are various approaches to achieving fault tolerance. Common to all of them is a certain amount of redundancy. For our purposes, redundancy is the provision of functional capabilities that would be unnecessary in a fault-free environment. This can be a replicated hardware component, an additional check bit attached to a string of digital data, or a few lines of program code verifying the correctness of the program's results.
The idea of incorporating redundancy in order to improve the reliability of a system was pioneered by John von Neumann in the 1950s in his work "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components."


Two kinds of redundancy are possible:

1. Space redundancy provides additional components, functions, or data items that are unnecessary for fault-free operation. Space redundancy is further classified into hardware, software, and information redundancy, depending on the type of redundant resources added to the system.
2. Time redundancy repeats the computation or data transmission and compares the result to a stored copy of the previous result.

For redundant software components, this may consist of double- or triple-redundant software components (portions of your application) running in parallel with common validation checks. One idea is to have the components developed by different teams based on the same specifications. This approach costs more, but extreme reliability may require it. Because each component is designed to perform the same function, failures of concurrent identical components are easily discovered and corrected during quality-assurance testing. Although redundant software components give the quality-assurance process a clever way to validate service accuracy, certain applications may want to deploy component redundancy into the production environment. In such conditions, multiple parallel application processes can provide validity checks on each other and let the majority rule. Although redundant software components consume extra resources, the trade-off between reliability and the cost of the extra hardware may be worth it.

Another redundancy-based design technique is the use of services such as clustering (linking many computers together to act as a single faster computer), load balancing (keeping workloads balanced between multiple computers), data replication (making multiple identical copies of data to be processed independently and in parallel), and protecting complex operations with transactions to ensure process integrity. Naturally, when one is using cloud provider services, many of these services are built into the base infrastructure and services.

Redundant hardware is one of the most popular strategies for providing reliable systems. This includes redundant arrays of independent disks (RAID) for data storage, redundant network interfaces, and redundant power supplies. With this kind of hardware infrastructure, individual component failures can occur without affecting the overall reliability of the application. It is important to use standardized commodity hardware to allow easy installation and replacement.
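The "let the majority rule" idea for redundant software components can be sketched minimally as follows; this is an illustrative Python example, and the variant functions are hypothetical stand-ins for independently developed implementations of the same specification.

```python
from collections import Counter

def majority_vote(variants, *args):
    """Run redundant implementations of the same function on the same input
    and return the result that a majority of them agree on."""
    results = [variant(*args) for variant in variants]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(variants) // 2:
        raise RuntimeError("no majority agreement among redundant variants")
    return value

# Hypothetical variants of a parity computation developed by different teams.
def parity_v1(data): return sum(data) % 2
def parity_v2(data): return sum(b & 1 for b in data) % 2
def parity_v3(data): return 0   # a faulty variant; it is outvoted by the other two

print(majority_vote([parity_v1, parity_v2, parity_v3], [3, 5, 9]))   # -> 1
```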

5.3 Summary

This chapter began by defining dependability as the ability of a system to deliver a service while avoiding service failures that are more frequent and more severe than is acceptable. Faults, errors, and failures are threats to the dependability of systems. To withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the overarching dependability aspect through all the constituting attributes of enterprise architecture, namely, interoperability, scalability, availability, mobility, ubiquity, security, analyticity, and usability; reliability, however, is an essential prerequisite to enable this transformation.


The chapter's appendix begins with the definition of system reliability and its characteristics, followed by the mathematical modeling of reliability. The basic component structures for system reliability, namely series and parallel relationships, are explained; any complex system configuration can be decomposed into a combination of these two basic structures. This is followed by a discussion of reliability (the probability of success for a given period of time), with the exponential distribution used as the failure distribution function. In the last part, practical methods and tools for design for reliability in the system life cycle are discussed, including Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA).

Appendix 5A: Reliability

As stated at the beginning of this chapter, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the dependability aspects through all the constituting attributes of enterprise architecture, with reliability being an essential prerequisite to enable this transformation.

The appendix begins with the definition of system reliability and its characteristics, followed by the mathematical modeling of reliability. The basic component structures for system reliability, namely series and parallel relationships, are explained; any complex system configuration can be decomposed into a combination of these two basic structures. This is followed by a discussion of reliability (the probability of success for a given period of time), with the exponential distribution used as the failure distribution function. In the last part, practical methods and tools for design for reliability in the system life cycle are discussed, including Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA). These two methods address the occurrence of system failures from two different perspectives:

• FMEA works bottom-up from component failures and induces the possible effects caused by one or more such failures.
• FTA looks at system failures from the other direction, specifying the possible causes of a particular failure occurring.

The two methods are complementary; to obtain a complete picture of system failures and their effects, FMEA and FTA need to be combined.

5A.1 Introduction to Reliability

Reliability can be defined as the probability that a system or a product will operate properly for a specific period of time in a satisfactory manner under the specified operating conditions. The presentation and approach in this appendix is adapted from Liu (2016). System reliability has the following four major characteristics:

1. It is a probability: A system becomes unreliable due to failures that occur randomly, which in turn makes system reliability a random variable. The probability of system reliability provides a quantitative measure for such a random phenomenon. A probability measures the fraction (or percentage) of the number of times that the system will be functional, not the percentage of the time that the system is working.


   For example, a reliability of 0.90 for a system that is to operate for 18 h implies that the system is expected to function properly for the full 18 h in 90 out of 100 trials. An intuitive definition of reliability is as follows: suppose n identical components are simultaneously subjected to a test under the design operating conditions; during the interval [0, t], nf(t) components are found to have failed and the remaining ns(t) survive. At time t, the reliability can be estimated as R(t) = ns(t)/n.

2. System reliability is a function of time: As seen in the definition of system reliability, reliability is defined for a certain period of system operation. If the time period changes, the value of reliability changes as well; the chance of system failure over an hour is much lower than that over a year, i.e., a system is more reliable over a shorter period of time. Time is one of the most important factors in system reliability, and many reliability-related measures, such as MTBF, are expressed as functions of time.

3. Satisfactory performance is specified for system reliability; it defines the criteria at which the system is considered to be functioning properly. These criteria are derived from requirements analysis and functional analysis, and they must be established in order to measure reliability. Satisfactory performance may be a particular value to be achieved, or sometimes a fuzzy range, depending on the types of systems or components involved.

4. Reliability needs to be considered under the specified operating conditions. These conditions include environmental factors such as temperature, humidity, vibration, and surrounding locations, and they specify the normal conditions under which the system is functional. Almost every system is an open system that interacts with its environment, and the environment has a significant impact on system performance. System reliability therefore has to be considered in the context of the designed environment; it is an inherent system characteristic. This is why product warranties do not cover accidents caused by improper use of systems.

5A.1.1 Understanding Reliability

Reliability is a function of the time to failure T, a random variable. The reliability function at time t is the cumulative probability that the system survives at least until time t without any failure:

R(t) = P(T > t)

The system is either in a functional condition or in a state of failure, so the cumulative distribution function of failure F(t) is the complement of R(t):

R(t) + F(t) = 1, i.e., R(t) = 1 − F(t)

If the time to failure follows an exponential distribution with parameter λ, then the probability density function of failure and the reliability function are

f(t) = λe^(−λt)

R(t) = ∫_t^∞ λe^(−λx) dx = e^(−λt)


Assuming that no failure has occurred prior to t1, the failure rate over a time interval [t1, t2] is defined as the probability that a failure occurs per unit time in that interval:

λ(t1, t2) = [∫_{t1}^{t2} f(t) dt] / [(t2 − t1) R(t1)]

The instantaneous failure rate (hazard rate) follows by letting the interval shrink to zero:

λ(t) = f(t) / R(t) = −(1/R(t)) (dR(t)/dt)

which, for the exponential distribution, reduces to the constant λ.

For the exponential failure distribution, the instantaneous failure rate is therefore constant over time. For illustration purposes, this chapter focuses on the exponential failure rate function, as exponential failures are commonly found in many applications. Under these conditions, the system or component has no "memory" and, technically, does not wear out. For most types of purely electronic equipment, this is a good approximation. For mechanical equipment, it is usually a poor model and should not be used; the Weibull distribution is a better choice.

The failure rate λ is, generally speaking, the number of failures per unit of operating time. The reciprocal of λ is the MTBF:

MTBF = 1/λ

Failure rate, especially the instantaneous failure rate, can be considered a conditional probability. The failure rate is one of the most important measures for system designers, operators, and maintainers, as they can derive the MTBF, or the mean life of components, by taking the reciprocal of the failure rate, as in the equation above. MTBF is a common measure of system reliability due to its simplicity of measurement and its direct relationship to the system reliability measure. For a constant failure rate, the failure rate can also be estimated using the following formula:

λ = (number of failures) / (total operating hours)
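Tying these formulas together, a small Python sketch (using hypothetical failure data) computes an estimated failure rate, the corresponding MTBF, and the exponential reliability for a given mission time:

```python
import math

def estimated_failure_rate(num_failures: int, total_operating_hours: float) -> float:
    """Constant failure-rate estimate: failures per operating hour."""
    return num_failures / total_operating_hours

def mtbf(failure_rate: float) -> float:
    """MTBF is the reciprocal of the failure rate."""
    return 1.0 / failure_rate

def exp_reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t) for an exponentially distributed time to failure."""
    return math.exp(-failure_rate * t)

lam = estimated_failure_rate(4, 20_000)      # hypothetical: 4 failures in 20,000 h
print(mtbf(lam))                             # 5000 h
print(round(exp_reliability(lam, 1000), 3))  # ~0.819: probability of surviving 1000 h
```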

Failure rate is a function of time; it varies across different time intervals and different stages of the system life cycle. If we plot the system failure rate over time from a life cycle perspective, it exhibits the so-called "bathtub" curve shape illustrated in Figure 5A.1.

FIGURE 5A.1 Bathtub shape of the changing failure rate.

The salient parts of the bathtub curve are:

1. Region A: At the beginning of the system life cycle, the system is being designed, concepts are being explored, and system components are being selected and evaluated. At this stage, because the system is immature, there are many "bugs" to be fixed, many incompatibilities among components, and many errors being corrected. The system gradually becomes more reliable as design effort is applied; thus, the failure rate of the system components decreases. This is the typical behavior of the system failure rate in the early life cycle period, shown in the first segment of the failure rate curve in Figure 5A.1.

2. Region B: Once the system is designed and put into operation, it reaches its steady-state period in terms of failure rate and exhibits a relatively constant failure rate. The system is in its maturity period, as presented in the middle region of Figure 5A.1. In this stage, system failure is largely a random phenomenon with a steady failure rate, which is expected under normal operating conditions.

3. Region C: When the system approaches the end of its life cycle, it is in its wear-out phase, characterized by incompatibility with new technology and user needs and by the worn-out condition caused by its age; it then presents a significantly increasing pattern of failure occurrence, as seen in the last region of the life cycle in Figure 5A.1. Failures are no longer solely due to randomness but also to the deterministic factors mentioned above; it is time to retire the system.

Understanding this characteristic of system failure enables us to make feasible plans for preventive and corrective maintenance activities to prolong system operation, and to make correct decisions about when to build a new system or fix the existing one.

5A.1.2 Reliability with Independent Failure Event

Consider a system with n components, and suppose that each component fails independently (i.e., the occurrence of one failure event does not depend on any other). Components may be connected in different structures or networks within the system configuration; these can be in series, in parallel, or a combination thereof.


FIGURE 5A.2 (a) Components in series. (b) Components in parallel.

1. Components in series: If two components (that comprise a system) are placed in a "series" reliability configuration, as in Figure 5A.2a, both must be operative for the system to work properly. With exponential failure rates λa and λb, the reliability of the system is

   RS = exp(−λa t) exp(−λb t) = exp[−(λa + λb) t]

   This is the basis for the simple addition of component failure rates when considering the reliability of a system. Because MTBFs and failure rates are reciprocals of one another,

   λS = λa + λb

   1/MTBFS = 1/MTBFa + 1/MTBFb

2. Components in parallel: A parallel reliability configuration, as in Figure 5A.2b, means that at least one of the components must be operative for the system to work. For two components in parallel, the system reliability is

   RS = 1 − [1 − R(A)][1 − R(B)] = 1 − [1 − exp(−λa t)][1 − exp(−λb t)]

   and the failure rates are not additive for such a system. The parallel configuration introduces redundancy and thus improves the reliability of the system, the penalty being the addition of the redundant component. This is necessary when it is extremely important to keep a system operating, as with a manned spacecraft or an air traffic control system.


   A k-out-of-n system is functioning if and only if at least k of the n components are functioning. Series and parallel systems are both special cases of the k-out-of-n structure: the series structure is an n-out-of-n system, and the parallel structure is a 1-out-of-n system.

3. Components in network: Using the concepts above, one can solve the reliability of any network structure by decomposing it into components in series and parallel structures. In a standby system, the backup component is not put into operation until the preceding component fails. In standby systems, the failures of individual components are not totally independent of each other; this differs from a purely parallel network, in which failures occur independently. In standby structures, failures occur one at a time (whereas in a parallel network, two parts can fail at the same time). Assuming that all components are identical, that failures occur one by one, and that the failure distribution is exponential, the reliability of a standby system with N spares is

   R(N standby) = P(number of failures in [0, t] ≤ N) = Σ_{n=0}^{N} e^(−λt) (λt)^n / n!
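The series, parallel, and standby formulas above can be evaluated programmatically; the Python sketch below uses hypothetical failure rates and assumes exponentially distributed, independent failures (with the standby case following the Poisson-sum formula just given).

```python
import math

def r_exp(lam: float, t: float) -> float:
    """Reliability of one component with constant failure rate lam at time t."""
    return math.exp(-lam * t)

def r_series(reliabilities) -> float:
    """All components must work: product of the component reliabilities."""
    result = 1.0
    for r in reliabilities:
        result *= r
    return result

def r_parallel(reliabilities) -> float:
    """At least one component must work: 1 minus the product of unreliabilities."""
    failure = 1.0
    for r in reliabilities:
        failure *= (1.0 - r)
    return 1.0 - failure

def r_standby(lam: float, t: float, n_spares: int) -> float:
    """Identical standby system: sum over n of exp(-lam*t) * (lam*t)^n / n!."""
    lt = lam * t
    return sum(math.exp(-lt) * lt ** n / math.factorial(n)
               for n in range(n_spares + 1))

# Hypothetical failure rates (per hour) and a 100-hour mission.
ra, rb = r_exp(0.001, 100), r_exp(0.002, 100)
print(round(r_series([ra, rb]), 3))                 # ~0.741
print(round(r_parallel([ra, rb]), 3))               # ~0.983
print(round(r_standby(0.001, 100, n_spares=1), 3))  # ~0.995 (one active + one spare)
```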

5A.2 Reliability Analysis Tools

System reliability, as one of the inherent design characteristics, is one of the most important elements of any system's operational requirements. The original requirements are derived from users, mission planning, and feasibility analysis. Starting at a very high level, requirements regarding system reliability are defined both quantitatively and qualitatively:

• Performance and effectiveness factors for system reliability
• The system operational life cycle over which reliability is measured
• The environmental conditions in which the system is expected to be used and maintained (such as temperature, humidity, vibration, and radiation)

Once the high-level requirements are obtained, lower-level requirements are developed as the system design evolves; system requirements need to be allocated to the system components. System reliability is allocated within the system technical performance measures and integrated into the functional analysis and functional allocation processes. When allocating to the lower levels of the system, most allocations follow a trial-evaluation-modify cycle until a feasible solution is reached. It is very difficult to arrive at optimum solutions; usually a feasible solution that meets the system requirements and complies with all other design constraints is pursued, and this process is iterative and often involves users.

Throughout the design, as part of the iterative design and evaluation process, many analysis tools are available to help designers derive the requirements and technical performance measures for system reliability at different levels of the system structure.


For most systems, the reliability requirements are addressed in an empirical manner, with a large volume of requirements and many iterations of analysis, integration, and evaluation; a practical tool is therefore needed to elicit the structures and relationships required for system reliability. Two of the most commonly used tools are failure mode effect analysis (FMEA) and fault tree analysis (FTA). Performing FTA and FMEA together gives a more complete picture of the inherent characteristics of system reliability, thus providing a basis for developing the most efficient and cost-effective system maintenance plans.

5A.2.1 Failure Mode Effect Analysis (FMEA)

FMEA is a bottom-up inductive approach that analyzes the possible component failure modes within the system, classifies them into categories, severities, and likelihoods, and identifies the consequences caused by these failures in order to develop a proactive approach to preventing them, along with the related maintenance policy. It is an inductive process because FMEA starts with detailed, specific cases of failure and gradually derives general propositions regarding system reliability predictions. FMEA usually consists of two separate analyses:

1. FMEA proper, which investigates the possible failure modes at different system levels (components or subsystems) and their effects on the system if a failure occurs
2. Criticality analysis (CA), which quantifies the likelihood of failure occurrence (i.e., failure rate) and ranks the severity of the effects caused by the failures. This ranking is usually accomplished by analyzing historical failure data from similar systems/components and through a team approach, derived in a subjective manner (Table 5A.1)

Before embarking on an FMEA, the essential prerequisites are:

• The system structure in schematic form
• The system function Functional Flow Block Diagram (FFBD)
• Knowledge of the system requirements
• A comprehensive understanding of the system's components

A typical FMEA procedure is shown in Figure 5A.3.

TABLE 5A.1 Typical FMEA Severity Ranking System

Severity Score  Severity   Potential Failure Effects
1               Minor      No effect on the higher-level system
2–3             Low        Small disruption to system functions; repair will not delay the system mission
4–6             Moderate   May be further classified into low moderate, moderate, or high moderate, causing moderate disruption to and delay of system functions
7–8             High       Causes high disruption to system functions; some portion of the functions is lost; significant delay in repairing the failure
9–10            Hazard     Potential safety issues; potential loss of the whole system mission; catastrophic if not fixed


FIGURE 5A.3 Typical FMEA procedure.
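One common way of operationalizing the criticality analysis step is to combine the severity ranking of Table 5A.1 with an estimated likelihood of occurrence and rank failure modes by their product; the Python sketch below is only an illustration of this idea, using hypothetical failure modes, not a procedure prescribed here.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1-10, per Table 5A.1
    likelihood: float   # estimated probability of occurrence (or normalized failure rate)

    @property
    def criticality(self) -> float:
        # Simple severity-times-likelihood ranking, used here for illustration.
        return self.severity * self.likelihood

# Hypothetical failure modes for illustration only.
modes = [
    FailureMode("Sensor drift", severity=4, likelihood=0.05),
    FailureMode("Power supply loss", severity=9, likelihood=0.01),
    FailureMode("Display flicker", severity=2, likelihood=0.20),
]

for mode in sorted(modes, key=lambda m: m.criticality, reverse=True):
    print(f"{mode.name:18s} criticality = {mode.criticality:.2f}")
```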

5A.2.2 Fault Tree Analysis (FTA)

Fault tree analysis (FTA) is a deductive approach that uses graphical symbols and block diagrams to determine the events that lead to an undesired failure event and the likelihood (probability) of that event occurring. FTA is used widely in reliability analysis where the cause–effect relationships between different events are identified. It is a graphical method for identifying the different ways in which a particular component or system failure could occur.


FIGURE 5A.4 (a) AND-gate structure. (b) OR-gate structure.

FTA is commonly used as a design method, based on the analysis of similar systems and historical data, to predict causal relationships among failure occurrences for a particular system configuration. FTA models are usually developed in parallel with functional analysis, providing a concise and orderly description of the different possible events, and combinations thereof, that could lead to a system or subsystem failure. Figure 5A.4 illustrates the basic symbols that an FTA model uses. Conducting an FTA consists of four steps:

1. Develop the functional reliability diagram: Develop a functional block diagram for system reliability based on the system Functional Flow Block Diagram (FFBD) model, focusing on the no-go functions and the functions of diagnosis and detection. Starting from the system FFBD and following a top-down approach, a tree structure for critical system failure events is identified. This diagram includes information about the structure of the no-go events, what triggers or activates them, and what their likelihoods and possible consequences are.

2. Construct the fault tree: Based on the relationships described in the functional diagram, a fault tree is constructed using the symbols shown in Figure 5A.4. In constructing the fault tree, the focus is on the sequence of failure events for a specific functional scenario or mission profile; for different operational modes, multiple FTAs may well be developed for a single functional path. The fault tree is based on the functional diagram but is not exactly the same, in the sense that functional models follow the operational sequence of system functions, while the fault tree follows the logical paths of cause–effect failure relationships.

3. Develop the failure probability model: After the fault tree is constructed, the next step is to quantify the likelihood of failure occurrence by developing the probability model of the fault tree. The mathematical model of FTA is primarily concerned with predicting the probability of an output failure event from the probabilities of the events that cause it (illustrated in the sketch following this list).

4. Identify the critical fault path: Given the probability of failure of the system or of a higher-level subsystem, a path analysis can be conducted to identify the key causal factors that contribute most significantly to the failure. Models such as a Bayesian model, a Markov decision model, or simply a Monte Carlo simulation can be applied to aid the analysis, depending on the assumptions made about the fault tree.
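Assuming independent basic events, the failure probability model of step 3 propagates probabilities up the tree: an AND gate multiplies the probabilities of its inputs, and an OR gate combines them as one minus the product of their complements. The Python sketch below illustrates this with a hypothetical two-path tree.

```python
from math import prod

def and_gate(probabilities) -> float:
    """Output event occurs only if all (independent) input events occur."""
    return prod(probabilities)

def or_gate(probabilities) -> float:
    """Output event occurs if any (independent) input event occurs."""
    return 1.0 - prod(1.0 - p for p in probabilities)

# Hypothetical fault tree: the top event occurs if the primary path fails
# AND the backup path fails; each path fails if either of its parts fails.
primary_path = or_gate([0.01, 0.02])          # ~0.0298
backup_path = or_gate([0.03, 0.01])           # ~0.0397
print(and_gate([primary_path, backup_path]))  # top-event probability ~0.0012
```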


The results of FTA are particularly beneficial for designers in identifying any risks involved in the design, and more specifically to:

• Allocate failure probabilities among lower levels of system components
• Compare different design alternatives in terms of reliability and risk
• Identify the critical paths for system failure occurrence and indicate how certain failures can be avoided
• Help improve the system maintenance policy for more efficient performance

FTA provides a very intuitive and straightforward method that allows designers to visually perceive the possible ways in which a certain failure can occur. FTA is a deductive approach: once the bottom-level failures have been identified, FTA can easily assist the designers in assessing how resistant the system is to various sources of risk. However, FTA is not good at finding the bottom-level initiating faults; that is why it works best when combined with FMEA, which exhaustively locates the failure modes at the bottom level and their local effects.

6 Performability

This chapter proposes performability as an overarching aspect of enterprise architecture. To withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the overarching performability aspect through all the constituting attributes of the enterprise architecture.

Numerous elements in a computer system can have a complementary or orthogonal effect on performance. They can be classified under the headings of hardware, system software, supporting software such as middleware, OLTP and database management systems (DBMS) or relational database management systems (RDBMS), network links, and application design and coding. The last of these can render performance-enhancing efforts in all the other elements useless through poor design and coding; this is particularly true in client/server and RDBMS environments, where performance can be destroyed by poor design. The objective of achieving maximum performance is met by:

• Optimizing the speed of doing the things that have to be done, for example, executing program code and transferring data to or from storage and across networks.
• Minimizing inhibitors to this aim, i.e., eliminating or minimizing any delays (latency) in the execution of the application and of the other activities that support the application. These delays are called waits or waiting times in queuing terms.

Much of the processing involved in computational work is serial, and in such cases the final performance of the computational work depends on the performance of the elements in the execution chain. Some work can, of course, be run in parallel, such as scientific programs or parallel data paths provided to speed up data transmission or storage I/O, but for performance planning and capacity work it is wise to take the worst case of everything operating serially rather than in parallel.

Response time is the elapsed time between a work request being submitted and the receipt of a response by the same submitter, normally measured from the last character sent by the user to the first character returned. Some people define the latter point as the time from submission to receiving the first byte of the response. Examples of response time are:

• The time for a database or other data query
• The time for an OLTP transaction to complete successfully
• The time for a web page request, access, transmission, and display


Throughput is the measure of work completed over a fixed time, for example:

• OLTP transactions per minute or second
• Bytes or megabytes of a file transferred per unit time over a network
• Bytes or megabytes of data read or written per unit time
• Web server pages accessed per unit time

These quantities are not related in any fixed way, so that, for example, it may be possible to increase throughput at the expense of response time, or vice versa. The setup of the system will in theory be done to satisfy the performance criteria in a service level agreement (SLA). The numbers used in an SLA are essentially those that are reproducible, unlike (say) analytical queries, which may take seconds to hours to complete, depending on the parameters involved in a particular query. Numbers that quantify performance in systems are normally referred to by the generic term performance metrics.

Resources are elements of a system that can contribute to enhancing response time or throughput. Changes in the magnitude of one resource should not affect another resource in the same system, but in practice this may not be exactly true. Simply increasing the supply of one resource to solve an apparent response time or throughput problem may not have the desired effect, since it may alter the demands on other resources in the system.
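A minimal instrumentation sketch in Python (illustrative only; the workload is a hypothetical stand-in for a query or transaction) shows how these two kinds of metrics are typically collected: per-request response times and throughput over the measurement window.

```python
import statistics
import time

def measure(workload, requests):
    """Run requests through `workload`, returning mean and ~95th percentile
    response times plus throughput over the whole run."""
    latencies = []
    start = time.perf_counter()
    for request in requests:
        t0 = time.perf_counter()
        workload(request)                      # the unit of work being measured
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "mean_response_s": statistics.mean(latencies),
        "p95_response_s": statistics.quantiles(latencies, n=20)[-1],
        "throughput_per_s": len(latencies) / elapsed,
    }

# Hypothetical workload standing in for a database query or OLTP transaction.
print(measure(lambda r: sum(range(10_000)), range(200)))
```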

6.1 Introduction to Performability

Performability is the ability of a system to meet timing requirements. Performance can be measured in terms of throughput and latency for both interactive and embedded real-time systems, although throughput is usually more important in interactive systems and latency more important in embedded systems. When events occur (interrupts, messages, requests from users or other systems, or clock events marking the passage of time), the system, or some element of the system, must respond within an expected duration of time. For example, web-based system events come in the form of requests from users (numbering from the tens up to the tens of millions) via clients such as web browsers; for a web-based system, the desired response might be expressed as the number of transactions that can be processed in a minute. Characterizing performance primarily involves:

• The events that can occur
• When they can occur
• The system's or element's time-based response to those events

The general scenario for performance consists of:

1. Artifact: The artifact is the system or one or more of its components.
2. Environment: The system can be in various operational modes, such as normal, emergency, peak load, or overload.


3. Source of stimulus: The stimuli arrive from either external (possibly multiple) or internal sources.
4. Stimulus: The stimuli are the event arrivals. The arrival pattern can be periodic, stochastic, or sporadic, characterized by numeric parameters.
5. Response: The system must process the arriving events. This may cause a change in the system environment (e.g., from normal to overload mode).
6. Response measure: The response measures are:
   • The time it takes to process the arriving events (latency or a deadline)
   • The variation in this time (jitter)
   • The number of events that can be processed within a particular time interval (throughput)
   • A characterization of the events that cannot be processed (miss rate)

Arrival patterns can be characterized as:

• Periodic, i.e., predictable
• Stochastic, i.e., following mathematical distributions
• Sporadic, i.e., unpredictable

The response of the system can be measured in terms of:

• Latency: The time between the arrival of the stimulus and the system's response to it.
• Deadlines in processing: In an engine controller, for example, the fuel should ignite when the cylinder is in a particular position, thus introducing a processing deadline.
• The throughput of the system: Usually given as the number of transactions the system can process in a unit of time.
• The jitter of the response: The allowable variation in latency.
• The number of events not processed because the system was too busy to respond.

Response time consists of two parts:

1. Processing time, during which the system is working to respond. Processing consumes resources, which takes time. Events are handled by the execution of one or more components, which consumes resources. Hardware resources include CPU, data stores, network communication bandwidth, and memory; software resources include entities defined by the system as per its design. As utilization nears the saturation of capacity, different resources behave differently. For example, as a CPU becomes more heavily loaded, performance usually degrades fairly steadily; on the other hand, when memory starts to run out, at some point page swapping becomes overwhelming and performance crashes suddenly.
2. Blocked time, during which the system is unable to respond. This may be due to contention for some needed resource, because a resource is unavailable, or because the computation depends on the results of other computations that are not yet available.


• Dependency on other computation: A computation may have to wait because it must synchronize with the results of another computation or because it is waiting for the results of a computation that it initiated. If a component calls another component and must wait for that component to respond, the wait can be significant if the called component is at the other end of a network (as compared with one co-located on the same processor).
• Contention for resources: Many resources can be used by only a single client at a time, which means that other clients must wait for access. These events may be in a single stream or in multiple streams: different events in the same stream vying for the same resource, or multiple streams vying for the same resource, contribute to latency. The more contention for a resource, the more likely latency is to be introduced for that resource.
• Availability of resources: Even in the absence of contention, computation cannot proceed if a resource is unavailable. Unavailability may be caused by the resource being offline, by failure of a component, or by some other reason.

The overarching goal of performability is to generate a response to an event arriving at the system within some stipulated constraint of time.

6.1.1 Managing the Resource Demand

The resource demand can be managed by reducing the number of events processed (by enforcing a sampling rate) or by limiting the rate at which the system responds to events.

1. Manage sampling rate: Demand reduction can be achieved if it is possible to reduce the sampling frequency at which a stream of environmental data is captured, although this typically comes with some attendant loss of fidelity. For example, in signal processing systems different codecs can be chosen with different sampling rates and data formats to maintain predictable levels of latency; the decision is between having a lower-fidelity but consistent stream of data and losing packets of data altogether.
2. Limit event response: The system can be designed to process events only up to a set maximum rate, thereby ensuring more predictable processing when the events are actually processed. This tactic could be triggered by a queue size or processor utilization measure exceeding some warning threshold. If you adopt this tactic and it is unacceptable to lose any events, then you must ensure that your queues are large enough to handle the worst-case scenario.
3. Prioritize events: The system can impose a priority scheme that ranks events according to how important it is to service them. If there are not enough resources available, low-priority events can then be ignored as they arise. Ignoring events consumes minimal resources (including time) and thus increases performance compared to a system that services all events all the time.
4. Reduce overhead: A strategy for reducing computational overhead is to co-locate resources; co-location may mean:
   • hosting cooperating components on the same processor to avoid the time delay of network communication
   • putting the resources in the same runtime software component to avoid even the expense of a subroutine call


   A special case of reducing computational overhead is to perform a periodic cleanup of resources that have become inefficient; for example, hash tables and virtual memory maps may require recalculation and reinitialization. Another common strategy is to execute single-threaded servers (for simplicity and to avoid contention) and split the workload across them. The use of intermediaries, a linchpin of modifiability, increases the resources consumed in processing an event stream, so removing them improves latency. Separation of concerns can also increase the processing overhead necessary to service an event if it leads to the event being serviced by a chain of components rather than a single component; the context-switching and inter-component communication costs add up, especially when the components are on different nodes of a network.
5. Bound execution times: Place a limit on how much execution time is used to respond to an event. For iterative, data-dependent algorithms, limiting the number of iterations is a method for bounding execution times; the cost is usually a less accurate computation. If you adopt this tactic, you will need to assess its effect on accuracy and see whether the result is "good enough." This resource management tactic is frequently paired with the manage sampling rate tactic.
6. Increase resource efficiency: Improving the algorithms used in critical areas will decrease latency. (A critical section is a section of code in a multi-threaded system in which at most one thread may be active at any time.)

6.1.2 Managing Resources

At times, managing the resources themselves can be more amenable than managing the demand for them. Resources can be managed more appropriately through scheduling, replication, or simply increasing their availability. Sometimes one resource can be traded for another; for example, intermediate data may be kept in a cache or it may be regenerated, depending on the available time and space. This tactic is usually applied to the processor but is also effective when applied to other resources such as a disk.

1. Bound queue sizes: This controls the maximum number of queued arrivals and, consequently, the resources used to process the arrivals. It necessitates adopting a policy for what happens when the queues overflow and deciding whether not responding to lost events is acceptable. This tactic is frequently paired with the limit event response tactic.
2. Introduce concurrency: If requests can be processed in parallel, the blocked time can be reduced. Concurrency can be introduced by processing different streams of events on different threads or by creating additional threads to process different sets of activities. Once concurrency has been introduced, scheduling policies can be used to achieve the envisaged goals; different policies may maximize fairness (all requests get equal time), throughput (shortest time to finish first), or other goals.
3. Schedule resources: Whenever there is contention for a resource, the resource and the corresponding processors, buffers, and networks must be scheduled. The goal is to understand the characteristics of each resource's use and choose a scheduling strategy that is compatible with it.


   A scheduling policy conceptually has two parts: priority assignment and dispatching.

   • All scheduling policies assign priorities. In some cases the assignment is as simple as first-in/first-out (FIFO); in other cases it can be tied to the deadline of the request or to its semantic importance. Competing criteria for scheduling include optimal resource usage, request importance, minimizing the number of resources used, minimizing latency, maximizing throughput, preventing starvation to ensure fairness, and so forth. Some common scheduling policies are these:

     a. First-in/first-out: FIFO queues treat all requests for resources as equals and satisfy them in turn. One possibility with a FIFO queue is that one request gets stuck behind another that takes a long time to generate a response. As long as all requests are truly equal this is not a problem, but if some requests are of higher priority than others, it is problematic.
     b. Fixed-priority scheduling: Fixed-priority scheduling assigns each source of resource requests a particular priority and assigns the resources in that priority order. This strategy ensures better service for higher-priority requests, but it admits the possibility of a lower-priority, yet important, request taking an arbitrarily long time to be serviced because it is stuck behind a series of higher-priority requests. Three common prioritization strategies are these:
        • Semantic importance: Each stream is assigned a priority statically according to some domain characteristic of the task that generates it.
        • Deadline monotonic: A static priority assignment that assigns higher priority to streams with shorter deadlines. This policy is used when streams of different priorities with real-time deadlines are to be scheduled.
        • Rate monotonic: A static priority assignment for periodic streams that assigns higher priority to streams with shorter periods. This policy is a special case of deadline monotonic and is more likely to be supported by the operating system.
     c. Dynamic priority scheduling: Strategies include these (see the sketch at the end of this section):
        • Round-robin: A scheduling strategy that orders the requests and then, at every assignment possibility, assigns the resource to the next request in that order. A special form of round-robin is a cyclic executive, where assignment possibilities occur at fixed time intervals.
        • Earliest-deadline-first: Assigns priorities based on the pending request with the earliest deadline.
        • Least-slack-first: Assigns the highest priority to the job having the least "slack time," which is the difference between the execution time remaining and the time to the job's deadline.
     d. Static scheduling: A cyclic executive schedule is a scheduling strategy where the preemption points and the sequence of assignment to the resource are determined offline; the runtime overhead of a scheduler is thereby obviated.


   • Dispatching: A high-priority event stream can be dispatched only if the resource to which it is being assigned is available; sometimes this depends on preempting the current user of the resource. Possible preemption options are as follows:
        • Preemption can occur at any time
        • Preemption can occur only at specific preemption points
        • Executing processes cannot be preempted

4. Replication:
   • Maintain multiple copies of computations: The multiple servers in a client-server pattern are replicas of computation. The purpose of replicas is to reduce the contention that would occur if all computations took place on a single server. A load balancer is a piece of software that assigns new work to one of the available duplicate servers; criteria for assignment vary but can be as simple as round-robin or assigning the next request to the least busy server.
   • Maintain multiple copies of data: Caching is a tactic that involves keeping copies of data (possibly one a subset of the other) on storage with different access speeds. The different access speeds may be inherent (memory versus secondary storage) or may be due to the necessity of network communication. It is the system's responsibility to choose the data to be cached. Some caches operate by merely keeping copies of whatever was recently requested, but it is also possible to predict users' future requests based on patterns of behavior, and to begin the calculations or prefetches necessary to satisfy those requests before the user has made them. Data replication involves keeping separate copies of the data to reduce the contention from multiple simultaneous accesses. Because the data being cached or replicated is usually a copy of existing data, keeping the copies consistent and synchronized becomes a responsibility that the system must assume.

5. Increase resources: Faster processors, additional processors, additional memory, and faster networks all have the potential to reduce latency. Increasing the resources is definitely a tactic to reduce latency and in many cases is the cheapest way to get immediate improvement, although cost is usually a consideration in the choice of actual resources.
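As an illustration of dynamic priority scheduling, the following Python sketch dispatches pending jobs earliest-deadline-first on a single, non-preemptive resource; it is a simplification, and the job set is hypothetical.

```python
import heapq

def edf_schedule(jobs):
    """jobs: list of (name, release_time, execution_time, deadline).
    Returns (name, completion_time, deadline_met) in completion order,
    dispatching the ready job with the earliest deadline first."""
    pending = sorted(jobs, key=lambda job: job[1])   # sort by release time
    ready, completed = [], []                        # ready: min-heap keyed on deadline
    clock, i = 0.0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= clock:
            name, _release, exec_time, deadline = pending[i]
            heapq.heappush(ready, (deadline, name, exec_time))
            i += 1
        if not ready:
            clock = pending[i][1]                    # idle until the next release
            continue
        deadline, name, exec_time = heapq.heappop(ready)
        clock += exec_time
        completed.append((name, clock, clock <= deadline))
    return completed

jobs = [("A", 0, 2, 5), ("B", 1, 1, 3), ("C", 2, 2, 10)]
print(edf_schedule(jobs))   # [('A', 2.0, True), ('B', 3.0, True), ('C', 5.0, True)]
```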

6.2 Performance Evaluation

Several of the existing performance evaluation approaches are based on Queuing Network Models (QNMs), Petri nets, or process algebras. QNMs are constructed from service centers and queues; service centers provide services, and each service center has a queue attached to it. Compared to QNMs, Petri nets and process algebras are more formal techniques. Petri nets describe the system using places, transitions, and tokens, and timing information has been added by a number of extensions to traditional Petri nets. Process algebras are algebraic languages used for the description and formal verification of the functional properties of concurrent and distributed systems; stochastic process algebras (SPAs) are extensions that allow analysis of performance properties. The presentation and approach in this section has been adapted from Purhonen (2005).
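The simplest QNM building block is a single service center with Poisson arrivals and exponential service times (M/M/1); its standard closed-form results can be computed directly, as in the Python sketch below (the arrival and service rates are hypothetical).

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard M/M/1 results: utilization, mean number in system,
    and mean response time W = 1 / (service_rate - arrival_rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate
    mean_in_system = utilization / (1.0 - utilization)
    mean_response_time = 1.0 / (service_rate - arrival_rate)
    return {
        "utilization": utilization,
        "mean_in_system": mean_in_system,
        "mean_response_time": mean_response_time,
    }

# Hypothetical rates: 80 requests/s arriving at a server that can serve 100/s.
print(mm1_metrics(80.0, 100.0))   # utilization 0.8, L = 4.0, W = 0.05 s
```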


6.2.1 Performance Evaluation Framework

Architecture is the fundamental organization of a software system, embodied in its components and their relationships to each other and to the environment. Performance evaluation means evaluation of both the time behavior and the resource behavior of the system. The Architecture Trade-off Analysis Method (ATAM) is a scenario-based method for evaluating architecture-level designs; ATAM considers multiple quality attributes, including performance. Support for performance-based design decisions is gathered, for example, using benchmarks, simulation, prototyping, and analysis of worst-case situations based on experience. Some specific problems of current practice are that it is difficult and time-consuming to estimate the reliability of the results, the analysis cannot easily be repeated, and the results are not comparable with other similar evaluations. In order to make the methods easier to use and accept, developers are trying to connect performance models directly to familiar architectural models and to automate the whole process, from the software architecture description through to the evaluation results. The elements of the comparison framework are:

1. Context defines the types of purposes the method has been applied to and in what types of systems.
   • Evaluation goal: The evaluation may be performed differently depending on its purpose; for example, different needs arise in different stages of the product's life cycle. Therefore, it is important to know what types of evaluations have already been performed with the approach or what types of evaluations it was originally intended for.
   • Application area: The application area describes what kinds of applications the approach is intended for or has already been applied to. The application area can affect, for example, the required accuracy of the results.
   • Product type: Products differ, for example, in size and complexity. Approaches can differ in the support they give for evaluating different types of products.

2. Architecture defines the kind of information the method requires about the system under study.
   • Views: Software architecture descriptions are usually divided into views. The evaluation approach should define the architectural structures that need to be described in order to make the evaluation possible.
   • Language: In order to allow easy integration with other development activities, the language in which the architecture is assumed to be described is important. Architectures are depicted with architecture description languages (ADLs). For example, the Unified Modeling Language (UML) is commonly used as an ADL, but there are also several languages developed specifically for describing architectures.
   • Parameters: In addition to architectural structures, other information may be needed. For example, in order to analyze the response time of requests, execution-time estimates are required for the individual components that serve the request. The analysis technique may expect information about resource allocation policies, such as the scheduling policy. Additionally, the objectives of the performance evaluation, such as timing requirements and resource constraints, are needed.


3. Evaluation describes how the evaluation is actually performed using the method.
   • Process: Guidance is needed for understanding what tasks belong to the evaluation and how the tasks should be performed. The theory behind evaluation methods is often difficult for non-performance experts to understand, and thus an ambiguous process description can be a reason for not taking a method into use.
   • Performance model: The performance model is the model on which the actual evaluation is performed. The performance model may be part of the architecture description, or the architectural diagrams may be transformed into a performance model.
   • Solution technique: An approach may support the use of one or more solution techniques. A solution technique may be based on some mathematical theory, or it may be formed from rules and guidelines. Different techniques may be applied in different stages of product development.
   • Results: The performance-related design decisions made during software architecture design include the selection of architectural styles and patterns, task partitioning and deployment to the platform, task scheduling and communication, and decisions concerning the use of different types of resources. All of these design decisions can be sources of improvement. In addition to software architecture changes, improvements to the hardware architecture or changes to the requirements, both functional and non-functional, can be proposed.
   • Tools: Tools are needed to support the different tasks in the evaluation. In addition to helping with the evaluation itself, transformation tools can be used between architecture models and performance models. Furthermore, the reuse of results is facilitated by tools.

4. Costs and benefits examines how useful the method is to an organization and to the actual user.
   • Collaboration: Several domains in product development need to work together to produce a final product that meets all the requirements. Trade-off studies link the evaluation to the other needs of the stakeholders. Software architects have to know the possible dependencies between different quality attributes. Because expertise and information tend to be distributed among experts in these domains, easy integration with other development activities is needed.
   • Effort: In addition to benefits, there are also costs associated with the evaluation. Time-to-market requirements may lead to systematic architecture evaluation being omitted if it is too laborious.
   • Flexibility: The approach should be flexible with respect to the abstraction level of the architecture description and the size of the product. For example, in the early stages of development the speed of the analysis is often more important than the accuracy of the results. In addition, when analyzing large systems, unnecessary details may be discarded in order to get an understanding of the efficiency of the system as a whole; in contrast, sometimes detailed analysis of the most critical areas is needed.
   • Reusability: In the early stages of product development there is a lot of uncertainty to be handled. The requirements change, and therefore the architecture also has to be changed.


architecture also has to be changed. As the development proceeds, the estimates also become more accurate and therefore it should be easy to modify the performance models and re-evaluate the architecture. In addition, one of the problems with the current practices is that the results of the evaluations are not reused. Thus, the approach should support reuse between projects. • Maturity: The maturity is assessed by examining the amount of support that is available for the use of the method; it is easier to convince the organization of the usefulness of the method if the method has already been used widely. It  indicates that the method is probably practical and stable, which means less problems while using it and that support is also available if problems do occur. 6.2.1.1 Evaluation Methods Various performance evaluation methods are: 1. Rate-Monotonic Analysis (RMA) is a collection of quantitative techniques that are used to understand, analyze, and predict the timing behavior of the systems. RMA  can be traced back to the rate monotonic scheduling (RMS) theory; consequently, it has been especially used for analyzing schedulability. RMA  has been used by several organizations in development efforts in mathematically guaranteeing that critical deadlines will always be met, even in worst-case situations. It has also been applied to evaluating software architectures, for example, for discovering the ratio between concurrently available functionality over the cost of required hardware resources and for comparing architecture candidates. Definitions of critical use cases and scenarios specify the scope of the analysis. In order to be able to use RMA, a task model of the system has to be defined. The  model should specify period, response time, and deadline for each task. RMA does not have a separate performance model, but the mathematical analysis can be made directly from the architectural diagrams. There are commercial tools based on RMA  and the tools can be linked directly to UML modeling tools. 2. Performance Assessment of Software Architectures (PASA) uses the principles and techniques of Software Performance Engineering (SPE); PASA can be used as a framework when SPE techniques are applied to software architecture evaluation. PASA identifies deviations from architectural style and proposes alternative interactions between components and refactoring to remove anti-patterns. PASA is intended for uncovering potential problems in new development or for deciding whether to continue to commit resources to the current architecture or migrate to a new one. PASA  is developed on experiences in performance assessment of web-based systems, financial applications, and real-time systems. The  evaluation starts from critical use cases that are further specified as key performance scenarios. The scenarios are documented using augmented UML sequence diagrams. PASA uses three different approaches to the impact analysis: identification of architectural styles, identification of performance anti-patterns, and performance modeling and analysis. Performance analysis is made in two phases: initially, a  simple analysis of performance bounds may be sufficient. If the analysis of


performance bounds indicates the need for more detailed modeling, this is done in the second phase. Detailed performance analysis is based on two models: • Software execution model, which provide optimistic performance metrics • System execution model, which are used for studying the resource contention effects on the execution behavior The results of solving the system execution model include, for example, metrics for resource contention, sensitivity of performance metrics to variation in workload composition, and identification of bottleneck resources. 3. Layered Queuing Network (LQN) modeling is an extension to QNM; the main difference is that a server, to which customer requests are arriving and queuing for service, may become a client to other servers from which it requires nested services while serving its own clients. If tasks have no resource limits then LQN gives the same predictions as a queuing network. LQN was originally created for modeling client/server systems. It has now been applied to database applications  and web servers. Moreover, simulation of LQN models has been used to incrementally optimize the software configuration of electronic data interchange converter systems in a financial domain and to determine the highest achievable throughput in different hardware and software configurations in a telecommunication system. LQN starts with the identification of the critical scenarios with the most stringent performance constraints. Then LQN models are prepared. A  LQN model is represented as an acyclic graph whose nodes are software entities and hardware devices and whose arcs denote service requests. The  LQN model is transformed from UML class, deployment, and interaction diagrams or use case maps. There are also transformation rules for creating LQN models from architectural patterns. The LQN model produces results such as response time, throughput, utilization of servers on behalf of different types of requests, and queuing delays. The parameters in a LQN model are the average service time for each entry and average number of visits for each request. LQN offers several analytical solvers. In case the scheduling policy or something else hinders the analytical solver to be used, then simulation is available. A confidence interval can be given to the results. 4. Colored Petri Nets (CPN) approach is based on just one case study where CPN was used for evaluating alternative mechanisms and policies in an execution architecture. In the Colored Petri Nets case study, CPN has been used for evaluating alternative mechanisms and policies in execution architecture. The mechanisms include the task control and communication mechanisms and the policies are for task division and allocation. In addition, CPN is used for setting timing requirements for the component design and implementation when the available resources are already fixed. They  estimate the message buffer usage and message delays based on simulation of lot of use cases. The application area is mobile phone software. The evaluation handles industrial scale products and product families. The “4 + 1” views of architecture design approach and UML are used to describe the architecture. The parameters are: • Message delays on different message links • Task-switching time of the operation system • Event processing time


Different probability distributions are used for the streams of user request, events and network signals. The module architecture in UML is mapped to the execution architecture as a CPN model for simulation. The simulation gives the following results: the message buffer usage in worst case, minimum, average, and maximum message delays, and the number of task switches needed for each transaction. 5. ATAM is probably the best-known software architecture evaluation method— though it is not necessarily a performance evaluation method. Architecture Tradeoff Analysis Method (ATAM) is used to learn what the critical architectural design decisions in the context of selected attributes are. Those design decisions can then be modeled in subsequent analyzes. One of the supported attributes in ATAM is performance. ATAM has been used for risk analysis, for example, in realtime systems and in aeronautical systems. ATAM is a review type of activity that takes a few days to perform. An evaluation team that has several members with welldefined responsibilities performs ATAM. In addition, the development organization has to take part in the evaluation. In  particular, the architect has a role in providing information on the architectural decisions. The quality goals in ATAM are characterized with scenarios. They use three types of scenarios: • Usage scenarios • Growth scenarios • Exploratory scenarios The scenarios are created by studying the architecture and interviewing the stakeholders. They  use standard characterization for each attribute, which facilitates the elicitation of scenarios. The analysis is based on finding sensitivity points and trade-off points. A  sensitivity point is a property that is critical for achieving a particular quality and a trade-off point is a property that affects more than one attribute and is a sensitivity point for at least one attribute. An attribute-based architecture style (ABAS) adds to architectural style the ability to reason based on quality attribute-specific models. An analysis of an ABAS is based on a quality attribute-specific model that provides a method of reasoning about the behavior of component types that interact in the defined pattern. For example, a definition of performance ABAS can include a queuing model and the rules for solving the model under varying sets of assumptions. The qualitative analysis that ATAM uses is based on asking questions regarding what kinds of quantitative evaluations have been performed on the system and how else the performance characteristics have been ensured. The definitions of ABASs help in eliciting these questions. In case the screening questions reveal potential problems, a more comprehensive model of the quality attribute aspect under scrutiny is built. ATAM has a clear process description and definition of roles for members of the evaluation team. 6. Metrics are used for analyzing performance of the high-level software architecture of a telecommunication system. The goal in the metrics case study has been to analyze the effect of architectural decisions so that the possible implementation of the components does not need to be taken into account. In the case study, metrics are applied to a high-level software architecture of a telecommunication system. The evaluation is scenario-based. A performance scenario describes a particular use of the system performed by users and its weight is derived from the frequency of how often in time this scenario happens.


Architecture Description Language for Telecommunication (ADLT) is used for describing the architecture. It has been designed with an easy translation from UML diagrams in mind. ADLT uses four diagrams:
• Activity diagram
• Architectural configuration
• Sequence diagram
• Protocol diagram

Activity diagrams and sequence diagrams are used in performance analysis. An activity diagram shows the dynamic component and connector behavior using simple finite state machines enriched with activities. A sequence diagram is similar to UML's sequence diagram. The analysis is performed using a metric called Elementary Stress:

\[ \text{Elementary Stress} = \frac{\text{Presences} + \text{Queues}}{\text{Parallelisms} + 1} \]

where:
Presences is the number of times that a component or connector appears in a scenario
Parallelisms is the number of parallelism symbols inside a component's or connector's activity diagrams
Queues is the number of queue-type structures inside the activity diagrams

The justification of the metric is that more occurrences of a component or connector imply a larger overhead associated with that scenario. This overhead is decreased by the use of parallelism and made worse by queues or stacks. The trade-off analysis is made using a table. The trade-off value indicates the number of attributes the element is sensitive to. The average of all trade-off values gives the architecture trade-off value.

6.2.1.2 Evaluation Results

The various structures or diagrams that are needed by the various methods are presented in Table 6.1. In addition to the architectural diagrams, each approach needs some additional information or parameters:

• In RMA, each task should have its period, deadline, and response time (possibly representing the worst case) defined. In addition, the RMA tools provide support for different types of scheduling disciplines.
• In PASA the important performance metrics for each server are residence time, utilization, throughput, and average queue length. The scheduling discipline for queues also restricts how the problem is solved.
• LQN requires execution time demands for each software component on behalf of different types of system requests, and demands for other resources such as I/O devices and communication networks. The parameters include, for example, average values for the arrival rate of requests, the execution time of requests, and message delays. In addition, the scheduling discipline for each software and hardware server is needed.


TABLE 6.1 Architectural Diagrams
(The table maps the diagram types required by each method—UML class diagram, UML deployment diagram, UML sequence diagram, UML use cases, augmented UML sequence diagram, MSC, use case maps, ADLT diagrams, and META-H diagrams—against the methods RMA, PASA, LQN, CPN, ATAM, and Metrics.)

• CPN requires time parameters such as message delays, task-switching time, and event processing time.
• Metrics and ATAM need definitions of usage scenarios and weights for them. These scenarios are derived from requirements. Part of the ATAM approach is that it supports the scenario elicitation.

The solution techniques used by the various methods are listed in Table 6.2. Based on the publications, the following types of results have been obtained with the solution techniques:

• The main result of RMA is the schedulability of the system (a schedulability check of this kind is sketched after this list). However, other information can also be revealed, for example, response times for requests.
• PASA is intended for deciding end-to-end processing for messages and for analyzing scalability. PASA identifies problem areas that require correction in order to achieve the desired scalability and quantifies the alternatives so that developers can select the most cost-effective solution.
• LQN results are response time, throughput, queuing delays, and utilization of different software and hardware components. The more exact the solver, the more accurate the results.

TABLE 6.2 Solution Techniques

(The table maps the solution techniques—performance bounds, architectural patterns, architectural anti-patterns, analytic solvers, simulation, mathematical rules, ABAS, and metrics—against the methods RMA, PASA, LQN, CPN, ATAM, and Metrics.)


• By simulating a large number of use cases, CPN is used to obtain the maximum size of the message buffers and the average transaction processing time. On the other hand, when the total size of the buffers is fixed, the timing parameters are results that can be further used as requirements for the component design and implementation.
• ATAM is used to find the points in the architecture that are sensitive to performance. These are further used in specifying trade-off points and risks.
• Metrics are able to show the critical elements in the architecture.
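To make the "schedulability" result that RMA delivers concrete, the sketch below applies the classic Liu–Layland utilization bound and an exact response-time iteration to a small task set. The task set and all numbers are invented for illustration and are not taken from the methods surveyed above.

```python
import math

def rma_utilization_bound_ok(tasks):
    """Liu-Layland sufficient test: tasks = [(execution_time, period), ...]."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

def response_time(tasks, i):
    """Exact response-time iteration for task i (tasks sorted by period, rate-monotonic priority)."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        r_next = c_i + interference
        if r_next == r:
            return r          # converged: worst-case response time
        if r_next > t_i:
            return None       # deadline (= period) missed
        r = r_next

# Hypothetical task set: (worst-case execution time, period) in milliseconds
tasks = sorted([(2, 10), (4, 20), (10, 50)], key=lambda ct: ct[1])
print("Liu-Layland bound satisfied:", rma_utilization_bound_ok(tasks))
for i, (c, t) in enumerate(tasks):
    print(f"task {i}: C={c}, T={t}, worst-case response time={response_time(tasks, i)}")
```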

6.3 Summary

This chapter proposed performability as the overarching aspect of enterprise architecture. The chapter discussed aspects of response time and how it can be managed, either by managing the demand for resources or by managing the resources themselves. Many of the processes or tasks involved in computational work are serial, and in such cases the final performance of the computational work will depend on the performance of the elements in the execution chain. Some work can, of course, be run in parallel, such as scientific programs or parallel data paths provided to speed up data transmission or storage I/O, but for performance planning and capacity work it is wise to take the worst case of everything operating serially and not in parallel. The latter half of the chapter focused on frameworks for the evaluation of performance. It described several techniques and methods for evaluating performance, such as Rate-Monotonic Analysis (RMA), Performance Assessment of Software Architectures (PASA), Layered Queuing Network (LQN) modeling, Colored Petri Nets (CPN), and the Architecture Trade-off Analysis Method (ATAM).

The chapter's appendix discusses queuing systems, which are an essential prerequisite to enabling the digital transformation. Analytical queuing models offer powerful means for understanding and evaluating queuing processes. However, the use of these analytical models is somewhat restricted by their underlying assumptions. The limitations pertain to the structure of the queuing system, the way variability can be incorporated into the models, and the focus on steady-state analysis. Because many business processes are cross-functional and characterized by complex structures and variability patterns, a more flexible modeling tool is needed. Simulation, discussed in the latter half of this chapter, offers this flexibility and represents a powerful approach for the analysis and quantitative evaluation of business processes.

Appendix 6A: Queuing Systems

As stated in the beginning of this chapter, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the performability aspects through all the constituting attributes of the enterprise architecture, but queuing systems are an essential pre-requisite to enable this transformation.


Analytical queuing models offer powerful means for understanding and evaluating queuing processes. However, the use of these analytical models is somewhat restricted by their underlying assumptions. The limitations pertain to the structure of the queuing system, the way variability can be incorporated into the models, and the focus on steady-state analysis. Because many business processes are cross-functional and characterized by complex structures and variability patterns, a more flexible modeling tool is needed. Simulation, discussed in the latter half of this chapter, offers this flexibility and represents a powerful approach for analysis and quantitative evaluation of business processes. 6A.1 Queuing Systems We come across people queuing for many activities in daily life. It  can be the issue of consulting in a medical clinic, ration in a ration shop, issue of cinema tickets, issue of rail/ airline tickets, etc. The  arriving people are called customers, and the person issuing the ticket is referred to as server. There can be more than one queue and more than one server in many cases—typically out-patient department (OPD), rail tickets, bus tickets etc. If the server is free at the time of arrival of the customer, he can be serviced immediately. If there are a number of people, a waiting line and consequently waiting time comes into operation. There can also be server idle time. Queuing theory was originally developed by Agner Krarup Erlang in 1909. Erlang was a Danish engineer who worked in the Copenhagen telephone exchange. When studying the telephone traffic problem, he used a Poisson process as the arrival mechanism and, for the first time, modeled the telephone queues as an M/D/1 queuing system. Ever since the first queuing model, queuing theory has been well developed and extended to many complex situations, even with complicated queuing networks. These models, together with the advancement of computer technology, have been used widely now in many fields and have shown significant benefits in optimizing behavior within these systems. The entities that request services are called customers, and the processes that provide services and fulfills customer needs are called service channels. It  is obvious that the capacity is the key factor in influencing system behaviors. If the service channels have less capacity that cannot satisfy customer demand, then the waiting line will form, and, systems will become more and more crowded; thus, the quality of service will be degraded and many customers may choose to leave the system before getting served. From the standpoint of customers, the more service channel capacity the better; this implies less time to wait to be served and a high service quality is perceived. On the other hand, if the service channels have more capacity than needed, then from the service provider’s perspective, more service channels predominantly mean more investment, capital expenditure, and human labor involved, which increases the operations cost of the service or the manufacturing process. Thus, one of the most important purposes of studying queuing theory is to find a balance between these two costs, the waiting cost and the service cost. If the customer waits for too long, he/she may not be happy with the service, thus might not return in the future, causing loss of potential profit; or, parts may be waiting too long, increasing production cycle time, again losing potential sales and profits. These costs are considered to be waiting costs. 
Service costs are those that increase service capacity such as salary paid to the servers. Queuing theory application balances these two costs by determining the right level of service so that the total cost of the operations (waiting cost + service cost) can be optimized. Figure 6A.1 shows the schematic of the total cost of queue operations versus process capacity.


FIGURE 6A.1 Total cost of queue operations versus process capacity.
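The trade-off pictured in Figure 6A.1 can be sketched numerically. The illustration below assumes a single M/M/1 service facility whose capacity is its service rate µ; it uses the queue-length result Lq = λ²/(µ(µ − λ)) derived later in this appendix, and the cost rates are invented for illustration.

```python
# Illustrative cost curve: waiting cost falls and service cost rises as capacity grows.
arrival_rate = 8.0           # customers per hour (assumed)
waiting_cost = 25.0          # cost per customer-hour spent waiting (assumed)
service_cost_per_unit = 3.0  # cost per unit of service rate per hour (assumed)

def total_cost(service_rate):
    rho = arrival_rate / service_rate
    lq = rho ** 2 / (1 - rho)              # M/M/1 expected queue length
    return waiting_cost * lq + service_cost_per_unit * service_rate

for mu in range(9, 21):                    # capacity must exceed the arrival rate
    print(f"mu={mu:2d}  total cost per hour = {total_cost(mu):7.2f}")
```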

6A.1.1 Queuing Process 1. Calling population: A calling population represents a group of flow units, for example customers, some of whom are associated with the queuing system for predetermined purposes. This  means that a flow unit joins one or more queues of flow units waiting in different buffers within a certain process. The calling population may contain flow units of the same type or flow units of different types. The first is called a homogeneous whereas the second is called a heterogeneous calling population. Calling populations are usually heterogeneous. For example, patients enter a hospital for different purposes, such as hospitalization because of different health problems; such a calling population consists of different subpopulations. The calling population could be understood as an infinite or finite population. The calling population is considered as infinite when a flow unit joins a queue of such a large number of flow units that the queuing system is not affected by the arrival of the new flow unit. For example, a patient joins a waiting list of patients in a long queue that may last months of waiting, for the purpose conducting an operation. Otherwise, the calling population is considered finite if the criterion for an infinite population is not valid. For example, a patient joins a queue of a number of patients in a waiting room to see a specialist physician. 2. Arrival process: The arrival process represents a determined way or path that every flow unit should follow after entering the queuing process. The path for each flow unit is determined depending on the type of flow unit. Such a path consists of a number of activities through which the flow unit passes. There are queues, particularly when people (customers) as flow units form the queue, where the person who joins the queue may decide for one of the following three possible actions: • Balk is where the person decides to leave and not to join the queue because of the large number of people already in the queue.


• Renege is where the person joins the queue but after some time decides to leave the queue. • The person decides to wait in the queue regardless of the time spent waiting. 3. Queue configuration: The queue configuration indicates the type of queue that the flow unit joins, which determines the requirements to join a queue and the behavior of a flow unit or customer who joins the queue. There are two types of queue: • Single-line queue configuration requires that a flow unit joins the queue at the end of a single line and the flow unit is served after all the flow units before it are served. • Multiple-line queue configuration enables the flow unit to choose one of several queue lines. 4. Queue discipline: Queue discipline represents the rule or discipline used to choose the next flow unit for serving; there are different disciplines used depending on the purpose of the queuing system. The most commonly used rule is known as first-in-first-out (FIFO). Some queue disciplines also use priority of the flow unit. This rule is, for example, used in medical institutions, where patients with life-threatening cases or children have the highest priority to be served first. 5. Service mechanism: The service mechanism consists of a number of services that perform a set of tasks on the flow unit within a process. The flow unit enters the service facility when the resource is available to provide the flow unit with the service it needs. The time spent by the service in performing the work on the flow unit is called the service time.
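A minimal discrete-event sketch ties these elements together: Poisson arrivals from the calling population, a single-line FIFO queue, and one server with exponential service times. The parameter values are illustrative only.

```python
import random

def simulate_fifo_queue(arrival_rate, service_rate, num_customers, seed=42):
    """Single-server FIFO queue with exponential inter-arrival and service times."""
    random.seed(seed)
    clock = 0.0                 # arrival clock
    server_free_at = 0.0        # time the server finishes its current job
    total_wait = 0.0
    for _ in range(num_customers):
        clock += random.expovariate(arrival_rate)        # next arrival
        start_service = max(clock, server_free_at)       # wait if the server is busy
        total_wait += start_service - clock              # time spent in the queue
        server_free_at = start_service + random.expovariate(service_rate)
    return total_wait / num_customers

avg_wait = simulate_fifo_queue(arrival_rate=4.0, service_rate=5.0, num_customers=100_000)
print(f"simulated average wait in queue: {avg_wait:.3f} (analytical M/M/1 value: 0.800)")
```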

6A.2 Queuing Models

Whenever an OPD is scheduled, one witnesses large queues at the OPD section prior to the start of the OPD hours, formed to register for patient number tokens. The actors in this situation are the patients arriving to get tokens for the physician's consultation, and the service counter at the hospital (the registration server) providing the service by issuing tokens. The arrival process is represented by the inter-arrival times, and the service process is represented by the service time per patient. The inter-arrival time is the time between successive patient arrivals; the service time per patient is the time taken to provide the patient the expected service. In the OPD clinic example, the inter-arrival time may be very short on OPD off days and longer on OPD scheduled days; similarly, the service rate will be high on OPD scheduled days and slightly lower on OPD off days. In other words, the inter-arrival times and service rates are probabilistic.

Queue discipline represents the manner in which the customers waiting in a queue are served. It may be:
• First-come-first-serve (FCFS)
• Service in random order (SIRO)
• Last-come-first-serve (LCFS)

If there are several queues, customers join the queue whose length is smaller; this is known as jockeying. Sometimes customers move away from the queuing place on seeing its length; this is known as balking. If customers wait for a long time in the queue without being serviced, they may leave; this is known as reneging.

The field of queuing theory has developed a taxonomy to describe systems based on their arrival process, service process, and number of servers, written as Arrival/Service/Number of Servers. The basic notation, widely used in queuing theory, is composed of three symbols separated by forward slashes. The symbols are:
• M for Poisson or exponential distributions
• D for deterministic (constant) distributions
• E for Erlang distributions
• G for general distributions (any arbitrary distribution)
• GI for general independent (in the case of arrivals)

There are varieties of queuing models that arise from the elements of a queue; these are described next.

6A.2.1 Model I: Pure Birth Model

The pure birth model considers only arrivals, and the inter-arrival time in the pure birth model is explained by the exponential distribution. The birth of babies is a classical example of the pure birth model. Let P0(t) be the probability of no customer arrivals during a period of time t, and let Pn(t) be the probability of n arrivals during a period of time t. Since \( P_0(t) = e^{-\lambda t} \),

\[ P_n(t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \quad n = 0, 1, 2, \ldots \]

This is a Poisson distribution with mean E{n | t} = λt arrivals during a period of time t.

6A.2.2 Model II: Pure Death Model

The pure death model is the converse of the pure birth model, in the sense that only departures are considered. Here, the system starts with N customers at time 0, and no further arrivals are permitted. The departure rate is µ customers per unit time. Pn(t) is the probability of n customers in the system after t time units. So,

\[ P_n(t) = \frac{(\mu t)^{N-n} e^{-\mu t}}{(N-n)!}, \quad n = 1, 2, \ldots, N \]

and

\[ P_0(t) = 1 - \sum_{n=1}^{N} P_n(t) \]
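The two models above translate directly into code. The sketch below evaluates the reconstructed probabilities; the numeric arguments are illustrative.

```python
from math import exp, factorial

def pure_birth_prob(n, lam, t):
    """P_n(t): probability of n arrivals in (0, t] when only arrivals occur (Poisson)."""
    return (lam * t) ** n * exp(-lam * t) / factorial(n)

def pure_death_prob(n, mu, t, N):
    """P_n(t): probability that n of the initial N customers remain after time t."""
    if n >= 1:
        return (mu * t) ** (N - n) * exp(-mu * t) / factorial(N - n)
    return 1.0 - sum(pure_death_prob(k, mu, t, N) for k in range(1, N + 1))

print(round(pure_birth_prob(3, lam=2.0, t=1.0), 4))       # P(3 arrivals in 1 time unit)
print(round(pure_death_prob(0, mu=1.5, t=2.0, N=5), 4))   # P(all 5 customers departed by t = 2)
```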


6A.2.3 Model III: Generalized Poisson Queuing Model

This model considers both the inter-arrival time and the service time, and both follow the exponential distribution. During the early operation of the system, it is in the transient state; if the system has been in operation for a long time, it attains steady state. In this model, both the inter-arrival and the service time exhibit state dependency. Assume n is the number of customers in the system (the system refers to those customers who are waiting for service and those who are being serviced), and let:

λn be the arrival rate when there are already n customers in the system
µn be the service rate when there are already n customers in the system
Pn be the steady-state probability of n customers in the system

These steady-state probabilities help in determining the different parameters of the model, such as the average queue length, the waiting time in the system, and various measures of the system's performance. Then,

\[ P_n = \frac{\lambda_{n-1}\,\lambda_{n-2}\cdots\lambda_0}{\mu_n\,\mu_{n-1}\cdots\mu_1}\, P_0, \quad n = 1, 2, \ldots \]

We can determine P0 from the equation

\[ \sum_{n=0}^{\infty} P_n = 1 \]

For n = 0,

\[ P_1 = \frac{\lambda_0}{\mu_1} P_0 \]

For n = 1,

\[ \lambda_0 P_0 + \mu_2 P_2 = (\lambda_1 + \mu_1) P_1 \]

Substituting the value of P1, we get

\[ P_2 = \frac{\lambda_1 \lambda_0}{\mu_2 \mu_1} P_0 \]

6A.2.4 Single Server Models

The basic queuing models can be classified into six categories using Kendall notation, which uses six parameters to define a model, (P/Q/R):(X/Y/Z).


The parameters of the notation are:

P is the distribution of the arrival rate
Q is the distribution of the service rate
R is the number of service channels providing the service
X is the service discipline; it may be general, FCFS, SIRO, or LCFS
Y is the maximum number of customers allowed to stay in the system at any point in time
Z is the calling source size

6A.2.4.1 Model IV (M/M/1):(GD/∞/∞)

The features of this model are:

1. There is a single service channel providing the service.
2. The arrival rate or input follows the Poisson distribution.
3. The service rate is exponentially distributed.
4. There is no limit on the system's capacity.
5. Customers are served on a first-come-first-served basis.

The notation used in the formulas is:

1. λ—arrival rate of customers (number/hour)
2. µ—service rate (number/hour)
3. T—mean time between arrivals = 1/λ
4. t—average time of servicing = 1/µ
5. ρ (rho)—utilization factor or traffic intensity = λ/µ
6. P0—idling factor of the facility, i.e., the probability of providing the service right away, without the customer having to wait = 1 − ρ
7. Pn—probability that there are n customers in the system waiting for service
8. N—number of customers in the system
9. Lq—length of the queue, or average number of customers in the queue waiting for service
10. Ls—average number of customers in the system (both at the service counter and in the queue)
11. Wq—average waiting time in the queue
12. Ws—average waiting time in the system
13. W | W > 0—expected waiting time of a customer who has to wait
14. L | L > 0—expected length of the nonempty queue


The formula list for the model (M/M/1):(GD/∞/∞) is given below.

1. \( P_0 = 1 - \dfrac{\lambda}{\mu} = 1 - \rho \)
2. \( P_n = \left(\dfrac{\lambda}{\mu}\right)^n \left(1 - \dfrac{\lambda}{\mu}\right) = \rho^n (1 - \rho) \)
3. Probability of the queue size being greater than or equal to n: \( P(Q \ge n) = \rho^n \)
4. \( L_s = \dfrac{\lambda}{\mu - \lambda} \)
5. \( L_q = \dfrac{\lambda}{\mu - \lambda} \times \dfrac{\lambda}{\mu} = \dfrac{\lambda^2}{\mu(\mu - \lambda)} \)
6. \( W_s = \dfrac{\lambda}{\mu - \lambda} \times \dfrac{1}{\lambda} = L_s \times \dfrac{1}{\lambda} = \dfrac{1}{\mu - \lambda} \)
7. \( W_q = \dfrac{\lambda}{\mu(\mu - \lambda)} \)
8. \( L \mid L > 0 = \dfrac{\mu}{\mu - \lambda} \)
9. Average waiting time in the nonempty queue \( = \dfrac{1}{\mu - \lambda} \)
10. Probability of an arrival waiting for t minutes or more \( = \rho\, e^{-(\mu - \lambda)t} \)

6A.2.4.2 Model V: (M/M/1):(GD/N/∞)

The features of this model are:

1. A single service channel provides the service.
2. Both the arrival rate and the service rate follow the Poisson distribution.
3. The number of customers allowed in the system cannot exceed N at any point in time.

The formula list is given below.

1. a. \( P_n = \dfrac{1 - \lambda/\mu}{1 - (\lambda/\mu)^{N+1}} \times \left(\dfrac{\lambda}{\mu}\right)^n = \dfrac{(1 - \rho)\,\rho^n}{1 - \rho^{N+1}}, \quad \rho \ne 1, \; n = 0, 1, 2, \ldots, N \)
   b. If ρ = 1, i.e., 100% utilization, \( P_n = \dfrac{1}{N + 1} \)
2. Effective arrival rate of customers:
   \( \lambda_e = \lambda(1 - P_N) = \mu(L_s - L_q) \)
3. a. \( L_s = \dfrac{\rho\left[1 - (N+1)\rho^N + N\rho^{N+1}\right]}{(1 - \rho)\left(1 - \rho^{N+1}\right)}, \quad \text{if } \rho \ne 1 \)
   b. If ρ = 1, \( L_s = N/2 \)
4. \( L_q = L_s - \dfrac{\lambda_e}{\mu} = L_s - \dfrac{\lambda(1 - P_N)}{\mu} \)
5. \( W_s = \dfrac{L_s}{\lambda_e} = \dfrac{L_s}{\lambda(1 - P_N)} \)
6. \( W_q = \dfrac{L_q}{\lambda_e} = \dfrac{L_q}{\lambda(1 - P_N)} = W_s - \dfrac{1}{\mu} \)
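The formula lists for Models IV and V can be bundled into small helper functions, as sketched below; the functions simply evaluate the expressions given above, and the input values are illustrative.

```python
def mm1_metrics(lam, mu):
    """Model IV (M/M/1):(GD/inf/inf); requires lam < mu."""
    rho = lam / mu
    return {"P0": 1 - rho,
            "Ls": lam / (mu - lam),
            "Lq": lam ** 2 / (mu * (mu - lam)),
            "Ws": 1 / (mu - lam),
            "Wq": lam / (mu * (mu - lam))}

def mm1n_metrics(lam, mu, N):
    """Model V (M/M/1):(GD/N/inf): single server, at most N customers in the system."""
    rho = lam / mu
    if rho == 1:
        p = [1.0 / (N + 1)] * (N + 1)
    else:
        p = [(1 - rho) * rho ** n / (1 - rho ** (N + 1)) for n in range(N + 1)]
    lam_e = lam * (1 - p[N])                       # effective arrival rate
    ls = sum(n * pn for n, pn in enumerate(p))     # equivalent to formula 3
    lq = ls - lam_e / mu
    return {"PN": p[N], "lambda_e": lam_e, "Ls": ls, "Lq": lq,
            "Ws": ls / lam_e, "Wq": lq / lam_e}

print(mm1_metrics(lam=4, mu=5))
print(mm1n_metrics(lam=4, mu=5, N=6))
```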

6A.2.5 Multiple Server Models

In this case, there are C counters or servers in parallel to serve the customers. A customer has the option of choosing the server that is free, or the one with fewer people waiting for service. The mean arrival rate (λ) follows the Poisson distribution, and service times are exponentially distributed with mean service rate µ.

6A.2.5.1 Model VII (M/M/C):(GD/∞/∞)

The features of this model are:

1. The arrival rate and service rate follow the Poisson distribution.
2. There are C serving channels.
3. An infinite number of customers is permitted in the system.
4. GD stands for general-discipline servicing.

The formula list is given below (with ρ = λ/µ).

1. \( P_n = \dfrac{\rho^n}{n!} P_0 \quad \text{for } 0 \le n \le C, \qquad P_n = \dfrac{\rho^n}{C^{\,n-C}\, C!} P_0 \quad \text{for } n > C \)
2. \( P_0 = \left[\displaystyle\sum_{n=0}^{C-1} \frac{\rho^n}{n!} + \frac{\rho^C}{C!\left(1 - \rho/C\right)}\right]^{-1} \)
3. \( L_q = \dfrac{C\rho}{(C - \rho)^2}\, P_C \)
4. \( L_s = L_q + \rho \)
5. \( W_q = \dfrac{L_q}{\lambda} \)
6. \( W_s = W_q + \dfrac{1}{\mu} \)

For (M/M/C):(GD/∞/∞),

\[ L_q \to \frac{\rho}{C - \rho} \quad \text{as } \frac{\rho}{C} \to 1 \]
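A small helper that evaluates the Model VII formulas as given above; the arrival rate, service rate, and number of channels are illustrative.

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Model VII (M/M/C):(GD/inf/inf); requires lam < c * mu."""
    rho = lam / mu                       # offered load in Erlangs
    assert rho < c, "steady state requires lambda < C * mu"
    p0 = 1.0 / (sum(rho ** n / factorial(n) for n in range(c))
                + rho ** c / (factorial(c) * (1 - rho / c)))
    pc = rho ** c / factorial(c) * p0    # probability of exactly C customers in the system
    lq = pc * c * rho / (c - rho) ** 2
    wq = lq / lam
    return {"P0": p0, "Lq": lq, "Ls": lq + rho, "Wq": wq, "Ws": wq + 1 / mu}

# Illustrative values: 10 arrivals/hour, each server handles 4/hour, 4 servers
print(mmc_metrics(lam=10, mu=4, c=4))
```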

6A.2.5.2 Model VIII (M/M/C):(GD/N/∞)

The features of this model are:

1. The system limit is finite and is equal to N.
2. The arrival rate and service rate follow the Poisson distribution.
3. The maximum queue length is N − C.
4. There are C service channels.

Here λe < λ because of the limit N. The generalized model can be defined as

\[ \lambda_n = \begin{cases} (N - n)\lambda & \text{for } 0 \le n \le N \\ 0 & \text{for } n \ge N \end{cases} \qquad
   \mu_n = \begin{cases} n\mu & \text{for } 0 \le n \le C \\ C\mu & \text{for } C \le n \le N \\ 0 & \text{for } n \ge N \end{cases} \]

The formula list is given below.

1. \( P_n = \dbinom{N}{n} \rho^n P_0 \quad \text{for } 0 \le n \le C, \qquad P_n = \dbinom{N}{n} \dfrac{n!\, \rho^n}{C!\, C^{\,n-C}}\, P_0 \quad \text{for } C \le n \le N \)
2. \( P_0 = \left[\displaystyle\sum_{n=0}^{C} \binom{N}{n} \rho^n + \sum_{n=C+1}^{N} \binom{N}{n} \frac{n!\, \rho^n}{C!\, C^{\,n-C}}\right]^{-1} \)
3. \( L_q = \displaystyle\sum_{n=C+1}^{N} (n - C)\, P_n \)
4. \( L_s = L_q + \dfrac{\lambda_e}{\mu} \)
5. \( \lambda_e = \mu(C - \bar{C}) = \lambda(N - L_s), \quad \text{where } \bar{C} = \displaystyle\sum_{n=0}^{C} (C - n)\, P_n \)
6. \( W_s = \dfrac{L_s}{\lambda_e} \)
7. \( W_q = \dfrac{L_q}{\lambda_e} \)
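The Model VIII expressions can likewise be evaluated directly. The sketch below follows the formulas as given above (a finite calling population of N customers served by C parallel channels); the parameter values are illustrative.

```python
from math import comb, factorial

def finite_source_mmc(lam, mu, c, N):
    """Model VIII as given above: N potential customers, each arriving at rate lam
    while outside the system, served by c parallel channels at rate mu each."""
    rho = lam / mu
    def coeff(n):
        if n <= c:
            return comb(N, n) * rho ** n
        return comb(N, n) * factorial(n) * rho ** n / (factorial(c) * c ** (n - c))
    p0 = 1.0 / sum(coeff(n) for n in range(N + 1))
    p = [coeff(n) * p0 for n in range(N + 1)]
    lq = sum((n - c) * p[n] for n in range(c + 1, N + 1))
    c_bar = sum((c - n) * p[n] for n in range(c + 1))   # expected number of idle channels
    lam_e = mu * (c - c_bar)                            # effective arrival rate
    ls = lq + lam_e / mu
    return {"P0": p0, "Lq": lq, "Ls": ls, "Ws": ls / lam_e, "Wq": lq / lam_e,
            "lambda_e": lam_e}

print(finite_source_mmc(lam=0.1, mu=0.5, c=2, N=6))     # illustrative numbers only
```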

7 Interoperability To withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of interoperability aspects primarily through service-oriented computing (see Chapter 15). This chapter gives an introduction to interoperability aspects of the enterprise architecture. Interoperability aspects are also complementarily or orthogonally affected by other attributes, but this chapter focuses on factors directly related to interoperability aspects of the enterprise architecture. Interoperability involves business applications on distributed platforms tied together in a number of different ways, such as by transaction monitors, message-oriented middleware, data access middleware, Web services, and remote procedure calls (RPCs), or sometimes by clumsier mechanisms like file transfers or sequential batch processing jobs. They  often involve multiple databases, perhaps based on different database management system (DBMS) platforms, such as Oracle Corp.’s ORACLE and the IBM DB/2 and Internet Multicasting Service. They  typically involve several different computing platforms, or different types of computers running different operating systems, such as PCs running Microsoft Corp.’s Windows; servers running Hewlett-Packard’s HP-UX; and mainframes running IBM Corp.’s MVS. Large-scale client/server applications involve complex networks, usually with many local area networks (LANs) interconnected through a wide area network. More often, such applications involve multiple wide area network (WAN) and multiple network protocols, starting from IBM System Network Architecture (SNA), NetBIOS, Transmission Control Protocol/Internet Protocol (TCP/IP), and Frame Relay.

7.1 Introduction to Interoperability

Interoperability is the ability of two or more systems to interact so as to achieve some goal without having to know the uniqueness of the interacting systems. The IEEE defines it as the ability of two or more systems or components to exchange information and to use the information that has been exchanged. ISO defines it as the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units. In other words, interoperability is the ability of systems to provide services to and receive services from other systems, and to use the services so exchanged to enable them to operate effectively together.


Systems integration at the enterprise level entails getting many different information systems components to work together across the enterprise network. Because these myriad components must interoperate effectively, interoperability is the first key to success. But, interoperability is not simply a problem of network protocol compatibility—it exists at many different levels, such as: 1. Application interoperability: This  refers to distributing the processing among applications located on different systems. 2. GUI interoperability: This  is concerned with interoperation of graphical user interface (GUI)s when different types of GUIs are involved, such as Windows, Presentation Manager, and Motif, and how to make Full-Motion Video or interactive compound media information exchanges work across heterogeneous platforms and heterogeneous networks. 3. Object interoperability: Object-oriented systems in which the data is encapsulated in objects allows information to be exchanged between applications as objects via Object-Request-Broker (ORB). However, with ORBs available from multiple software companies issues are being encountered regarding ORB dependence on remote procedure calls when operating across enterprise networks, and with ORBto-ORB interoperability. Applications built using other types of component-based software are also becoming more commonplace—with Microsoft’s.NET or Java Enterprise Edition. 4. Database interoperability: Applications access information in databases located on multiple systems, in databases on different platform types, or in databases of completely different types (such as Oracle and IMS). Although all structured query languages are not  the same, the interoperability problem is somewhat lessened if all databases are relational databases using Structured Query Language (SQL). It is definitely easier if all databases use the same DBMS product, but even then there may be difficulties between certain platforms or with certain network protocols. In  any of these cases, database interoperability is a major consideration, especially when legacy systems are involved and are expected to work with newer systems. 5. Platform interoperability: Organizations striving to implement mission-critical distributed applications face the difficult challenge of interoperability among platforms of completely different types, such as IMS on IBM mainframes and UNIX platforms. Much of what has been done to date under the client/server classification involves decision support applications. Most mission-critical functions are performed primarily with the assistance of mainframe applications; yet, getting IMS or CICS to talk to non-IBM platforms, and especially non-mainframe platforms, is proving to be difficult. 6. Network interoperability: At  the application-to-application level, differing protocols and spanning across networks of varying types are usually the biggest problems. When one application is on a System Network Architecture network and the partner application is on a TCP/IP network, a major interoperability problem arises. 7.1.1 Framework for Enterprise Interoperability (FEI) The Framework for Enterprise Interoperability aims at structuring basic enterprise interoperability concepts and issues. FEI has three basic dimensions:


FIGURE 7.1 Enterprise interoperability framework (EIF).

• Interoperability concerns, which define the content of interoperation that may take place at various levels of the enterprise (data, service, process, business).
• Interoperability barriers, which identify various obstacles to interoperability in three categories (conceptual, technological, organizational).
• Interoperability approaches, which represent the different ways in which barriers can be removed (integrated, unified and federated).

The three dimensions of the framework (see Figure 7.1) constitute the solution space of enterprise interoperability. The intersection of an interoperability barrier, an interoperability concern and an interoperability approach is the set of solutions for breaking down that interoperability barrier, for the particular concern, using the selected approach.

7.1.2 Maturity Model for Enterprise Interoperability

The maturity model of enterprise interoperability consists of the following stages:

• Connection: basic identification and use of a channel
• Communication: using connection to detect and recognize data formats
• Coordination: using communication to access and acknowledge information
• Cooperation: using coordination to confirm commitment and resources
• Collaboration: using cooperation to enact and execute

However, it is not always the case that collaborating systems have a common manner of codifying, understanding, and using the data that is exchanged. The difference can also be viewed in terms of three layers:


1. Syntactic interoperability: When collaborating systems have a compatible way of structuring data during exchange; i.e., the manner in which the data is codified, using a grammar or vocabulary, is mutually compatible.
2. Semantic interoperability: When the collaborating systems understand the meaning of the syntactic elements; i.e., they share the same meaning of the data in relation to the entity or phenomenon it represents in the real world. Semantic interoperability can only be achieved if the systems are also syntactically interoperable.
3. Pragmatic interoperability: When the collaborating systems understand a message well enough to cause the effect intended by the sending system; i.e., the intended effect of the message is interpreted and acted on similarly by the collaborating systems. Pragmatic interoperability can only be achieved if the systems are also syntactically and semantically interoperable.
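A small illustration of the first two layers, with invented message formats and field names: both systems can parse the same JSON (syntactic interoperability), but they only act on it correctly once units and vocabulary are mapped (semantic interoperability).

```python
import json

# Message produced by system A: temperature reported in degrees Fahrenheit
message_from_a = json.dumps({"deviceId": "sensor-17", "temp": 98.6, "unit": "F"})

def semantic_adapter(raw):
    """Map A's vocabulary and units onto the model system B expects (Celsius, 'temperature_c')."""
    data = json.loads(raw)                      # syntactic layer: both sides can parse JSON
    if data.get("unit") == "F":
        celsius = (data["temp"] - 32) * 5 / 9   # semantic layer: agree on what the number means
    else:
        celsius = data["temp"]
    return {"device_id": data["deviceId"], "temperature_c": round(celsius, 2)}

print(semantic_adapter(message_from_a))   # {'device_id': 'sensor-17', 'temperature_c': 37.0}
```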

7.1.3 Standards for Enterprise Interoperability This global environment has increased dramatically the need for intra-organizational connection, communication, co-ordination, co-operation and collaboration. The corresponding need for interoperation of the related Information and Communication Technology (ICT)-based business and technical processes has led to huge R&D efforts in both academia and industry, which in turn has initiated numerous standardization efforts in Standards Development Organizations (SDOs) and industry consortia. Companies that apply successful standards can gain significant competitive advantage. However, the choice of standards and their implementation have become a critical part of managing the IS function of an enterprise and developing its application software. Interoperability standards can be categorized via a hierarchy of standards by organizing the standards according to their area of use (Table 7.1): 1. Standards for the development of standards: This provides general standards, which identify basic concepts, their relationships and rules to guide their application in the standards development work. Examples are ISO/IEC 42010 “Systems and Software Engineering—Architectural Description” and ISO 15704 “Referencebase for Enterprise Architectures and Models.” 2. Standards for product development guidelines/constraints: This provides standards that define standardized means used to represent the content of implementable standards and guidelines for their development. Examples are standards for explicit usage for encoding data formats, like ISO/IEC 19776-1:2009: “Extensible Mark-up Language” (XML) or infrastructure standards like OMGs “Web Services for Business Process Execution Language” (WS-BPEL), or standards still at framework level, but for a specific application domain, like CEN/ISO 19439 “Framework for Enterprise Modelling,” or ISO/CEN 11354-1 “Framework for Enterprise Interoperability.” 3. Standards to be implemented in marketable products or services: This  provides product-relevant, implementable standards. Examples of such products that should be designed using standards for product development are enterprise modeling tools—plus the inherent language and methodology—that follow CEN/ISO 19439 “Framework for Enterprise Modelling.”


TABLE 7.1 Enterprise Interoperability Standards Categories

Standards for standard developments
  Purpose: A general framework of concepts and guidelines used to develop standards.
  Provider: Researchers, SDOs and user consortia together with researchers.
  User: Software developers, system integrators.

Standards for product development
  Purpose: Information exchange between applications, within or across domains; infrastructure services; multiple use.
  Provider: SDOs and user consortia together with software developers and system integrators.
  User: Product/service application designers.

Standards for implementation in products
  Purpose: Direct deployment in the design of products and services.
  Provider: Industry consortia, trade organizations, government agencies, product designers.
  User: Business end users.

Examples of standards across these categories include enterprise reference architectures (OMG-MDA, SOA), application integration, frameworks for modeling, interoperation and eBusiness, modeling languages, general ICT interfaces, and product/service-specific interfaces.

Figure 7.2 presents an overview of standards related to interoperation for use both by standard developers and standard users. The standards are often only known to a small group of experts involved in their development. The promotion of interoperation standards and their specific benefits to a large public audience is urgently needed. 7.1.4 Basic Model of System Interaction The presentation in this subsection has been adopted from Delgado (2015). The most basic or simple interoperability scenario has been shown in Figure  7.3a with a system in the role of consumer and another in the role of provider: the consumer sends a request to the provider, which reacts to it, typically executing some actions and eventually sending back a response to the consumer. The characteristics of this interaction are: 1. System interaction does not have to only be a message, it can also be a file. 2. System interaction depends upon the motive and intent of interoperability. 3. System interaction depends on the minimum requirements (both functional and non-functional) that get imposed on the interacting systems to enable interoperability. 4. System interaction is asymmetric in terms of the differences between the consumer and provider roles.


FIGURE 7.2 Enterprise interoperability standards overview.


FIGURE 7.3 System interaction: (a) Simple; (b) Composite.

5. A system can act as a consumer as well as a provider in bidirectional interactions, and several systems can take part in a multipartite interaction (a choreography) towards some common goal (Figure 7.3b).
6. System interaction need not be designed for a full match; as long as some characteristics support the interaction and are maintained as such, the systems involved in the interaction can evolve independently.

This leads to the concept of a degree of interoperability, which indicates how much of the functionality or characteristics of a system can be accessed by another: ranging from none to the full set. The notion of partial interoperability introduces a different perspective, stronger than similarity but weaker than commonality (which results from using the same schemas and ontologies). Figure 7.4 shows a system A (the consumer) that needs to be made interoperable with system X (the provider), where A is designed to interact with a provider B and X is designed to expect a consumer Y. B is how A views provider X; Y is how X views consumer A. B and Y are specifications, not actual systems.

FIGURE 7.4 System compliance and conformance.


Enabling A to be interoperable with X depends on two conditions:

1. Compliance: B must comply with Y in terms of requests, which means that B must satisfy all the requirements of X to honor the requests. Therefore, A can use X as if it were B.
2. Conformance: Y must conform to B in terms of effects (including eventual responses), which means that Y must fulfill all the expectations of B regarding the effects of a request. Therefore, X can replace (take the form of) B without A noticing it.

Partial interoperability can be achieved by subsumption, with the set of features that A uses being a subset of the set of features offered by X, as long as X (or another system that replaces it) supports the Y specification. In other words, interoperability can be redefined as "the ability of a system A to send a stimulus to another X with a given intent, satisfying the requirements of X to accept that stimulus and getting a reaction of X that fulfills the expectations of A."

It should be noted that compliance and conformance apply not only to message-based interactions, in both content (syntax and semantics, including ontology) and protocol (flow of messages and their effects), but also to non-operational relationships, such as strategy alignment and regulatory compliance. This highlights the fact that interoperability is not merely about the operation stage but starts much earlier in the system's lifecycle, in the conception stage. After all, what happens in the operation stage is a consequence of what has been conceived and designed.
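The compliance and conformance conditions can be checked structurally. The sketch below uses invented, deliberately simple interface descriptions for B and Y: compliance holds when every request B issues carries the fields X requires, and conformance holds when X's responses cover every field B expects, which is the subsumption view of partial interoperability.

```python
# Specification B: the provider as consumer A views it.
B = {
    "requests": {"getOrder": {"orderId"}},                 # operations A will invoke and fields it sends
    "responses": {"getOrder": {"orderId", "status"}},      # fields A expects back
}
# Specification Y: the consumer as provider X expects it, derived from X's actual interface.
Y = {
    "requests": {"getOrder": {"orderId"}, "cancelOrder": {"orderId"}},
    "responses": {"getOrder": {"orderId", "status", "total"}, "cancelOrder": {"status"}},
}

def complies(b, y):
    """B complies with Y: every request B issues is accepted by Y with all required fields present."""
    return all(op in y["requests"] and y["requests"][op] <= fields
               for op, fields in b["requests"].items())

def conforms(y, b):
    """Y conforms to B: for every operation B relies on, Y's response covers the expected fields."""
    return all(op in y["responses"] and b["responses"][op] <= y["responses"][op]
               for op in b["responses"])

print("A can use X:", complies(B, Y) and conforms(Y, B))   # partial interoperability by subsumption
```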

7.2 Model-Based Interoperability

Model Driven Interoperability (MDI) starts at the highest level of abstraction and derives solutions through successive transformations, instead of solving interoperability at the code level. It is supported through the extensive use of models for both horizontal and vertical integration of the multiple abstraction levels, thereby bringing the advantage of legacy system integration and facilitating the construction of new interoperable software/services. The integration of existing legacy systems is performed horizontally at the different model abstraction levels, while vertical transformations facilitate the construction of new software/services, assuring that these become immediately interoperable with others built according to the same paradigm and with the same conceptual models.

7.2.1 Model Driven Architecture

The Object Management Group (OMG) is an industry consortium established in 1989 with the goal of defining standards for interoperability for distributed object systems. OMG adopted the Model Driven Architecture (MDA) standard in 2001; MDA is aimed at model-driven development or engineering that uses the core OMG standards (UML, MOF, XMI, CWM).


Based primarily on the principle of architectural separation of concerns, MDA has three objectives: 1. Portability 2. Interoperability 3. Reusability MDA has three basic tenets: 1. Direct representation expresses a desire to shift the focus of software development away from the technology domain and toward the concepts and terminology of the problem domain. The goal is to represent a solution as directly as possible in terms of the problem domain that may result in more accurate designs, improved communication between various participants in the system development process, and overall increased productivity. 2. Automation endorses the concept of using machines to perform rote tasks that require no human ingenuity, freeing software developers to focus on creative problem-solving work. Dealing directly with the underlying implementation is not productive per se—it is the solving of business problems that creates value and qualifies to contributes to the productivity. 3. Building on open standards is important not  only because standards promote reuse, but also because they cultivate the building of an ecosystem of tool vendors addressing the various needs of MDA. MDA describes three main layers of architectural abstraction or viewpoints (Figure 7.5): 1. Computation independent model (CIM): describes a system environment and its requirements using terminology that is familiar to practitioners in the system domain. 2. Platform independent model (PIM): describes a system’s structure and functions formally, and yet without specifying platform-specific implementation details. 3. Platform specific model (PSM): includes details that are important to the implementation of a system on a given platform. By platform, MDA means a cohesive set of subsystems and technologies on which a system can execute (such as Sun’s Java EE or Microsoft’s.NET platforms). One of the main benefits of MDA is that the implementation step, PIM-to-PSM transformation, can presumably be done relatively easily for multiple platforms. Thus, there may be many PSM’s corresponding to each of the target platforms. The MDA Guide discusses a wide variety of transformation types, techniques, and patterns. As with MDD in general, the concept of model transformations is central to the MDA philosophy. There are mappings between models up and down as indicated in Figure 7.5. CIM-to-CIM or PIM-to-PIM mappings represent model transformation that occurs when moving from an analysis phase into a design phase:


FIGURE 7.5 Layers and implementation of MDA.

• PSM-to-ISM transformations are required to configure and package the elements of a PSM for deployment to the desired target environment.
• PSM-to-PIM transformations are required when refactoring or reverse-engineering a system.

MDA has four modeling layers:

M3: The meta-metamodel layer describes concepts that appear in the metamodel, such as Class. For UML, MOF describes the M3 layer.
M2: The metamodel layer describes the concepts that make up a modeling language; examples include the UML metamodel, the Executable UML profile, and a domain-specific metamodel created and customized for a particular company or industry segment.
M1: The user model or model instance layer; class diagrams, statecharts, and other such artifacts are M1-layer elements.
M0: The data instance layer; objects, records, data, and related artifacts exist at this level.

An MDA process may use any of a number of different UML profiles or domain-specific metamodels, rather than using UML exclusively for all modeling activities. While developers usually produce UML diagrams using UML or UML profiles, it is also possible to create an MDA process that uses a MOF-conforming domain-specific metamodel to perform domain-specific modeling tasks within the MDA framework.
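As a toy illustration of the PIM-to-PSM idea, the sketch below maps a small, invented platform-independent entity model onto a SQL platform model. A real MDA tool chain would operate on MOF/UML models with standard transformation languages, but the shape of the mapping is the same.

```python
from dataclasses import dataclass, field
from typing import List

# A tiny platform-independent model (PIM): entities with typed attributes.
@dataclass
class Attribute:
    name: str
    type: str            # platform-independent types: "string", "int", "date"

@dataclass
class Entity:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

# Platform-specific mapping for a SQL target (the PSM side of the transformation).
SQL_TYPES = {"string": "VARCHAR(255)", "int": "INTEGER", "date": "DATE"}

def pim_to_sql_psm(entity: Entity) -> str:
    """PIM-to-PSM transformation: render an entity as a SQL table definition."""
    columns = ",\n  ".join(f"{a.name} {SQL_TYPES[a.type]}" for a in entity.attributes)
    return f"CREATE TABLE {entity.name} (\n  {columns}\n);"

order = Entity("customer_order", [Attribute("id", "int"),
                                  Attribute("customer_name", "string"),
                                  Attribute("placed_on", "date")])
print(pim_to_sql_psm(order))
```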


7.2.2 Model Driven Interoperability (MDI)

The relationship between two models is a form of morphism, which in mathematics is an abstraction of a structure-preserving map between two structures. It can be seen as a function in set theory, the connection between domain and codomain in category theory, the edge that connects two nodes within a graph, or the relation (e.g., mapping, merging, transformation) between two or more information model specifications. Model non-altering morphisms are based on the concept of traditional model mappings, whereas model-altering morphisms imply a transformation, i.e., the source model is transformed using a function that applies a mapping to the source model and outputs the target model. With either morphism type, to enable interoperability and respond to constant knowledge and model changes, it is necessary to use a more detailed and traceable mapping format that provides not only the structural relationships envisaged by classical MDA, but also a semantic "link" between different models and their components.

MDI enables the application of horizontal transformations for the integration of legacy systems, and vertical transformations for an easier construction of new software and/or services, guaranteeing that these remain interoperable with others built according to the same paradigm and respecting the same CIM/PIM models. Based on MOF-enabled transformations, MDA unifies every step of the development of an application or integrated suite, from its start as a CIM of the application's business functionality and behavior, through one or more PIMs and PSMs, to generated code and a deployable application. The PIM remains stable as technology evolves, extending and thereby maximizing software return on investment (ROI), thus achieving the fundamental objective of MDA.

7.3 Aspects of Interoperability for the Internet of Things (IoT)

The IoT is changing the scenario of global interoperability. From a rather homogeneous, reliable and static network of servers that we identify as the Web, built on top of the Internet, we are moving to what is termed a Mesh: a heterogeneous, unreliable and dynamically reconfigurable network of computing nodes, ranging from stationary, generic and fully-fledged servers to mobile, application-specific and low-level devices, such as sensors.

The rapid adoption of the Internet of Things (IoT) has been thwarted by the lack of interoperability between IoT platforms, systems, and applications. Ideally, IoT scenarios would be implemented by a complex ecosystem of distributed applications, with nodes interacting seamlessly with each other, independently of platform, programming language, network protocol and node capabilities (from a fully-fledged computer, connected to a wide, fast and reliable network, down to a tiny sensor, connected to a local, slow and ad hoc wireless network). In reality, however, there is no well-established reference standard for IoT platform technology, and none is expected to emerge in the foreseeable future. Hence, IoT scenarios will be characterized by a high degree of heterogeneity at all levels (device, networking, middleware, application service, data/semantics), preventing interoperability of IoT solutions. A multi-layered approach to integrating heterogeneous IoT devices, networks, platforms, services and data allows heterogeneous elements to cooperate seamlessly to share information, infrastructures and services as in a homogeneous scenario. The EU-funded INTERIoT project presents a comprehensive approach to IoT platform interoperability, offering its tools and services across a communication/software stack consisting of:


• Data and semantics layer: common understanding of data and information
• Application service layer: reuse of heterogeneous services from heterogeneous platforms
• Middleware layer: seamless service discovery and management of smart objects
• Network layer: handling of object mobility and information routing
• Device layer: seamless inclusion of new devices into the existing ecosystem of IoT devices

Table 7.2 shows a comparison of the TCP/IP and IoT protocol suites.

In the context of IoT, interoperability can be defined as the ability of two or more devices to exchange messages and to react to them according to some pattern or contract that fulfils the constraints and expectations of all devices involved. Interoperability involves several abstraction layers, from low-level networking issues to high-level aspects reflecting the purpose of the interaction. The presentation and approach in this subsection is adapted from Delgado (2017). These layers are:

1. Symbiotic: This category expresses the nature of the interaction between two devices in a mutually beneficial agreement. This can be a tight coordination under a common governance, if the devices are controlled by the same entity; a joint-venture agreement, if there are two substantially aligned clusters of devices; or a mere collaboration involving a partnership agreement, if only some goals are shared.
2. Pragmatic: The interaction between a consumer and a provider is done in the context of a contract, which is implemented by a choreography that coordinates processes, which in turn implement workflow behavior by orchestrating service invocations.
3. Semantic: Interacting devices must be able to understand the meaning of the content of the messages exchanged, both requests and responses. This implies compatibility in rules, knowledge and ontologies, so that meaning is not lost when transferring a message from the context of the sender to that of the receiver.

TABLE 7.2 Comparison of TCP/IP and IoT Protocol Suite

| OSI Layer                          | TCP/IP                                                       | IoT            |
|------------------------------------|--------------------------------------------------------------|----------------|
| Application, Presentation, Session | HTTP/FTP/SMTP etc.                                           | CoAP           |
| Transport                          | TCP/UDP                                                      | UDP            |
| Network                            | IPv4/IPv6, RP, ICMP                                          | IPv6/6LoWPAN   |
| Data Link                          | IEEE 802.3 Ethernet/802.11 Wireless LAN                      | IEEE 802.15.4e |
| Physical                           | Ethernet (IEEE 802.3), Token Ring, RS-232, FDDI, and others  | IEEE 802.15.4  |


4. Syntactic: This category deals mainly with form, rather than content. Each message has a structure, composed of data (primitive objects) according to some structural definition (its schema). The data in messages need to be serialized to be sent over the channel, using formats such as XML or JSON.
5. Connective: The main objective in this category is to transfer a message from one device to another, regardless of its content. This usually involves enclosing that content in another message with control information, and implementing a message protocol over a communications network protocol, possibly involving routing gateways.

All these interoperability layers constitute an expression of device coupling, leading to two conflicting aspects:

• Coupling: Decoupled devices (with no interactions or dependencies between them) can evolve freely and independently, which favors adaptability, changeability and even reliability (if one fails, there is no impact on the other). Therefore, coupling should be avoided as much as possible.
• Interoperability: Devices need to interact to cooperate towards common or complementary objectives, which implies that some degree of previously agreed mutual knowledge is indispensable.

Therefore, the fundamental problem of device interaction is to provide the maximum decoupling possible (exposing the minimum possible number of features) while ensuring the minimum interoperability requirements. In other words, the main goal is to ensure that each device knows just enough about the others to be able to interoperate with them, but no more than that, to avoid unnecessary dependencies and constraints.

Independent of whether IoT devices are modeled as services or as resources, their interaction is always message based. Messages are serialized data structures described by schemas, and the typical interoperability solution used in distributed systems is to share the message's schema between the sender and the receiver of that message: both sender and receiver work on the same message, with the same schema. Hence, this is termed symmetric interoperability.

Asymmetric interoperability, in contrast, assumes that the schema used to produce a message does not have to be identical to the schema the receiver is expecting. Since the message's value schema just needs to comply with the minimum requirements (mandatory components) of the receiver's type schema, this works towards decreasing the coupling between them. Consequently, the two schemas (of the message and of the receiver) have only to match partially. This enables the receiver to receive messages from different senders, as long as they match the relevant part of the receiver's schema. Similarly, we can consider the case of replacing the receiver with another one, with a different schema. This can occur due to evolution of the receiver (replaced by a new version) or by resorting to a new receiver altogether. Allowing a receiver to be able to interpret messages from different senders, and a sender to be able to send messages to different receivers, is just what is needed to decrease coupling.


Thus, asymmetric interoperability entails the following ideas:

1. Compliance: At the receiver, instead of validating an incoming message against a schema shared with the sender, check whether the message fulfils the minimum requirements of the receiver's schema.
2. Conformance: If the schema of the new receiver includes all the features of the old one and does not mandatorily require new ones, a receiver device can be changed or replaced by another one without impact on those sending messages to it. Consequently, a device can send messages to the new receiver without noticing that it has been changed.
3. Universal reception mechanism: The receiver does not see the message's schema. Compliance checking and the assignment of the compliant parts of the message to the receiver's data template (the receiver's view of the message) are done in a universal manner by the message-based platform, independently of the actual schemas used.

These interoperability features contribute to reducing coupling and to increasing the range of devices that can interact with a given device. This is especially relevant in the Internet of Things, in which devices interact in huge numbers and with a wide range of characteristics, and therefore reducing interoperability problems is of prime importance.
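The compliance idea can be made concrete with a small sketch: the code below checks whether an incoming message satisfies the mandatory part of a receiver's schema and then projects only the known fields into the receiver's data template. The schema format, the field names, and the accept-extra-fields behavior are illustrative assumptions, not a prescribed INTERIoT or Delgado (2017) mechanism.

```python
# Minimal sketch of compliance-based (asymmetric) message reception.
# The receiver declares which fields are mandatory and which are optional;
# any sender whose message covers the mandatory fields can interoperate,
# regardless of extra fields it may include.

RECEIVER_SCHEMA = {
    "mandatory": {"device_id": str, "temperature": float},
    "optional": {"unit": str},
}


def complies(message: dict, schema: dict) -> bool:
    """True if the message carries every mandatory field with the right type."""
    return all(
        name in message and isinstance(message[name], ftype)
        for name, ftype in schema["mandatory"].items()
    )


def project(message: dict, schema: dict) -> dict:
    """Receiver's view of the message: keep only fields the schema knows about."""
    known = {**schema["mandatory"], **schema["optional"]}
    return {name: value for name, value in message.items() if name in known}


# Two different senders with different message schemas; both comply.
msg_a = {"device_id": "s-17", "temperature": 21.5, "unit": "C"}
msg_b = {"device_id": "s-42", "temperature": 19.0, "battery": 0.83}  # extra field

for msg in (msg_a, msg_b):
    if complies(msg, RECEIVER_SCHEMA):
        print("accepted:", project(msg, RECEIVER_SCHEMA))
    else:
        print("rejected:", msg)
```

Both messages are accepted even though their schemas differ from each other and from the receiver's full schema, which is exactly the partial-match behavior that lowers coupling.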

7.4 Summary

This chapter proposed interoperability as an attribute of enterprise architecture. It discussed the characteristics of interoperability that may be relevant while considering the digital transformation of interoperability aspects primarily through service-oriented computing (see Chapter 15). The chapter began by explaining a framework, maturity model and standards for interoperability. In the latter part of the chapter, after introducing the concept of model driven architecture (MDA), it described aspects of model driven interoperability (MDI).

Interoperability is the ability of two or more systems to interact so as to achieve some goal without having to know the uniqueness of the interacting systems. Interoperability involves business applications on distributed platforms tied together in a number of different ways, such as by transaction monitors, message-oriented middleware, data access middleware, Web services, and remote procedure calls (RPCs), or sometimes by clumsier mechanisms like file transfers or sequential batch processing jobs.

This chapter's appendix describes aspects related to the integration of applications. It describes the basics and models of integration, including presentation, functional, data, business process, and business-to-business integration. The last part of the appendix explains the various patterns of integration.


Appendix 7A: Integration

As stated in the beginning of this chapter, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the interoperability aspects primarily through service-oriented computing (see Chapter 15)—but integration is an essential prerequisite to enable this transformation. This appendix describes aspects related to the integration of applications. It describes the basics and models of integration, including presentation, functional, data, business process, and business-to-business integration. The last part of the appendix explains the various patterns of integration.

7A.1 Integration of Enterprise Applications

Enterprise application integration (EAI) provides components for integrating applications with external applications and technologies within the enterprise and is designed to work with third-party products. By employing EAI effectively, an enterprise can leverage its existing enterprise-wide information assets, that is, customer relationships:

• To provide new products and services easily and quickly
• To streamline its internal processes and operations
• To strengthen supply relationships
• To enhance customer relationships

As EAI enables enterprise-wide integration of diverse applications across various products and divisions, it provides the enterprise with a 360° view of its customer relationships across multiple channels of interaction. Every customer perceives the enterprise as a whole and also expects to be recognized and valued by the enterprise as a whole; familiarity with customers' earlier interactions and purchases helps frontline members of the enterprise to create opportunities for selling other products or additional add-ons and services to the earlier purchases. One of the important objectives of application integration is to achieve the integration between applications with as reduced a level of coupling or interdependency as possible, so that the impact of changes in one application does not affect the other application(s).

7A.1.1 Basics of Integration

The basic concepts related to EAI are described here. A robust and flexible EAI provides a combination of the methods of integration and modes of communication that are embodied in the various models of integration deployed within the EAI architecture, as discussed next.


7A.1.1.1 Methods of Integration

Methods of integration are the approaches used to guide a request from a sender to a receiver. The primary methods of integration are as follows:

1. Messaging: In this approach, the sender constructs a message that contains information on the actions desired as well as the data required to perform these actions—the message contains both the control information and the data. Messages provide a lot of flexibility because the control information can be easily changed and extended; they are independent of any of the applications. However, to function correctly, the integration messages must be predefined precisely so that the messages can be coded and decoded in exactly the same way by all senders and receivers.
2. Interface: In this approach, the sender communicates through an interface, which defines explicitly the actions that can be invoked by an application; the interface is self-describing in terms of the actions that can be taken. Interfaces make the application look like a procedure, a method, or an object. Interfaces are difficult to change and extend; they are associated with a particular application.
3. Connector: In this approach, the application provides an access point that allows either a message or an invocation on an interface to be passed into the application. A connector is more than an interface, providing additional capabilities like error handling and validation checking, conversion and transformation of data into the appropriate format, and managing state information, enabling guaranteed delivery or graceful recovery. Many applications do not have a predefined or prebuilt entry point into the application. In such cases, one may need to use data files, databases, user interfaces, or memory as the entry point for the injection of the request. It is at this point that the correct integration model must be selected—presentation, data, or functional—to build the right connector based on the internal structures of the application.

7A.1.1.2 Modes of Communication

The flexibility of systems is critically dependent on the modes of communication that they utilize. Assuming that a request refers to a communication from a sender to a receiver, the two basic options for communication are as follows:

1. Synchronous communication: This requires the sender of a request to wait until a reply, which is the result of the request, is received before continuing processing. Synchronous communication between systems implies a high degree of coupling and requires the sender and the receiver to coordinate the communications with their internal processing. A reliable network infrastructure is essential for this kind of communication. It is used when the sender requires a notification of receipt or needs the result of the processing from the receiver. For instance, interactive systems need a synchronous type of communication. There are three popular types of synchronous communications:
   a. Request/response
   b. Transmit
   c. Polling


2. Asynchronous communication: This allows the sender to continue processing after sending the request, without waiting for a reply to this request. The sender does not concern itself with whether or when the request has been received, how it is processed, or the results returned from the receiver. Asynchronous communication does not demand a high degree of coupling and also does not require the sender to coordinate the communications with its internal processing. It is used when the communication of information is required without the need to coordinate activities or responses. There are three popular types of asynchronous communications:
   a. Message passing: This is used in situations where information needs to be transmitted but a reply is not required. This needs a reliable network for guaranteed delivery.
   b. Publish/subscribe: This is used in situations where a reply is not required but, unlike all other cases, the recipient is determined based on the content of the request and the predeclared interest of the receiver application. This type of communication is useful for the STP type of functional integration.
   c. Broadcast: This is used in situations where, again, a reply is not needed but the request is sent to all the applications; each receiver decides if it is interested in the request/message and accordingly processes it in accordance with the business and functional logic programmed into each of the receiver systems.

7A.1.1.3 Middleware Options

Middleware is software that enables disparate applications to interact with each other—it facilitates the communication of requests between software components through the use of predefined interfaces or messages. The five basic types of middleware are as follows:

1. Remote procedure call (RPC): This is based on the notion of developing distributed applications that integrate at the procedure level, but across a network.
2. Database access middleware: This is based on the notion of developing distributed applications that integrate at the distributed data level, whether in files or databases, but across the network.
3. Message-oriented middleware (MOM): This is based on the notion of developing distributed applications that integrate at the message level, but across the network.
4. Distributed object technology (DOT): This is based on the notion of developing distributed applications that integrate at the interface level, in a way that makes the application look like an object.
5. Transaction processing monitor (TPM): This is based on the notion of developing distributed applications that integrate at the distributed transaction level, but across the network.

7A.1.2 Models of Integration

An integration model defines the approach and configurations used to integrate software applications, depending on the nature and methods of the envisaged integration. There are three possible points of integration, namely, presentation, functional, and data integration.


1. Presentation integration: In this model, integration is accomplished by deploying a new and uniform application user interface—the new application appears to be a single application, although it may be accessing several legacy and other applications at the back-end. The integration logic, the instructions on where to direct the user interactions, communicates the interaction of the user to the corresponding application using their existing presentations as a point of integration. It then integrates back any results generated from the various constituent applications. Thus, a single presentation could replace a set of terminal-based interfaces and might incorporate additional features, functions, and workflow for the user. For instance, a mainframe application can be integrated into a new Microsoft Windows application at the front-end using screen-scraping technology that effectively copies, maps, and imports data from specific locations on character-based screens of the mainframe application onto the schemas and data structures of the new system. Presentation integration is the easiest to achieve and can be automated almost 100%; however, it is also the most limiting of the three models.
2. Functional integration: In this model, integration is accomplished by invoking, from other applications, the functionality or business logic of the existing applications by using code-level interfaces to those applications. This might be achieved at the level of an object or a procedure, or via an application programming interface (API) if one exists for each of the corresponding applications. The business logic includes the processes and workflow as well as the data manipulation and rules of interpretation. For instance, to change a customer's address in an enterprise application, the functionality of the existing customer order and billing applications can be accessed if the enterprise application is functionally integrated with these latter applications. Rather than re-create the logic in the new application, it is more efficient and less error prone to reuse the existing logic. Traditionally, remote procedure calls (RPCs), which have been employed for this kind of integration, have provided the definitions for access and basic communication facilities. Lately, however, distributed processing middleware has become the preferred method of integration, as it not only provides a more robust approach to interface definitions and communications but also enables runtime support for intercomponent requests. The three categories of distributed processing are as follows:
   a. Message-oriented middleware (MOM): This achieves integration by providing for the communication of messages between applications by means of messages placed in the MOM, which itself is implemented in a variety of configurations, including message queuing and message passing. The MOM is then responsible for delivering the message to the target system. Microsoft's MSMQ, BizTalk, IBM's MQSeries, and Talarian's SmartSockets are examples of MOM.
   b. Distributed object technology (DOT): This achieves integration by providing object interfaces that make applications look like objects. The application can then be accessed by other applications across a network through the object interfaces. OMG's CORBA, Microsoft's COM+, and Sun's Java 2 Enterprise Edition (J2EE) are examples of DOT.
   c. Transaction processing monitors (TPMs): These achieve integration by providing critical support for the integrity of distributed resources such as databases, files, and message queues across distributed architectures, by allowing various types of transactions to be managed using a variety of concepts including two-phase commit. BEA's Tuxedo is an example of a TPM.


Functional integration, which is more flexible than the other two integration models, can be applied in three different forms, as described below:

   i. Synchronization: This corresponds to the coordination of data updates from multiple sources across integrated applications that may have been developed and enhanced over a long period of time. It provides integration that is loosely coupled and predominantly asynchronous. These applications may represent various relationships that a customer has had with the enterprise, or manage employee- or product-related information. When an update is made in any of the systems, the update needs to be propagated across all of these systems. Typically, synchronization is implemented by propagating a request that describes the intended action and the corresponding data to each of the relevant systems.
   ii. Component integration: Component integration is the integration of applications where a well-defined interface exists that allows a component to be accessed via requests from other components without modifications. The interfaces for each component must identify the specific functions that the component supports. It provides integration that is tightly coupled and predominantly synchronous.
   iii. Straight-through processing (STP): This corresponds to a coordinated set of automated actions executed across all relevant applications in the correct order of precedence automatically, that is, without human intervention. It provides integration that is tightly coupled and can be both synchronous and asynchronous. This kind of process is commonly associated with workflow, though it does not involve decision making or complicated scheduling. For instance, an order for a product is placed on a Website; the Order Processing System (OPS) creates the order and notifies the logistics and shipping system to ship the product. When the order is completed, the OPS is notified of the change of status and the billing system triggers a bill for payment. Once the payment is received, the OPS is notified to close the order.

3. Data integration: In this model, integration is accomplished by bypassing the existing application business logic and directly accessing the data created, processed, and stored by each of the corresponding applications. For instance, an Oracle-based billing system can be integrated with an IBM-based customer order system using database gateway technology that integrates the DB2 database with the Oracle database. This has been one of the earliest models applied for accessing information from databases, including:
   • Batch file transfer
   • Open database connectivity (ODBC)
   • Database access middleware
   • Data transformation

The data integration model provides greater flexibility than the presentation integration model; it simplifies access to data from multiple sources and also allows the data to be reused across other applications. However, integrating at the data level necessitates rewriting of any functionality required by each of the applications, which implies greater effort for avoiding inconsistencies, standardizing,


testing, and debugging for each of the applications on an ongoing basis. Since this model is highly sensitive to changes in the data models of each of the applications, this integration model is not very amenable to change and maintenance.

4. Business process integration: Achieving business process integration is often connected with business process reengineering and is not solely a technical problem. It does, however, require the implementation of several technical layers as the foundation, and it integrates applications at a higher level of abstraction. SOA, BPEL, and related technologies today provide new opportunities for making integrated information systems more flexible and adaptable to business process changes. This way, information systems can become more agile, provide better support for changing requirements, and align more closely to business needs.
5. Business-to-business integration: There is a growing need to enable inter-enterprise integration, often referred to as business-to-business (B2B) integration, or e-business. The requirements today for online, up-to-date information, delivered with efficiency, reliability, and quality, are very high, and gone are the days when a company could just publish offline catalogs on its Web pages. Customers today expect immediate responses and are not satisfied with batch processing and several days of delay in confirming orders. However, these delays are often the case when e-business is not backed by an efficiently integrated enterprise information system. Immediate responsiveness, achieved by tight integration of the back-end (enterprise information) systems and the front-end (presentation) systems, is a key success factor. Nonintegrated systems fail to meet business expectations; the primary reason for this is the lack of enterprise integration. In an e-business scenario, applications from one company invoke operations on front-end applications belonging to other companies. Only if these front-end systems are satisfactorily connected with back-end systems can the other company provide an immediate and accurate response, which is an essential prerequisite for successful B2B collaboration.

7A.2 Patterns of Integration

Integration patterns can be grouped into point-to-point integration and hub-and-spoke integration, based on the way applications are connected. In the first approach, the applications are directly connected, while in the second, message exchanges go through a third party before being delivered to the final destination.

7A.2.1 Point-to-Point Integration

Point-to-point integration is the simplest way to integrate independently developed application silos and does not require significant upfront investment. Each application is connected directly and explicitly to the others. To link each application to another directly, an interface needs to be developed. This style may work well if the number of applications to be integrated is not large and there is no intention to scale out. Otherwise, it can quickly become unmanageable as the number of application silos increases: if there are N applications to be integrated, the number of interfaces to be developed approaches N × N; that is, the number of interfaces grows on the order of N².


Another problem with point-to-point integration is the inability to respond to changes quickly. This is because the interfaces are hardwired to the application pairs, and changes within the enterprise information infrastructure may require rewiring the interfaces.

7A.2.2 Message-Oriented Integration

In message-oriented integration solutions, the applications communicate with each other by sending and receiving messages through a middleware that manages the message queue associated with each application. Integration of two applications is achieved by sending and receiving messages to the appropriate queue, and the middleware ensures that the messages are delivered. However, the point-to-point aspect of the integration is not eliminated, since applications are still required to specify the recipients of the messages.

7A.2.3 Hub–Spoke Integration

Spoke–hub integration eliminates the need to encode the address of the recipient. A centralized enterprise application middleware routes messages to their destinations based on the content and the format of the message. All applications are connected to the central integration hub like the spokes on a bicycle wheel; for this reason, this integration style is called spoke–hub integration. The concept is effectively used in many industries, such as transportation and telecommunication. A collection of point-to-point integrations with one common end effectively results in a hub–spoke integration pattern.

Spoke–hub integration reduces the number of connections from the order of N² to N. This means that only as many interfaces as the number of applications need to be developed. In practice, a centralized integration hub provides a place for the adapters, and it is the responsibility of the application developer to provide an adapter for each hub they connect to. Without standardization of adapters, this requires some development and testing resources.

Spoke–hub integration is effectively implemented by message broker software that translates messages from one protocol to another, making sure that the data structures are compatible. Message brokers allow the rules of communication between applications to be defined outside the applications, so that application developers do not need to worry about designing adaptors for every other application. When a message is received, the message broker runs the rules over the received message, transforms the data if needed, and inserts it into the appropriate queue. The rules are defined in a declarative way based on the communication protocols used by the applications; the message broker uses these rules to identify the message queues to which the messages should be relayed. The publish/subscribe software architecture style can be used to implement message brokers: applications publish their messages, which are then relayed to the receiving applications that subscribe to them.

The major drawback of the message broker approach is the difficulty of managing and configuring the rules when the dependencies between applications are complex. Also, because message-based communications are inherently asynchronous, the solution may not be well suited for synchronous communication requirements, such as real-time or near-real-time computing.
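A toy sketch of the hub–spoke idea follows: a central broker holds declarative routing rules (here, topic subscriptions) and relays published messages to the queues of subscribing applications, so senders never address recipients directly. The rule format, topic names, and in-memory queues are illustrative assumptions, not the API of any particular broker product (MSMQ, MQSeries, etc.).

```python
# Minimal hub-and-spoke message broker sketch using publish/subscribe.
# Routing rules live in the broker, outside the applications.
from collections import defaultdict, deque


class Broker:
    def __init__(self):
        self.queues = defaultdict(deque)        # one inbound queue per application
        self.subscriptions = defaultdict(list)  # topic -> list of application names

    def subscribe(self, app: str, topic: str) -> None:
        self.subscriptions[topic].append(app)

    def publish(self, topic: str, message: dict) -> None:
        # Declarative routing: the sender only names a topic, never a recipient.
        for app in self.subscriptions[topic]:
            self.queues[app].append(message)

    def receive(self, app: str):
        return self.queues[app].popleft() if self.queues[app] else None


broker = Broker()
broker.subscribe("billing", "order.created")
broker.subscribe("shipping", "order.created")

# The web shop publishes once; the broker fans the message out to both spokes.
broker.publish("order.created", {"order_id": 1001, "amount": 250.0})

print(broker.receive("billing"))   # {'order_id': 1001, 'amount': 250.0}
print(broker.receive("shipping"))  # {'order_id': 1001, 'amount': 250.0}
```

Adding an application here means registering one more subscription rather than wiring a new interface to every existing sender, which is the N versus N² difference discussed above.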

8 Scalability

To withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of scalability aspects primarily through cloud computing (see Chapter 16). This chapter gives an introduction to the scalability aspects of enterprise architecture. Scalability aspects are also complementarily or orthogonally affected by other attributes, but this chapter focuses on factors directly related to the scalability aspects of enterprise architecture.

Scaling is primarily a geometric notion. Conceptually, you can make a system bigger or smaller by merely stretching it equally in every dimension (isometrically). However, it has been documented since the time of Galileo that mechanical and biological systems have inherent limitations to growth (and to shrinkage). As a physical system is scaled up, intrinsic overheads (such as weight) begin to dominate and eventually cause the system to fail beyond some critical point. Therefore, scaling is actually allometric. The biologist J. B. S. Haldane pointed this out in his famous essay "On Being the Right Size": size does matter for biological systems. Haldane's essay is an elaboration of Galileo's almost three-hundred-year-old observation that size also matters for mechanical structures like buildings and bridges. Galileo and Haldane recognized that any volume not only occupies space but also has a mass, since it is made of some kind of material, and that mass, owing to gravity, has a corresponding weight. As the volume grows, so does the weight—at some point, the volume will weigh so much that it will literally crush itself.

Computer systems can also be scaled up in terms of their computing capacity—they also scale in a monotonically increasing fashion. But, just like physical systems, they have intrinsic limits on scaling indefinitely. These limits do not usually lead to catastrophic system failure, but the increasing overhead can degrade system performance and consequently have a significant impact on capacity planning.

8.1 Introduction to Scalability

Scalability is the ability to improve system performance by adding more resources to a single node or multiple nodes—such as the addition of CPUs, use of multicore systems instead of single-core, or adding additional memory. On-demand scaling (and descaling) of computation is one of the critical needs of a computing platform. Compute scaling can be done either at the infrastructure level or at the platform level. At the infrastructure level, it is about increasing the capacity of the compute power, while at the platform level, the techniques are mainly about intelligently managing the different client requests in a manner that best utilizes the compute infrastructure without requiring the clients to do anything special during peaks in demand.


Scale-up or vertical scaling is about adding more resources to a single node or a single system to improve performance—such as the addition of CPUs, use of multicore systems instead of single-core, or adding additional memory. In order to support on-demand scaling of the cloud infrastructure, the system should be able to increase its compute power dynamically without impacting the platform or application executing over it. Unless a system is virtualized, it is generally not possible to increase the capacity of a computer system dynamically without bringing down the system. The more powerful compute resource can then be effectively used by a virtualization layer to support more processes or more virtual machines—enabling scaling to many more clients. The advantage of scale-up systems is that the programming paradigm is simpler, since it does not involve distributed programming, unlike scale-out systems.

Scale-out or horizontal scaling, on the other hand, is about expanding the compute resources by adding a new computer system or node to a distributed application. A web server (like Apache) is a typical example of such a system. In fact, given that most cloud applications are service-enabled, they need to be developed to expand on demand using scale-out techniques. The advantage of scale-out systems is that commodity hardware, such as disk and memory, can be used for delivering high performance. A scale-out system such as a cluster of interconnected compute nodes can be more powerful than a traditional supercomputer, especially with faster interconnect technologies. Scale-out systems are essentially distributed systems with shared high-performance disk storage used for common data. Unlike scale-up systems, in order to leverage the full power of scale-out systems, the programmer must make an effort to design applications differently. Many design patterns exist for applications designed for scale-out systems, such as MapReduce. Scale-out solutions have much better performance and price/performance than scale-up systems. This is because a search application, for example, essentially consists of independent parallel searches, which can easily be deployed on multiple processors. Scale-out techniques can be employed at the application level as well: a typical web search service is scalable because two client query requests can be processed as completely parallel threads. The challenge in scale-out systems, however, is the complex management of the infrastructure, especially when the infrastructure caters to dynamic scaling of resources. Additionally, applications that do not consist of independent computations are difficult to scale out.

8.1.1 Load Balancing for Scalability

Distributed, grid, or cloud computing in general is a special type of parallel computing that relies on complete or virtual computers (with on-board CPU, storage, power supply, network interface, and so forth) connected to a network (private, public, or the Internet) by a conventional or virtual network interface. This is in contrast to the traditional notion of a supercomputer, which has many processors connected together in a single machine. The primary advantage of distributed computing, or sharing load, is that each node can be purchased as commodity hardware; when combined, the nodes can produce computing resources similar to a multiprocessor supercomputer, but at a significantly lower cost. This is due to the economies of scale of producing desktops and laptops, compared with the lower efficiency of designing and constructing a small number of custom supercomputers.

A parallel computer is a set of processors that are able to share the load, i.e., work cooperatively to solve a computational problem. Parallel processing is performed by the simultaneous execution of program instructions that have been allocated across multiple processors, with the objective of running a program in less time; parallel processing


permits handling larger data sets in reasonable time or speeding up complex operations and, therefore, represents the key to tackling the big data problem. Parallelization implies that the processing load or work is split and distributed across a number of processors, or processing nodes. The speedup S(p) is defined as the ratio of the elapsed time on a uniprocessor, T1, to the elapsed time on a p-way multiprocessor, Tp:

S(p) = T1 / Tp

The efficiency E(p) is defined as the average speedup per processor:

E(p) = S(p) / p

The ideal scale-out behavior of a query is a linear relationship between the number of nodes and the amount of data that can be processed in a certain time. This theoretical linear scale-out is hardly ever achieved, because a certain fraction of the query processing is normally not parallelizable, such as the coordinated startup of the processing or exclusive access to shared data structures. The serial part of the computation limits its parallel scalability—this relationship has been formulated as Amdahl's Law: let σ be the portion of the program that is sequential, and p be the number of processors (or nodes). The maximal speedup S(p) is then given by

S(p) = p / (1 + σ(p − 1))
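Before interpreting this bound, a quick numeric check of the formulas helps build intuition. The sketch below evaluates the Amdahl bound and the corresponding efficiency for a few processor counts and serial fractions; the chosen values of σ and p are arbitrary illustrations.

```python
# Evaluate Amdahl's Law S(p) = p / (1 + sigma * (p - 1)) and the
# corresponding efficiency E(p) = S(p) / p for sample serial fractions.

def amdahl_speedup(p: int, sigma: float) -> float:
    return p / (1 + sigma * (p - 1))

for sigma in (0.0, 0.05, 0.10):
    for p in (4, 16, 64, 256):
        s = amdahl_speedup(p, sigma)
        print(f"sigma={sigma:.2f}  p={p:4d}  S(p)={s:6.1f}  E(p)={s / p:5.2f}")
```

Even a 5% serial fraction caps the speedup near 1/σ = 20 no matter how many nodes are added, which is why the efficiency column falls steadily as p grows.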

It is obvious that for a totally parallelizable program (σ = 0), the speedup is p. However, since in practice σ > 0, the speedup is sublinear and is bounded above by 1/σ.

If P(B) > 0, then P(A|B) = P(A · B)/P(B).
P(A · B) = P(A) × P(B), if A and B are independent events.
P(A · B) = 0, if A and B are mutually exclusive events.
P(A + B) = P(A) + P(B), if A and B are mutually exclusive events.

The probability that at least k out of n events are a success is given by

P(at least k out of n successes) = Σ_{i=k}^{n} C(n, i) · P^i · (1 − P)^(n−i)


9.2.2 Failure Rate

Failure rate is defined as the expected number of failures per unit time. For hardware, the typical evolution of the failure rate over the lifetime of a system is illustrated by the bathtub curve shown in Figure 9.2a. This curve has three phases:

1. Infant mortality: In the beginning, the failure rate decreases sharply due to frequent failures of weak components whose defects were overlooked during manufacturing testing.
2. Useful life: After some time, the failure rate tends to stabilize and remains constant.
3. Wear out: As electronic or mechanical components wear out, the failure rate increases.

During the useful life phase of the system, the failure rate function z(t) is assumed to have a constant value λ. Then, the reliability of the hardware system is given by the exponential failure law (Figure 9.2b):

R(t) = e^(−λt)

FIGURE 9.2 (a) Typical evolution of failure rate in a hardware system, (b) typical exponential failure law, and (c) typical evolution of failure rate in a software system.
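A minimal numeric sketch of the exponential failure law follows: it evaluates R(t) = e^(−λt) over a few mission times for an assumed constant failure rate. The λ value and the time points are arbitrary illustrations, not data from the text.

```python
# Reliability under the exponential failure law R(t) = exp(-lambda * t),
# assuming a constant failure rate during the useful-life phase.
import math

FAILURE_RATE = 2e-5   # assumed: failures per hour for the whole system

def reliability(t_hours: float, failure_rate: float = FAILURE_RATE) -> float:
    return math.exp(-failure_rate * t_hours)

for t in (100, 1_000, 10_000, 50_000):
    print(f"R({t:>6} h) = {reliability(t):.4f}")
```

For a constant failure rate, the mean time to failure is 1/λ (50,000 hours with the rate assumed here), which is why R(t) has dropped to roughly e^(−1) ≈ 0.37 at that point.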


Failure rate data are often available at the component level. If the failure rates of the components are available, a crude estimate of the failure rate of a system without redundancy, during its useful life phase, is given by

λ = Σ_{i=1}^{n} λi

where λi is the failure rate of component i, for i ∈ {1, 2, ..., n}.

In software systems, the failure rate usually decreases as a function of time. Time-varying failure rate functions can be modeled using the Weibull distribution:

z(t) = αλ(λt)^(α−1)

where α and λ are constants determining the behavior of z(t) over time. We can distinguish the following three cases:

1. If α = 1, then z(t) = λ.
2. If α > 1, then z(t) increases with time.
3. If α < 1, then z(t) decreases with time.

X1, …, Xn ⇒ Y[C, S], which means that the attributes X1, …, Xn predict Y with a confidence C and a significance S.

A useful selection of data analysis tasks:

1. Data summarization gives the user an overview of the structure of the data and is generally carried out in the early stages of a project. This type of initial exploratory data analysis can help to understand the nature of the data and to find potential hypotheses for hidden information. Simple descriptive statistical and visualization techniques generally apply.
2. Segmentation separates the data into interesting and meaningful subgroups or classes. In this case, the analyst can hypothesize certain subgroups as relevant for the business question, based on prior knowledge or on the outcome of data description and summarization. Automatic clustering techniques can detect previously unsuspected and hidden structures in data that allow segmentation. Clustering techniques, visualization and neural nets generally apply.
3. Classification assumes that a set of objects—characterized by some attributes or features—belong to different classes. The class label is a discrete qualitative identifier; for example, large, medium, or small. The objective is to build classification models that assign the correct class to previously unseen and unlabeled objects. Classification models are mostly used for predictive modeling. Discriminant analysis, decision tree and rule induction methods, and genetic algorithms generally apply.
4. Prediction is very similar to classification. The difference is that in prediction, the class is not a qualitative discrete attribute but a continuous one. The goal of prediction is to find the numerical value of the target attribute for unseen objects; this problem type is also known as regression, and if the prediction deals with time series data, then it is often called forecasting. Regression analysis, decision trees, and neural nets generally apply.
5. Dependency analysis deals with finding a model that describes significant dependencies (or associations) between data items or events. Dependencies can be used to predict the value of an item given information on other data items.


Dependency analysis has close connections with classification and prediction because the dependencies are implicitly used for the formulation of predictive models. Correlation analysis, regression analysis, association rules, case-based reasoning and visualization techniques generally apply.
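As a small illustration of the prediction (regression) task described above, the sketch below fits a straight line to a handful of synthetic observations and uses it to estimate the target value for an unseen input; the data points are invented purely for illustration.

```python
# Minimal prediction (regression) sketch: fit y = a*x + b by least squares
# on a small synthetic training sample, then predict an unseen case.
import numpy as np

# Synthetic training sample: e.g., x = advertising spend, y = sales (made up).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

a, b = np.polyfit(x, y, deg=1)   # least-squares fit of a degree-1 polynomial
print(f"fitted model: y = {a:.2f} * x + {b:.2f}")

x_new = 6.0                      # previously unseen object
print(f"predicted y for x = {x_new}: {a * x_new + b:.2f}")
```

The same pattern generalizes to the multivariate and nonlinear estimation models mentioned in the text; only the model family changes.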

13.4 Summary

This chapter proposed analyticity as an attribute of enterprise architecture. It discussed the characteristics of analyticity that may be relevant while considering the digital transformation of analyticity aspects primarily through soft computing (see Chapter 21). Analyticity is the ability for continuous iterative exploration and investigation of past performance, based on data and statistical methods, to gain insight and drive planning for the future. Analytics can be used for improving performance, driving sustainable growth through innovation, speeding up response time to market and environmental changes, and anticipating and planning for change while managing and balancing risk. These benefits are achieved through a framework that deploys automated data analysis within the business context. The paradigm shift is from intuition-driven decision making to data-driven, computer-assisted decision making that takes advantage of large amounts of data or data from multiple sources. The chapter described the various kinds of analytics, such as descriptive, predictive and prescriptive analytics, and then provided an overview of data science and related techniques.

This chapter's appendix describes aspects related to analyticity: data mining aims to extract knowledge and insight through the analysis of large amounts of data using sophisticated modeling techniques; it converts data into knowledge and actionable information. Data mining models consist of a set of rules, equations, or complex functions that can be used to identify useful data patterns, and to understand and predict behaviors. Data mining is a process that uses a variety of data analysis methods to discover the unknown, unexpected, interesting, and relevant patterns and relationships in data that may be used to make valid and accurate predictions. In general, there are two methods of data analysis: supervised and unsupervised.

Appendix 13A: Data Mining

As stated in the beginning of this chapter, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the analyticity aspects primarily through soft computing (see Chapter 21)—but data mining is an essential prerequisite to enable this transformation.

Data mining is widely used by banking firms in soliciting credit card customers, by insurance and telecommunication companies in detecting fraud, by telephone companies and credit card issuers in identifying those potential customers most likely to churn, by manufacturing firms in quality control, and in many other applications. Data mining is also being applied to improve food and drug product safety and the detection of


terrorists or criminals. Data mining involves statistical and/or artificial intelligence (AI) analysis, usually applied to large-scale data sets. Masses of data generated from cash registers, from scanning, and from topic-specific databases throughout the company are explored, analyzed, reduced, and reused. Searches are performed across different models proposed for predicting sales, marketing response, and profit. Though automated AI methods are also used, classical statistical approaches are fundamental to data mining.

Data mining tools need to be versatile, scalable, capable of accurately predicting responses between actions and results, and capable of automatic implementation. Versatile refers to the ability of the tool to apply a wide variety of models. Scalable implies that if the tool works on a small data set, it should also work on larger data sets. Automation is useful, but its application is relative: some analytic functions are often automated, but human setup prior to implementing procedures is required. In fact, analyst judgment is critical to successful implementation of data mining. Proper selection of data to include in searches is critical: too many variables produce too much output, while too few can overlook key relationships in the data. Data transformation is also often required.

13A.1 Introduction to Data Mining

Traditional statistical analysis involves an approach that is usually directed, in that a specific set of expected outcomes exists; this approach is referred to as supervised (hypothesis development and testing). But data mining also involves a spirit of knowledge discovery (learning new and useful things), which is referred to as unsupervised. Knowledge discovery by humans can be enhanced by graphical tools and the identification of unexpected patterns through a combination of human and computer interaction; much of this can also be accomplished through automatic means.

A variety of analytic computer models have been used in data mining. The standard models employed in data mining include regression (e.g., normal regression for prediction and logistic regression for classification) and neural networks. This chapter discusses techniques like association rules for initial data exploration, fuzzy data mining approaches, rough set models, and genetic algorithms. Data mining requires identification of a problem, along with collection of data that can lead to better understanding, and computer models to provide statistical or other means of analysis. This may be supported by visualization tools that display data, or through fundamental statistical analysis, such as correlation analysis.

Data mining aims to extract knowledge and insight through the analysis of large amounts of data using sophisticated modeling techniques; it converts data into knowledge and actionable information. Data mining models consist of a set of rules, equations, or complex functions that can be used to identify useful data patterns, and to understand and predict behaviors. Data mining is a process that uses a variety of data analysis methods to discover the unknown, unexpected, interesting, and relevant patterns and relationships in data that may be used to make valid and accurate predictions. In general, there are two methods of data analysis: supervised and unsupervised. In both cases, a sample of observed data is required. This data may be termed the training sample.
The training sample is used by the data mining activities to learn the patterns in the data.


Data mining models are of two kinds:

1. Directed or supervised models: In these models, there are input fields or attributes and an output or target field. Input fields are also called predictors, because they are used by the model to identify a prediction function for the output or target field. The model generates an input–output mapping function, which associates predictors with the output so that, given the values of the input fields, it predicts the output values. Predictive models themselves are of two types, namely, classification (or propensity) models and estimation models. Classification models are predictive models with a predefined target field or classes or groups, so that the objective is to predict a specific occurrence or event. The model also assigns a propensity score to each of these events that indicates the likelihood of the occurrence of that event. In contrast, estimation models are used to predict a continuum of target values based on the corresponding input values. For instance, a supervised model may be used to estimate an unknown dependency from known input–output data:
   a. Input variables might include the following:
      – Quantities of different articles bought by a particular customer
      – Date of purchase
      – Location
      – Price
   b. Output variables might include an indication of whether the customer responds to a sales campaign or not. Output variables are also known as targets in data mining.
   Sample input variables are passed through a learning system, and the subsequent output from the learning system is compared with the output from the sample. In other words, we try to predict who will respond to a sales campaign. The difference between the learning system output and the sample output can be thought of as an error signal. Error signals are used to adjust the learning system. This process is repeated many times with the data from the sample, and the learning system is adjusted until the output meets a minimal error threshold.
2. Undirected or unsupervised models: In these models, there are input fields or attributes, but no output or target field. The goal of such models is to uncover data patterns in the set of input fields. Undirected models are also of two types, namely, cluster models, and association and sequence models. Cluster models do not have a predefined target field or classes or groups; the algorithms analyze the input data patterns and identify the natural groupings of cases. In contrast, association or sequence models do not involve or deal with the prediction of a single field. Association models detect associations between discrete events, products, or attributes; sequence models detect associations over time.

Segmentation is much more complex than it may seem; simplified segmentation models, when tested in real life, seem to imply that people as customers change behavior radically. If this were really true, there would be no trust, no loyalty, and, consequently, no collaboration. The apparent paradox is resolved only when it is recognized that while people as customers do not possess multiple personalities, they have differing customs and, hence, play differing roles based on different contexts or scenarios. The problem arises from persisting with the stance of one-segment-fits-all-contexts-for-all-people-on-all-occasions.

Unsupervised data analysis does not involve any fine-tuning. Data mining algorithms search through the data to discover patterns, and there is no target or aim variable. Only input values are presented to the learning system, without the need for validation against any output. The goal of unsupervised data analysis is to discover "natural" structures in the input data. In biological systems, perception is a task learnt via an unsupervised technique.
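The directed/undirected distinction can be made concrete with a short sketch. Assuming the scikit-learn library is available (an assumption, since the text names no particular tool), the code below trains a supervised classifier on labeled campaign-response data and, separately, lets an unsupervised clustering algorithm find natural groupings without any target field; the synthetic data stands in for real customer records.

```python
# Supervised (directed) vs. unsupervised (undirected) modeling, assuming scikit-learn.
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# --- Supervised: inputs X plus a known target y (e.g., responded to campaign or not).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
classifier = LogisticRegression().fit(X, y)
print("predicted responses for 5 unseen customers:", classifier.predict(X[:5]))

# --- Unsupervised: inputs only; the algorithm proposes natural groupings (segments).
X_unlabeled, _ = make_blobs(n_samples=200, centers=3, n_features=4, random_state=0)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabeled)
print("segment labels for 5 customers:", segments[:5])
```

The classifier needed the target column y in order to learn, whereas the clustering step had no such column, which is precisely the supervised/unsupervised difference the text draws.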


when it is recognized that while people as customers do not  possess multiple personalities, they have differing customs and, hence, play differing roles based on different contexts or scenarios. The  problem arises on persisting with the stance of one-segment-fits-for-all-contexts-for-all-people-on-all-occasions. Unsupervised data analysis does not involve any fine-tuning. Data mining algorithms search through the data to discover patterns, and there is no target or aim variable. Only input values are presented to the learning system without the need for validation against any output. The  goal of unsupervised data analysis is to discover “natural” structures in the input data. In biological systems, perception is a task learnt via an unsupervised technique. 13A.1.1 Benefits Data mining can provide customer insight, which is vital for establishing an effective Customer Relationship Management strategy. It  can lead to personalized interactions with customers and hence increased satisfaction and profitable customer relationships through data analysis. It  can support an individualized and optimized customer management throughout all the phases of the customer life cycle, from the acquisition and establishment of a strong relationship to the prevention of attrition and the winning back of lost customers. 1. Segmentation: It  is the process of dividing the customer base into distinct and internally homogeneous groups in order to develop differentiated marketing strategies according to their characteristics. There  are many different segmentation types based on the specific criteria or attributes used for segmentation. In behavioral segmentation, customers are grouped by behavioral and usage characteristics. Data mining can uncover groups with distinct profiles and characteristics and lead to rich segmentation schemes with business meaning and value. Clustering algorithms can analyze behavioral data, identify the natural groupings of customers, and suggest a solution founded on observed data patterns. Data mining can also be used for the development of segmentation schemes based on the current or expected/estimated value of the customers. These segments are necessary in order to prioritize customer handling and marketing interventions according to the importance of each customer. 2. Direct marketing campaigns: Marketers use direct marketing campaigns to communicate a message to their customers through mail, the Internet, e-mail, telemarketing (phone), and other direct channels in order to prevent churn (attrition) and to drive customer acquisition and purchase of add-on products. More specifically, acquisition campaigns aim at drawing new and potentially valuable customers away from the competition. Cross-/deep-/up-selling campaigns are implemented to sell additional products, more of the same product, or alternative but more profitable products to existing customers. Finally, retention campaigns aim at preventing valuable customers from terminating their relationship with the organization. Although potentially effective, this can also lead to a huge waste of resources and to bombarding and annoying customers with unsolicited communications.

Data mining and classification (propensity) models, in particular, can support the development of targeted marketing campaigns: they analyze customer characteristics and recognize the profiles (or extended profiles) of the target customers.

3. Market basket analysis: Data mining and association models, in particular, can be used to identify related products typically purchased together. These models can be used for market basket analysis and for revealing bundles of products or services that can be sold together.

However, to succeed with CRM, organizations need to gain insight into customers and their needs and wants through data analysis. This is where analytical CRM comes in. Analytical CRM is about analyzing customer information to better address the CRM objectives and deliver the right message to the right customer. It involves the use of data mining models to assess the value of the customers and to understand and predict their behavior. It is about analyzing data patterns to extract knowledge for optimizing customer relationships. For example:

a. Data mining can help in customer retention, as it enables the timely identification of valuable customers with an increased likelihood to leave, allowing time for targeted retention campaigns.
b. Data mining can support customer development by matching products with customers and better targeting of product promotion campaigns.
c. Data mining can also help to reveal distinct customer segments, facilitating the development of customized new products and product offerings that better address the specific preferences and priorities of the customers.

The results of the analytical CRM procedures should be loaded and integrated into the operational CRM front-line systems so that all customer interactions can be handled more effectively and on a more informed and personalized basis.

13A.2 Data Mining Applications

Data mining can be used by businesses in many ways; two of the most profitable application areas have been the use of customer segmentation by marketing organizations to identify those with marginally greater probabilities of responding to different forms of marketing media, and the use of data mining by banks to more accurately predict the likelihood of people responding to offers of different services. Table 13A.1 shows general application areas of data mining.

Many companies are using data mining to identify their "valuable" customers so that they can provide them the service needed to retain them:

• Customer profiling: Identifying those subsets of customers most profitable to the business
• Targeting: Determining the characteristics of profitable customers who have been captured by competitors
• Market basket analysis: Determining product purchases by consumer, which can be used for product positioning and for cross-selling
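As a minimal illustration of the propensity (classification) models discussed above, the following sketch trains a simple response model and scores customers by their likelihood of responding to a campaign. It is an illustrative example only, not taken from this book: it assumes the pandas, NumPy, and scikit-learn libraries, and the customer attributes and target field are synthetic and hypothetical.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hypothetical customer records: predictors plus a binary target field.
rng = np.random.default_rng(0)
n = 1000
customers = pd.DataFrame({
    "quantity": rng.integers(1, 20, n),        # articles bought
    "price": rng.uniform(5, 500, n),           # purchase value
    "tenure_months": rng.integers(1, 120, n),  # customer seniority
})
# Synthetic target: response probability loosely tied to the predictors.
logits = 0.05 * customers["quantity"] + 0.01 * customers["tenure_months"] - 2.0
customers["responded"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X = customers[["quantity", "price", "tenure_months"]]
y = customers["responded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                   # learn the input-output mapping function

scores = model.predict_proba(X_test)[:, 1]    # propensity scores: likelihood of response
print("Holdout accuracy:", round(model.score(X_test, y_test), 3))

In practice, such scores would be used to rank customers so that a direct marketing campaign contacts only those above a chosen propensity threshold, reducing wasted and unsolicited communications.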

TABLE 13A.1
Data Mining Application Areas

Application Area             Applications                           Specifics
Human resource management    Churn                                  Identify potential employee turnover
Credit card management       Lift                                   Identify effective market segments
                             Churn                                  Identify likely customer turnover
Retailing                    Affinity positioning, cross-selling    Position products effectively; find more products for customers
Banking                      Customer relationship management       Identify customer value; develop programs to maximize revenue
Insurance                    Fraud detection                        Identify claims meriting investigation
Telecommunications           Churn                                  Identify likely customer turnover
Telemarketing                Online information                     Aid telemarketers with easy data access

The key is to find actionable information, that is, information that can be utilized in a concrete way to improve profitability. Some of the earliest applications were in retailing, especially in the form of market basket analysis. Data mining methodologies can be applied to a variety of domains, from marketing and manufacturing process control to the study of risk factors in medical diagnosis, and from the evaluation of the effectiveness of new drugs to fraud detection:

1. Relational marketing: Data mining is useful for numerous tasks such as identification of customer segments that are most likely to respond to targeted marketing campaigns (e.g., cross-selling and up-selling); identification of target customer segments for retention campaigns; prediction of the rate of positive responses to marketing campaigns; and interpretation and understanding of the buying behavior of customers.
2. Text mining: Data mining can be applied to different kinds of texts, which represent unstructured data, in order to classify articles, books, documents, e-mails, and web pages. Examples are web search engines or the automatic classification of press releases for storage purposes. Other text mining applications include the generation of filters for e-mail messages and newsgroups.
3. Web mining: Data mining is useful for the analysis of so-called clickstreams—the sequences of pages visited and the choices made by a web surfer. Such analyses may prove useful for e-commerce sites, for offering flexible and customized pages to surfers, for caching the most popular pages, or for evaluating the effectiveness of an e-learning training course.
4. Image recognition: The treatment and classification of digital images, both static and dynamic, is useful to recognize written characters, compare and identify human faces, apply correction filters to photographic equipment, and detect suspicious behaviors through surveillance video cameras.
5. Medical diagnosis: Learning models are an invaluable tool within the medical field for the early detection of diseases using clinical test results.
6. Image analysis: Image analysis for diagnostic purposes is another field of investigation that is currently burgeoning.

7. Fraud detection: Fraud detection is relevant for different industries such as telephony, insurance (false claims), and banking (illegal use of credit cards and bank checks; illegal monetary transactions).
8. Risk evaluation: The purpose of risk analysis is to estimate the risk connected with future decisions. For example, using the past observations available, a bank may develop a predictive model to establish whether it is appropriate to grant a monetary loan or a home loan, based on the characteristics of the applicant.

13A.3 Data Mining Analysis

13A.3.1 Supervised Analysis

13A.3.1.1 Exploratory Analysis

This data mining task is primarily conducted by means of exploratory data analysis and, therefore, is based on queries and counts that do not require the development of specific learning models. The information so acquired is usually presented to users in the form of histograms and other types of charts. Before starting to develop a classification model, it is often useful to carry out an exploratory analysis whose purposes are as follows:

• To achieve a characterization by comparing the distribution of the values of the attributes for the records belonging to the same class
• To detect a difference, through a comparison between the distribution of the values of the attributes for the records of a given class and the records of a different class (or between the records of a given class and all remaining records)

The primary purpose of exploratory data analysis is to highlight the relevant features of each attribute contained in a dataset, using graphical methods and calculating summary statistics, and to identify the intensity of the underlying relationships among the attributes. Exploratory data analysis includes three main phases:

1. Univariate analysis, in which the properties of each single attribute of a dataset are investigated
2. Bivariate analysis, in which pairs of attributes are considered, to measure the intensity of the relationship existing between them (for supervised learning models, it is of particular interest to analyze the relationships between the explanatory attributes and the target variable)
3. Multivariate analysis, in which the relationships holding within a subset of attributes are investigated

13A.3.1.2 Classification

In a classification problem, a set of observations is available, usually represented by the records of a dataset, whose target class is known. Observations may correspond, for instance, to mobile phone customers, and the binary class may indicate whether a given customer is still active or has churned. Each observation is described by a given number of attributes whose value is known; in the previous example, the attributes may correspond
to age, customer seniority, and outgoing telephone traffic distinguished by destination. A classification algorithm can therefore use the available observations relative to the past in order to identify a model that can predict the target class of future observations whose attribute values are known.

The target attribute, whose value is to be predicted, is categorical in classification problems and therefore takes on a finite and usually rather small number of values; in most applications, the target is simply a binary variable. Classification is intended for discrete targets; when the target variable takes on continuous values, the task falls under regression analysis. Classification models are thus supervised learning methods for predicting the value of a categorical target attribute, unlike regression models, which deal with numerical attributes. Starting from a set of past observations whose target class is known, classification models are used to generate a set of rules that allow the target class of future examples to be predicted. Classification analysis has many applications: selection of the target customers for a marketing campaign, fraud detection, image recognition, early diagnosis of diseases, text cataloguing, and spam e-mail recognition are just a few examples of real problems that can be framed within the classification paradigm.

13A.3.1.3 Regression

If one wishes to predict the sales of a product based on the promotional campaigns mounted and the sale price, the target variable may take on a very high number of discrete values and can be treated as a continuous variable; this becomes a case of regression analysis. Based on the available explanatory attributes, the goal is to predict the value of the target variable for each observation. A classification problem may effectively be turned into a regression problem, and vice versa; for instance, a mobile phone company interested in the classification of customers based on their loyalty may come up with a regression problem by predicting the probability of each customer remaining loyal.

The purpose of regression models is to identify a functional relationship between the target variable and a subset of the remaining attributes contained in the dataset. Regression models

• serve to interpret the dependency of the target variable on the other variables; and
• are used to predict the future value of the target attribute, based upon the functional relationship identified and the future value of the explanatory attributes.

The development of a regression model allows knowledge workers to acquire a deeper understanding of the phenomenon analyzed and to evaluate the effects determined on the target variable by different combinations of values assigned to the remaining attributes. This is of great interest, particularly for analyzing those attributes that are control levers available to decision makers. Thus, a regression model may be aimed at interpreting the sales of a product based on investments made in advertising in different media, such as daily newspapers, magazines, TV, and radio. Decision makers may use the model to assess the relative importance of the various communication channels, therefore directing future investments toward
those media that appear to be more effective. Moreover, they can also use the model to predict the effects on sales of different marketing policies, so as to design the combination of promotional initiatives that appears to be the most advantageous.

13A.3.1.4 Time Series

Sometimes the target attribute evolves over time and is therefore associated with adjacent periods on the time axis. In this case, the sequence of values of the target variable is said to represent a time series. For instance, the weekly sales of a given product observed over 2 years represent a time series containing 104 observations. Models for time-series analysis investigate data characterized by a temporal dynamic and are aimed at predicting the value of the target variable for one or more future periods. The aim of models for time-series analysis is to identify any regular pattern in the observations relative to the past, with the purpose of making predictions for future periods. Time-series analysis has many applications in business, financial, socioeconomic, environmental, and industrial domains—predictions may refer to future sales of products and services, trends in economic and financial indicators, or sequences of measurements relative to ecosystems, for example.

13A.3.2 Unsupervised Analysis

13A.3.2.1 Association Rules

Association rules, also known as affinity groupings, are used to identify interesting and recurring associations between groups of records of a dataset. For example, it is possible to determine which products are purchased together in a single transaction, and how frequently. Companies in the retail industry resort to association rules to design the arrangement of products on shelves or in catalogs. Groupings by related elements are also used to promote cross-selling or to devise and promote combinations of products and services.

13A.3.2.2 Clustering

The term "cluster" refers to a homogeneous subgroup existing within a population. Clustering techniques are therefore aimed at segmenting a heterogeneous population into a given number of subgroups composed of observations that share similar characteristics; observations included in different clusters have distinctive features. Unlike classification, in clustering there are no predefined classes or reference examples indicating the target class, so the objects are grouped together based on their mutual homogeneity. By defining appropriate metrics and the induced notions of distance and similarity between pairs of observations, the purpose of clustering methods is the identification of homogeneous groups of records called clusters. With respect to the specific distance selected, the observations belonging to each cluster must be close to one another and far from those included in other clusters. Sometimes, the identification of clusters represents a preliminary stage in the data mining process, within exploratory data analysis. It may allow homogeneous data to be processed with the most appropriate rules and techniques and the size of the original dataset to be reduced, since the subsequent data mining activities can be developed autonomously on each cluster identified.
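As a minimal sketch of the clustering-based segmentation described above, and not an example from this book, the following code partitions synthetic behavioral data into homogeneous groups. It assumes the NumPy and scikit-learn libraries; the behavioral attributes are hypothetical.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical behavioral data: one row per customer
# (e.g., monthly spend, visit frequency, average basket size).
rng = np.random.default_rng(0)
behavior = rng.normal(size=(500, 3))

# Standardize so that no single attribute dominates the distance metric.
X = StandardScaler().fit_transform(behavior)

# Partition the customer base into four internally homogeneous groups.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centers (standardized units):")
print(kmeans.cluster_centers_)

The cluster centers can then be profiled against business attributes to give each natural grouping a meaningful segment description.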

13A.3.2.3 Description and Visualization

The purpose of a data mining process is sometimes to provide a simple and concise representation of the information stored in a large dataset. Although, in contrast to clustering and association rules, descriptive analysis does not pursue any particular grouping or partition of the records in the dataset, an effective and concise description of information is very helpful, since it may suggest possible explanations of hidden patterns in the data and lead to a better understanding of the phenomena to which the data refer. Notice that it is not always easy to obtain a meaningful visualization of the data. However, the effort of representation is justified by the remarkable conciseness of the information achieved through a well-designed chart.
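For illustration only (not from this book), a minimal descriptive analysis along these lines could combine summary statistics with a simple chart, assuming the pandas and matplotlib libraries; the dataset and column names are hypothetical.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset of transaction amounts per customer segment.
data = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "C"],
    "amount": [120.0, 90.5, 40.0, 55.2, 61.3, 300.0],
})

# Concise numerical description: summary statistics and per-segment means.
print(data["amount"].describe())
print(data.groupby("segment")["amount"].mean())

# Simple visualization of the distribution of amounts.
data["amount"].hist(bins=10)
plt.xlabel("Transaction amount")
plt.ylabel("Frequency")
plt.title("Distribution of transaction amounts")
plt.show()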

14 Usability

To withstand the disruptive digital storms of the future, enterprise architecture affects the digital transformation of usability aspects primarily through interactive computing (see Chapter 22). This chapter gives an introduction to the usability aspects of enterprise architecture. Usability aspects are also complementarily or orthogonally affected by other attributes, but this chapter focuses on factors directly related to the usability aspects of enterprise architecture.

In order for computer-based systems to be widely accepted and used effectively, they need to be well designed via a user-centered approach. Computer-based systems should be designed for the needs and capabilities of the people for whom they are intended. Ultimately, users should not even have to think about the complexity of how to use a computer. For that reason, computers and related devices have to be designed with an understanding that people with specific tasks in mind will want to use them in a way that is seamless with respect to their work. User-centered design has become an important concept in the design of interactive systems. It is primarily concerned with the design of sociotechnical systems that take into account not only their users, but also the use of technologies in users' everyday activities; it can be thought of as the design of spaces for human communication and interaction.

Human–computer interaction (HCI) is a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use. The main purpose of using HCI in design is to develop an efficient and effective user interface to suit the needs and requirements of the users. To achieve this, HCI specialists need to involve the users in their design, integrate different kinds of knowledge and expertise, and make the design process iterative. Designers need to know how to think in terms of future users' needs, values, and supportable tasks, and how to translate that knowledge, in the context of the latest or updated technology, into an executable system. Consequently, this field is now concerned mainly with understanding, designing for, and evaluating a wider range of aspects related to the user experience rather than merely the user interface per se; in other words, usability. HCI and usability are essential factors to consider when designing and developing an interaction interface that is more efficient and effective, and produces user satisfaction rather than frustration.

14.1 Introduction to Usability

Usability is the ability to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the envisaged range of environmental scenarios. Usability is a quality attribute that assesses how easy user interfaces are to experience and use; it refers to the quality of the
interactivity and interaction in terms of parameters such as the time taken to perform tasks, the number of errors made, the time to become a competent user, and methods for improving ease of use during the design process. Usability is a critical issue for websites because it improves competitive position, improves customer loyalty, and drives down costs. Therefore, if usability is highlighted in website design, it will keep the organization in an advantageous position.

14.2 Concept of Usability

Usability is influenced by a number of factors, such as user, user task, and system characteristics, which affect each other in complex ways. The system function is described as being the most important concept for usability. During the development process, usability must address:

1. Ease of learning: This refers to the effort required to understand and operate an unfamiliar system; it depends on the user's knowledge.
2. Ease of use: This refers to the effort that is required to operate a system once it has been understood and mastered by the user.
3. Task match: This refers to the extent to which the information and functions that a system provides match the needs of the user; in other words, whether the system provides the functions that are essential, as well as the information that the user needs, to accomplish the goals.

A user task is characterized by:

1. Frequency: This refers to the number of times any particular task is performed by a user. If users perform a task infrequently, then help and assistance should be available via the interface so that users know which step must be taken next to accomplish the task. On the other hand, if users perform a task frequently, then it will be easier for the user to remember the steps required to accomplish the task.
2. Openness: This refers to the extent to which a task is modifiable. This means that the information needs of the user are variable and the task must be structured to allow the user to acquire a wide range of requisite and relevant information. If the user's information needs have been completely identified, then the task need not be open and flexible, as the same information is required each time the task is performed.

The final set of independent variables concerns user characteristics, focusing on who is using the system, namely:

1. Knowledge: This refers to the user's level of knowledge about computers and the tasks required.
2. Motivation: This refers to the degree of motivation, depending on which more effort may be expended in overcoming problems and misunderstandings.

3. Discretion: This refers to the user's ability to choose not to use some part, or even the whole, of a system. In other words, high discretion means that there needs to be satisfaction and fulfillment while working with the new system, or else the user may abandon using the system altogether.

User reaction focuses on the negative and positive outcomes of adopting the new system. Positive outcomes will lead to the success of the system, while negative outcomes will lead to suspension and discontinuation of the system. In other words, the user accumulates a knowledge base of task–system connections as the system is used in a sequence of task episodes; the resulting strategy for use may represent a positive outcome, in which the user locates and uses appropriate system functions for every new task and progressively masters the system. The reverse scenario occurs when negative outcomes prevail and use of the system is discontinued.

14.3 Usability Principles

To promote usability, the various principles to be followed are, per Nielsen (1993):

• Learnability: The ease with which new users can begin effective interaction and achieve maximal performance
• Flexibility: The multiplicity of ways in which the user and system exchange information
• Robustness: The level of support provided to the user in determining successful achievement and assessment of goals
• Efficiency: Once the user has learned the system, the speed with which the user can perform tasks
• Memorability: How easily the user will remember the system functions after a period of time of not using it
• Errors: How many errors a user makes, how severe these errors are, and how easily the user recovers from them
• Satisfaction: How enjoyable and pleasant it is to work with the system

The purpose behind adopting these principles is to give more assistance and knowledge to system developers (and users) regarding the system design.

14.4 Usability Specifications

A usability specification defines the measure of success of a computer system or website and serves as an indicator of whether or not the development of the website is on the right track. A usability specification should be developed during the first stage of the development process and monitored at each iteration to determine whether the interface is indeed converging toward an improved and more usable design.
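For illustration only (not from this book), a usability specification of this kind could be captured as data and checked against measured values at each iteration; the metric names and target figures below are hypothetical.

from dataclasses import dataclass

@dataclass
class UsabilityTarget:
    metric: str              # e.g., "task completion time (s)"
    worst_acceptable: float  # threshold that must not be exceeded
    planned: float           # the level the design aims for

# Hypothetical specification for a checkout task.
spec = [
    UsabilityTarget("task completion time (s)", worst_acceptable=180, planned=90),
    UsabilityTarget("errors per task", worst_acceptable=3, planned=1),
]

# Hypothetical measurements from the latest usability test.
measured = {"task completion time (s)": 120, "errors per task": 2}

for target in spec:
    value = measured[target.metric]
    status = "on track" if value <= target.worst_acceptable else "off track"
    print(f"{target.metric}: measured {value}, planned {target.planned} -> {status}")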

Usability specifications can be evaluated in two ways:

1. Performance measures: These are directly observable by watching a user complete a task within a specific time. This includes monitoring the number of errors and the time needed to accomplish the task. Since these types of measures are quantifiable, they can be communicated through numbers, for example, by counting the number of minutes it takes a user to complete a task or the number of negative comments that occur.
2. Preference measures: These give an indication of a user's opinion about the interface, which is not directly quantifiable. Preference measures can be ascertained by using questionnaires or interviews.

As part of the system requirements, requirements concerning system usability are collected in the early stages of the design; following the top-down process, these requirements are gradually translated into design specifications, similar to other types of system requirements. The functionalities of user interaction, such as menus and controls, are determined through an iterative process, from concepts to components, by using the various levels of prototyping and evaluation, just as for the rest of the system requirements.

In addition to usability testing, other methods that have been used in usability evaluation are heuristic evaluation, cognitive walk-through, and competitive evaluation. The idea of "heuristics" comes from the fact that interface design principles are fairly broad and apply to many types of user interfaces. Implementing usability testing can be costly and time consuming; heuristic evaluation can be a faster and more economical way of obtaining initial results. Heuristic evaluation is done by having experts evaluate an interface and form an opinion about what is good and bad about it. After years of experience, heuristic evaluation has become a systematic inspection of a user interface design. Evaluation is usually performed by a small set of expert evaluators using recognized usability principles. Nielsen (1993) gave ten general heuristic evaluation criteria:

1. Visibility of system status: The system interface should always keep users informed about what is going on, providing a visible status for the system functions through an appropriate feedback format within a reasonable time frame.
2. Match between system and the real world: The system should speak the users' language, with terms, words, phrases, and concepts familiar to the user, matching users' mental models and expectations with system functions. It should follow real-world conventions, presenting information in a natural and logical order and format.
3. User control and freedom: When encountering problems and difficulties, users should be able to leave the current, undesired state easily via a clearly marked "exit," without having to go through an extended number of tedious steps; the system should support undoing and redoing of the most recent user actions and functions.
4. Consistency and standards: The style of interaction throughout the system interface levels should be consistent to minimize unnecessary learning and confusion between different levels of interaction.

5. Error prevention: A good usable interface is always designed in a way that prevents a problem from occurring in the first place; this includes eliminating error-prone conditions, or checking for those conditions and presenting users with options before they make an error. This feature is important for all levels of user groups, especially when the system complexity increases.
6. Recognition rather than recall: A good design should minimize the user's memory load by making objects and actions visible and easily understandable. The user should be presented with ample information to choose from, rather than having to remember lots of information and retrieve it from memory. A good analogy for this principle is multiple-choice questions versus essay questions in a test; a multiple-choice question relies on recognition, while an essay question relies primarily on recall.
7. Flexibility and efficiency of use: A good interface should provide flexibility for different users to suit their needs for efficiency. For example, while a novice user might need detailed tutorials to learn to use the system efficiently, an experienced user might want to skip those tutorials and get straight to the tasks, even using accelerators/shortcuts to speed up the tasks. A good system interface should have this flexibility to cater for both inexperienced and experienced users and allow users to tailor their actions.
8. Aesthetic and minimalist design: A good usability interface should be attractive. This requires that every piece of information in the design be relevant, minimizing clutter, maximizing visibility, and complying with the psychological principles of user visual comfort.
9. Help users recognize, diagnose, and recover from errors: Error messages are inevitable in every system. When an error occurs, the system should provide the error message in plain language (no codes), indicate precisely the nature of the problem, and constructively suggest a solution or directions for solving the problem.
10. Help and documentation: A good system should provide help and documentation. Such information should be easy to search and written from the users' perspective. This documentation should be available at all times and not be too large in size.

14.5 Interactions and the Quality of Experience

Interaction can only take place if certain interactive acts are performed by at least two actors communicating with each other. An interaction or interactive pattern is a sequence of actions, references, and reactions wherein each reference or reaction has a certain, ex-ante intended and ex-post recognizable interrelation with preceding event(s) in terms of timing and content. The most common feature constituting the exchange of interactive acts, and thereby interactivity, is the recurring characteristic of the request–response pattern (Figure 14.1).

FIGURE 14.1 An interactivity constituting a request–response pattern.

The surface structure of an interactive conversation between two partners A and B may formally be described as a sequence of four states. These states are a result of each speaker alternating between active and passive periods that can be numbered according to their temporal order. Together with the delay added by the transmission channel on each speaker's side, this leads to a sequential chain made up of four states:

• "A" if speaker A is active (only)
• "B" if speaker B is active (only)
• "Double talk" if both speakers are active
• "Mutual silence" if neither of them is active

For defining the interactivity metric, it must not make a difference whether the metric is calculated based on the interaction pattern as seen by A or by B (symmetry criterion). Figure 14.2 shows the influence interactivity has on the quality of experience via a taxonomy of influence factors, interaction performance aspects, and quality features.

FIGURE 14.2 Taxonomy of influence factors, interaction performance aspects, and quality features.
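As a concrete illustration of the four-state chain described above, and not an example from this book, the following sketch classifies a conversation into the four states from two per-speaker activity signals sampled at regular intervals; the activity samples are hypothetical.

def conversation_states(a_active, b_active):
    """Map two parallel speaker-activity signals to the four-state chain."""
    states = []
    for a, b in zip(a_active, b_active):
        if a and b:
            states.append("double talk")
        elif a:
            states.append("A")
        elif b:
            states.append("B")
        else:
            states.append("mutual silence")
    return states

# Hypothetical activity samples (True = speaker is talking in that interval).
a = [True, True, False, False, True, False]
b = [False, False, True, True, True, False]
print(conversation_states(a, b))
# ['A', 'A', 'B', 'B', 'double talk', 'mutual silence']

The symmetry criterion holds here: swapping the two signals merely relabels "A" and "B" without changing the double-talk or mutual-silence intervals.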

The process of interaction between two or more entities, and their perception of this process on several dimensions, is depicted in Figure 14.2 and described as follows:

1. Smoothness and interaction maintenance effort describe how fluent and effortless the users experience the conversational flow to be. Typically, an interaction establishes an inherent pace, keeping the maintenance effort of the interaction parties minimal. However, due to system impairments, the interaction pace can change, accordingly demanding additional user effort in order to adapt to the changed pace. For human-to-machine interaction, this can severely impact the flow experience or the experienced smoothness, whereas, for human-to-human interaction, the conversational rhythm can be impaired.
2. Pace is the users' perceived promptness of the interactional acts and the respective actions by the other entity.
3. Response ability denotes whether it is possible to issue a response following a prior message or a response from the system or other user. In human-to-human interactions, the ability to respond through interruptions can be severely obstructed by transmission delays, as interruptions may not arrive in time and are then unable to interrupt in the way originally intended. In browser-based applications, the modality type of the response can be a source of difficulty, but this is rather caused by the website content in this application type.
4. Naturalness is related to the inherent knowledge about how an interaction takes place in a nonmediated or ideal case.
5. Comprehension effort is the effort required to understand either the other interlocutor (in the case of human-to-human interaction) or the response from the machine. Comprehension can be distorted by, for example, double talk or nonrendered portions of a webpage that might be needed for navigation or information retrieval.

When interacting with or via a system, the initial situation is composed of:

1. A human (or humans) having perceptions, emotions, motivations, and behaviors that are built on the human's personality, experiences, and current state
2. A system providing a certain quality of service
3. A context (or contexts)

To illustrate the role of interaction from the viewpoint of a user, the process of interaction can be understood as a tool for him or her to infer the quality provided by the system of interest. In the case of human-to-human communication, interaction problems experienced by the user can be due either to the person at the other end or to the system. For some cases, the identification is obvious; for instance, if background noises are falsely amplified, it may be difficult to maintain a fluent conversation, but this can easily be identified as a technical problem, and the quality can be rated low accordingly.

In the human-to-machine context, the interaction quality directly provides information to the user on the quality of the system. The gathered information—for example, how comprehensible messages are, how easy it is to respond, or how fluently an interaction can be maintained—can directly be transferred into a judgment on the quality. Interaction measures can therefore be considered to have a direct link to QoE. However, for other cases,
the reason—that is, whether it is the other person or the system that is responsible for a low performance—is not clearly identifiable. When response times are rather long, this can, for example, be due either to a long transmission delay or to a slowly responding person at the other end. Similarly, inappropriate timing of utterances can be due either to the system sending out messages at an incorrect time or to someone being inattentive or impolite and not getting the timing right.

14.5.1 Factors Influencing Quality of Experience

An influence factor can be understood as any characteristic of a user, system, service, application, or context whose actual state or setting may have an influence on the QoE for the user. QoE can be subject to a range of complex and strongly interrelated factors, which fall into three categories: human, system, and context influence factors. With regard to human influence factors, we discuss variant and stable factors that may potentially bear an influence on QoE, either for low-level (bottom-up) or higher-level (top-down) cognitive processing. System influence factors are classified into four distinct categories, namely content-, media-, network-, and device-related influence factors. Finally, the broad category of possible context influence factors is decomposed into factors linked to the physical, temporal, social, economic, task, and technical/informational contexts (Table 14.1).

TABLE 14.1
Overview and Examples of Potential Influence Factors (IFs)

Human IFs (HIF)
  Low-level: physical, emotional, mental constitution — Visual/auditory acuity and sensitivity; gender, age; lower-order emotions; mood; attention level
  High-level: understanding, interpretation, evaluation — Sociocultural background; socioeconomic position; values; goals; motivation; affective states; previous experiences; prior knowledge; skills

System IFs (SIF)
  Content-related — Audio bandwidth, dynamic range; video motion and detail
  Media-related — Encoding, resolution, sampling rate, frame rate; synchronization
  Network-related — Bandwidth, delay, jitter, loss, error rate, throughput; transmission protocol
  Device-related — Display resolution, colors, brightness; audio channel count

Context IFs (CIF)
  Physical context — Location and space; environmental attributes; motion
  Temporal context — Time, duration and frequency of use
  Social context — Interpersonal relations
  Economic context — Costs, subscription type, brand
  Task context — Nature of experience; task type, interruptions, parallelism
  Technical/informational context — Compatibility, interoperability; additional informational artifacts

14.5.1.1 Human Influence Factors

A human influence factor is any variant or invariant property or characteristic of a human user. The characteristic can describe the demographic and socioeconomic background, the physical and mental constitution, and/or the emotional state of the user. Human influence factors may bear an influence on a given experience and how it unfolds, as well as on its quality. They are highly complex because of their subjectivity and relation to internal states and processes. This makes them rather intangible and therefore much more difficult to grasp. In addition, human influence factors are strongly interrelated and may also strongly interplay with the other influence factors described in this chapter. As a result, the influence of human factors on QoE cannot be considered only at a generic level.

1. Low-level processing and human influence factors: At the level of early sensory—or so-called low-level—processing, properties related to the physical, emotional, and mental constitution of the user may play a major role. These characteristics can be dispositional (e.g., the user's visual and auditory acuity, gender, age) as well as variant and more dynamic (e.g., lower-order emotions, the user's mood, motivation, attention). At this level, characteristics that are closely related to human perception of external stimuli might bear the strongest influence on QoE.
2. Higher-level processing and human influence factors: Top-down—or so-called higher-level—cognitive processing relates to the understanding of stimuli and the associated interpretative and evaluative processes. It is based on knowledge, i.e., "any information that the perceiver brings to a situation." As a result, a wide range of additional human influence factors are important at this level. Some of them have an invariant or relatively stable nature; examples include the sociocultural and educational background, life stage, and socioeconomic position of a human user.

14.5.1.2 System Influence Factors

System influence factors refer to properties and characteristics that determine the technically produced quality of an application or service.

1. Content-related system influence factors refer to the content itself; its type is highly influential for the overall QoE of the system, as different content characteristics might require different system properties. For auditory information, the audio bandwidth and dynamic range are the two major system influence factors, and their requirements vary with the content itself (e.g., voice/spoken content versus musical content). When it comes to visual information, the amount of detail as well as the amount of motion in the scene is important. To a large extent, this has to do with human influence factors such as contrast sensitivity and visual masking, but also with the fact that current compression techniques are affected by these characteristics. Furthermore, it is also influenced by the content itself, as well as by higher-level processing. In three-dimensional image and video content, the amount of depth is an aspect that also influences the quality and especially the viewing comfort.
2. Media-related system influence factors refer to media configuration factors, such as encoding, resolution, sampling rate, frame rate, and media synchronization. They are interrelated with the content-related system influence factors. Media-related system influence factors can change during transmission due to variation in network-related system influence factors.

In most cases, the resources for distributing media are limited. There are both economic and hardware-related reasons for limiting the size of media. This is usually accomplished by applying compression, which can be either lossless or lossy. Lossy compression gives higher compression rates at the cost of quality. However, the influence depends on the principle that the lossy coding is built upon. For instance, for images and video, block-based compression techniques such as JPEG or MPEG-4 AVC (H.264) are the most common. For stronger compression, these will usually give visible blocking (rectangular-shaped) distortions and blurring, whereas wavelet-based techniques, as used in JPEG 2000, mostly give blurring distortions. For audio, the coding also depends on the content type and service/application scenario.

3. Network-related system influence factors consider data transmission over a network. The main network characteristics are bandwidth; delay; jitter; loss, error rates, and their distributions; and throughput. Network-related system influence factors may change over time or as a user changes location, and they are tightly related to the network quality of service. Network-related system influence factors are impacted by errors occurring during transmission over a network. Especially in the case of delay, the impact of these factors also depends on whether the service is interactive or more passively consumed, as, for instance, in the case of telephony versus radio broadcast, or video conferencing versus streaming video. In an interactive (e.g., conversational) service, delay may have a negative impact on QoE. Most often, video is deliberately delayed by using strategically placed buffers in order to be more resilient toward network capacity variations and errors.

The popularity of over-the-top streaming video, for example YouTube or Netflix, has increased very rapidly. The distribution method is Transmission Control Protocol- and Hypertext Transfer Protocol-based and, here, the influence of packet loss and bandwidth limitations is quite different. Network problems will result in freezes without loss of content in the video. Freezing also has a bad influence on the experienced quality, but it can be avoided by using adaptive or scalable codecs in conjunction with over-the-top video services.

4. Device-related system influence factors refer to the end systems or devices of the communication path. The visual interface to the user is the display. Its capacity will have a tremendous impact on the end-user experience, but the content and signal quality will interact with it. For instance, if a high-quality, high-resolution (here meaning in terms of number of pixels) image is shown on a low-resolution display with few colors, most of the original intent of the image might be lost. Conversely, if a low-resolution image is shown on a large, high-resolution display, most likely a very blocky and blurry image will be displayed, and the end result will be highly dependent on the final image scaling procedure.

The main progress in the area of input devices is the increased usage of touchscreens, which address the human tactile modality. The touchscreen as an input device can present a limitation if the user needs to input a larger amount of
information. Existing state-of-the-art mobile devices with multi-core processors and advanced graphics processing units can deliver a substantial amount of computational power, but at the cost of autonomy. Mobility, which is a context influence factor, thus strongly influences various characteristics of devices.

14.5.1.3 Context Influence Factors

Context influence factors are factors that embrace any situational property to describe the user's environment.

1. Physical context describes the characteristics of location and space, including movements within and transitions between locations; spatial locations (e.g., outdoor or indoor, or in a personal, professional, or social place); functional places and spaces; sensed environmental attributes (e.g., a peaceful place versus a noisy place, lights and temperature); movements and mobility (e.g., sitting, standing, walking, or jogging); and artifacts.
2. Temporal context is related to temporal aspects of a given experience, for example, time of day (e.g., morning, afternoon, or evening), week, month, season (e.g., spring, summer, fall, or winter), and year; duration and frequency of use (of the service/system); before/during/after the experience; actions in relation to time; and synchronism.
3. Social context is defined by the interpersonal relations existing during the experience. Hence, it is important to consider whether the application/system user is alone or with other persons, as well as how different persons are involved in the experience, including interpersonal actions. Moreover, the cultural, educational, and professional levels (e.g., hierarchical dependencies, internal versus external) and entertainment aspects (dependent on random or rhythmic use) also need to be considered. The social context becomes very important at the recommendation level. Content recommendation based on the gathered context information allows a better user experience to be guaranteed. Collaborative recommendations, wherein the user is recommended items that are consumed by other users with similar preferences, can also be made possible.
4. Economic context refers to costs, subscription type, or brand of the application/system. Network cost information (e.g., relative distances between the peers), jointly with some physical and social factors, can be used to enable network optimization strategies for media delivery.
5. Task context is determined by the nature of the experience. Depending on this, three situations may arise: multitasking, interruptions, or a particular task type. An additional task does not have an influence on the perceived quality, independently of the difficulty (hard or easy) of that task, as stalling affected the perceived quality to a similar extent under both task conditions.
6. Technical and informational context describes the relationship between the system of interest and other relevant systems and services, including: devices (e.g., existing interconnectivity of devices over Bluetooth or near-field communication);
applications (e.g., availability of an application instead of the currently used browser-based solution of a service); networks (e.g., availability of networks other than the one currently used); or additional informational artifacts (e.g., additional use of pen and paper for better assimilation of information from the service used). Characteristics like interoperability, informational artifacts, and access, as well as mixed reality, also need to be considered.

14.5.2 Features of Quality of Experience

This subsection describes how the factors of the user, system, and context of use, which influence QoE, are perceived by the user. For this purpose, the notion of a feature—that is, a perceivable, recognized, and nameable characteristic of an experience—is deployed. Such a feature can be considered as a dimension of the perceptual space, and the nature and dimensionality of this space can be analyzed. The characteristics of the individual's experience are analyzed by decomposing it into so-called quality features. Quality can be understood as the outcome of an individual's comparison and judgment process, requiring perception, reflection, and description processes to take place. Knowledge about these characteristics is, however, necessary when investigating why a specific experience is suboptimal, i.e., not judged with the highest possible rating, and what can be done in order to improve the situation (quality diagnosis).

A perceptual event is triggered by a physical event, that is, the physical signal reaching the individual's sensory organs in a specific situation. The physical event is first processed by the low-level sensation and perception processes, resulting in a perceptual character of the event. This perceptual character is then reflected upon by the individual during the quality judgment process, resulting in the perceptual event, which is characterized by its decomposing features. Thus, a feature can be seen as a dimension of a multidimensional perceptual event, in a multidimensional perceptual space. As a perceptual event is always situation- and context-dependent, a feature also depends on the situation and the context it has been extracted in. Thus, an empirical analysis of features commonly reveals only those features that are perceivable and nameable in that respective context (the others cannot be discerned).

The concept of a feature space can be helpful for explaining the relation between features and the quality of a perceptual event. The (integral) quality is a scalar value that, in general, cannot directly be represented in the feature space. However, functional relations can be established by mapping the feature values of perceptual events onto corresponding quality scores. Depending on the form of this mapping, the nature of the features with respect to quality can be analyzed. The perceptual event that a physical stimulus provokes can be conceived as being located in a multidimensional feature space. In this feature space, each of the underlying axes corresponds to one feature of the perceptual event. The perceptual event can mathematically be described by a position vector, whose coordinates correspond to specific feature values of the perceptual event. If the underlying axes of this space, and thus the features, are orthogonal, then the features can also be referred to as perceptual dimensions. The number of dimensions—that is, the nameable perceptual features that are orthogonal to each other—corresponds to the dimensionality of the feature space.
Once the perceptual features have been identified, they can be mapped to integral quality judgments obtained for the same physical events, which hopefully trigger the same perceptual events. This way, it becomes possible to identify the weighting or importance of each perceptual feature for the integral quality. This process is called external preference mapping, and the resulting weights can be used for modeling overall quality on the basis of features.
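As a minimal sketch of such a mapping, and not an example from this book, the weights can be estimated by a least-squares fit of overall quality ratings to per-feature ratings; the feature names and ratings below are hypothetical, and NumPy is assumed to be available.

import numpy as np

# Hypothetical per-stimulus ratings: columns are perceptual features
# (e.g., coloration, noisiness, continuity), rows are test stimuli.
features = np.array([
    [4.0, 3.5, 4.5],
    [2.0, 4.0, 3.0],
    [4.5, 2.5, 4.0],
    [3.0, 3.0, 3.5],
    [1.5, 2.0, 2.5],
])
overall_quality = np.array([4.2, 2.8, 3.9, 3.2, 1.9])  # integral quality judgments

# Least-squares fit: quality ~ w0 + w1*f1 + w2*f2 + w3*f3.
design = np.column_stack([np.ones(len(features)), features])
weights, *_ = np.linalg.lstsq(design, overall_quality, rcond=None)

print("Intercept and feature weights:", np.round(weights, 3))
print("Predicted quality:", np.round(design @ weights, 2))

The relative sizes of the fitted weights indicate which perceptual features dominate the integral quality judgment, which is exactly the information external preference mapping is meant to provide.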

14.5.2.1 Feature Levels

When referring to a service that is potentially interactive, wherein the individual usage situation spans a delimited period of time and/or which is being used regularly over a longer time, there are additional features that may play a role for the global quality, utility, and acceptability of the service. Features are grouped on five levels:

1. Level of service, which is related to the usage of the service beyond a particular instance. This category includes features like appeal, usefulness, utility, and acceptability.
2. Level of the usage instance of the service, which also includes the physical and social usage situations. Examples of such features are the learnability and intuitiveness of the service, its effectiveness and efficiency for reaching a particular goal during the current usage instance, and the ease of using the service, as well as nonfunctional features such as the "personality" of the interaction partner (human or machine) or its aesthetics.
3. Level of interaction, or the level that includes the constant exchange of actions and reactions, be it between humans (human-to-human interaction) or between humans and systems (human-to-machine interaction). It includes features like responsiveness, naturalness of interaction, communication efficiency, and conversation effectiveness.
4. Level of action, or the level that relates to the human perception of his/her own actions. It may include involvement and immersion and the perception of space (as far as this is supported by the user perceiving his/her own motions in the virtual space), as well as the perception of one's own motions in the case of video services. In the case of speech services, this may include talking-only features such as the perception of sidetone, echo, or double-talk degradations.
5. Level of direct perception, which summarizes all quality features that are related to the perceptual event and which are created immediately and spontaneously during the experience. These features may be related to individual sensory channels, such as visual features, auditory features, or tactile features, but may also be linked to perception via multiple senses in parallel (e.g., audio–visual features). For the visual channel, examples include sharpness, darkness, brightness, contrast, flicker, distortion, and color perception. For the auditory channel, example features of audio-streaming services are localization and timbre, while example features of speech-transmission services include coloration, noisiness, loudness, or continuity. For services that address multiple sensory channels simultaneously, relevant features are, for example, balance and synchronism.

14.6 Usability of Interactive Systems

Usability is becoming an increasingly important issue in the development and evaluation of spoken language dialogue systems (SLDSs). Many companies have been spending a lot of money to learn exactly which features make SLDSs attractive to users and how to evaluate whether these systems have those features.

In order to measure the usability of interactive systems, performance indicators (metrics) must be appointed according to the usability requirements. The most used usability metric in voice user interface (VUI) applications is the word error rate (WER), which is related to accuracy. Usability metrics are divided into:

1. Objective metrics: Objective indicators are related to the effectiveness of the system. The objective metrics are: task completion time; dialogue completion time; mean user time; system response times; system turns; user-initiated turns; total turns; word error rate (WER); response latency; response latency variance; exact scenario completion; the number of "help" requests and barge-ins; the number of completed tasks and sub-tasks; the mean length of utterances; time and turns used for confirmations; degree and usage of different modalities; and time and number of turns used for error corrections.
2. Subjective metrics: Subjective indicators collect the user's opinion about the system. The subjective metrics are user satisfaction, cognitive load, user preferences, task ease, user expertise, expected behavior, and social acceptance.

Commercially reported recognition rates are generally above 95%. Several factors contribute to obtaining such rates:

• Vocabulary affects speech recognition through its size and domain coverage. Large vocabularies with good domain coverage are interesting because they make possible the recognition of more words. On the other hand, small vocabularies increase the possibility of correct recognition. Small vocabularies are, however, mostly relevant for voice navigation.
• Speakers influence speech recognition rates by the clarity and consistency of pronunciation. Speaker-dependent systems have higher recognition rates than speaker-independent systems but require some training sessions and may be more sensitive to background noise, the microphone, the voice (e.g., altered due to a cold), and atypical speakers, including non-natives, children, and the elderly.
• Noise can affect speech recognition in two ways: (a) it can cause alterations of the speech signal that make it more difficult to distinguish the spoken words; (b) in noisy environments, people change their voice in an attempt to counter the distortion of the speech signal.
• All speech recognition systems are based on principles of statistical pattern matching. In spite of these similarities, systems can differ in their parameters of the speech signal, the acoustic model of each phoneme, and the language model used in predicting the words that are most likely to follow the preceding words. Thus, different systems can differ in recognition errors, even when they have similar recognition rates.

This set of issues must be observed by developers when designing VUIs for ubiquitous applications (see Chapter 14 Appendix).
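For illustration, and not as an excerpt from this book, WER is commonly computed as a word-level edit distance between a reference transcript and the recognizer output, normalized by the number of reference words; the example sentences below are hypothetical.

def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("show my account balance", "show me account balance"))  # 0.25

A single substituted word out of four reference words gives a WER of 0.25, i.e., 75% word accuracy.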


14.7 Summary

This chapter proposed usability as an attribute of enterprise architecture. It discussed the characteristics of usability that may be relevant while considering the digital transformation of usability aspects, primarily through interactive computing (see Chapter 22). HCI and usability are essential factors to consider when designing and developing an interaction interface that is more efficient and effective and produces user satisfaction rather than frustration. Human-computer interaction (HCI) is a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use. The main purpose of using HCI in design is to develop an efficient and effective user interface that suits the needs and requirements of the users. HCI is now concerned mainly with understanding, designing for, and evaluating a wider range of aspects related to the user experience rather than merely the user interface per se; in other words, usability. Usability is the ability to be used easily and effectively by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the envisaged range of environmental scenarios. Usability is a quality attribute that assesses how easy user interfaces are to experience and use; it refers to the quality of the interactivity and interaction in terms of parameters such as the time taken to perform tasks, the number of errors made, the time to become a competent user, and methods for improving ease of use during the design process. The resulting quality of experience (QoE) for interaction is dependent on various factors like effort, smoothness, pace, response ability, naturalness, attention, comprehension, and so on. This chapter’s appendix describes aspects related to interactive interfaces: user-centered design has become an important concept in the design of interactive systems. It is primarily concerned with the design of sociotechnical systems that take into account not only their users, but also the use of technologies in users’ everyday activities. It can be thought of as the design of spaces for human communication and interaction.

Appendix 14A: Interactive Interfaces

As stated in the beginning of this chapter, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation of the usability aspects primarily through interactive computing (see Chapter 22)—but interactive interfaces are an essential prerequisite to enable this transformation. In order for computer-based systems to be widely accepted and used effectively, they need to be well designed via a user-centered approach. Computer-based systems should be designed for the needs and capabilities of the people for whom they are intended. Ultimately, users should not even have to think about the complexity of how to use a computer. For that reason, computers and related devices have to be designed with an understanding that people with specific tasks in mind will want to use them in a way that


is seamless with respect to their work. User-centered design has become an important concept in the design of interactive systems. It is primarily concerned with the design of sociotechnical systems that take into account not only their users, but also the use of technologies in users’ everyday activities; it can be thought of as the design of spaces for human communication and interaction.

14A.1 Communication Spectrum

Communication between systems occurs across the electromagnetic spectrum. This appendix will focus on interaction through the audible and visible parts of the spectrum, namely voice, eye gaze, and gesture interfaces.

14A.2 Voice Interfaces

The first era of Voice User Interfaces (VUIs) was that of interactive voice response (IVR) systems, which were capable of understanding human speech over the telephone in order to carry out tasks. Early users of Charles Schwab’s speech recognition trading service were happy to call in and get quotes over and over again using the automated system, whereas prior to IVR systems they limited their requests so as not to appear bothersome to the operators fielding their calls. In the second era of VUIs, mobile apps like Siri and Google Now, which combine visual and auditory information, and voice-only devices such as the Amazon Echo and Google Home became mainstream. Google reports that 20% of its searches are now done via voice. Conversational interfaces enable back-and-forth interactions, that is, a verbal exchange of information. The presentation and approach in this subsection is adopted from Salvador et al. (2010).

Voice has some important advantages:

• Speed: A recent Stanford study showed that speaking (dictating) text messages was faster than typing, even for expert texters.
• Hands-free: Some cases, such as driving or cooking, or even when you’re across the room from your device, make speaking rather than typing or tapping much more practical (and safer).
• Intuitiveness: Everyone knows how to talk. Hand a new interface to someone and have it ask that person a question, and even users who are less familiar with the technology can reply naturally.

Voice communication has some distinctive characteristics:

• Linearity: Vocal space is basically linear, which ultimately reduces control over navigation. A graphical user interface (GUI) is a visual language with interactive tools. While a GUI is nonlinear, voice is linear, meaning that users navigate over information only according to a given order. As a matter of fact, text is a better medium than voice as far as navigation is concerned, because texts or books are typical nonlinear media.
• Locality: Voice interaction does not properly offer global perspectives. Since the provision of global perspectives helps users understand the information, its context, tasks, activities, and communication more properly, the lack of global perspectives degrades users’ tasks and their awareness of various contexts. For example,


scanning a newspaper for about 10 minutes enables capturing an overview of a day’s news. Similarly, home pages on the web enable grasping of the relevant global perspectives. While a quick overview is possible in a GUI and other visual spaces, it is difficult to achieve in voice spaces. The problem in voice interaction is locality, which means that information is provided only locally, one piece at a time.
• Privacy intrusion: Voice is a source of privacy intrusion unless the speaker is alone in the space. For example, if one listens to other voices without a headset or speaks loudly, it clearly interferes with others’ activities, which in fact intrudes on their privacy in workplaces with many people.

Unlike IVR systems, with mobile devices there is an opportunity to have a visual component. This can be a big advantage in many ways, from communicating information to the user, to confirming it, even to helping the user know when it is their turn to speak. Allowing users to interact both via voice and by using a screen is an example of a multimodal interface.

The differences between graphical user interfaces (GUIs) and speech or voice user interfaces (VUIs) originate mostly from the fundamental difference between spoken and visual communication, whether human-to-human or human-to-computer. These basic differences affect the user interface design of speech versus graphical interfaces, impact the implementation of applications with speech or graphical interfaces, and affect the design of an architecture intended to effectively support both of these activities. Evaluating a VUI is different from evaluating a GUI, mainly because of:

1. Visibility: Speech is invisible, which makes it challenging to communicate the functional boundaries of an application to the user.
2. Transience: Speech input and output are transient; once you hear it or say it, it is gone.
3. Bandwidth asymmetry: Speech input is typically much faster than typed input, whereas speech output can be much slower than reading graphical output, particularly in circumstances that permit visual scanning.
4. Temporality: Speech input is neither instantaneous nor discrete, since an utterance may take many seconds to be spoken and consists of continuous data that is transformed into a word sequence by the speech recognizer. Although the final speech recognition result is effectively an instantaneous event, it may be delayed by a noticeable time from when the user stops speaking.
5. Concurrency: Speech-only communication tends to be both single-channel and serial.

A subset of requirements based on usability principles for VUIs:

1. Modality appropriateness: Speech-only interaction is not appropriate for all tasks and applications (e.g., saying a PIN code out loud to a bank teller machine on the street). So, developers should attempt to make sure that spoken input and output, possibly combined with other input/output modalities, is an appropriate modality choice for the planned application. There are an increasing number of systems that combine spoken input/output with other modalities to support ubiquitous applications.
2. Attention: Users may be handling other physical or mental tasks in parallel to interacting with VUI devices, and they may be using those VUI devices in a variety of environments, with a variety of different people nearby.


3. Adequacy of dialogue initiative: To support natural interaction, a system needs a reasonable choice of dialogue initiative, an appropriate dialogue structure, complete task and domain coverage, and reasoning capabilities. This is dependent on how much the user knows about the system. System-directed dialogue can work well for tasks in which the system simply requires a series of specific pieces of information from the user, especially if the user is new to the system. On the other hand, to satisfy experienced users, the system must be able to cope with the larger packages of input information that are natural to these users.
4. Input recognition adequacy: From the user’s point of view, good speech recognition means that the system rarely gets the user’s spoken input wrong or fails to recognize what the user has just said. However, there are many other factors that interfere with speech recognition, such as a noisy or quiet environment; a variety of dialects and accents; gender; age; voice quality; a low or a loud voice; etc.
5. Naturalness of user’s speech: Speaking to an SLDS should feel as easy and natural as possible. The vocabulary and grammar accepted by the system must accord with the vocabulary and grammar expected from the user. Depending on factors like the task and the user’s experience, there may be considerable variation in what could be the “natural” input language.
6. Output voice quality: From the user’s point of view, good output voice quality means that the system speech is clear and intelligible, does not demand an extra listening effort, is not particularly noise sensitive or distorted by clicks and other extraneous sounds, has natural intonation and prosody, uses an appropriate speaking rate, and is pleasant to listen to. There are three main types of output speech:
• Recordings of entire system utterances, when the information is not dynamic
• Concatenation of recorded words and phrases
• Text-to-speech (TTS), when a component of the system synthesizes the voice in real time
The first type is generally the most pleasant to listen to.
7. Output phrasing adequacy: The contents of the system’s output should be correct, relevant, and sufficiently informative without being over-informative. The form of system expressions should be clear and unambiguous, and the language terminology should be consistent and familiar to the user.
8. Feedback adequacy: Adequate feedback means that the user feels in control during interaction. The user must feel confident that the system has understood the information input in the way it was intended, and the user must be told which actions the system has taken and what the system is currently doing. There are three levels of feedback:
• Hardware level, meaning that the system has understood the speaker’s input.
• Sequence level, meaning that the system has understood the action required by the speaker.
• Functional level, indicating that the system is working on the problem (messages like “please wait a moment”).
9. Error handling adequacy: VUIs should be able to handle errors by using mixed-initiative dialogue, telling the user what was understood and asking for confirmation or correction, or transferring the call to a human attendant (in telephony systems).
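The feedback and error-handling requirements above can be illustrated with a minimal, hypothetical sketch; the function names, prompts, and confidence threshold below are assumptions for illustration and do not come from any particular SLDS platform. A low-confidence recognition result triggers an explicit confirmation turn, and repeated failures escalate to a human attendant, as suggested in item 9.

# Hypothetical sketch of feedback and error handling for one voice dialogue turn.
# The recognize/speak/listen_yes_no/transfer_to_agent callables stand in for
# whatever primitives the actual voice platform provides.
CONFIRM_THRESHOLD = 0.6   # assumed confidence below which we confirm explicitly
MAX_RETRIES = 2           # after this many failed attempts, escalate to a human

def collect_city(recognize, speak, listen_yes_no, transfer_to_agent):
    for attempt in range(MAX_RETRIES + 1):
        speak("Please say the name of the city you are travelling to.")
        text, confidence = recognize()               # assumed to return (text, confidence)
        if confidence >= CONFIRM_THRESHOLD:
            return text                              # input accepted as understood
        # Feedback adequacy: say what was understood and ask for confirmation.
        speak("I think you said " + text + ". Is that correct?")
        if listen_yes_no():
            return text
        speak("Sorry, let's try that again.")
    transfer_to_agent()                              # error handling of last resort
    return None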


14A.3 Gaze Interfaces

The eyes play an important role both in perception and communication. Technical interfaces that make use of their versatility can bring significant improvements to those who are unable to speak or to handle selection tasks otherwise, such as with their hands, feet, noses, or tools held in the mouth. Using the eyes to enter text into a computer system, which is called gaze-typing, is the most prominent gaze-based assistive technology. The typical gaze-typing system presents an interactive keyboard to the user and tracks the point of regard of the user on this virtual interface. By changing the point of regard to a specific button and by some technique to trigger a selection, the user can, step by step, enter text. The key ideas behind this are discussed in more detail later. The approach in this subsection is adopted from Pfeiffer (2018).

While aiming at a specific target is the example par excellence of an appropriate gaze-based interaction, the problem of providing a robust but also swift technique to trigger a selection is common to many gaze-based applications. One idea is dwell time, explained below. Another idea, instead of dwelling, is to use eye blinks as triggers; however, users have to increase the time they close their eyes when triggering a selection so that the corresponding algorithms can differentiate between involuntary blinks and intentional blinks.

Our eyes are active sensory organs. They are sensors that capture light cast from the objects in our environment and project it internally onto a light-sensitive surface, the retina (Figure 14A.1). However, the resolution of the retina is not equally distributed: there is a certain circular area of very high acuity called the fovea, and acuity diminishes the further away from the fovea the incoming rays hit the surface. We constantly orient our eyes towards the area in our field of view we want to inspect, and we do it in such a way that the interesting parts fall onto the fovea. The fast eye movements are called saccades (from the French saccadé, meaning jerky). The moments of rest, when the eyes are aligned with our target of visual attention, are called fixations. When we visually inspect a scene, we produce a sequence of fixations and saccades, until we finally have satisfied our curiosity. If we connect the areas that have been the target of the fixations by lines representing the saccades, we can create a scanpath depicting the temporal sequence of our visual exploration.

FIGURE 14A.1 The human eye.
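The dwell-time trigger mentioned above, and quantified later in this subsection at well above 250 ms, can be sketched as follows. This is a minimal illustration under stated assumptions: gaze samples arrive as timestamped hit-test results (the on-screen object currently under the point of regard, or None), and the threshold value is an assumption chosen to stay clear of ordinary fixation durations.

# Minimal dwell-time trigger: select a target once gaze has rested on it
# continuously for longer than the configured threshold.
DWELL_THRESHOLD_S = 0.6   # assumed value, well above typical ~250 ms fixations

class DwellSelector:
    def __init__(self, threshold: float = DWELL_THRESHOLD_S):
        self.threshold = threshold
        self.current_target = None
        self.dwell_start = None

    def update(self, target, timestamp: float):
        """Feed one gaze sample; returns the target when a selection is triggered."""
        if target is None or target != self.current_target:
            self.current_target = target      # gaze moved: restart the dwell timer
            self.dwell_start = timestamp
            return None
        if timestamp - self.dwell_start >= self.threshold:
            self.dwell_start = timestamp      # avoid an immediate re-trigger
            return target                     # dwelled long enough: select it
        return None

Gaze-typing systems typically add visual feedback, such as a shrinking ring on the fixated key, so that the user can see the dwell progressing and can abort it by looking away.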


Usability research uses scanpaths to depict the time course of visual attention over a certain stimulus. When analyzing complex user interfaces, the scanpaths can tell the expert how the users perceive the interface and how the interface guides the visual attention of the users. For example, if the scanpath shows that the user has switched his gaze back and forth between two different buttons, this can be taken as a hint that the user was having difficulty deciding which action to take next and what the alternative actions were. The duration of a fixation thus helps us to differentiate between accidental gazes, gazes during visual search, and, e.g., intentional gazes during communication. The duration a fixation rests on a certain object is also called dwell time. During the localization and figurative processing of the visual scene, fixation durations are typically shorter than 250 ms; thus it is reasonable to use dwell times well above 250 ms to trigger a selection.

The systems most commonly available today for measuring the point of regard are video-based corneal reflection eye trackers. The principles of this technology were developed over 100 years ago. This computer vision-based approach detects distinguishable features of the human eye, such as the pupil and corneal reflections of a stimulating infra-red light (also called the first Purkinje image). This infra-red light is part of the eye tracking system and serves as an external reference. By combining both features, the position of the pupil and the position of the corneal reflection, the eye tracking system can differentiate between movements of the head and of the eye. An advantage of this video-based approach is its use of inexpensive hardware—in principle such a system can be created from off-the-shelf electronic parts.

14A.4 Gesture Interfaces

This subsection provides a brief on gestures and the Kinect sensor, which has recently emerged as important machinery for capturing human gestures. The presentation and approach in this subsection is adopted from Konar and Saha (2018).

The word gesture refers to an orientation of the whole or part of the human physique to convey non-verbal information, representing emotions, victory in games/war, traffic control/signaling by police, and many other messages. Although gestures and postures are synonymous, gestures carry more information than postures, as they include facial expression along with body language. Gestures include both static and dynamic descriptions of human physiques. A static gesture, as evident from its name, is a fixed orientation of the components/fragments of our external organs that together conveys a non-verbal message to others. Examples of static gestures include a victory sign (represented by a V formed by the forefinger and the middle finger of our palm), finger-pointing to an object/person for referencing, and putting palms together to indicate gratitude to God or respectable elderly people. Dynamic gestures include motion of one’s body parts, as used in signaling by police or by referees in certain outdoor tournaments. Besides static and dynamic gestures, we have mixed gestures, which are partly static and partly dynamic. Dances, for instance, include both static and dynamic gestures to represent complex emotions, themes, and culture. Figure 14A.2 shows a semantic classification of gestures.

Intrinsic gestures refer to latent subjective intent; they are meant for oneself and not for communication to others as an explanation of any situation and/or theme.
Examples of intrinsic gestures include a subjective look while pondering over the solution of a complex problem/situation, and anxious thought, represented by finger biting and shying. Extrinsic gestures are meant for other people, to communicate certain messages.


FIGURE 14A.2 Semantic classification of gestures.

We classify extrinsic gestures into two major classes: emotional gestures and non-emotional gestures. Emotional gestures again could be of two types: gestures conveying positive emotions and gestures conveying negative emotions. Non-emotional gestures are again classified into several types, including instructing gestures, signaling gestures, and responsive gestures.

Gesture recognition refers to understanding and classifying gestures from human gestural data. Recognition of gestures is usually a pattern classification problem, where the individual, distinct gestures are the classes and gestural instances containing gesture data are the instances. Both classical and machine learning approaches to gesture recognition are available. Among the traditional approaches, state machines, hidden Markov models (HMMs), and particle filters deserve special mention. The machine learning approach, on the other hand, involves designing a mapping function from gestural data to gesture classes. Several machine learning techniques have been employed to construct the functional mapping from gestural instances to gesture classes. Since the classes are known a priori, gesture recognition is performed by supervised learning, where a supervisor fixes the training data set, describing gestural instances and the corresponding class information for each gesture instance. After the mapping function is built from the gestural instances and class information, we can recognize an unknown gestural instance using the already available mapping function (a minimal sketch is given below).

14A.4.1 Kinect Sensor

Microsoft developed the Kinect sensor, which captures gestural information compositely from camera information and infra-radiometric sensors. The infra-radiometric sensors include:

• Infrared source: The source is realized using a laser/light emitting diode (LED).
• Infrared detectors: The detector is realized with a PIN diode.
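The supervised mapping from gestural instances to gesture classes can be sketched minimally as follows; this is an illustration only, not the approach of Konar and Saha (2018). It assumes each gesture instance has already been reduced to a fixed-length feature vector (for example, flattened joint coordinates of the kind the Kinect captures, as described in the remainder of this subsection) and assigns an unknown instance the label of its nearest labelled training instance.

# Nearest-neighbour sketch of supervised gesture recognition: the "mapping
# function" is implicit in the labelled training instances themselves.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_gesture(unknown, training_set):
    """training_set: list of (feature_vector, label) pairs; unknown: feature vector."""
    best_label, best_distance = None, float("inf")
    for features, label in training_set:
        distance = euclidean(unknown, features)
        if distance < best_distance:
            best_label, best_distance = label, distance
    return best_label

# Toy example with 2-D feature vectors standing in for joint-derived features.
training = [((0.1, 0.9), "victory_sign"), ((0.8, 0.2), "pointing")]
print(classify_gesture((0.75, 0.25), training))   # prints "pointing"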


Such source-detector pairs employ the time-of-flight principle, which determines the time required for the infrared rays to traverse the path twice, once during the forward journey and once during the return of the signal after reflection from the object surface. This time interval is used to measure the approximate distance of the object from the infrared source.

The Kinect sensor includes both an infrared (IR) camera and an RGB camera. It also includes a microphone array to capture the voice of the experimental subject. The IR camera and the RGB camera are used jointly to capture twenty body junction coordinates of the subjects, including their depth. To recognize a gesture, we pre-process the gestural data, then extract the necessary features, and finally classify the gesture from the noise-free/noise-reduced data. The Kinect sensor also has the ability to detect facial expressions and voice gestures. The origin of the reference frame is taken at the RGB camera. The direction parallel to the length of the Kinect is the x-direction, the vertical direction is the y-direction, and the z-direction is the depth measure. Kinect records at a sampling rate of 30 frames per second (the sampling rate can be altered).

Gesture recognition can be done using two different modalities: two-dimensional (2-D) and three-dimensional (3-D) approaches. The first technique relies mostly on image processing and computer vision methods to obtain features from the 2-D shapes of the human body parts. In contrast, 3-D human body recognition schemes deal with kinematic analysis, i.e., calculating the motions of the concerned body parts in either intrusive or non-intrusive ways. Intrusive techniques place markers on the human body for tracing its movement, whereas non-intrusive methods rely on sensory data. In gesture recognition applications, non-intrusive approaches are preferred over intrusive ones for better feasibility.

14A.5 3D Interfaces

Modern computer users have become intimately familiar with a specific set of UI components, including input devices such as the mouse and touchscreen, output devices such as the monitor, tablet, or cell phone display, interaction techniques such as drag-and-drop and pinch to zoom, interface widgets such as pull-down menus, and UI metaphors such as the Windows, Icons, Menus, Pointer (WIMP) desktop metaphor. These interface components, however, are often inappropriate for the nontraditional computing environments and applications under development today. For example, a virtual reality (VR) user wearing a fully immersive head-worn display (HWD) won’t be able to see the physical world, making the use of a keyboard impractical. An HWD in an augmented reality (AR) application may have limited resolution, forcing the redesign of text-intensive interface components such as dialog boxes. A VR application may allow a user to place an object anywhere in 3D space, with any orientation—a task for which a 2D mouse is inadequate. 3D interaction involves real 3D spatial input such as hand gestures or physical walking. Desktop 3D interaction requires different interaction techniques and design principles; 3D interactions are those human–computer interactions in which the user’s tasks are performed directly in a real or virtual 3D spatial context.


3D interactions happen in contexts of increasing degrees of environmental virtuality:

1. Telerobotics: The ability to control a robot that is geographically separated from the user, thus requiring remote operation. Robots often have many degrees of freedom and operate in the physical world, making 3D UIs applicable to their operation.
2. Ubiquitous computing (UbiCom): The notion that computing devices and infrastructure (and access to them) may be mobile or scattered throughout the real environment so that users have “anytime, anyplace” access to computational power.
3. Mixed reality (MR): A set of approaches, including both VR and AR, in which real and virtual information is mixed in different combinations. A system’s position on the mixed reality continuum indicates the mixture of virtuality and reality in the system (with the extremes being purely virtual and purely real). Mixed reality systems may move along this continuum as the user interacts with them.
4. Augmented reality (AR): An approach that uses displays, tracking, and other technologies to enhance (augment) the user’s view of a real-world environment with synthetic objects or information.
5. Virtual reality (VR): An approach that uses displays, tracking, and other technologies to immerse the user in a VE. Note that in practice VE and VR are often used almost interchangeably.
6. Virtual environment (VE): A synthetic, spatial (usually 3D) world seen from a first-person point of view. The view in a virtual environment is under the real-time control of the user.

Thus, nontraditional systems need a new set of interface components: new devices, new techniques, new metaphors. Some of these new components may be simple refinements of existing components; others must be designed from scratch. Because many of these nontraditional environments work in real and/or virtual 3D space, we term these interfaces 3D interfaces. The author would like to propose the smartphone itself as the best 3D user interface.

Section III

Digital Transformation of Enterprise Architecture

Chapters 15 through 22 explain the digital transformation-enabling technologies of service-oriented, cloud, big data, context-aware, Internet of Things, blockchain, soft, and interactive computing. Please note that a transformation that affects exponential change (amplification or attenuation) in any aspect of enterprise architecture performance is termed a digital transformation.

Chapter 15 on service-oriented computing describes the interoperability (or flexibility) advantages resulting from integration-enabled and loosely coupled connections. Chapter 16 on cloud computing describes the scalability (or elasticity) advantages resulting from virtualization-enabled on-demand, online, zero-latency pay-per-use upsizing or downsizing of resources. Chapter 17 on big data computing describes the availability advantages resulting from replication-enabled processing and storage of big volume, velocity, variety, and veracity of structured, semistructured, and unstructured data. Chapter 18 on context-aware computing describes the mobility advantages resulting from spatio-temporal database-enabled recording of the location and moments-in-time of registered entities and/or events. Chapter 19 on Internet of Things computing describes the ubiquity advantages resulting from embedded systems-enabled polling of situational information from environment-embedded sensors, transducers, and actuators. Chapter 20 on blockchain computing describes the security advantages resulting from cryptography-enabled distributed, decentralized, immutable, and content-embodied links. Chapter 21 on soft computing describes the analyticity advantages resulting from data mining-enabled, nature-inspired, approximate but tractable robust solutions to imprecisely or precisely formulated problems at a low or acceptable quantity of effort, time, and cost. Chapter 22 on interactive computing describes the usability advantages resulting from interactive interfaces-enabled change of the paradigm from using computers to interacting with them.

15 Service-Oriented Computing

As stated right in the beginning of Chapter 7, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of interoperability aspects, primarily through service-oriented computing. This chapter explains service-oriented computing. A service-oriented architecture (SOA) maps IT systems easily and directly to a business’s operational processes and supports a better division of labor between the business and technical staff. One of the great potential advantages of solutions created using an SOA with SOAP or REST Web Services is that they can help resolve this perennial problem by providing better separation of concerns between business analysts and service developers. Analysts can take responsibility for defining how services fit together to implement business processes, while the service developers can take responsibility for implementing services that meet business requirements. This will ensure that the business issues are well enough understood to be implemented in technology and the technology issues are well enough understood to meet the business requirements.

Integrating existing and new applications using an SOA involves defining the basic Web Service interoperability layer to bridge features and functions used in current applications such as security, reliability, transactions, metadata management, and orchestration; it also involves the ability to define automated business process execution flows across the Web Services after an SOA is in place. An SOA with Web Services enables the development of services that encapsulate business functions and that are easily accessible from any other service; composite services allow a wide range of options for combining Web Services and creating new business processes and, hence, application functionality.

15.1 Introduction to Service-Oriented Architecture

Integration seems to be one of the most important strategic priorities mainly because new innovative business solutions demand integration of different business units, business systems, enterprise data, and applications. Integrated information systems improve the competitive advantage with unified and efficient access to the information. Integrated applications make it much easier to access relevant, coordinated information from a variety of sources. It is clear that replacing existing systems with new solutions will often not be a viable proposition. Companies soon realize that the replacement is more complicated, more complex, more time consuming, and more costly than even their worst-case scenarios could have predicted: too much time and money have been invested in them, and there is too much knowledge incorporated in these systems. Therefore, standard ways to reuse existing systems and integrate them into the global, enterprise-wide information system must be defined.


The modern answer to application integration is an SOA with Web Services; SOA is a style of organizing (services), and Web Services are its realization. An SOA with Web Services is a combination of architecture and technology for consistently delivering robust, reusable services that support today’s business needs and that can be easily adapted to satisfy changing business requirements. An SOA enables easy integration of IT systems, provides multichannel access to your systems, and automates business processes. When an SOA with its corresponding services is in place, developers can easily reuse existing services in developing new applications and business processes.

A service differs from an object or a procedure because it is defined by the messages that it exchanges with other services. A service’s loose coupling to the applications that host it gives it the ability to more easily share data across the department, enterprise, or Internet. An SOA defines the way in which services are deployed and managed. Companies need IT systems with the flexibility to implement specialized operations, to change quickly with the changes in business operations, to respond quickly to internal as well as external changes in conditions, and consequently gain a competitive edge. Using an SOA increases reuse, lowers overall costs, and improves the ability to rapidly change and evolve IT systems, whether old or new. As a prerequisite, one will have to deal with a plethora of legacy technologies in order to service-enable them. But the beauty of services and SOA is that the services are developed to achieve interoperability and to hide the details of the execution environments behind them. In particular, for Web Services, this means the ability to emit and consume data represented as XML, regardless of development platform, middleware, operating system, or hardware type. Thus, an SOA is a way to define and provision an IT infrastructure to allow different applications to exchange data and participate in business processes, regardless of the operating systems or programming languages underlying these applications.

15.1.1 Defining SOA

SOA provides an agile technical architecture that can be quickly and easily reconfigured as business requirements change. The promise of SOA is that it will break down the barriers in IT to implement business process flows in a cost-effective and agile manner that would combine the best of custom solutions as well as packaged applications while simultaneously reducing lock-in to any single IT vendor.

A service-oriented architecture is a style of organization that guides all aspects of creating and using business services throughout their life cycle (from conception to retirement), as well as defining and provisioning the IT infrastructure that allows different applications to exchange data and participate in business processes regardless of the operating systems or programming languages underlying these applications. The key organizing concept of an SOA itself is a service. The processes, principles, and methods defined by SOA are oriented toward services; the development tools selected by an SOA are oriented toward creating and deploying services; and the runtime infrastructure provided by an SOA is oriented to executing and managing services.

A service is a sum of constituting parts including a description, the implementation, and the mapping layer (termed the transformation layer) between the two. The service implementation, termed the executable agent, can be any environment for which Web Service support is available. The description is separated from its executable agent; one description might have multiple different executable agents associated with it and vice versa.


The executable agent is responsible for implementing the Web Service processing model as per the various Web Service specifications and runs within the execution environment, which is typically a software system or programming language environment. The description is separated from the execution environment using a mapping or transformation layer, often implemented using proxies and stubs. The mapping layer is responsible for accepting the message, transforming the XML data into the native format, and dispatching the data to the executable agent. Web Service roles include requester and provider; a requester can be a provider and vice versa. The service requester initiates the execution of a service by sending a message to a service provider, which executes the service and returns the results, if any are specified, to the requester.

15.1.1.1 Services

Services are coarse-grained, reusable IT assets that have well-defined interfaces (or service contracts) that clearly separate the service accessible interface from the service technical implementation. This separation of interface and implementation serves to decouple the service requesters from the service providers so that both can evolve independently as long as the service contract remains unchanged.

A service is a location on the network that has a machine-readable description of the messages it receives and optionally returns. A service is therefore defined in terms of the message exchange patterns it supports. A schema for the data contained in the message is used as the main part of the contract between a service requester and a service provider; other items of metadata describe the network address for the service, the operations it supports, and its requirements for reliability, security, and transactional integrity. However, developing a service is quite different from developing an object because a service is defined by the messages it exchanges with other services, rather than by a method signature. A service usually defines a coarse-grained interface that accepts more data in a single invocation than an object does, because of the need to map to an execution environment, process the XML, and often access it remotely. Services are executed by exchanging messages according to one or more supported message exchange patterns (MEPs), such as request/response, one-way asynchronous, or publish/subscribe. Services are meant to solve interoperability problems between applications and for use in composing new applications or application systems; they are not meant, like objects, to create the detailed business logic for the applications.

From a business perspective, services are IT assets that correspond to real-world business activities or identifiable business functions that can be accessed according to the service policies related to:

• Who is authorized to access the service
• When can the service be accessed
• What is the cost of using the service
• What are the reliability levels of using the service
• What are the performance levels for the service

A  service is normally defined at a higher level of abstraction than an object because it is possible to map a service definition to a procedure-oriented language, such as COBOL or PL/1, or to a message queuing system such as JMS or MSMQ, as well as to


an object-oriented system such as J2EE or the .NET Framework. Whether the service’s execution environment is a stored procedure, message queue, or object does not matter; the data are seen through the filter of a Web Service, which includes a layer that maps the Web Service to whatever execution environment is implementing the service. The use of XML in Web Services provides a clear separation between the definition of a service and its execution so that Web Services can work with any software system. The XML representation of the data types and structures of a service via the XML schema allows the developer to think solely in terms of the data being passed between the services without having to consider the details of the service’s implementation. This is quite in contrast to the traditional nature of the integration problem that involves figuring out the implementation of the service in order to be able to talk to it. One of the greatest benefits of service abstraction is its ability to easily access a variety of service types, including newly developed services, wrapped legacy applications, and applications composed of other newer and legacy services.
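A minimal sketch may make this separation concrete; it is an illustration only, and the element names, message shape, and backend callable are assumptions rather than any standard contract. The handler works purely in terms of the XML data exchanged: it extracts values from the request message, dispatches them to whatever execution environment implements the service, and builds an XML response, so the requester never sees the implementation.

# Sketch of a mapping layer: the service is defined by the messages it exchanges,
# not by the execution environment (object, stored procedure, queue) behind it.
import xml.etree.ElementTree as ET

def handle_get_order_status(request_xml: str, backend) -> str:
    """backend is any callable returning a status string for an order id."""
    request = ET.fromstring(request_xml)
    order_id = request.findtext("orderId")            # data carried in the message
    status = backend(order_id)                        # dispatch to the executable agent
    response = ET.Element("getOrderStatusResponse")
    ET.SubElement(response, "status").text = status
    return ET.tostring(response, encoding="unicode")

# Any implementation honouring the same message contract can be substituted.
print(handle_get_order_status(
    "<getOrderStatus><orderId>42</orderId></getOrderStatus>",
    lambda order_id: "SHIPPED"))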

15.2 SOA Benefits

SOA delivers the following business benefits:

1. Increased business agility: SOA improves throughput by dramatically reducing the amount of time required to assemble new business applications from existing services and IT assets. SOA also makes it significantly easier and less expensive to reconfigure and adapt services and IT assets to meet new and unanticipated requirements. Thus, the business adapts quickly to new opportunities and competitive threats, while IT quickly changes existing systems.
2. Better business alignment: SOA services directly support the services that the enterprise provides to customers.
3. Improved customer satisfaction: As SOA services are independent of specific technology, they can readily work with an array of customer-facing systems across all customer touch points, which effectively reduces development time, increases customer engagement time, and, hence, improves customer solutioning, enabling enhanced customer satisfaction.
4. Improved ROI of existing assets: SOA dramatically improves the ROI of existing IT assets by identifying the key business capabilities of existing systems and reusing them as the basis for new services in the SOA.
5. Reduced vendor lock-in and switching costs: As SOA is based on loosely coupled services with well-defined, platform-neutral service contracts, it avoids vendor and technology lock-in at all levels, namely, the application platform and the middleware platform.
6. Reduced integration costs: SOA projects can focus on composing, publishing, and developing Web Services independently of their execution environments, thus obviating the need to deal with avoidable complexity. Web Services and XML simplify integration because they focus on the data being exchanged instead of the underlying programs and execution environments.


Technical benefits of SOA include the following:

1. More reuse: Service reuse lowers development costs and speeds up development.
2. Efficient development: As services are loosely coupled, SOA promotes modularity, which enables easier and faster development of composite applications. Once service contracts have been defined, developers can separately and independently design and implement each of the various services. Similarly, service requesters can be designed and implemented based solely on the published service contracts, without any need to contact, or have access to, the developers of the service providers.
3. Simplified maintenance: As services are modular and loosely coupled, they simplify maintenance and reduce maintenance costs.
4. Incremental adoption: As services are modular and loosely coupled, they can be developed and deployed in incremental steps.

15.3 Characteristics of SOA

1. Dynamic, discoverable, metadata driven: Services should be published in a manner by which they can be discovered and consumed without intervention of the provider. Service contracts should use metadata to define service capabilities and constraints, and should be machine readable so that they can be registered and discovered dynamically to lower the cost of locating and using services, reduce corresponding errors, and improve the management of services.
2. Designed for multiple invocation styles: Design and implement service operations whose business logic supports multiple invocation styles, including asynchronous queuing, request/response, request/callback, request/polling, batch processing, and event-driven publish/subscribe.
3. Loosely coupled: This implies loose coupling of interface and technology; interface coupling implies that the interface should encapsulate all implementation details and make them nontransparent to service requesters, while technology coupling measures the extent to which a service depends on a particular technology, product, or development platform (operating systems, application servers, packaged applications, and middleware).
4. Well-defined service contracts: Service contracts are more important than the service implementations because they define the service capabilities and how to invoke the service in an interoperable manner. A service contract clearly separates the service’s externally accessible interface from the service’s technical implementation. Consequently, the service contract is independent of any single service implementation. The service contract is defined based on knowledge of the business domain and not so much on the service implementation. The service contract is defined and managed as a separate artifact, is the basis for sharing and reuse, and is the primary mechanism for reducing interface coupling.

Changing a service contract is generally more expensive than modifying the implementation of a service because, while changing a service implementation is a relatively localized effort, changing a service contract may entail changing hundreds or thousands of service requesters.
5. Standard based: Services should be based on open standards as much as possible, for the following benefits:
• Minimizing vendor lock-in by isolating from proprietary, vendor-specific technologies and interfaces
• Increasing the choice of service requesters for alternate service providers and vice versa
It is important to base the service-level data and process models on mature business domain standards as and when they become available. This is in addition to complying with technology standards like SOAP, WSDL, UDDI, and the WS-* specifications.
6. Granularity of services and service contracts: Services and service contracts must be defined at a level of abstraction that makes sense to service requesters as well as service providers. To achieve this, services should perform discrete tasks and provide simple interfaces to encourage reuse and loose coupling. An abstract interface at the appropriate level of granularity promotes ready substitutability, which enables any of the existing service providers to be replaced by a new service provider without affecting any of the service requesters.
7. Stateless: Services should be stateless because they scale more efficiently, as any service request can be routed to any service instance (see the sketch after this list). In contrast, stateful interactions do not scale efficiently because the server needs to track which service is serving which client and cannot reuse a service until the conversation is finished or a time-out has occurred. Thus, services should be implemented so that each invocation is independent and does not depend on the service maintaining client-specific conversations in memory or in persistent state between the invocations.
8. Predictable service-level agreements (SLAs): The service delivery platform must provide service-level management capabilities for defining, monitoring, incident logging, and metering of SLAs for service usage. SLAs should be established early because they affect service design, implementation, and management. There should also be provision for fine-tuning of SLAs based on feedback from ongoing operations. Typically, SLAs define metrics for services such as response time, throughput, availability, and mean time between failures. Above all, SLAs are usually tied to a business model whereby service requesters pay more for higher or more stringent SLAs but charge a penalty when service providers default on their SLA commitments.
9. Design services with performance in mind: Service invocation should not be modeled on local function calls, since location transparency may mean that a service is actually on another machine on the same LAN, on another LAN, or across a WAN.
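As a minimal sketch of the statelessness characteristic (item 7), illustrative only and with hypothetical operation and field names, each invocation below carries everything the service needs, so any request can be routed to any service instance and no client-specific conversation is held between calls.

# Stateless service sketch: every invocation is self-contained, so requests can
# be load-balanced across instances without server-side session tracking.
def quote_shipping(request: dict) -> dict:
    """Each request carries all required data; no state is kept between calls."""
    weight_kg = request["weight_kg"]
    destination = request["destination_country"]
    rate_per_kg = 4.0 if destination == "US" else 7.5   # assumed illustrative rates
    return {"destination": destination, "price": round(weight_kg * rate_per_kg, 2)}

# Two independent invocations; either could be served by a different instance.
print(quote_shipping({"weight_kg": 2.0, "destination_country": "US"}))
print(quote_shipping({"weight_kg": 1.2, "destination_country": "DE"}))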


15.4 Advantages of SOA

Enterprises may use SOA for the following:

1. Implementing end-to-end collaborative business processes: The term end-to-end business process signifies that a succession of automated business processes and information systems in different enterprises (which are typically involved in intercompany business transactions) are successfully integrated. The aim is to provide seamless interoperation and interactive links between all the relevant members in an extended enterprise—ranging from product designers, suppliers, trading partners, and logistics providers to end customers. At this stage, an organization moves into the highest strategic level of SOA implementation. Deployment of services becomes ubiquitous, and federated services collaborate across enterprise boundaries to create complex products and services. Individual services in this extended enterprise may originate from many providers, irrespective of company-specific systems or applications.
2. Implementing enterprise service orchestrations: This basic SOA entry point focuses on a typical implementation within a department or between a small number of departments and enterprise assets, and comprises two steps. The first step is transforming enterprise assets and applications into an SOA implementation. This can start by service enabling existing individual applications or creating new applications using Web Services technology, for example by specifying a Web Service interface into an individual application or application element (including legacy systems). The next step after this basic Web Service implementation is implementing service orchestrations out of the service-enabled assets or newly created service applications.
3. Service enabling the entire enterprise: The next stage in the SOA entry point hierarchy is when an enterprise seeks to provide a set of common services based on SOA components that can be used across the entire organization. Enterprise-wide service integration is achieved on the basis of commonly accepted standards. This results in achieving service consistency across departmental boundaries and is a precursor to integrating an organization with its partners and suppliers. Consistency is an important factor for this configuration, as it provides a uniform view to the enterprise and its customers as well as ensuring compliance with statutory or business policy requirements.

15.5 SOA Applications

An SOA can be thought of as an approach to building IT systems in which business services are the key organizing principle to align IT systems with the needs of the business. Any business that can implement an IT infrastructure that allows it to change more rapidly than its competitors has an advantage over them. The use of an SOA for integration, business process management, and multichannel access allows


any enterprise to create a more strategic environment, one that more closely aligns with the operational characteristics of the business. Earlier approaches to building IT systems resulted in systems that were tied to the features and functions of a particular environment technology (such as CORBA, J2EE, and COM/DCOM), since they employed environment-specific characteristics like procedure, object, or message orientation to provide solutions to business problems. The way in which services are developed aligns them better with the needs of the business than was the case with previous generations of technology. What is new in the concept of SOA is the clear separation of the service interface from the execution technology, enabling a choice of the best execution environment for any job and tying all of these executional agents together using a consistent architectural approach.

15.5.1 Rapid Application Integration

The combination of Web Services and SOA provides a rapid integration solution that readily aligns IT investments and corporate strategies by focusing on shared data and reusable services rather than proprietary integration products. Such enterprise application integration (EAI) products proved to be expensive, consumed considerable time and effort, and were prone to higher rates of failure. Applications can more easily exchange data by using a Web Service defined at the business logic layer than by using a different integration technology because Web Services represent a common standard across all types of software. XML can be used to independently define the data types and structures. Creating a common Web Service layer or overlay of services into the business logic tiers of applications also allows you to use a common service repository in which to store and retrieve service descriptions. If a new application seeks to use an existing service in one of these applications, it can query the repository to obtain the service description and quickly generate (say) SOAP messages to interact with it. Finally, the development of service-oriented entry points at the business logic tier allows a business process management engine to drive an automatic flow of execution across the multiple services.

15.5.2 Multichannel Access

Enterprises often use many channels to ensure good service and maintain customer loyalty; therefore, they benefit from being able to deliver customer services over a mixture of access channels. In the past, enterprises often developed monolithic applications that were tied to a single access channel, such as a 3270 terminal, a PC interface, or a Web browser. The proliferation of access channels represented a significant challenge to IT departments: converting monolithic applications to allow multichannel access. The basic solution is to service-enable these applications using an SOA with Web Services, which are well suited to enabling multichannel access because they are accessible from a broad range of clients, including Web, Java, C#, and mobile devices. In general, business services change much less frequently than the delivery channels through which they are accessed. Business services refer to operational functions such as vendor management, purchase order management, and billing, which do not vary very often, whereas client devices and access channels are based on new technologies, which tend to change.
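A minimal sketch of this point, with assumed service and channel names: a single business service is reused unchanged behind two delivery channels, one rendering its result as JSON for a web or mobile client and one rendering it as a spoken prompt for a voice channel.

# One business service, many delivery channels: the service stays stable while
# thin channel adapters translate its result for web, mobile, or voice clients.
import json

def order_status_service(order_id: str) -> dict:
    # Stable business service; in practice this would wrap an existing system.
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

def web_channel(order_id: str) -> str:
    return json.dumps(order_status_service(order_id))         # JSON for web/mobile

def voice_channel(order_id: str) -> str:
    result = order_status_service(order_id)                   # same service, spoken output
    return ("Order " + result["order_id"] + " has " + result["status"] +
            " and should arrive in " + str(result["eta_days"]) + " days.")

print(web_channel("42"))
print(voice_channel("42"))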


15.5.3 Business Process Management

A business process is a real-world activity that consists of a set of logically related tasks that, when performed in an appropriate sequence and in conformity with applicable rules, produce a business outcome. Business process management (BPM) is the name for a set of software systems, tools, and methodologies that enable enterprises to identify, model, develop, deploy, and manage such business processes. BPM systems are designed to help align business processes with desirable business outcomes and ensure that the IT systems support those business processes. BPM systems let business users model their business processes graphically in a way that the IT department can implement; the graphical depiction of a business process can be used to generate an executable specification of the process. Unlike traditional forms of system development where the process logic is deeply embedded in the application code, BPM explicitly separates the business process logic from other application code. Separating business process logic from other application code yields increased productivity, reduced operational costs, and improved agility. When implemented correctly, enterprises can quickly respond to changing market conditions and seize opportunities for gaining competitive advantage. SOA with Web Services can better automate business processes because Web Services help achieve the goals of BPM more quickly and easily.
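As a minimal sketch of separating process logic from application code (illustrative only; the step names are assumptions), the process below is declared as an ordered list of service invocations, so the flow can be re-sequenced or extended without touching the services themselves.

# Sketch: the business process is data (an ordered list of steps), kept apart
# from the services that implement each step.
def check_credit(order):
    order["credit_ok"] = True
    return order

def reserve_stock(order):
    order["stock_reserved"] = True
    return order

def send_invoice(order):
    order["invoiced"] = True
    return order

# Process definition: changing the flow means editing this list, not the services.
ORDER_PROCESS = [check_credit, reserve_stock, send_invoice]

def run_process(process, order):
    for step in process:
        order = step(order)
    return order

print(run_process(ORDER_PROCESS, {"order_id": "42"}))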

15.6 SOA Ingredients

Web Services are new standards for creating and delivering cooperative applications over the Internet. Web Services allow applications to communicate irrespective of the platform or the operating system.

15.6.1 Objects, Services, and Resources

Any distributed system involves sending messages to some remote entity. Underlying the differences between many systems are the abstractions used to model these entities; they define the architectural qualities of the system. Three abstractions—in particular, object, resource, and service—are commonly used to describe remote entities; their definitions, however, are not always clearly distinguished. Yet the nature of these abstractions has a profound effect on the distributed communication paradigms that result from their use. One approach to identifying the similarities and differences between them is to understand them in terms of their relationship to two properties: state and behavior.

15.6.1.1 Objects

Objects have both state and behavior. The state is maintained through the internal data, and the behavior of the object is defined through the public operations on that data. A primary issue in these systems is the management of object identifiers, which are global pointers to instances of objects. It has been argued that architectures based on global pointers lead to


brittle systems if they scale to Internet size because of the proliferation of references and the need to maintain the integrity of the pointers. As a result, these systems are considered to be best suited to medium-sized networks within a single domain, with known latencies, static addressing, and intimate knowledge of the middleware models used.

15.6.1.2 Services

A service is a view of some resource, usually a software asset. Implementation detail is hidden behind the service interface. The interface has well-defined boundaries, providing encapsulation of the resource behind it. Services communicate using messages. The structure of the message and the schema, or form, of its contents are defined by the interface. Services are stateless. This means all the information needed by a service to perform its function is contained in the messages used to communicate with it. The service abstraction shares commonalities with the object abstraction but displays crucial differences:

• Like an object, a service can have an arbitrary interface.
• Like distributed object systems that use an IDL, services usually describe this interface in a description language.
• Unlike objects, services use a message-oriented model for communication. This has quite different semantics and implications from invoking a procedure on a remote object. In the latter, what the remote entity is plays a part. In the case of objects, the class of an object must be known. Once this class is known, behavior based on the class can be inferred by the consumer. Services, however, do not share class. Instead, they share contracts and schema. Therefore, what an entity is has no bearing on communication, and nothing is inferred. Furthermore, communication with an object involves addressing an instance. This is not the case with services, as is discussed in the next item.
• Unlike objects, services do not have state. Object orientation teaches us that data and the logic that acts on that data should be combined, while service orientation suggests that these two things should be separate. Therefore, a service acts upon state but does not expose its own state. Put another way, services do not have instances. You cannot create and destroy a service in the way you can an object.

15.6.1.3 Resources

The term resource is used here to refer specifically to the abstraction used by the Web and related initiatives such as the Semantic Web. Such a resource is different from a distributed object in a number of ways. Resource state is not hidden from a client as it is in object systems. Instead, standard representations of state are exposed. In object systems, the public interface of an object gives access to hidden state:

• Unlike distributed objects, resources do not have operations associated with them. Instead, manipulation and retrieval of resource representations rely on the transfer protocol used to dereference the uniform resource identifier (URI).
• As a consequence, a resource can be viewed as an entity that has state, but not the logic to manipulate that state, that is, no behavior.


Because resources have no behavior, they do not define how their state can be manipulated. While this could be viewed as limiting and potentially leading to ad hoc, underspecified interactions, in the case of the Web the opposite is actually true. While an object-oriented system defines proprietary behavioral interfaces for every object, leading to a proliferation of means of manipulating objects, the Web uses a single, shared interface: HTTP. The few methods defined by HTTP allow arbitrary resources to be exchanged and manipulated, making interactions between entities far simpler and hence scalable. Imagine, for example, that every Web server defined its own interface for accessing the resources in its charge. This would require a browser to digest a new service interface and generate client-side code every time you clicked on a link, a process that would severely limit the scalability of the system as a whole.

15.6.2 SOA and Web Services

Web Services are new standards for creating and delivering cooperative applications over the Internet. They allow applications to communicate irrespective of the platform or the operating system. By using Web Services, developers can eliminate major porting and quality testing efforts, potentially saving millions of dollars. Web Services will radically change the way that applications are built and deployed in the future. A developer can create an application out of reusable components. But what good is it to have a large library of reusable components if nobody knows that they exist, where they are located, and how to link to and communicate with such programmatic components? Web Services are standards for finding and integrating object components over the Internet. They enable a development environment where it is no longer necessary to build complete and monolithic applications for every project. Instead, the core components can be combined from other standard components available on the Web to build complete applications that run as services to the core applications.

Some of the past approaches for enabling program-to-program communication included combinations of program-to-program protocols such as Remote Procedure Call (RPC) and Application Programming Interfaces (APIs) coupled with architectures such as the Component Object Model (COM), the Distributed Component Object Model (DCOM), and the Common Object Request Broker Architecture (CORBA). But without a common underlying network, common protocols for program-to-program communication, and a common architecture to help applications declare their availability and services, it has proven difficult to implement cross-platform program-to-program communication between application modules. These previous attempts to set up standards for accomplishing these objectives were not very successful because:

• They were not functionally rich enough and were difficult to maintain as best of breed.
• They were vendor specific as opposed to using open, cross-vendor standards.
• They were too complex to deploy and use.

The use of Web Service standards holds the potential for correcting each of these deficiencies. This new approach presents applications as services to each other and enables applications to be rapidly assembled by linking application objects together. With the advent of the Internet and its protocols, most vendors and enterprises have graduated to a common communication and network protocol—the Internet's TCP/IP.


And with the availability of Web standards such as Extensible Markup Language (XML); Simple Object Access Protocol (SOAP); Universal Description, Discovery and Integration (UDDI); and Web Services Description Language (WSDL), vendors enable customers to:

1. Publish specifications about application modules via WSDL.
2. Find those modules (either on the internal intranet or on the Internet) via UDDI.
3. Bind the applications together to work seamlessly and cooperatively and deliver the holistic functionality of a composite application via SOAP and XML.

Figure 15.1 shows the Web Services usage model.

FIGURE 15.1 Web Services usage model.

The Web Services (WS) usage model (Figure 15.1) consists of:

1. Describing Web Services: Web Services Description Language (WSDL)
The Web Services Description Language is an XML-based language used to describe the goods and services that an organization offers and provides a way to access those services electronically. WSDL is an XML vocabulary that provides a standard way of describing service IDLs. WSDL is the resulting artifact of a convergence of activities between NASSL (IBM) and SDL (Microsoft). It provides a simple way for service providers to describe the format of request and response messages for Remote Method Invocations (RMIs). WSDL addresses the topic of service IDLs independent of the underlying protocol and encoding requirements. In general, WSDL provides an abstract language for defining the published operations of a service with their respective parameters and data types. The language also addresses the definition of the locations and binding details of the service.

2. Accessing Web Services: Simple Object Access Protocol (SOAP)
Simple Object Access Protocol is an XML-based lightweight protocol for the exchange of information in a decentralized, distributed environment. SOAP is a means by which different systems can communicate, based on the HTTP Web
standard. It specifies how to encode a Hypertext Transfer Protocol (HTTP) header and an XML file so that different applications, running on different systems, can share data. SOAP defines a messaging protocol between requestor and provider objects, such that the requesting objects can perform a remote method invocation on the providing objects in an object-oriented programming fashion. The SOAP specification was coauthored by Microsoft, IBM, Lotus, UserLand, and DevelopMentor. The specification subsequently spawned the creation of the W3C XML Protocol Working Group, which comprises over 30 participating companies.
In most vendor implementations of SOA, SOAP forms the basis for distributed object communication. Although SOA does not define a messaging protocol, SOAP has recently been referred to as the Service-Oriented Architecture protocol due to its common use in SOA implementations. The beauty of SOAP is that it is completely vendor neutral, allowing for independent implementations relative to platform, operating system, object model, and programming language. Additionally, transport and language bindings as well as data-encoding preferences are all implementation dependent.

3. Finding Web Services: Universal Description, Discovery, and Integration (UDDI)
Universal Description, Discovery and Integration is an XML-based registry that enables organizations in a global business-to-business (B2B) environment to locate each other. The UDDI specification provides a common set of SOAP APIs that enable the implementation of a service broker. The UDDI specification was outlined by IBM, Microsoft, and Ariba to help facilitate the creation, description, discovery, and integration of Web-based services. UDDI is like a telephone directory that also
• Indicates the suitability of a potential partner
• Describes the access mechanism by which an enterprise can be interfaced with
The motivation behind UDDI.org, a partnership and cooperation between more than 70 industry and business leaders, is to define a standard for B2B interoperability.

The significance of Web Services for the future stems from the following:

• Web Services will enable enterprises to reduce development costs and expand application functionality at a fraction of the cost of traditional application development and deployment methods.
• Web Services will enable independent software vendors (ISVs) to bring products to market more quickly and respond to competitive threats with more flexibility.
• Web Services will enable enterprises to reuse existing legacy application functionality with the latest applications and technologies.
• Web Services will obviate the need for porting applications to different hardware platforms and operating systems at great expense.
• Web Services enable applications to communicate irrespective of platform or operating system.
• Web Services will have the effect of leveling the playing field because they will enable even specialized boutique application firms to compete with well-established and resourceful original equipment manufacturers (OEMs).
• Web Services will enable only those OEMs to flourish that focus on providing comprehensive implementations and highly productive application development environments for Web Services.
• Web Services will enable applications to be packaged not only as licenses but also as services; this will give a big fillip to ASP services (as discussed in the earlier section) and will consequently expand the overall market size tremendously.
• Web Services will enable value-added resellers (VARs) to rapidly add new functionality to current product offerings or to customize the existing applications of customers.
• Web Services will enable enterprises to adapt better to changing market conditions or competitive threats.
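As a rough illustration of the SOAP style of interaction, the following Python sketch builds a minimal SOAP 1.1 envelope and posts it over HTTP using only the standard library. The endpoint URL, the GetQuote operation, and its namespace are hypothetical placeholders rather than any real WSDL; in practice a SOAP toolkit would generate this plumbing from the provider's WSDL description.

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint and operation; a real client would take these from the WSDL.
ENDPOINT = "http://example.com/quoteservice"
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(symbol: str) -> bytes:
    """Wrap a GetQuote request in a minimal SOAP envelope."""
    ET.register_namespace("soap", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    request = ET.SubElement(body, "{http://example.com/quotes}GetQuote")
    ET.SubElement(request, "symbol").text = symbol
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)

def call_get_quote(symbol: str) -> str:
    """POST the envelope; the SOAPAction header names the operation being invoked."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_envelope(symbol),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/quotes/GetQuote"},
    )
    with urllib.request.urlopen(req) as response:
        return response.read().decode("utf-8")  # the SOAP response envelope as XML text

Note that the consumer addresses an operation described by a contract, not an object instance—exactly the message-oriented, stateless interaction described in Section 15.6.1.2.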


15.6.3 SOA and RESTful Services

REpresentational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. As such, it is not strictly a method for building Web Services. The terms representational state transfer and REST were introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the Hypertext Transfer Protocol (HTTP) specifications 1.0 and 1.1. REST refers to a collection of network architecture principles that outline how resources are defined and addressed. The term is often used in a looser sense to describe any simple interface that transmits domain-specific data over HTTP without an additional messaging layer such as SOAP or session tracking via HTTP cookies. These two meanings can conflict as well as overlap. It is possible to design a software system in accordance with Fielding's REST architectural style without using HTTP and without interacting with the World Wide Web. It is also possible to design simple XML+HTTP interfaces that do not conform to REST principles but instead follow a model of remote procedure call. Systems that follow Fielding's REST principles are often referred to as RESTful.

Proponents of REST argue that the Web's scalability and growth are a direct result of a few key design principles. Application state and functionality are abstracted into resources. Every resource is uniquely addressable using a universal syntax for use in hypermedia links, and all resources share a uniform interface for the transfer of state between client and resource. This transfer of state consists of a constrained set of well-defined operations and a constrained set of content types, optionally supporting code on demand. State transfer uses a protocol that is client–server based, stateless, cacheable, and layered.

An important concept in REST is the existence of resources, each of which is referenced with a global identifier (e.g., a URI in HTTP). In order to manipulate these resources, components of the network (user agents and origin servers) communicate via a standardized interface (e.g., HTTP) and exchange representations of these resources (the actual documents conveying the information). For example, a resource representing a circle may accept and return a representation that specifies a center point and radius, formatted in Scalable Vector Graphics (SVG), but may also accept and return a representation that specifies any three distinct points along the curve as a comma-separated list. Any number of connectors (clients, servers, caches, tunnels, etc.) can mediate the request, but each does so without seeing past its own request (referred to as layering, another constraint of REST and a common principle in many other parts of information and networking architecture).
Thus, an application can interact with a resource by knowing two things: the identifier of the resource and the action required—it does not need to know whether there are caches, proxies, gateways, firewalls, tunnels, or anything else between it and the server actually holding the information. The application does, however, need to understand the format of the information (representation) returned, which is typically an HTML, XML, or JSON document of some kind, although it may be an image, plain text, or any other content.

RESTful Web Services rely on HTTP as a sufficiently rich protocol to completely meet the needs of Web Service applications. In the REST model, the HTTP GET, POST, PUT, and DELETE verbs are used to transfer data (often in the form of XML documents) between clients and services. These documents are representations of resources that are identified by normal Web URIs (Uniform Resource Identifiers). This use of standard HTTP and Web technologies means that RESTful Web Services can leverage the full Web infrastructure, such as caching and indexing. The transactional and database integrity requirements of CRUD (Create, Retrieve, Update, and Delete) correspond to HTTP's POST, GET, PUT, and DELETE. One benefit that should be obvious with regard to web-based applications is that a RESTful implementation allows a user to bookmark specific queries (or requests) and allows those to be conveyed to others across e-mail and instant messages or to be injected into wikis, etc. Thus, this representation of a path or entry point into an application state becomes highly portable.

A RESTful Web Service is a simple Web Service implemented using HTTP and the principles of REST. Such a Web Service can be thought of as a collection of resources comprising three aspects:

1. The URI for the Web Service
2. The MIME type of the data supported by the Web Service (often JSON, XML, or YAML, but it can be anything)
3. The set of operations supported by the Web Service using HTTP methods, including but not limited to POST, GET, PUT, and DELETE

REST provides improved response time and reduced server load due to its support for the caching of representations. REST improves server scalability by reducing the need to maintain session state; this means that different servers can be used to handle different requests in a session. REST requires less client-side software to be written than other approaches, because a single browser can access any application and any resource. REST depends less on vendor software and on mechanisms that layer additional messaging frameworks on top of HTTP. It provides equivalent functionality when compared to alternative approaches to communication, and it does not require a separate resource discovery mechanism, because of the use of hyperlinks in representations. REST also provides better long-term compatibility because of the capability of document types such as HTML to evolve without breaking backward or forward compatibility, and the ability of resources to add support for new content types as they are defined without dropping or reducing support for older content types.

SOAs can be built using REST services—an approach sometimes referred to as REST-oriented architecture (ROA). The main advantages of ROA are ease of implementation, agility of design, and its lightweight approach. The latest version of WSDL now contains HTTP verbs and is considered an acceptable method of documenting REST services. There is also an alternative known as WADL (Web Application Description Language).
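The following minimal sketch shows what the CRUD-to-HTTP mapping described above can look like in practice. It uses only Python's standard library and an in-memory dictionary; the /orders resource name, the port, and the JSON representation are illustrative assumptions, not part of any particular product.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ORDERS = {}  # in-memory resource state keyed by URI path, e.g. "/orders/42"

class OrderHandler(BaseHTTPRequestHandler):
    def _reply(self, status, body=b""):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):          # Retrieve: return the current representation
        if self.path in ORDERS:
            self._reply(200, json.dumps(ORDERS[self.path]).encode())
        else:
            self._reply(404)

    def do_PUT(self):          # Create/Update: replace the representation at this URI
        length = int(self.headers.get("Content-Length", 0))
        ORDERS[self.path] = json.loads(self.rfile.read(length))
        self._reply(200)

    def do_DELETE(self):       # Delete: remove the resource
        self._reply(204 if ORDERS.pop(self.path, None) is not None else 404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), OrderHandler).serve_forever()

Because each request carries everything needed to service it, any number of identical server processes could sit behind a load balancer—the statelessness that the text identifies as a source of REST's scalability.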


15.7 Enterprise Service Bus

The Enterprise Service Bus (ESB) is an open, standards-based message backbone designed to enable the implementation, deployment, and management of SOA-based solutions, with a focus on assembling, deploying, and managing distributed service-oriented architectures (SOAs). An ESB is a set of infrastructure capabilities, implemented by middleware technology, that enables an SOA and alleviates disparity problems between applications running on heterogeneous platforms and using diverse data formats. The ESB supports service invocations as well as message- and event-based interactions with appropriate service levels and manageability. The ESB is designed to provide interoperability between larger-grained applications and other components via standards-based adapters and interfaces. The bus functions as both a transport and a transformation facilitator to allow distribution of these services over disparate systems and computing environments.

Conceptually, the ESB has evolved from the store-and-forward mechanism found in middleware products and is now a combination of EAI, Web Services, XSLT, and orchestration technologies such as BPEL. To achieve its operational objectives, the ESB draws from traditional EAI broker functionality in that it provides integration services such as connectivity and routing of messages based on business rules, data transformation, and adapters to applications. These capabilities are themselves SOA based in that they are spread out across the bus in a highly distributed fashion and hosted in separately deployable service containers. This is a crucial difference from traditional integration brokers, which are usually heavyweight, highly centralized, and monolithic in nature. The ESB approach allows for the selective deployment of integration broker functionality exactly where it is needed, with no additional bloat where it is not required.

To surmount problems of system heterogeneity and information model mismatches in an SOA implementation, an EAI middleware supporting hub-and-spoke integration patterns could be used. The hub-and-spoke approach introduces an integration layer between the client and server modules that must support interoperability among, and coexist with, deployed infrastructure and applications, and not attempt to replace them. However, this approach has its own drawbacks, as a hub can be a central point of failure and can quickly become a bottleneck.

Communication through a bus reduces complexity, which is a common problem within an enterprise. Integration through point-to-point connections often becomes very complex as a company grows and acquires more software: the applications are all directly connected with each other, resulting in a spider-web-like tangle that is difficult to manage (Figure 15.2). Using an Enterprise Service Bus, the software components are integrated by connecting only to the bus; the bus then establishes further links between the components. A scalable distributed architecture such as an SOA needs to employ a constellation of hubs. The requirements to provide an appropriately capable and manageable integration infrastructure for Web Services and SOA are coalescing into the concept of the Enterprise Service Bus (ESB), which is the subject of this section. The two key ideas behind this approach are to loosely couple the systems taking part in the integration and to break up the integration logic into distinct, easily manageable pieces.
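To make the complexity argument behind Figure 15.2 concrete, the following back-of-the-envelope sketch compares the number of integration links needed for full point-to-point connectivity (on the order of n² for n applications, or n(n-1)/2 distinct pairs) with the n adapters needed when every application connects only to the bus. The application count of 12 is an arbitrary illustration.

def point_to_point_links(n: int) -> int:
    # Every application talks directly to every other one.
    return n * (n - 1) // 2

def bus_links(n: int) -> int:
    # Every application maintains a single adapter to the bus.
    return n

if __name__ == "__main__":
    n = 12  # arbitrary number of applications in the landscape
    print(point_to_point_links(n), "point-to-point links vs", bus_links(n), "bus connections")
    # prints: 66 point-to-point links vs 12 bus connections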
FIGURE 15.2 ESB reducing connection complexity. (a) Direct point-to-point connections (n*n). (b) Connecting through the bus (n).

FIGURE 15.3 ESB linking disparate systems and computing environments.

Figure 15.3 shows a simplified view of an ESB that integrates a J2EE application using JMS, a .NET application using a C# client, an MQ application that interfaces with
legacy applications, and external applications and data sources using Web Services. In an ESB application, development tools allow new or existing distributed applications to be exposed as Web Services and be accessed via a portal. In general, resources in the ESB are modeled as services that offer one or more business operations. Technologies like J2EE Connector Architecture (JCA) may also be used to create services by integrating packaged applications (like ERP systems), which would then be exposed as Web Services.


The ESB distributed processing infrastructure is aware of applications and services and uses content-based routing facilities to make informed decisions about how to communicate with them. In essence, the ESB provides docking stations for hosting services that can be assembled and orchestrated and are available for use by any other service on the bus. Once a service is deployed into a service container, it becomes an integral part of the ESB and can be used by any application or service connected to it. The service container hosts, manages, and dynamically deploys services and binds them to external resources, for example, data sources and enterprise and multiplatform applications, as shown in Figure 15.3.

The distributed nature of the ESB container model allows individual event-driven services to be plugged into the ESB backbone on an as-needed basis. It allows them to be highly decentralized and to work together in a highly distributed fashion, while they are scaled independently from one another. Applications running on different platforms are abstractly decoupled from each other and can be connected together through the bus as logical endpoints that are exposed as event-driven services.

Endpoints in the ESB depicted in Figure 15.3 provide an abstraction of physical destination and connection information (like TCP/IP host name and port number). In addition, they facilitate asynchronous and highly reliable communication between service containers using reliable messaging conventions. Endpoints allow services to communicate using logical connection names, which an ESB will map to actual physical network destinations at runtime. This destination independence gives the services that are part of the ESB the ability to be upgraded, moved, or replaced without having to modify code or disrupt existing ESB applications. For instance, an existing ESB invoicing service could be easily upgraded or replaced with a new service without disrupting other applications. Additionally, duplicate processes can be set up to handle failover if a service is not available. The endpoints can be configured to use several levels of QoS, which guarantee communication despite network failures and outages.
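A few lines of code can make the ideas of logical endpoints and content-based routing more tangible. The sketch below is a deliberately simplified toy, not a real ESB: services register under logical endpoint names, and a routing rule inspects message content to pick a destination. The endpoint names, message fields, and routing threshold are all invented for illustration.

from typing import Callable, Dict

class MiniBus:
    """Toy message bus: services register under logical endpoint names,
    and a content-based routing rule decides where each message goes."""

    def __init__(self, route: Callable[[dict], str]):
        self.route = route                      # content-based routing rule
        self.endpoints: Dict[str, Callable[[dict], None]] = {}

    def register(self, name: str, handler: Callable[[dict], None]):
        self.endpoints[name] = handler          # logical name -> service handler

    def publish(self, message: dict):
        destination = self.route(message)       # decide where the message goes
        self.endpoints[destination](message)    # deliver; a real ESB would also queue and transform

# Hypothetical rule: invoices above a threshold are routed for manual approval.
bus = MiniBus(route=lambda m: "approval" if m.get("amount", 0) > 10_000 else "invoicing")
bus.register("invoicing", lambda m: print("auto-processing", m["id"]))
bus.register("approval", lambda m: print("queued for approval", m["id"]))

bus.publish({"id": "INV-1", "amount": 950})
bus.publish({"id": "INV-2", "amount": 25_000})

Swapping the invoicing handler for a new implementation changes nothing for publishers, which is the destination independence described above.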

15.8 Summary

This chapter proposed service-oriented computing as the digital transformation of primarily the interoperability aspects of enterprise architecture (see Chapter 7). Services and service-oriented architectures are pragmatic responses to the complexity and interoperability problems encountered by the builders of previous generations of large-scale integrated applications. Although it is possible to design and build service-oriented systems using any distributed computing or integration middleware, only Web Services technologies can today meet the critical requirement for seamless interoperability that is such an important part of the service-oriented vision. This chapter presented the definition and characteristics of service-oriented architectures along with approaches to realizing the vision of service-oriented systems with Web Services and RESTful services, respectively. The last part of the chapter showed how, with an ESB-based SOA implementation, previously isolated ERP, CRM, supply chain management, financial, and other legacy systems can become SOA enabled and integrated more effectively than when relying on custom, point-to-point coding or proprietary EAI technology. An ESB that supports Web Services alongside more established application integration techniques enables an enterprise-wide solution that combines the best of both of these worlds.

16 Cloud Computing

As stated at the beginning of Chapter 8, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of the scalability aspect, primarily through cloud computing. This chapter explains cloud computing.

Many motivating factors have led to the emergence of cloud computing. Businesses require services that include both infrastructure and application workload requests, while meeting defined service levels for capacity, resource tiering, and availability. IT delivery often entails costs and inefficiencies that create a perception of IT as a hindrance, not a strategic partner. Issues include under-utilized resources, over-provisioning or under-provisioning of resources, lengthy deployment times, and lack of cost visibility. Virtualization is the first step towards addressing some of these challenges by enabling improved utilization through server consolidation, workload mobility through hardware independence, and efficient management of hardware resources.

The virtualization system is a key foundation for the cloud computing system. Cloud computing stitches compute resources together so that they appear as one large computer behind which the complexity is hidden. By coordinating, managing, and scheduling resources such as CPUs, network, storage, and firewalls in a consistent way across internal and external premises, we create a flexible cloud infrastructure platform. This platform includes security, automation and management, interoperability and openness, self-service, pooling, and dynamic resource allocation. In the view of cloud computing we are advocating, applications can run within an external provider, on internal IT premises, or in combination as a hybrid system—what matters is how they are run, not where they are run.

Cloud computing builds on virtualization to create a service-oriented computing model. This is done through the addition of resource abstractions and controls to create dynamic pools of resources that can be consumed through the network. Benefits include economies of scale, elastic resources, self-service provisioning, and cost transparency. Consumption of cloud resources is enforced through resource metering and pricing models that shape user behavior. Consumers benefit by leveraging allocation models such as pay-as-you-go to gain greater cost efficiency, a lower barrier to entry, and immediate access to infrastructure resources.

16.1 Introduction to Cloud Computing

Here is the National Institute of Standards and Technology (NIST) working definition:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

This cloud model promotes scalability and is composed of five essential characteristics, three delivery models, and four deployment models.

The five essential characteristics are:

1. On-Demand Self-Service
2. Broad Network Access
3. Resource Pooling
4. Rapid Elasticity
5. Measured Service

The three delivery models are:

1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

The four deployment models are:

1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
4. Community Cloud

Cloud computing is the IT foundation for cloud services: it consists of the technologies that enable those services. The key attributes of cloud computing are shown in Table 16.1; the key attributes of cloud services are described in Table 16.2.

TABLE 16.1 Key Attributes of Cloud Computing

• Offsite, third-party provider: In the cloud execution model, it is assumed that a third party provides services. There is also a possibility of in-house cloud service delivery.
• Accessed via the Internet: Services are accessed via standards-based, universal network access. This can also include security and quality-of-service options.
• Minimal or no IT skills required: There is a simplified specification of requirements.
• Provisioning: Includes self-service requesting, near real-time deployment, and dynamic and fine-grained scaling.
• Pricing: Pricing is usage-based and fine-grained.
• User interface: User interfaces include browsers for a variety of devices and with rich capabilities.
• System interface: System interfaces are based on Web services APIs, providing a standard framework for accessing and integrating among cloud services.
• Shared resources: Resources are shared among cloud services users; however, via configuration options with the service, there is the ability to customize.


TABLE 16.2 Key Attributes of Cloud Services

• Infrastructure systems: Include servers, storage, and networks that can scale as per user demand.
• Application software: Provides Web-based user interfaces, Web services APIs, and a rich variety of configurations.
• Application development and deployment software: Supports the development and integration of cloud application software.
• System and application management software: Supports rapid self-service provisioning and configuration, and usage monitoring.
• IP networks: Connect end users to the cloud and the infrastructure components.

16.2 Cloud Characteristics

Large organizations such as IBM, Dell, Microsoft, Google, Amazon, and Sun have already started to take strong positions with respect to cloud computing provision. They have put so much weight behind this latest paradigm that its success is virtually guaranteed. The essential characteristics of a cloud environment include:

• On-demand self-service: This enables users to consume computing capabilities (e.g., applications, server time, network storage) as and when required.
• Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
• Multi-tenancy and resource pooling: This allows combining heterogeneous computing resources (e.g., hardware, software, processing, servers, network bandwidth) to serve multiple consumers—such resources being dynamically assigned.
• Rapid elasticity and scalability: This allows functionalities and resources to be rapidly, elastically, and automatically scaled out or in, as demand rises or drops.
• Measured provision: This controls and optimizes resource allocation and provides a metering capability to determine usage for billing purposes, allowing easy monitoring, controlling, and reporting.

16.3 Cloud Delivery Models

Cloud computing is not a completely new concept for the development and operation of web applications. It allows for cost-effective development of scalable web portals on highly available and fail-safe infrastructures. In a cloud computing system, we have to address different fundamentals like virtualization, scalability, interoperability, quality of service, failover mechanisms, and the cloud deployment models (private, public, hybrid) within the context of the taxonomy. The taxonomy of the cloud includes the different participants involved in the cloud, along with the attributes and technologies that are coupled to address their needs, and the different types of services like "XaaS" offerings, where X is software, hardware, platform, infrastructure, data, or business. The portfolio of services for the three cloud delivery models is shown in Figure 16.1.

FIGURE 16.1 Portfolio of services for the three cloud delivery models.


16.3.1 Infrastructure as a Service (IaaS)

The IaaS model is about providing compute and storage resources as a service. According to NIST, IaaS is defined as follows:

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

The user of IaaS has sole ownership of the hardware infrastructure allotted (which may be a virtual machine), can use it as if it were their own machine on a remote network, and has control over the operating system and software on it. IaaS is illustrated in Figure 16.2. The IaaS provider has control over the actual hardware, and the cloud user can request allocation of virtual resources, which are then allocated by the IaaS provider on the hardware (generally without any manual intervention). The cloud user can manage the virtual resources as desired, including installing any desired OS, software, and applications. Therefore, IaaS is well suited for users who want complete control over the software stack that they run; for example, the user may be using heterogeneous software platforms from different vendors and may not want to switch to a PaaS platform where only selected middleware is available. Well-known IaaS platforms include Amazon EC2, Rackspace, and RightScale. Additionally, traditional vendors such as HP, IBM, and Microsoft offer solutions that can be used to build private IaaS.

FIGURE 16.2 The cloud reference model.
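To give a flavor of the self-service, API-driven provisioning that IaaS implies, the sketch below requests a virtual machine from a purely hypothetical IaaS REST endpoint; the URL, payload fields, and returned attributes are invented for illustration and would differ for a real provider such as Amazon EC2, whose own SDKs would be used in practice.

import json
import urllib.request

IAAS_API = "https://iaas.example.com/v1/servers"   # hypothetical endpoint
API_TOKEN = "replace-with-real-credentials"

def provision_server(name: str, cpus: int, ram_gb: int, image: str) -> dict:
    """Ask the (hypothetical) IaaS API for a new virtual machine."""
    payload = json.dumps({
        "name": name,
        "flavor": {"vcpus": cpus, "ram_gb": ram_gb},  # resource tiering requested by the consumer
        "image": image,                               # OS image chosen by the consumer
    }).encode("utf-8")
    request = urllib.request.Request(
        IAAS_API,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)   # e.g. {"id": "...", "status": "BUILDING", ...}

# server = provision_server("web-01", cpus=2, ram_gb=8, image="ubuntu-22.04")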


16.3.2 Platform as a Service (PaaS)

The PaaS model provides a system stack or platform for application deployment as a service. NIST defines PaaS as follows:

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Figure 16.2 shows the PaaS model diagrammatically. The hardware, as well as any mapping of hardware to virtual resources such as virtual servers, is controlled by the PaaS provider. Additionally, the PaaS provider supports selected middleware, such as a database and a web application server, as shown in the figure. The cloud user can configure and build on top of this middleware, for example, by defining a new table in a database. The PaaS provider maps this new table onto its cloud infrastructure. Subsequently, the cloud user can manage the database as needed and develop applications on top of this database. PaaS platforms are well suited to those cloud users who find that the middleware they are using matches the middleware provided by one of the PaaS vendors, which enables them to focus on the application. Windows Azure, Google App Engine, and Hadoop are some well-known PaaS platforms. As in the case of IaaS, traditional vendors such as HP, IBM, and Microsoft offer solutions that can be used to build private PaaS.

16.3.3 Software as a Service (SaaS)

SaaS is about providing the complete application as a service. SaaS has been defined by NIST as follows:

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Any application that can be accessed using a web browser can be considered SaaS. These points are illustrated in Figure 16.2. The SaaS provider controls all the layers apart from the application. Users who log in to the SaaS service can both use the application and configure it for their use. For example, users can use Salesforce.com to store their customer data. They can also configure the application, for example, requesting additional space for storage or adding additional fields to the customer data that is already being used. When configuration settings are changed, the SaaS infrastructure performs any management tasks needed (such as allocation of additional storage) to support the changed configuration. SaaS platforms are targeted towards users who want to use the application without any software installation (in fact, the motto of Salesforce.com, one of the prominent SaaS vendors, is "No Software"). However, for advanced usage, some small amount of programming or scripting may be necessary to customize the application for usage by the business (for example, adding additional fields to customer data).
In fact, SaaS platforms like Salesforce.com allow many of these customizations to be performed without programming, but by specifying business rules that are simple enough for non-programmers to implement. Prominent SaaS applications include Salesforce.com for CRM, Google Docs for document sharing, and web email systems like Gmail, Hotmail, and Yahoo! Mail. IT vendors such as HP and IBM also sell systems that can be configured to set up SaaS in a private cloud; SAP, for example, can be used as an SaaS offering inside an enterprise.

Table 16.3 presents a comparison of the three cloud delivery models.

TABLE 16.3 Comparison of Cloud Delivery Models

Service Category
  IaaS: VM Rental, Online Storage
  PaaS: Online Operating Environment, Online Database, Online Message Queue
  SaaS: Application and Software Rental

Service Customization
  IaaS: Server Template
  PaaS: Logic Resource Template
  SaaS: Application Template

Service Provisioning
  IaaS: Automation
  PaaS: Automation
  SaaS: Automation

Service Accessing and Using
  IaaS: Remote Console, Web 2.0
  PaaS: Online Development and Debugging, Integration of Offline Development Tools and Cloud
  SaaS: Web 2.0

Service Monitoring
  IaaS: Physical Resource Monitoring
  PaaS: Logic Resource Monitoring
  SaaS: Application Monitoring

Service Level Management
  IaaS: Dynamic Orchestration of Physical Resources
  PaaS: Dynamic Orchestration of Logic Resources
  SaaS: Dynamic Orchestration of Application

Service Resource Optimization
  IaaS: Network Virtualization, Server Virtualization, Storage Virtualization
  PaaS: Large-Scale Distributed File System, Database, Middleware, etc.
  SaaS: Multi-Tenancy

Service Measurement
  IaaS: Physical Resource Metering
  PaaS: Logic Resource Usage Metering
  SaaS: Business Resource Usage Metering

Service Integration and Combination
  IaaS: Load Balance
  PaaS: SOA
  SaaS: SOA, Mashup

Service Security
  IaaS: Storage Encryption and Isolation, VM Isolation, VLAN, SSL/SSH
  PaaS: Data Isolation, Operating Environment Isolation, SSL
  SaaS: Data Isolation, Operating Environment Isolation, SSL, Web Authentication and Authorization
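As a compact illustration of how the three delivery models divide responsibility between provider and consumer (the essence of Figure 16.2 and Table 16.3), the following sketch encodes one plausible reading of that split as a small lookup; the layer names are simplifications rather than a normative list.

# Who manages each layer under the three delivery models (simplified assumption).
RESPONSIBILITY = {
    "IaaS": {"hardware": "provider", "virtualization": "provider",
             "operating_system": "consumer", "middleware": "consumer", "application": "consumer"},
    "PaaS": {"hardware": "provider", "virtualization": "provider",
             "operating_system": "provider", "middleware": "provider", "application": "consumer"},
    "SaaS": {"hardware": "provider", "virtualization": "provider",
             "operating_system": "provider", "middleware": "provider", "application": "provider"},
}

def consumer_managed(model: str):
    """Return the layers the cloud consumer still has to manage under a given model."""
    return [layer for layer, owner in RESPONSIBILITY[model].items() if owner == "consumer"]

print(consumer_managed("IaaS"))   # ['operating_system', 'middleware', 'application']
print(consumer_managed("SaaS"))   # [] -- everything is run by the provider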

16.4 Cloud Deployment Models

16.4.1 Private Clouds

A private cloud serves a particular organization exclusively. The cloud resources may be located on or off premise and could be owned and managed by the consuming organization or a third party. This may be, for example, an organization that has decided to adopt the infrastructure cost-saving potential of a virtualized architecture on top of existing hardware. The organization feels unable to host its data remotely, so it looks to the cloud to improve resource utilization and automate the management of such resources. Alternatively, an organization may wish to extend its current IT capability by using an exclusive, private cloud that is remotely accessible and provisioned by a third party. Such an organization may feel uncomfortable with its data being held alongside a potential competitor's data in the multi-tenancy model.


16.4.2 Public Clouds

A public cloud, as its name implies, is available to the general public and is managed by an organization. The organization may be a business (such as Google), an academic institution, or a governmental department. The cloud computing provider owns and manages the cloud infrastructure. The existence of many different consumers within one cloud architecture is referred to as a multi-tenancy model.

16.4.3 Hybrid Clouds

Hybrid clouds are formed when more than one type of cloud infrastructure is utilized for a particular situation. For instance, an organization may utilize a public cloud for some aspect of its business, yet also have a private cloud on premise for data that is sensitive. As organizations start to exploit cloud service models, it is increasingly likely that a hybrid model will be adopted as the specific characteristics of each of the different service models are harnessed. The key enabler here is the open standards by which data and applications are implemented, since if portability does not exist, then vendor lock-in to a particular cloud computing provider becomes likely. Lack of data and application portability has been a major hindrance to the widespread uptake of grid computing, and this is one aspect of cloud computing that can facilitate much more flexible, abstract architectures.

16.4.4 Community Clouds

Community clouds are a model of cloud computing where the resources exist for a number of parties who have a shared interest or cause. This model is very similar to the single-purpose grids that collaborating research and academic organizations have created to conduct large-scale scientific experiments (e-science). The cloud is owned and managed by one or more of the collaborators in the community, and it may exist either on or off premise.

16.5 Cloud Benefits

Cloud computing is an attractive paradigm that promises numerous benefits, inherent in the characteristics mentioned above. These include:

• Optimization of a company's capital investment by reducing costs of purchasing hardware and software, resulting in a much lower total cost of ownership and, ultimately, a whole new way of looking at the economics of scale and operational IT
• Simplicity and agility of operations and use, requiring minimal time and effort to provision additional resources
• Enabling an enterprise to tap into a talent pool, as and when needed, for a fraction of the cost of hiring or retaining staff, thus enabling the key personnel in the organization to focus more on producing value and innovation for the business
• Enabling small organizations to access the IT services and resources that would otherwise be out of their reach, thus placing large organizations and small businesses on a level playing field


• Providing novel and complex computing architectures and innovation potential
• Providing mechanisms for disaster recovery and business continuity through a variety of fully outsourced ICT services and resources

Cloud computing can be massively scalable, and there are built-in benefits of efficiency, availability, and high utilization that, in turn, result in reduced capital expenditure and reduced operational costs. It permits seamless sharing and collaboration through virtualization. In general, cloud computing promises cost savings, agility, innovation, flexibility, and simplicity. The offerings from vendors, in terms of services of an application, platform, and infrastructure nature, are continuing to mature, and the cost savings are becoming particularly attractive in the current competitive economic climate. Another broader aim of cloud technology is to make supercomputing available to enterprises in particular and the public in general.

The major benefits of the cloud paradigm can be distilled to its inherent flexibility and resiliency, the potential for reducing costs, the availability of very large amounts of centralized data storage, the means to rapidly deploy computing resources, and scalability.

1. Flexibility and resiliency: A major benefit of cloud computing is flexibility, though cloud providers cannot provide infinite configuration and provisioning flexibility and will seek to offer structured alternatives. They might offer a choice among a number of computing and storage resource configurations at different capabilities and costs, and the cloud customer will have to adjust his or her requirements to fit one of those models. The flexibility offered by cloud computing can be in terms of:
• Automated provisioning of new services and technologies
• Acquiring increased resources on an as-needed basis
• Ability to focus on innovation instead of maintenance details
• Device independence
• Freedom from having to install software patches
• Freedom from concerns about updating servers
Resiliency is achieved through the availability of multiple redundant resources and locations. As autonomic computing becomes more mature, self-management and self-healing mechanisms can ensure the increased reliability and robustness of cloud resources. Also, disaster recovery and business continuity planning are inherent in using the provider's cloud computing platforms.

2. Reduced costs: Cloud computing offers reductions in system administration, provisioning expenses, energy costs, software licensing fees, and hardware costs. The cloud paradigm, in general, is a basis for cost savings because capability and resources can be paid for incrementally without the need for large investments in computing infrastructure. This is especially true for adding storage for large database applications. Therefore, capital costs are reduced and replaced by manageable, scalable operating expenses. There might be some instances, particularly for long-term, stable computing configuration usage, where cloud computation might not have a cost advantage over using one's internal resources or directly leasing equipment. For example, if the volume of data storage and computational resources required is essentially constant and there is no need for rapid provisioning and flexibility, an organization's local computational capabilities might be more cost effective than using a cloud. Resources are used more efficiently in cloud computing, resulting in substantial support and energy cost savings. The need for highly trained and expensive IT personnel is also reduced; client organizational support and maintenance costs are reduced dramatically because these expenses are transferred to the cloud provider, including 24/7 support that in turn is spread over a much larger base of multiple tenants or clients. Another reason for migrating to the cloud is the drastic reduction in the cost of power and energy consumption.

3. Centralized data storage: Many data centers are an ensemble of legacy applications, operating systems, hardware, and software, and are a support and maintenance nightmare. This situation requires more specialized maintenance personnel, increases costs because of lack of standardization, and carries a higher risk of crashes. The cloud not only offers larger amounts of data storage resources than are normally available in local, corporate computing systems, it also enables the resources used to be decreased or increased per requirements—with the corresponding adjustments in operating cost. This centralization of storage infrastructure results in cost efficiencies in utilities, real estate, and trained personnel. Also, data protection mechanisms are much easier to implement and monitor in a centralized system than on large numbers of computing platforms that might be widely distributed geographically in different parts of an organization.

4. Reduced time to deployment: In a competitive environment where rapid evaluation, development, and deployment of new approaches, processes, solutions, or offerings is critical, the cloud offers the means to use powerful computational or large storage resources on short notice and within a short time frame, without requiring sizeable initial investments of finances, effort, or time (in hardware, software, and personnel). Thus, this rapid provisioning of the latest, technologically upgraded and enhanced resources can be accomplished at relatively small cost (with minimal cost of replacing discontinued resources) and offers the client access to advanced technologies that are constantly being acquired by the cloud provider. Improved delivery of services obtained by rapid cloud provisioning improves time to market and, hence, market growth.

5. Scalability: Cloud computing provides the means, within limits, for a client to rapidly provision computational resources to meet increases or decreases in demand. Cloud scalability provides for optimal resources, so that computing resources are provisioned per requirements seamlessly, ensuring maximum cost-benefit to the clients. Since the cloud provider operates on a multi-tenancy utility model, the client organization has to pay only for the resources it is using at any particular time.

Table 16.4 presents a comparison of cloud benefits for small and medium enterprises (SMEs) as well as large enterprises.


TABLE 16.4 Comparison of Cloud Benefits for Small and Medium Enterprises (SMEs) and Large Enterprises

Strategic flexibility
  SMEs: Critical in getting quickly to market. Cloud services allow startups to rapidly develop and deploy their products as long as they can use the open source or proprietary development platforms of the cloud providers. As the cloud market offerings mature, there will be many more platform options available.
  Large enterprises: Cloud services can provide large enterprises the same strategic benefits as startups for new initiatives, as long as legacy software integration and data issues are not significant. With appropriate software development talent, operating units can rapidly develop and market test new innovations without putting additional strain on IT budgets, staff, or hardware. Longstanding internal IT management policies and standards may have to be re-examined and modified to allow this to happen.

Cost reduction
  SMEs: Pay-as-you-go pricing may be critical if operating capital or venture capital funding is not available. With cloud services, growth can more easily be funded through operating revenues, and there may be tax advantages to converting what would have been longer-term depreciation expenses to fully loaded current expenses.
  Large enterprises: Cloud services provide the same cost benefits for isolated and exploratory initiatives. Instant availability and low setup costs for new development and deployment environments allow operating units to explore new initiatives quickly at low cost without increasing internal IT hardware or staff overheads. For high data traffic volumes, it may become more economical to bring the operations in-house. Because maintaining legacy hardware and software absorbs the majority of IT costs, large corporations may see significant cost savings by selectively moving noncritical applications and processes to external clouds.

Software availability
  SMEs: Software as a Service (SaaS) and Platform as a Service (PaaS) provide necessary software and infrastructure at low entry cost. Limited online version functionality may be more than offset by dramatic cost savings.
  Large enterprises: Existing volume licensing of legacy desktop and process-integrated enterprise software may make the status quo more attractive if end-user retraining, process modifications, and other change costs are high. Legacy desktop software may have more features and functionality than is currently available in SaaS versions. But the legacy software licensing costs may dramatically increase if it is hosted in a private cloud environment.

Scalability
  SMEs: One of the most dramatic benefits for SMEs and startups. If successful, applications designed to autoscale can scale endlessly in a cloud environment to meet the growing demand.
  Large enterprises: Large enterprises with significant hardware, legacy software, and staff resources can benefit from cloud scalability by identifying CPU-intensive processes such as image processing, PDF conversion, and video encoding that would benefit from the massively scalable parallel processing available in clouds. While this may require modifying legacy applications, the speed benefits and reduced local hardware requirements may far outweigh the software modification costs.

Skills and staffing
  SMEs: While the proper design of cloud applications requires high-level software development skills, their maintenance and support is vastly simplified in the cloud environment. Cloud providers handle all maintenance and support issues for both hardware and platform software at costs that are either bundled into the usage fees or available in various configurations as premium services. This allows significant cost savings through reduced staff overheads.
  Large enterprises: Because the majority of enterprise IT costs goes to support legacy applications and hardware, the greatest staffing benefits will be seen in new cloud initiatives that do not add to the staffing burden. Longer term, as the enterprise begins to analyze cloud technology potential for its legacy operations, retraining of existing staff or bringing in new staff with cloud technology skills will be necessary to take advantage of the new paradigm. Thus, some investment will have to be made before large-scale or long-term benefits will be seen. The staffing investment may be significant if the enterprise is attempting to create a private cloud to handle dynamic resource allocation and scalability across its operating units. In this case, it may face significant staff investment as well as the required hardware, software, and network investment to implement and maintain the private cloud.

Energy efficiency
  SMEs: Because SMEs can dramatically reduce or eliminate local servers, cloud computing provides direct utility cost savings as well as environmental benefits.
  Large enterprises: Even very large enterprise IT data centers cannot achieve the energy efficiencies found in the massive facilities of public cloud providers, even with aggressive high-density server and virtualization strategies. In periods of economic downturn, green initiatives typically cannot compete for scarce capital funds. By employing a mixed strategy that off-loads applications and processes to external clouds when feasible, IT managers are able to minimize their energy costs and carbon footprint.

System redundancy and data backup
  SMEs: This is a large benefit for SMEs, the majority of which are poorly prepared for hardware failures and disaster recovery. Cloud storage can reduce downside risks at low cost.
  Large enterprises: Because cloud technologies distribute both data storage and data processing across a potentially large number of servers, the likelihood of data loss due to hardware failure is much lower than in most large private data centers. Cloud data storage can provide a cost-effective supplemental backup strategy.

16.6 Cloud Challenges

There are multiple technical challenges that any cloud platform or application needs to address in order to truly provide a utility-like computing infrastructure. The three key challenges are described below:

• Scalability: The ability to scale to millions of clients simultaneously accessing the cloud service.
• Multi-tenancy: The ability to provide isolation as well as good performance to multiple tenants using the cloud infrastructure.
• Availability: The ability to ensure that the infrastructure as well as the applications are highly available regardless of hardware and software faults.


16.6.1 Scalability

On-demand scaling (and de-scaling) of computation is one of the critical needs of any cloud computing platform. Compute scaling can be done at either the infrastructure level or the platform level. At the infrastructure level, it is about increasing the capacity of the compute power, while at the platform level, the techniques are mainly about intelligently managing the different client requests in a manner that best utilizes the compute infrastructure without requiring the clients to do anything special during peaks in demand.

1. Scale up or vertical scaling: This is about adding more resources to a single node or a single system to improve performance—such as the addition of CPUs, the use of multi-core systems instead of single-core ones, or the addition of memory. In order to support on-demand scaling of the cloud infrastructure, the system should be able to increase its compute power dynamically without impacting the platform or application executing over it. Unless a system is virtualized, it is generally not possible to increase the capacity of a compute system dynamically without bringing down the system. The more powerful compute resource can then be effectively used by a virtualization layer to support more processes or more virtual machines—enabling scaling to many more clients. The advantage of scale-up systems is that the programming paradigm is simpler, since it does not involve distributed programming, unlike scale-out systems.

2. Scale out or horizontal scaling: This, on the other hand, is about expanding the compute resources by adding a new computer system or node to a distributed application. A web server (like Apache) is a typical example of such a system. In fact, given that most cloud applications are service-enabled, they need to be developed to expand on demand using scale-out techniques. The advantage of scale-out systems is that commodity hardware, such as disk and memory, can be used for delivering high performance. A scale-out system, such as interconnected compute nodes forming a cluster, can be more powerful than a traditional supercomputer, especially with faster interconnect technologies. Scale-out systems are essentially distributed systems with shared high-performance disk storage used for common data. Unlike scale-up systems, in order to leverage the full power of scale-out systems, the programmer must make an effort to design applications differently. Many design patterns exist for applications designed for scale-out systems, such as MapReduce. Scale-out solutions offer much better performance and price/performance than scale-up systems for workloads such as search, because a search application essentially consists of independent parallel searches, which can easily be deployed on multiple processors. Scale-out techniques can be employed at the application level as well. For example, a typical web search service is scalable because two client query requests can be processed as completely parallel threads. The challenge in scale-out systems, however, is the complex management of the infrastructure, especially when the infrastructure caters to dynamic scaling of resources. Additionally, as noted, applications that do not consist of independent computations are difficult to scale out.
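The point that scale-out works best when the work decomposes into independent computations can be shown with a small sketch. Under the assumption that each query can be served independently (as in the web search example above), the following Python fragment fans a batch of requests out across a pool of worker processes; the handle_query function and the query list are placeholders.

from concurrent.futures import ProcessPoolExecutor

def handle_query(query: str) -> str:
    # Placeholder for an independent, CPU-bound unit of work (e.g., one search).
    return f"results for {query!r}"

if __name__ == "__main__":
    queries = [f"query-{i}" for i in range(100)]   # illustrative batch of requests
    # Each query is independent, so adding workers (or machines) scales the batch
    # almost linearly; work that shares mutable state would not scale this way.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(handle_query, queries):
            pass  # collect or return results to clients here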

The CAP theorem implies that consistency guarantees in large-scale distributed systems cannot be as strict as those in centralized systems. Specifically, it suggests that distributed systems may need to provide BASE guarantees instead of the ACID guarantees provided by traditional database systems. The CAP theorem states that no distributed system can provide more than two of the following three guarantees: consistency, availability, and partitioning-tolerance. Here, consistency is defined as in databases; i.e., if multiple operations are performed on the same object (which is actually stored in a distributed system), the results of the operations appear as if the operations were carried out in some definite order on a single system. Availability is defined to be satisfied if each operation on the system (e.g., a query) returns some result. The system provides partitioning-tolerance if the system is operational even when the network between two components of the system is down.

Since distributed systems can satisfy only two of the three properties due to the CAP theorem, there are three types of distributed systems. CA (consistent, available) systems provide consistency and availability, but cannot tolerate network partitions. An example of a CA system is a clustered database, where each node stores a subset of the data. Such a database cannot provide availability in the case of network partitioning, since queries to data in the partitioned nodes must fail. CA systems may not be useful for cloud computing, since partitions are likely to occur in medium to large networks (including the case where the latency is very high).

If there is no network partitioning, all servers are consistent, and the value seen by both clients is the correct value. However, if the network is partitioned, it is no longer possible to keep all the servers consistent in the face of updates. There are then two choices. One choice is to keep both servers up, and ignore the inconsistency. This leads to AP (available, partition-tolerant) systems where the system is always available, but may not return consistent results. The other possible choice is to bring one of the servers down, to avoid inconsistent values. This leads to CP (consistent, partition-tolerant) systems where the system always returns consistent results, but may be unavailable under partitioning—including the case where the latency is very high.

AP systems provide weak consistency. An important subclass of weakly consistent systems is those that provide eventual consistency. A system is defined as being eventually consistent if the system is guaranteed to reach a consistent state in a finite amount of time if there are no failures (e.g., network partitions) and no updates are made. The inconsistency window for such systems is the maximum amount of time that can elapse between the time that the update is made and the time that the update is guaranteed to be visible to all clients. If the inconsistency window is small compared to the update rate, then one method of dealing with stale data is to wait for a period greater than the inconsistency window, and then re-try the query.

16.6.2 Multi-Tenancy

This deals with the implementation of multi-tenancy with fine-grained resource sharing, while ensuring security and isolation between customers and also allowing customers to customize the database.

1. Ad Hoc/Custom instances: In this lowest level, each customer has their own custom version of the software.
This represents the situation currently in most enterprise data centers, where there are multiple instances and versions of the software. It was also typical of the earlier ASP (application service provider) model, which represented the first attempt to offer software for rent over the Internet.
The ASP model was similar to the SaaS model in that ASP customers (normally businesses), upon logging in to the ASP portal, would be able to rent the use of a software application like CRM. However, each customer would typically have their own instance of the software being supported. This would imply that each customer would have their own binaries, as well as their own dedicated processes for implementation of the application. This makes management extremely difficult, since each customer would need their own management support.

2. Configurable instances: In this level, all customers share the same version of the program. However, customization is possible through configuration and other options. Customization could include the ability to put the customer's logo on the screen, tailoring of workflows to the customer's processes, and so on. In this level, there are significant manageability savings over the previous level, since there is only one copy of the software that needs to be maintained and administered. For instance, upgrades are seamless and simple.

3. Configurable, multi-tenant efficient instances: Cloud systems at this level, in addition to sharing the same version of the program, also have only one instance of the program running, which is shared among all the customers. This leads to additional efficiency since there is only one running instance of the program.

4. Scalable, configurable, multi-tenant efficient instances: In addition to the attributes of the previous level, the software is also hosted on a cluster of computers, allowing the capacity of the system to scale almost limitlessly. Thus the number of customers can scale from a small number to a very large number, and the capacity used by each customer can range from small to very large. Performance bottlenecks and capacity limitations that may have been present in the earlier level are eliminated. For instance, in a cloud email service like Gmail or Yahoo! Mail, multiple users share the same physical email server as well as the same email server processes. Additionally, the emails from different users are stored in the same set of storage devices, and perhaps the same set of files. This results in management efficiencies; as a contrary example, if each user had a dedicated set of disks for storing email, the space allocation for each user would have to be managed separately. However, the drawback of shared storage devices is that the security requirements are greater; if the email server has vulnerabilities and can be hacked, it is possible for one user to access the emails of another.

16.6.3 Availability

Cloud services also need special techniques to reach high levels of availability. Mission-critical enterprise services generally have availability in the 99.999% range. This corresponds to a downtime of 5 min in an entire year! Clearly, sophisticated techniques are needed to reach such high levels of reliability. Even for non-mission-critical applications, downtime implies loss of revenue. It is therefore extremely important to ensure high availability for both mission-critical and non-mission-critical cloud services. There are basically two approaches to ensuring availability. The first approach is to ensure high availability for the underlying application upon which the cloud service is built.
This generally involves one of three techniques: infrastructure availability, ensuring redundancy in infrastructure, such as servers, so that new servers are always ready to replace failed servers; middleware availability, achieved with middleware redundancy; and application availability, achieved via application redundancy.
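
As a minimal illustration of the redundancy idea, the sketch below shows a client falling back to another replica when one fails. The replica names, the simulated failure rate, and the call itself are assumptions made for the example; real deployments would normally put a health-checked load balancer in front of the replicas rather than iterating on the client side.

# Minimal sketch: client-side failover across redundant replicas of a service.
# The replica list and fetch() are hypothetical; failures are simulated randomly.
import random

REPLICAS = ["server-a", "server-b", "server-c"]

def fetch(replica, request):
    """Simulate a call that may fail when the replica is down."""
    if random.random() < 0.3:          # pretend ~30% of calls hit a failed server
        raise ConnectionError(f"{replica} unavailable")
    return f"{replica} handled {request!r}"

def call_with_failover(request):
    last_error = None
    for replica in REPLICAS:           # redundancy: any healthy replica will do
        try:
            return fetch(replica, request)
        except ConnectionError as err:
            last_error = err           # fail over to the next replica
    raise RuntimeError("all replicas failed") from last_error

if __name__ == "__main__":
    print(call_with_failover("GET /status"))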

16.7 Cloud Technologies

Virtualization is widely used to deliver customizable computing environments on demand. Virtualization technology is one of the fundamental components of cloud computing. Virtualization allows the creation of a secure, customizable, and isolated execution environment for running applications without affecting other users' applications. The basis of this technology is the ability of a computer program—or a combination of software and hardware—to emulate an executing environment separate from the one that hosts such programs. For instance, we can run Windows OS on top of a virtual machine, which itself is running on Linux OS. Virtualization provides a great opportunity to build elastically scalable systems that can provision additional capability with minimum costs.

16.7.1 Virtualization

Resource virtualization is at the heart of most cloud architectures. The concept of virtualization allows an abstract, logical view on the physical resources and includes servers, data stores, networks, and software. The basic idea is to pool physical resources and manage them as a whole. Individual requests can then be served as required from these resource pools. For instance, it is possible to dynamically generate a certain platform for a specific application at the very moment when it is needed—instead of a real machine, a virtual machine is instituted.

Resource management grows increasingly complex as the scale of a system, the number of users, and the diversity of applications using the system increase. Resource management for a community of users with a wide range of applications running under different operating systems is a very difficult problem. Resource management becomes even more complex when resources are oversubscribed and users are uncooperative. In addition to external factors, resource management is affected by internal factors, such as the heterogeneity of the hardware and software systems, the ability to approximate the global state of the system and to redistribute the load, and the failure rates of different components. The traditional solution in a data center is to install standard operating systems on individual systems and rely on conventional OS techniques to ensure resource sharing, application protection, and performance isolation. In this setup, system administration, accounting, security, and resource management are very challenging for the providers of service; application development and performance optimization are equally challenging for the users.

The alternative is resource virtualization, a technique analyzed in this chapter. Virtualization is a basic tenet of cloud computing—it simplifies some of the resource management tasks. For instance, the state of a virtual machine (VM) running under a virtual machine monitor (VMM) can be saved and migrated to another server to balance the load. At the same time, virtualization allows users to operate in environments with which they are familiar rather than forcing them to work in idiosyncratic environments. Resource sharing in a virtual machine environment requires not only ample hardware support and, in particular, powerful processors but also architectural support for multilevel control.
Indeed, resources such as CPU cycles, memory, secondary storage, and I/O and communication bandwidth are shared among several virtual machines; for each VM, resources must be shared among multiple instances of an application.

There are two distinct approaches to virtualization:

• Full virtualization: Full virtualization is feasible when the hardware abstraction provided by the VMM is an exact replica of the physical hardware. In this case, any operating system running on the hardware will run without modifications under the VMM.
• Paravirtualization: Paravirtualization requires some modifications of the guest operating systems because the hardware abstraction provided by the VMM does not support all the functions the hardware does.

One of the primary reasons that companies have implemented virtualization is to improve the performance and efficiency of processing a diverse mix of workloads. Rather than assigning a dedicated set of physical resources to each set of tasks, a pooled set of virtual resources can be quickly allocated as needed across all workloads. Reliance on the pool of virtual resources allows companies to improve latency. This increase in service delivery speed and efficiency is a function of the distributed nature of virtualized environments and helps to improve overall time-to-realize value.

Using a distributed set of physical resources, such as servers, in a more flexible and efficient way delivers significant benefits in terms of cost savings and improvements in productivity:

1. Virtualization of physical resources (such as servers, storage, and networks) enables substantial improvement in the utilization of these resources.
2. Virtualization enables improved control over the usage and performance of the IT resources.
3. Virtualization provides a level of automation and standardization to optimize your computing environment.
4. Virtualization provides a foundation for cloud computing.

Virtualization increases the efficiency of the cloud and makes many complex systems easier to optimize. As a result, organizations have been able to achieve the performance and optimization needed to access data that were previously either unavailable or very hard to collect. Big data platforms are increasingly used as sources of enormous amounts of data about customer preferences, sentiment, and behaviors (see Chapter 8, Section 8.1.1 "What Is Big Data?"). Companies can integrate this information with internal sales and product data to gain insight into customer preferences to make more targeted and personalized offers.

16.7.1.1 Characteristics of Virtualized Environment

In a virtualized environment, there are three major components: guest, host, and virtualization layer. The guest represents the system component that interacts with the virtualization layer rather than with the host, as would normally happen. The host represents the original environment where the guest is supposed to be managed.
The virtualization layer is responsible for recreating the same or a different environment where the guest will operate.

Virtualization has three characteristics that support the scalability and operating efficiency required for big data environments:

1. Partitioning: In virtualization, many applications and operating systems are supported in a single physical system by partitioning (separating) the available resources.
2. Isolation: Each virtual machine is isolated from its host physical system and other virtualized machines. Because of this isolation, if one virtual instance crashes, the other virtual machines and the host system are not affected. In addition, data are not shared between one virtual instance and another.
3. Encapsulation: A virtual machine can be represented (and even stored) as a single file, so you can identify it easily based on the services it provides. For example, the file containing the encapsulated process could be a complete business service. This encapsulated virtual machine could be presented to an application as a complete entity. Thus, encapsulation could protect each application so that it does not interfere with another application.

Virtualization abstracts the underlying resources and simplifies their use, isolates users from one another, and supports replication, which, in turn, increases the elasticity of the system. Virtualization is a critical aspect of cloud computing, equally important to the providers and consumers of cloud services, and plays an important role in:

• System security, because it allows isolation of services running on the same hardware
• Portable performance and reliability, because it allows applications to migrate from one platform to another
• Development and management of services offered by a provider
• Performance isolation

1. Virtualization advantages: Virtualization—the process of using computer resources to imitate other resources—is valued for its capability to increase IT resource utilization, efficiency, and scalability. One obvious application of virtualization is server virtualization, which helps organizations to increase the utilization of physical servers and potentially save on infrastructure costs; companies are increasingly finding that virtualization is not limited only to servers but is valid and applicable across the entire IT infrastructure, including networks, storage, and software. For instance, one of the most important requirements for success with big data is having the right level of performance to support the analysis of large volumes and varied types of data. If a company only virtualizes the servers, it may experience bottlenecks from other infrastructure elements such as storage and networks; furthermore, it is less likely to achieve the latency and efficiency that it needs and more likely to expose the company to higher costs and increased security risks. As a result, a company's entire IT environment needs to be optimized at every layer, from the network to the databases, storage, and servers—virtualization adds efficiency at every layer of the IT infrastructure.

For a provider of IT services, the use of virtualization techniques has a number of advantages:

a. Resource usage: Physical servers rarely work to capacity because their operators usually allow for sufficient computing resources to cover peak usage. If virtual machines are used, any load requirement can be satisfied from the resource pool. In case the demand increases, it is possible to delay or even avoid the purchase of new capacity.
b. Management: It is possible to automate resource pool management. Virtual machines can be created and configured automatically as required.
c. Consolidation: Different application classes can be consolidated to run on a smaller number of physical components. Besides server or storage consolidation, it is also possible to include entire system landscapes, data and databases, networks, and desktops. Consolidation leads to increased efficiency and thus to cost reduction.
d. Energy consumption: Supplying large data centers with electric power has become increasingly difficult, and seen over its lifetime, the cost of energy required to operate a server is higher than its purchase price. Consolidation reduces the number of physical components. This, in turn, reduces the expenses for energy supply.
e. Less space required: Each and every square yard of data center space is scarce and expensive. With consolidation, the same performance can be obtained on a smaller footprint, and the costly expansion of an existing data center might possibly be avoided.
f. Emergency planning: It is possible to move virtual machines from one resource pool to another. This ensures better availability of the services and makes it easier to comply with service-level agreements. Hardware maintenance windows are inherently no longer required.

2. Virtualization benefits: Since the providers of cloud services tend to build very large resource centers, virtualization leads not only to a size advantage but also to a more favorable cost situation. This results in the following benefits for the customer:

a. Dynamic behavior: Any request can be satisfied just in time and without any delays. In case of bottlenecks, a virtual machine can draw on additional resources (such as storage space and I/O capabilities).
b. Availability: Services are highly available and can be used day and night without stop. In the event of technology upgrades, it is possible to hot-migrate applications because virtual machines can easily be moved to an up-to-date system.
c. Access: The virtualization layer isolates each virtual machine from the others and from the physical infrastructure. This way, virtual systems feature multi-tenant capabilities and, using a roles concept, it is possible to safely delegate management functionality to the customer. Customers can purchase IT capabilities from a self-service portal (customer emancipation).

The most direct benefit from virtualization is to ensure that MapReduce engines work better. Virtualization will result in better scale and performance for MapReduce.
Each one of the map and reduce tasks needs to be executed independently. If the MapReduce engine is parallelized and configured to run in a virtual environment, you can reduce management overhead and allow for expansions and contractions in the task workloads. MapReduce itself is inherently parallel and distributed. By encapsulating the MapReduce engine in a virtual container, you can run what you need whenever you need it. With virtualization, you can increase your utilization of the assets you have already paid for by turning them into generic pools of resources.

3. Virtualization challenges: There are side effects of virtualization, notably the performance penalty and the hardware costs. All privileged operations of a VM must be trapped and validated by the VMM, which ultimately controls system behavior; the increased overhead has a negative impact on performance. The cost of the hardware for a VM is higher than the cost for a system running a traditional operating system, because the physical hardware is shared among a set of guest operating systems and it is typically configured with faster and/or multicore processors, more memory, larger disks, and additional network interfaces compared with a system running a traditional operating system. A drawback of virtualization is the fact that the operation of the abstraction layer itself requires resources. Modern virtualization techniques, however, are so sophisticated that this overhead is not too significant: due to the particularly effective interaction of current multicore systems with virtualization technology, this performance loss plays only a minor role in today's systems. In view of possible savings and the quality benefits perceived by the customers, the use of virtualization pays off in nearly all cases.

16.7.2 Service-Oriented Computing

SOA introduces a flexible architectural style that provides an integration framework through which software architects can build applications using a collection of reusable functional units (services) with well-defined interfaces, which it combines in a logical flow. Applications are integrated at the interface (contract) and not at the implementation level. This allows greater flexibility, since applications are built to work with any implementation of a contract rather than taking advantage of a feature or idiosyncrasy of a particular system or implementation. For example, different service providers (of the same interface) can be dynamically chosen based on policies such as price, performance, or other QoS guarantees, current transaction volume, and so on. Another important characteristic of an SOA is that it allows many-to-many integration; that is, a variety of consumers across an enterprise can use and reuse applications in a variety of ways. This ability can dramatically reduce the cost and complexity of integrating incompatible applications and increase the ability of developers to quickly create, reconfigure, and repurpose applications as business needs arise. Benefits include reduced IT administration costs, ease of business process integration across organizational departments and with trading partners, and increased business adaptability.

SOA is a logical way of designing a software system to provide services to either end-user applications or to other services distributed in a network, via published and discoverable interfaces.
To achieve this, SOA reorganizes a portfolio of previously siloed software applications and support infrastructure in an organization into an interconnected collection of services, each of which is discoverable and accessible through standard interfaces and messaging protocols. Once all the elements of an SOA are in place, existing and future applications can access the SOA-based services as necessary.
This architectural approach is particularly applicable when multiple applications running on varied technologies and platforms need to communicate with each other.

The essential goal of an SOA is to enable general-purpose interoperability among existing technologies and extensibility to future purposes and architectures. SOA lowers interoperability hurdles by converting monolithic and static systems into modular and flexible components, which it represents as services that can be requested through an industry-standard protocol. Much of SOA's power and flexibility derives from its ability to leverage standards-based functional services, calling them when needed on an individual basis or aggregating them to create composite applications or multistage business processes. The building-block services might employ pre-existing components that are reused and can also be updated or replaced without affecting the functionality or integrity of other independent services. In this latter regard, the services model offers numerous advantages over large monolithic applications, in which modifications to some portions of the code can have unintended and unpredictable effects on the rest of the code to which it is tightly bundled.

Simply put, an SOA is an architectural style, inspired by the service-oriented approach to computing, for enabling extensible interoperability. SOA as a design philosophy is independent of any specific technology, for example, Web Services or J2EE. Although the concept of SOA is often discussed in conjunction with Web Services, these two are not synonymous. In fact, SOA can be implemented without the use of Web Services, for example, using Java, C#, or J2EE. However, Web Services should be seen as a primary example of a message delivery model that makes it much easier to deploy an SOA. Web Services standards are key to enabling interoperability as well as to key issues including quality of service (QoS), system semantics, security, management, and reliable messaging.

One problem when implementing an SOA at the enterprise level, or implementing a cross-enterprise collaborative SOA, is how to manage the SOA model, how to categorize the elements in this model, and how to organize them in such a way that the different stakeholders reviewing the model can understand it. Toward this end, it is often convenient to think of the SOA as comprising a number of distinct layers of abstraction that emphasize service interfaces, service realizations, and compositions of services into higher-level business processes. Each of these describes a logical separation of concerns by defining a set of common enterprise elements; each layer uses the functionality of the layer below it, adding new functionality, to accomplish its objective. The logical flow employed in the layered SOA development model may focus on a top-down development approach, which emphasizes how business processes are decomposed into a collection of business services and how these services are implemented in terms of pre-existing enterprise assets.

16.7.2.1 Layers in SOA

SOA can be considered to comprise the following six distinct layers:

1. Domains: A business domain is a functional domain comprising a set of current and future business processes that share common capabilities and functionality and can collaborate with each other to accomplish a higher-level business objective, such as loans, insurance, banking, finance, manufacturing, marketing, and human resources.
2. Business processes: This layer is formed by subdividing a business domain, such as distribution, into a small number of core business processes, such as purchasing, order management, and inventory, which are made entirely standard for use throughout the enterprise. Having a large number of fine-grained processes leads to tremendous overhead and inefficiency; hence, having a small collection of coarser-grained processes that are usable in multiple scenarios is a better option.
3. Business services: For any process, the right approach is to subdivide it into increasingly smaller subprocesses until the process cannot be divided any further. The resulting subprocesses then become candidate indivisible (singular) business services for implementation. Business services automate generic business tasks that provide value to an enterprise and are part of a standard business process. The more processes that an enterprise decomposes in this way, the more commonality across these subprocesses can be achieved. In this way, an enterprise has the chance of building an appropriate set of reusable business services. This layer relies on the orchestration interface of a collection of business-aligned services to realize reconfigurable end-to-end business processes. Individual services or collections of services that exhibit various levels of granularity are combined and orchestrated to produce new composite services that not only introduce new levels of reuse but also allow the reconfiguration of business processes. The interfaces get exported as service descriptions in this layer using a service description language, such as WSDL. The service description can be implemented by a number of service providers, each offering various choices of qualities of service based on technical requirements in the areas of availability, performance, scalability, and security. During the exercise of defining business services, it is also important to take existing utility logic, ingrained in code, and expose it as services, which themselves become candidate services that specify not the overall business process but rather the mechanism for implementing the process. This exercise should thus yield two categories of services: business functionality services that are reusable across multiple processes, and a collection of fine-grained utility (or commodity) services, which provide value to and are shared by business services across the organization. Examples of utility services include services implementing calculations, algorithms, and directory management.

4. Infrastructure services: Infrastructure services are subdivided into technical utility services, access services, management and monitoring services, and interaction services; these are not specific to a single line of business but are reusable across multiple lines of business. They also include mechanisms that seamlessly interlink services that span enterprises. This can, for example, include the policies, constraints, and specific industry message and interchange standards (such as EDIFACT, SWIFT, xCBL, ebXML BPSS, or RosettaNet) that an enterprise, say within a particular vertical marketplace, must conform to in order to work with other similar processes. Access services are dedicated to transforming data and integrating legacy applications and functions into the SOA environment. This includes the wrapping and service enablement of legacy functions.

5. Service realizations: This layer is the component realization layer that uses components for implementing services out of pre-existing applications and systems found in the operational systems layer. Components comprise autonomous units of software that may provide a useful service or a set of functionality to a client (business service) and have meaning in isolation from other components with which they interoperate.
6. Operational systems: This layer is used by components to implement business services and processes. This layer contains existing enterprise systems or applications, including customer relationship management (CRM) and ERP systems and applications, legacy applications, database systems and applications, and other packaged applications. These systems are usually known as enterprise information systems.
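
To make the contract-versus-implementation point from Section 16.7.2 concrete, the sketch below shows two interchangeable providers implementing one service interface and a policy-based selection between them. The service, the providers, their rates, and the price-based policy are all hypothetical illustrations, not part of any particular SOA product or standard.

# Minimal sketch: integrating at the service contract, not the implementation.
# Both providers and the price-based selection policy are hypothetical.
from abc import ABC, abstractmethod

class CurrencyConversionService(ABC):
    """Service contract: consumers depend only on this interface."""
    price_per_call: float

    @abstractmethod
    def convert(self, amount: float, source: str, target: str) -> float: ...

class ProviderA(CurrencyConversionService):
    price_per_call = 0.010
    def convert(self, amount, source, target):
        return amount * 1.10           # stubbed exchange rate

class ProviderB(CurrencyConversionService):
    price_per_call = 0.008
    def convert(self, amount, source, target):
        return amount * 1.09           # stubbed exchange rate

def choose_provider(providers):
    # Policy-based selection: here, cheapest per call; could be QoS, latency, etc.
    return min(providers, key=lambda p: p.price_per_call)

if __name__ == "__main__":
    service = choose_provider([ProviderA(), ProviderB()])
    print(type(service).__name__, service.convert(100.0, "USD", "EUR"))

Because the consumer codes only against the contract, a provider can be replaced or chosen dynamically without touching the consuming application, which is the flexibility the layered SOA model is meant to deliver.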

16.8 Summary

This chapter proposed cloud computing as the digital transformation of primarily the scalability aspects of enterprise architecture (see Chapter 8). It introduced the concept of cloud computing, presented its definition and the cloud delivery and deployment models, and highlighted its benefits for enterprises. The last part of the chapter identified the primary prerequisites for cloud computing, namely virtualization and service-oriented computing.

The virtualization system is a key foundation for the cloud computing system. We stitch together compute resources so as to appear as one large computer behind which the complexity is hidden. By coordinating, managing, and scheduling resources such as CPUs, network, storage, and firewalls in a consistent way across internal and external premises, we create a flexible cloud infrastructure platform. This platform includes security, automation and management, interoperability and openness, self-service, pooling, and dynamic resource allocation.

Cloud computing builds on virtualization to create a service-oriented computing model. This is done through the addition of resource abstractions and controls to create dynamic pools of resources that can be consumed through the network. Benefits include economies of scale, elastic resources, self-service provisioning, and cost transparency. Consumption of cloud resources is enforced through resource metering and pricing models that shape user behavior. Consumers benefit by leveraging allocation models such as pay-as-you-go to gain greater cost efficiency, a lower barrier to entry, and immediate access to infrastructure resources.

17 Big Data Computing

As stated right at the beginning of Chapter 9, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of availability aspects, primarily through big data computing. This chapter explains big data computing.

The rapid growth of the Internet and World Wide Web has led to vast amounts of information available online. In addition, business and government organizations create large amounts of both structured and unstructured information, which needs to be processed, analyzed, and linked. It was estimated that the amount of information stored in digital form in 2007 was 281 exabytes, with an overall compound growth rate of 57% and with information in organizations growing at an even faster rate. It is also estimated that 95% of all current information exists in unstructured form, with increased data processing requirements compared to structured information. The storing, managing, accessing, and processing of this vast amount of data represents a fundamental need and an immense challenge in order to satisfy the need to search, analyze, mine, and visualize this data as information.

The web is believed to have well over a trillion web pages, of which at least 50 billion have been catalogued and indexed by search engines such as Google, making them searchable by all of us. This massive web content spans well over 100 million domains (i.e., locations where we point our browsers, such as http://www.wikipedia.org). These are themselves growing at a rate of more than 20,000 net domain additions daily. Facebook and Twitter each have over 900 million users, who between them generate over 300 million posts a day (roughly 250 million tweets and over 60 million Facebook updates). Added to this are over 10,000 credit-card payments made per second, well over 30 billion point-of-sale transactions per year (via dial-up POS devices), and finally over 6 billion mobile phones, of which almost 1 billion are smartphones, many of them GPS-enabled, which access the Internet for e-commerce, tweets, and posting updates on Facebook. Finally, and last but not least, there are the images and videos on YouTube and other sites, which by themselves outstrip all these put together in terms of the sheer volume of data they represent.

Big data batch computing is a model of storing and then computing, such as the MapReduce framework open-sourced by the Hadoop implementation. Big data stream computing (BDSC) is a model of straight-through computing, such as Storm and S4, which does for stream computing what Hadoop does for batch computing. For applications such as sensor networks, network monitoring, microblogging, web exploring, social networking, and so on, a big data input stream has the characteristics of high speed, real time, and large volume. These data sources often take the form of continuous data streams, and timely analysis of such a data stream is very important, as the life cycle of most of this data is very short. The volume of data is so high that there is not enough space for storage, and not all data need to be stored. Thus, the store-and-then-compute batch model does not fit at all for such requirements.
Increasingly, data in big data environments is beginning to have the feature of streams, and BDSC has emerged as a panacea for computing data online within real-time constraints. Stream computing is a computing paradigm that reads data from collections of software or hardware sensors in stream form and computes such continuous data streams, where the output results are also in the form of a real-time data stream. Thus:

1. A data stream is a sequence of data sets.
2. A continuous stream is an infinite sequence of data sets.
3. Parallel streams have more than one stream to be processed at the same time.

Stream computing is one effective way to support big data by providing extremely low-latency velocities with massively parallel processing architectures, and it is becoming the fastest and most efficient way to obtain useful knowledge from big data, allowing organizations to react quickly when problems appear or to predict new trends in the near future (see the author's forthcoming book High-Performance Computing: Paradigms and Programming).
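
A minimal sketch of the straight-through model follows: events are consumed as they arrive and a rolling result is emitted, without first storing the whole stream. The simulated sensor readings and the window size are assumptions made purely for illustration.

# Minimal sketch of stream (straight-through) computing: consume events as they
# arrive and emit rolling results without storing the whole stream first.
# The event source is simulated; real systems read from sensors, logs, or queues.
from collections import deque

def sensor_stream():
    """Simulated unbounded data stream (truncated here for the example)."""
    for reading in [21.0, 21.5, 22.1, 35.0, 22.3, 22.4]:
        yield reading

def rolling_average(stream, window_size=3):
    window = deque(maxlen=window_size)      # only the window is kept in memory
    for value in stream:
        window.append(value)
        yield sum(window) / len(window)     # the output is itself a stream

if __name__ == "__main__":
    for avg in rolling_average(sensor_stream()):
        print(f"rolling average: {avg:.2f}")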

17.1 Introduction to Big Data

This deluge of data, along with the emerging techniques and technologies used to handle it, is commonly referred to today as "big data." Such big data is both valuable and challenging because of its sheer volume—so much so that the volume of data created in the 5 years from 2010 to 2015 will far exceed all the data generated in human history. The web, where all this data is being produced and resides, consists of millions of servers, with data storage soon to be measured in zettabytes.

Cloud computing provides the opportunity for organizations with limited internal resources to implement large-scale big data computing applications in a cost-effective manner. The fundamental challenges of big data computing are managing and processing exponentially growing data volumes, significantly reducing associated data analysis cycles to support practical, timely applications, and developing new algorithms that can scale to search and process massive amounts of data. The answer to these challenges is a scalable, integrated computer systems hardware and software architecture designed for parallel processing of big data computing applications. This chapter explores the challenges of big data computing.

17.1.1 What Is Big Data?

Big data can be defined as volumes of data available in varying degrees of complexity, generated at different velocities and with varying degrees of ambiguity, that cannot be processed using traditional technologies, processing methods, algorithms, or any commercial off-the-shelf solutions. Data defined as big data includes weather, geospatial, and geographic information system (GIS) data; consumer-driven data from social media; enterprise-generated data from legal, sales, marketing, procurement, finance, and human-resources departments; and device-generated data from sensor networks, nuclear plants, X-ray and scanning devices, and airplane engines (Figures 17.1 and 17.2).

FIGURE 17.1 4V characteristics of big data.

FIGURE 17.2 Use cases for big data computing.


17.1.1.1 Data Volume

The most interesting data for any organization to tap into today is social media data. The amount of data generated by consumers every minute provides extremely important insights into choices, opinions, influences, connections, brand loyalty, brand management, and much more. Social media sites not only provide consumer perspectives but also competitive positioning, trends, and access to communities formed by common interest. Organizations today leverage social media pages to personalize the marketing of products and services to each customer.

Many additional applications are being developed and are slowly becoming a reality. These applications include using remote sensing to detect underground sources of energy, environmental monitoring, traffic monitoring and regulation by automatic sensors mounted on vehicles and roads, remote monitoring of patients using special scanners and equipment, and tighter control and replenishment of inventories using radio-frequency identification (RFID) and other technologies. All these developments will have associated with them a large volume of data.

Social networks such as Twitter and Facebook have hundreds of millions of subscribers worldwide who generate new data with every message they send or post they make. Every enterprise has massive amounts of emails that are generated by its employees, customers, and executives on a daily basis. These emails are all considered an asset of the corporation and need to be managed as such. After Enron and the collapse of many audits in enterprises, the U.S. government mandated that all enterprises should have a clear life-cycle management of emails, and that emails should be available and auditable on a case-by-case basis. There are several examples that come to mind, like insider trading, intellectual property, and competitive analysis, to justify governance and management of emails.

If companies can analyze petabytes of data (equivalent to 20 million four-drawer file cabinets filled with text files or 13.3 years of HDTV content) with acceptable performance to discern patterns and anomalies, businesses can begin to make sense of data in new ways. Table 17.1 indicates the escalating scale of data.

TABLE 17.1 Scale of Data

Size of Data            Scale of Data
1,000 megabytes         1 gigabyte (GB)
1,000 gigabytes         1 terabyte (TB)
1,000 terabytes         1 petabyte (PB)
1,000 petabytes         1 exabyte (EB)
1,000 exabytes          1 zettabyte (ZB)
1,000 zettabytes        1 yottabyte (YB)

The list of features for handling data volume includes the following:

• Nontraditional and unorthodox data processing techniques need to be innovated for processing this data type.
• Metadata is essential for processing this data successfully.
• Metrics and key performance indicators (KPIs) are key to providing visualization.
• Raw data do not need to be stored online for access.
• Processed output needs to be integrated into an enterprise-level analytical ecosystem to provide better insights and visibility into the trends and outcomes of business exercises, including customer relationship management (CRM), optimization of inventory, clickstream analysis, and more.
• The enterprise data warehouse (EDW) is needed for analytics and reporting.

17.1.1.2 Data Velocity

The business models adopted by Amazon, Facebook, Yahoo!, and Google, which became the de facto business models for most web-based companies, operate on the fact that by tracking customer clicks and navigations on the website, you can deliver personalized browsing and shopping experiences. In this process of clickstreams, there are millions of clicks gathered from users at every second, amounting to large volumes of data. This data can be processed, segmented, and modeled to study population behaviors based on time of day, geography, advertisement effectiveness, click behavior, and guided navigation response. The result sets of these models can be stored to create a better experience for the next set of clicks exhibiting similar behaviors. The velocity of data produced by user clicks on any website today is a prime example of big data velocity.

Real-time data and streaming data are accumulated by the likes of Twitter and Facebook at a very high velocity. Velocity is helpful in detecting trends among people that are tweeting a million tweets every 3 min. Processing of streaming data for analysis also involves the velocity dimension. Similarly, high velocity is attributed to data associated with the typical speed of transactions on stock exchanges; this speed reaches billions of transactions per day on certain days. If these transactions must be processed to detect potential fraud, or billions of call records on cell phones must be processed daily to detect malicious activity, we are dealing with the velocity dimension.

The most popular way to share pictures, music, and data today is via mobile devices. The sheer volume of data that is transmitted by mobile networks provides insights to the providers on the performance of their network, the amount of data processed at each tower, the time of day, the associated geographies, user demographics, location, latencies, and much more. The velocity of data movement is unpredictable and sometimes can cause a network to crash. The data movement and its study have enabled mobile service providers to improve the QoS (quality of service), and associating this data with social media inputs has enabled insights into competitive intelligence.

The list of features for handling data velocity includes the following (a minimal scaling sketch follows this list):

• System must be elastic for handling data velocity along with volume
• System must scale up and scale down as needed without increasing costs
• System must be able to process data across the infrastructure in the least processing time
• System throughput should remain stable independent of data velocity
• System should be able to process data on a distributed platform
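
The sketch below shows one way such an elasticity rule might look: worker capacity is doubled or halved as the observed event rate per node crosses thresholds. The thresholds, node limits, and sample rates are purely illustrative assumptions, not recommendations.

# Minimal sketch: elastic scale-up/scale-down driven by observed arrival rate.
# The thresholds, node limits, and rate samples are illustrative assumptions.
MIN_NODES, MAX_NODES = 2, 16
SCALE_UP_AT = 5000      # events/sec per node before adding capacity
SCALE_DOWN_AT = 1000    # events/sec per node before releasing capacity

def target_nodes(current_nodes, events_per_sec):
    per_node = events_per_sec / current_nodes
    if per_node > SCALE_UP_AT:
        return min(current_nodes * 2, MAX_NODES)    # scale out
    if per_node < SCALE_DOWN_AT:
        return max(current_nodes // 2, MIN_NODES)   # scale in
    return current_nodes

if __name__ == "__main__":
    nodes = 2
    for rate in [3000, 12000, 40000, 90000, 8000, 1500]:
        nodes = target_nodes(nodes, rate)
        print(f"rate={rate:>6} events/s -> {nodes} nodes")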

17.1.1.3 Data Variety

Data comes in multiple formats, ranging from emails to tweets to social media and sensor data. There is no control over the input data format or the structure of the data. The processing complexity associated with a variety of formats depends on the availability of appropriate metadata for identifying what is contained in the actual data. This is critical when we process images, audio, video, and large chunks of text. The absence of metadata, or partial metadata, means processing delays from the ingestion of data to producing the final metrics and, more importantly, delays in integrating the results with the data warehouse (Table 17.2).

TABLE 17.2 Use Cases for Big Data Computing

Industry                            Volume of Data   Velocity of Data   Variety of Data   Underutilized Data (Dark Data)   Big Data Value Potential
Banking and securities              High             High               Low               Medium                           High
Communications and media services   High             High               High              Medium                           High
Education                           Very low         Very low           Very low          High                             Medium
Government                          High             Medium             High              High                             High
Health-care providers               Medium           High               Medium            Medium                           High
Insurance                           Medium           Medium             Medium            Medium                           Medium
Manufacturing                       High             High               High              High                             High
Chemicals and natural resources     High             High               High              High                             Medium
Retail                              High             High               High              Low                              High
Transportation                      Medium           Medium             Medium            High                             Medium
Utilities                           Medium           Medium             Medium            Medium                           Medium

Sources of data in traditional applications were mainly transactions involving the financial, insurance, travel, healthcare, and retail industries, and governmental and judicial processing. The types of sources have expanded dramatically and include Internet data (e.g., clickstream and social media), research data (e.g., surveys and industry reports), location data (e.g., mobile device data and geospatial data), images (e.g., surveillance, satellites, and medical scanning), emails, supply chain data (e.g., EDI—electronic data interchange, vendor catalogs), signal data (e.g., sensors and RFID devices), and videos (YouTube ingests hundreds of minutes of video every minute). Big data includes structured, semistructured, and unstructured data in different proportions based on context.

The list of features for handling data variety includes the following (a minimal routing sketch follows this list):

• Scalability
• Distributed processing capabilities
• Image processing capabilities
• Graph processing capabilities
• Video and audio processing capabilities
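
The sketch below shows, in miniature, how metadata can drive the handling of heterogeneous inputs: each record carries a content-type tag that routes it to an appropriate handler, and records without usable metadata are set aside. The content types and handlers are hypothetical placeholders, not part of any specific product.

# Minimal sketch: using metadata to route heterogeneous inputs to the right
# processing path. Content types and handlers are hypothetical placeholders.
def process_text(payload):   return {"kind": "text",   "length": len(payload)}
def process_image(payload):  return {"kind": "image",  "bytes": len(payload)}
def process_sensor(payload): return {"kind": "sensor", "reading": float(payload)}

HANDLERS = {
    "text/plain": process_text,
    "image/jpeg": process_image,
    "sensor/reading": process_sensor,
}

def ingest(record):
    content_type = record.get("content_type")        # metadata drives the routing
    handler = HANDLERS.get(content_type)
    if handler is None:                               # missing or unknown metadata
        return {"kind": "quarantined", "reason": f"no handler for {content_type}"}
    return handler(record["payload"])

if __name__ == "__main__":
    records = [
        {"content_type": "text/plain", "payload": "a tweet about a product"},
        {"content_type": "sensor/reading", "payload": "21.7"},
        {"content_type": "video/mp4", "payload": b"\x00\x01"},
    ]
    for r in records:
        print(ingest(r))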

17.1.1.4 Data Veracity

The veracity dimension of big data is a more recent addition, coming after the advent of the Internet. Veracity has two built-in features: the credibility of the source and the suitability of the data for its target audience. It is closely related to trust;
listing veracity as one of the dimensions of big data amounts to saying that data coming into so-called big data applications have a variety of trustworthiness, and therefore, before we accept the data for analytical or other applications, it must go through some degree of quality testing and credibility analysis. Many sources of data generate data that is uncertain, incomplete, and inaccurate, therefore making its veracity questionable.

17.1.2 Common Characteristics of Big Data Computing Systems

There are several important common characteristics of big data computing systems that distinguish them from other forms of computing:

1. Principle of co-location of the data and programs or algorithms to perform the computation: To achieve high performance in big data computing, it is important to minimize the movement of data. This principle—"move the code to the data"—designed into the data-parallel processing architecture, is extremely effective, since program size is usually small in comparison to the large datasets processed by big data systems, and it results in much less network traffic since data can be read locally instead of across the network. In direct contrast to other types of computing and supercomputing that utilize data stored in a separate repository or servers and transfer the data to the processing system for computation, big data computing uses distributed data and distributed file systems in which data is located across a cluster of processing nodes, and instead of moving the data, the program or algorithm is transferred to the nodes with the data that needs to be processed. This characteristic allows processing algorithms to execute on the nodes where the data resides, reducing system overhead and increasing performance (a minimal sketch of this pattern follows this list).

2. Programming model utilized: Big data computing systems utilize a machine-independent approach in which applications are expressed in terms of high-level operations on data, and the runtime system transparently controls the scheduling, execution, load balancing, communications, and movement of programs and data across the distributed computing cluster. The programming abstraction and language tools allow the processing to be expressed in terms of data flows and transformations incorporating new dataflow programming languages and shared libraries of common data manipulation algorithms such as sorting. Conventional supercomputing and distributed computing systems typically utilize machine-dependent programming models that can require low-level programmer control of processing and node communications using conventional imperative programming languages and specialized software packages that add complexity to the parallel programming task and reduce programmer productivity. A machine-dependent programming model also requires significant tuning and is more susceptible to single points of failure.

3. Focus on reliability and availability: Large-scale systems with hundreds or thousands of processing nodes are inherently more susceptible to hardware failures, communications errors, and software bugs. Big data computing systems are designed to be fault resilient. This includes redundant copies of all data files on disk, storage of intermediate processing results on disk, automatic detection of node or processing failures, and selective recomputation of results. A processing cluster configured for big data computing is typically able to continue operation with a reduced number of nodes following a node failure, with automatic and transparent recovery of incomplete processing.

4. Scalability: A final important characteristic of big data computing systems is the inherent scalability of the underlying hardware and software architecture. Big data computing systems can typically be scaled in a linear fashion to accommodate virtually any amount of data, or to meet time-critical performance requirements, by simply adding additional processing nodes to a system configuration in order to achieve processing rates of billions of records per second (BORPS). The number of nodes and processing tasks assigned for a specific application can be variable or fixed depending on the hardware, software, communications, and distributed file system architecture. This scalability allows computing problems once considered intractable, due to the amount of data or the amount of processing time required, to now be feasible, and affords opportunities for new breakthroughs in data analysis and information processing.

One of the key characteristics of the cloud is elastic scalability: users can add or subtract resources in almost real time based on changing requirements. The cloud plays an important role within the big data world. Dramatic changes happen when these infrastructure components are combined with the advances in data management. Horizontally expandable and optimized infrastructure supports the practical implementation of big data. Cloudware technologies like virtualization increase the efficiency of the cloud and make many complex systems easier to optimize. As a result, organizations have the performance and optimization to be able to access data that was previously either unavailable or very hard to collect. Big data platforms are increasingly used as sources of enormous amounts of data about customer preferences, sentiment, and behaviors. Companies can integrate this information with internal sales and product data to gain insight into customer preferences to make more targeted and personalized offers.
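
As a minimal, single-machine illustration of the "move the code to the data" principle in item 1, the sketch below runs a map task against each (simulated) local partition and ships only the small per-partition counts to a reduce step. The partitions, the word-count job, and the process pool are assumptions chosen for brevity; a real cluster framework such as Hadoop schedules map tasks on the nodes that actually hold the data blocks.

# Minimal single-machine sketch of the MapReduce pattern described above:
# each map task works only on its own (simulated) local partition, and only the
# small per-partition counts travel to the reduce step.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

PARTITIONS = [  # stand-ins for data blocks stored on different cluster nodes
    "big data systems move the code to the data",
    "the data stays local and the program travels",
    "reduce combines the small intermediate results",
]

def map_task(partition_text):
    """Runs 'next to' its partition; emits a small intermediate result."""
    return Counter(partition_text.split())

def reduce_task(intermediate_counts):
    total = Counter()
    for counts in intermediate_counts:
        total.update(counts)
    return total

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        intermediates = list(pool.map(map_task, PARTITIONS))
    print(reduce_task(intermediates).most_common(3))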

17.2 Tools and Techniques of Big Data 17.2.1 Processing Approach Current big data computing platforms use a “divide and conquer” parallel processing approach combining multiple processors and disks in large computing clusters connected using high-speed communications switches and networks that allow the data to be partitioned among the available computing resources and processed independently to achieve performance and scalability based on the amount of data. We define a cluster as “a type of parallel and distributed system, which consists of a collection of interconnected standalone computers working together as a single integrated computing resource.” This approach to parallel processing is often referred to as a “shared nothing” approach since each node consisting of processor, local memory, and disk resources shares nothing with other nodes in the cluster. In  parallel computing, this approach is considered suitable for data processing problems that are “embarrassingly parallel,” that is, where it is relatively easy to separate the problem into a number of parallel tasks and there is no dependency or communication required between the tasks other than overall management of the tasks. These types of data processing problems are inherently adaptable to

Big Data Computing

381

FIGURE 17.3 Parallel architectures.

various forms of distributed computing including clusters and data grids and cloud computing. Analytical environments are deployed in different architectural models. Even on parallel platforms, many databases are built on a shared everything approach in which the persistent storage and memory components are all shared by the different processing units. Parallel architectures are classified by what shared resources each processor can directly access. One typically distinguishes shared memory, shared disk, and shared nothing architectures (as depicted in Figure 17.3). 1. In a shared memory system, all processors have direct access to all memory via a shared bus. Typical examples are the common symmetric multiprocessor systems, where each processor core can access the complete memory via the shared memory bus. To preserve the abstraction, processor caches, buffering a subset of the data closer to the processor for fast access, have to be kept consistent with specialized protocols. Because disks are typically accessed via the memory, all processes also have access to all disks. 2. In a shared disk architecture, all processes have their own private memory, but all disks are shared. A cluster of computers connected to a SAN is a representative for this architecture. 3. In a shared nothing architecture, each processor has its private memory and private disk. The data is distributed across all disks, and each processor is responsible only for the data on its own connected memory and disks. To operate on data that spans the different memories or disks, the processors have to explicitly send data to other processors. If a processor fails, data held by its memory and disks is unavailable. Therefore, the shared nothing architecture requires special considerations to prevent data loss. When scaling out the system, the two main bottlenecks are typically the bandwidth of the shared medium and the overhead of maintaining a consistent view of the shared data in the presence of cache hierarchies. For that reason, the shared nothing architecture is considered the most scalable one, because it has no shared medium and no shared data. While it is often argued that shared disk architectures have certain advantages for transaction processing, the shared nothing is the undisputed architecture of choice for analytical queries. A shared-disk approach may have isolated processors, each with its own memory, but the persistent storage on disk is still shared across the system. These types of architectures are layered on top of symmetric multiprocessing (SMP) machines. While there may be


A shared-disk approach may have isolated processors, each with its own memory, but the persistent storage on disk is still shared across the system. These types of architectures are layered on top of symmetric multiprocessing (SMP) machines. While there may be applications that are suited to this approach, bottlenecks exist because of the sharing: all I/O and memory requests are transferred (and satisfied) over the same bus. As more processors are added, the synchronization and communication needs increase exponentially, and the bus becomes less able to handle the increased need for bandwidth. This means that unless the need for bandwidth is satisfied, there will be limits to the degree of scalability. In contrast, in a shared-nothing approach, each processor has its own dedicated disk storage. This approach, which maps nicely to a massively parallel processing (MPP) architecture, is not only better suited to discrete allocation and distribution of the data, it also enables more effective parallelization and consequently does not introduce the same kind of bus bottlenecks from which the SMP/shared-memory and shared-disk approaches suffer. Most big data appliances use a collection of computing resources, typically a combination of processing nodes and storage nodes.

17.2.2 Big Data System Architecture

A variety of system architectures have been implemented for big data and large-scale data analysis applications, including parallel and distributed relational database management systems that have been available to run on shared nothing clusters of processing nodes for more than two decades. These include database systems from Teradata, Netezza, Vertica, Exadata/Oracle, and others that provide high-performance parallel database platforms. Although these systems have the ability to run parallel applications and queries expressed in the SQL language, they are typically not general-purpose processing platforms and usually run as a back end to a separate front-end application processing system. Although this approach offers benefits when the data utilized is primarily structured in nature and fits easily into the constraints of a relational database, and it often excels for transaction processing applications, most data growth is in unstructured form, and new processing paradigms with more flexible data models were needed. Internet companies such as Google, Yahoo!, Microsoft, Facebook, and others required a new processing approach to deal effectively with the enormous amount of web data for applications such as search engines and social networking. In addition, many government and business organizations were overwhelmed with data that could not be effectively processed, linked, and analyzed with traditional computing approaches. Several solutions have emerged, including the MapReduce architecture pioneered by Google and now available in an open-source implementation called Hadoop, used by Yahoo!, Facebook, and others (see Kale 2017).
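To make the MapReduce programming model concrete, the following minimal Python sketch mimics the map, shuffle, and reduce phases on a toy word-count problem. It illustrates the model only; it is not Hadoop's actual API, and the input documents are invented for the example.

```python
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs; here, (word, 1) for every word.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Group all values by key, as the framework would between map and reduce.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate the values for one key; here, sum the counts.
    return key, sum(values)

documents = ["big data computing", "data grids and cloud computing"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
reduced = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(reduced)  # {'big': 1, 'data': 2, 'computing': 2, 'grids': 1, 'and': 1, 'cloud': 1}
```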


17.2.2.1 BASE (Basically Available, Soft State, Eventual Consistency)

BASE follows an optimistic approach, accepting stale data and approximate answers while favoring availability. Some ways to achieve this are supporting partial failures without total system failure, decoupling updates on different tables (i.e., relaxing consistency), and using idempotent operations that can be applied multiple times with the same result. In this sense, BASE describes more a spectrum of architectural styles than a single model. The eventual state of consistency can be provided as a result of a read-repair, where any outdated data is refreshed with the latest version as a result of the system detecting stale data during a read operation. Another approach is that of weak consistency. In this case, the read operation returns the first value found, without checking for staleness; any stale nodes discovered are simply marked for updating at some stage in the future. This is a performance-focused approach, but it carries the associated risk that the data retrieved may not be the most current. In the following sections, we discuss several techniques for implementing services following the BASE principle.

Conventional storage techniques may not be adequate for big data and, hence, for cloud applications. To scale storage systems to cloud scale, the basic technique is to partition and replicate the data over multiple independent storage systems. The word independent is emphasized, since it is well known that databases can be partitioned into mutually dependent subdatabases that are automatically synchronized for reasons of performance and availability. Partitioning and replication increase the overall throughput of the system, since the total throughput of the combined system is the aggregate of the individual storage systems. To scale both the throughput and the maximum size of the data that can be stored beyond the limits of traditional database deployments, it is possible to partition the data and store each partition in its own database. For scaling the throughput only, it is possible to use replication. Partitioning and replication also increase the storage capacity of a storage system by reducing the amount of data that needs to be stored in each partition. However, this creates synchronization and consistency problems, a discussion of which is out of scope for this book.

The other technology for scaling storage is known by the name Not only SQL (NoSQL). NoSQL was developed as a reaction to the perception that conventional databases, focused on the need to ensure data integrity for enterprise applications, were too rigid to scale to cloud levels. As an example, conventional databases enforce a schema on the data being stored, and changing the schema is not easy. However, changing the schema may be a necessity in a rapidly changing environment like the cloud. NoSQL storage systems provide more flexibility and simplicity compared to relational databases. The disadvantage, however, is greater application complexity: NoSQL systems, for example, do not enforce a rigid schema, and the trade-off is that applications have to be written to deal with data records of varying formats (schemas). BASE is the NoSQL operating premise, in the same way that traditional transactionally focused databases use ACID: one moves from a world of certainty in terms of data consistency to a world where all we are promised is that all copies of the data will, at some point, be the same.

Partitioning and replication techniques used for scaling are as follows:

1. The first method is to store different tables in different databases (as in multidatabase systems, MDBS).
2. The second approach is to partition the data within a single table onto different databases. There are two natural ways to partition the data within a table: store different rows in different databases, or store different columns in different databases (more common for NoSQL databases).

17.2.2.2 Functional Decomposition

As stated previously, one technique for partitioning the data to be stored is to store different tables in different databases, leading to the storage of the data in an MDBS.
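A minimal sketch of functional decomposition, assuming an application that routes each table to its own SQLite database file; the table names and the routing map are invented for illustration.

```python
import sqlite3

# Hypothetical routing map: each table lives in its own database file.
TABLE_TO_DB = {
    "customers": "customers.db",
    "orders": "orders.db",
    "payments": "payments.db",
}

def connection_for(table):
    # Functional decomposition: the application, not the database,
    # decides which independent database holds which table.
    return sqlite3.connect(TABLE_TO_DB[table])

with connection_for("orders") as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO orders VALUES (?, ?)", (1, 99.50))

with connection_for("orders") as conn:
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())
```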


17.2.2.3 Master–Slave Replication

To increase the throughput of transactions from the database, it is possible to have multiple copies of the database. A common replication method is master–slave replication. The master and slave databases are replicas of each other: all writes go to the master, and the master keeps the slaves in sync; reads, however, can be distributed to any database. Since this configuration distributes the reads among multiple databases, it is a good technology for read-intensive workloads. For write-intensive workloads, it is possible to have multiple masters, but then ensuring consistency when multiple processes update different replicas simultaneously is a complex problem. Additionally, the time to write increases, because every write must go to all masters, and the synchronization overhead between the masters rapidly becomes a limiting factor.

17.2.3 Row Partitioning or Sharding

In cloud technology, sharding refers to the technique of partitioning a table among multiple independent databases by row. However, partitioning of data by row in relational databases is not new and is referred to as horizontal partitioning in parallel database technology. The distinction between sharding and horizontal partitioning is that horizontal partitioning is done transparently to the application by the database, whereas sharding is explicit partitioning done by the application. However, the two techniques have started converging, since traditional database vendors have started offering support for more sophisticated partitioning strategies. Since sharding is similar to horizontal partitioning, we first discuss different horizontal partitioning techniques. It can be seen that a good sharding technique depends on both the organization of the data and the type of queries expected. The different techniques of sharding are as follows (a small routing sketch of all three appears after this discussion):

1. Round-robin partitioning: The round-robin method distributes the rows in a round-robin fashion over the different databases. For example, we could partition a transaction table over multiple databases so that the first transaction is stored in the first database, the second in the second database, and so on. The advantage of round-robin partitioning is its simplicity. However, it suffers from the disadvantage of losing record associations during a query, unless all databases are queried. Hash partitioning and range partitioning do not suffer from this disadvantage.
2. Hash partitioning: In this method, the value of a selected attribute is hashed to find the database into which the tuple should be stored. If queries are frequently made on an attribute, say Customer_Id, then associations can be preserved by hashing on this attribute, so that records with the same value of this attribute are found in the same database.
3. Range partitioning: The range partitioning technique stores records with "similar" attributes in the same database. For example, the range of Customer_Id could be partitioned between different databases. Again, if the attributes chosen for grouping are those on which queries are frequently made, record association is preserved and it is not necessary to merge results from different databases. Range partitioning can be susceptible to load imbalance, unless the partitioning is chosen carefully: it is possible to choose the partitions so that there is an imbalance in the amount of data stored in the partitions (data skew) or in the execution of queries across partitions (execution skew).

These problems are less likely in round-robin and hash partitioning, since they tend to distribute the data uniformly over the partitions. Thus, hash partitioning is particularly well suited to large-scale systems. Round-robin partitioning yields a uniform distribution of records but does not facilitate the restriction of operations to single partitions. While range partitioning does support this, it requires knowledge about the data distribution in order to properly adjust the ranges.
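A minimal sketch of the three strategies, assuming four shards and Customer_Id as the partitioning attribute; the shard count, hash function, and range boundaries are illustrative choices, not prescriptions.

```python
import hashlib

N_SHARDS = 4

def round_robin_shard(row_number):
    # Rows are dealt out to shards in turn, irrespective of their content.
    return row_number % N_SHARDS

def hash_shard(customer_id):
    # All rows with the same Customer_Id hash to the same shard,
    # preserving record associations for queries on that attribute.
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

RANGE_BOUNDARIES = [25_000, 50_000, 75_000]  # assumed Customer_Id ranges

def range_shard(customer_id):
    # "Similar" Customer_Ids land on the same shard; the boundaries
    # must reflect the data distribution to avoid skew.
    for shard, upper in enumerate(RANGE_BOUNDARIES):
        if customer_id < upper:
            return shard
    return N_SHARDS - 1

print(round_robin_shard(11), hash_shard(11), range_shard(11))
```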


17.2.4 Row versus Column-Oriented Data Layouts

Most traditional database systems employ a row-oriented layout, in which all the values associated with a specific row are laid out consecutively in memory. That layout may work well for transaction processing applications that focus on updating specific records associated with a limited number of transactions (or transaction steps) at a time. Big data analytics applications, in contrast, scan, aggregate, and summarize over massive datasets; these workloads are manifested as algorithmic scans performed using multiway joins, and accessing whole rows at a time when only the values of a smaller set of columns are needed may flood the network with extraneous data that is not immediately needed and will ultimately increase the execution time. Analytical applications and queries only need to access the data elements needed to satisfy join conditions; with row-oriented layouts, the entire record must be read in order to access the required attributes, so significantly more data is read than is needed to satisfy the request. Also, the row-oriented layout is often misaligned with the characteristics of the different types of memory systems (core, cache, disk, etc.), leading to increased access latencies. Consequently, row-oriented data layouts will not enable the types of joins or aggregations typical of analytic queries to execute with the anticipated level of performance.

Hence, a number of big data appliances use a database management system with an alternate, columnar layout for data that can help to reduce the negative performance impacts of data latency that plague databases with a row-oriented data layout. The values for each column can be stored separately, and because of this, for any query, the system is able to selectively access the specific column values requested to evaluate the join conditions. Instead of requiring separate indexes to tune queries, the data values themselves within each column form the index. This speeds up data access, reduces the overall database footprint, and dramatically improves query performance. The simplicity of the columnar approach provides many benefits, especially for those seeking a high-performance environment to meet the growing needs of extremely large analytic datasets.
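A minimal illustration of the difference, using plain Python structures as stand-ins for on-disk layouts; the table and its columns are invented for the example.

```python
# Row-oriented: each record is stored (and must be read) as a whole.
rows = [
    {"order_id": 1, "customer": "A", "amount": 10.0, "region": "EU"},
    {"order_id": 2, "customer": "B", "amount": 25.0, "region": "US"},
    {"order_id": 3, "customer": "A", "amount": 40.0, "region": "EU"},
]

# Column-oriented: each column is stored separately and can be read alone.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["A", "B", "A"],
    "amount": [10.0, 25.0, 40.0],
    "region": ["EU", "US", "EU"],
}

# An analytic aggregate (total amount) touches every attribute of every
# record in the row layout...
total_from_rows = sum(record["amount"] for record in rows)

# ...but only a single column in the columnar layout.
total_from_columns = sum(columns["amount"])

assert total_from_rows == total_from_columns == 75.0
```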


17.2.5 NoSQL Data Management

NoSQL, or "not only SQL," suggests environments that combine traditional SQL (or SQL-like query languages) with alternative means of querying and access. NoSQL data systems hold out the promise of greater flexibility in database management while reducing the dependence on more formal database administration. NoSQL databases have more relaxed modeling constraints, which may benefit both the application developer and the end-user analysts when their interactive analyses are not throttled by the need to cast each query in terms of a relational table-based environment.

Different NoSQL frameworks are optimized for different types of analyses. For example, some are implemented as key–value stores, which align nicely with certain big data programming models, while another emerging model is the graph database, in which a graph abstraction is implemented to embed both semantics and connectivity within its structure. In fact, the general concepts for NoSQL include schemaless modeling, in which the semantics of the data are embedded within a flexible connectivity and storage model; this provides for automatic distribution of data and elasticity with respect to the use of computing, storage, and network bandwidth in ways that do not force specific binding of data to be persistently stored in particular physical locations. NoSQL databases also provide for integrated data caching that helps reduce data access latency and speed performance.

A relatively simple type of NoSQL data store is the key–value store, a schemaless model in which distinct character strings called keys are associated with values (or sets of values, or even more complex entity objects)—not unlike a hash table data structure. The key–value store does not impose any constraints about data typing or data structure—the value associated with the key is just the value, and it is up to the consuming business applications to assert expectations about the data values and their semantics and interpretation. This demonstrates the schemaless property of the model. If you want to associate multiple values with a single key, you need to consider the representations of the objects and how they are associated with the key. For example, you may want to associate a list of attributes with a single key, which may suggest that the value stored with the key is yet another key–value store object itself.

Key–value stores are essentially very long, and presumably thin, tables (in that there are not many columns associated with each row). The table's rows can be sorted by the key value to simplify finding the key during a query. Alternatively, the keys can be hashed using a hash function that maps the key to a particular location (sometimes called a "bucket") in the table. The representation can grow indefinitely, which makes it good for storing large amounts of data that can be accessed relatively quickly, and it allows massive amounts of indexed data values to be appended to the same key–value table, which can then be sharded or distributed across the storage nodes. Under the right conditions, the table is distributed in a way that is aligned with the way the keys are organized, so that the hashing function used to determine where any specific key exists in the table can also be used to determine which node holds that key's bucket (i.e., the portion of the table holding that key).

NoSQL data management environments are engineered for the following two key criteria:

1. Fast accessibility, whether that means inserting data into the model or pulling it out via some query or access method.
2. Scalability for volume, so as to support the accumulation and management of massive amounts of data.

The different approaches are amenable to extensibility, scalability, and distribution, and these characteristics blend nicely with programming models (like MapReduce) that allow straightforward creation and execution of many parallel processing threads. Distributing a tabular data store or a key–value store allows many queries/accesses to be performed simultaneously, especially when the hashing of the keys maps to different data storage nodes.


Employing different data allocation strategies will allow the tables to grow indefinitely without requiring significant rebalancing. In other words, these data organizations are designed for high-performance reporting and analysis. The model will not inherently provide any kind of traditional database capabilities (such as atomicity of transactions, or consistency when multiple transactions are executed simultaneously); those capabilities must be provided by the application itself.
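A minimal sketch of a sharded key–value store of the kind just described, in which the same hash function assigns a key to a bucket and the bucket to a storage node; the node count and the stored values are invented for illustration.

```python
import hashlib

N_NODES = 3
# Each "node" is just a dict here; in a real deployment it would be a
# separate storage server holding its share of the key-value table.
nodes = [dict() for _ in range(N_NODES)]

def node_for(key):
    # The hash of the key determines both the bucket and the node that owns it.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % N_NODES

def put(key, value):
    # Values are schemaless: any structure the application chooses.
    nodes[node_for(key)][key] = value

def get(key):
    return nodes[node_for(key)].get(key)

put("customer:42", {"name": "Asha", "segment": "retail"})
put("customer:77", ["note-1", "note-2"])  # a different shape is fine
print(get("customer:42"), get("customer:77"))
```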

17.3 NoSQL Databases

NoSQL databases have been classified into four subcategories:

1. Column family stores: An extension of the key–value architecture with columns and column families; the overall goal was to process distributed data over a pool of infrastructure, for example, HBase and Cassandra.
2. Key–value pairs: This model is implemented using a hash table where there is a unique key and a pointer to a particular item of data, creating a key–value pair, for example, Voldemort.
3. Document databases: This class of databases is modeled after Lotus Notes and is similar to key–value stores. The data are stored as documents and represented in JSON or XML formats. The biggest design feature is the flexibility to list multiple levels of key–value pairs, for example, Riak and CouchDB.
4. Graph databases: Based on graph theory, this class of databases supports scalability across a cluster of machines. The representation of extremely complex sets of documents is still evolving, for example, Neo4J.

17.3.1 Column-Oriented Stores or Databases

Hadoop HBase is the distributed database that supports the storage needs of the Hadoop distributed programming platform. HBase is designed by taking inspiration from Google BigTable; its main goal is to offer real-time read/write operations for tables with billions of rows and millions of columns by leveraging clusters of commodity hardware. The internal architecture and logic model of HBase is very similar to Google BigTable, and the entire system is backed by the Hadoop Distributed File System (HDFS), which mimics the structure and services of GFS.

17.3.2 Key–Value Stores (K–V Store) or Databases

Apache Cassandra is a distributed object store for managing large amounts of structured data spread across many commodity servers. The system is designed to avoid a single point of failure and to offer a highly reliable service. Cassandra was initially developed by Facebook; it is now part of the Apache incubator initiative. In its initial years, Facebook had used a leading commercial database solution for its internal architecture in conjunction with some Hadoop.


Eventually, the tsunami of users led the company to start thinking in terms of unlimited scalability, with a focus on availability and distribution. The nature of the data and its producers and consumers did not mandate consistency but needed unlimited availability and scalable performance. The team at Facebook built an architecture, named Cassandra, that combines the data model approach of BigTable and the infrastructure approach of Dynamo with scalability and performance capabilities. Cassandra is often referred to as a hybrid architecture, since it combines the column-oriented data model from BigTable with Hadoop MapReduce jobs, and it implements patterns from Dynamo such as eventual consistency, gossip protocols, and a master–master way of serving both read and write requests. Cassandra supports a full replication model based on NoSQL architectures. The Cassandra team had a few design goals to meet, considering that the first development and deployment of the architecture was primarily being done at Facebook. The goals included:

• High availability
• Eventual consistency
• Incremental scalability
• Optimistic replication
• Tunable trade-offs between consistency, durability, and latency
• Low cost of ownership
• Minimal administration

Amazon Dynamo is the distributed key–value store that supports the management of information for several of the business services offered by Amazon Inc. The main goal of Dynamo is to provide an incrementally scalable and highly available storage system. This goal helps in achieving reliability at a massive scale, where thousands of servers and network components build an infrastructure serving 10 million requests per day. Dynamo provides a simplified interface based on get/put semantics, where objects are stored and retrieved with a unique identifier (key). The main goal of achieving an extremely reliable infrastructure has imposed some constraints on the properties of these systems. For example, ACID properties on data have been sacrificed in favor of a more reliable and efficient infrastructure. This creates what is called an eventually consistent model (i.e., in the long term, all the users will see the same data).

17.3.3 Document-Oriented Databases

Document-oriented databases, or document databases, can be defined as a schema-less and flexible model of storing data as documents rather than relational structures. The document will contain all the data it needs to answer specific query questions. Benefits of this model include:

• Ability to store dynamic data in unstructured, semistructured, or structured formats
• Ability to create persisted views from a base document and store the same for analysis
• Ability to store and process large data sets


The design features of document-oriented databases include:

• Schema-free: There is no restriction on the structure and format of how the data need to be stored. This flexibility allows an evolving system to add more data and allows the existing data to be retained in the current structure.
• Document store: Objects can be serialized and stored in a document, and there is no relational integrity to enforce and follow.
• Ease of creation and maintenance: A simple creation of the document allows complex objects to be created once, and there is minimal maintenance once the document is created.
• No relationship enforcement: Documents are independent of each other, and there is no foreign key relationship to worry about when executing queries. The effects of concurrency and the performance issues related to them are not a bother here.
• Open formats: Documents are described using JSON, XML, or some derivative, making the process standard and clean from the start.
• Built-in versioning: Documents can get large and messy with versions. To avoid conflicts and keep processing efficient, versioning is implemented by most solutions available today.

Document databases express the data as files in JSON or XML formats. This allows the same document to be parsed for multiple contexts and the results scraped and added to the next iteration of the database data. Apache CouchDB and MongoDB are two examples of document stores. Both provide a schema-less store whereby the primary objects are documents organized into a collection of key–value fields. The value of each field can be of type string, integer, float, date, or an array of values. The databases expose a RESTful interface and represent data in JSON format. Both allow querying and indexing data by using the MapReduce programming model, expose JavaScript as a base language for data querying and manipulation rather than SQL, and support large files as documents. From an infrastructure point of view, the two systems support data replication and high availability. CouchDB ensures ACID properties on data. MongoDB supports sharding, which is the ability to distribute the content of a collection among different nodes.
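A minimal sketch of the document model, using plain Python dictionaries as stand-ins for JSON documents in a collection; the documents and the query helper are invented for illustration and do not reflect the API of any particular product.

```python
# A "collection" of schema-less documents: each document carries all the
# data it needs, and different documents may have different fields.
orders = [
    {"_id": 1, "customer": "Asha", "items": ["pen", "notebook"], "total": 7.5},
    {"_id": 2, "customer": "Ravi", "total": 120.0,
     "shipping": {"city": "Pune", "express": True}},  # nested key-value levels
]

def find(collection, **criteria):
    # A tiny query helper: return documents whose top-level fields match.
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]

print(find(orders, customer="Ravi"))
```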


17.3.4 Graph Stores or Databases

Social media and the emergence of Facebook, LinkedIn, and Twitter have accelerated the emergence of the most complex NoSQL database, the graph database. The graph database is oriented toward modeling and deploying data that is graphical by construct. For example, to represent a person and their friends in a social network, we can either write code to convert the social graph into key–value pairs on a Dynamo or Cassandra, or simply convert them into a node–edge model in a graph database, where managing the relationship representation is much simpler.

A graph database represents each object as a node and each relationship as an edge. This means a person is a node, a household is a node, and the relationship between them is an edge. Like the classic ER model for an RDBMS, we need to create an attribute model for a graph database. We can start by taking the highest level in a hierarchy as a root node (similar to an entity) and connect each attribute as its subnode. To represent different levels of the hierarchy, we can add a subcategory or subreference and create another list of attributes at that level. This creates a natural traversal model like a tree traversal, which is similar to traversing a graph. Depending on the cyclic property of the graph, we can have a balanced or skewed model. Some of the most evolved graph databases include Neo4J, InfiniteGraph, GraphDB, and AllegroGraph.
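A minimal sketch of the node–edge model described above, kept deliberately generic (plain Python rather than any particular graph database's query language); the people, the relationship types, and the traversal are invented for the example.

```python
# Nodes carry attributes; edges carry a relationship type between two nodes.
nodes = {
    "p1": {"label": "Person", "name": "Asha"},
    "p2": {"label": "Person", "name": "Ravi"},
    "h1": {"label": "Household", "city": "Pune"},
}
edges = [
    ("p1", "FRIEND_OF", "p2"),
    ("p1", "MEMBER_OF", "h1"),
    ("p2", "MEMBER_OF", "h1"),
]

def neighbours(node_id, rel_type):
    # Traverse outgoing edges of a given relationship type.
    return [dst for src, rel, dst in edges if src == node_id and rel == rel_type]

# Which household(s) does Asha's friend belong to?
for friend in neighbours("p1", "FRIEND_OF"):
    cities = [nodes[h]["city"] for h in neighbours(friend, "MEMBER_OF")]
    print(nodes[friend]["name"], "->", cities)
```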

17.4 Aadhaar Project

The Aadhaar project undertaken by the Unique Identification Authority of India (UIDAI) has the mission of identifying 1.2 billion citizens of India uniquely and reliably, building the largest biometric identity repository in the world (while eliminating duplication and fake identities), and providing an online, anytime-anywhere, multifactor authentication service. This makes it possible to identify any individual and authenticate that identity at any time, from any place in India, in less than a second. The UIDAI project is a Hadoop-based program that is well into production. At the time of this writing, over 700 million people have been enrolled and their identity information has been verified. The target is to reach a total of at least 1 billion enrolments during 2015. Currently the enrolment rate is about 10 million people every 10 days, so the project is well positioned to meet that target.

In India, there is no social security card, and much of the population lacks a passport. Literacy rates are relatively low, and the population is scattered across hundreds of thousands of villages. Without adequately verifiable identification, it has been difficult for many citizens to set up a bank account or otherwise participate in a modern economy. For India's poorer citizens, this problem has even more dire consequences. The government has extensive programs to provide widespread relief for the poor—for example, through grain subsidies to those who are underfed and through government-sponsored work programs for the unemployed. Yet many who need help do not have access to benefit programs, in part because of the inability to verify who they are and whether they qualify for the programs. In addition, there is a huge level of so-called "leakage" of government aid that disappears to apparent fraud. For example, it has been estimated that over 50% of the funds intended to provide grain to the poor go missing, and that fraudulent claims for "ghost workers" siphon off much of the aid intended to create work for the poor.

There are clearly immense benefits from a mechanism that uniquely identifies a person and ensures instant identity verification. The need to prove one's identity only once will bring down transaction costs. A clear identity number can transform the delivery of social welfare programs by making them more inclusive of those communities now cut off from such benefits due to their lack of identification. It also enables the government to shift from indirect to direct benefit delivery by reaching out directly to the intended beneficiaries. A single universal identity number is also useful in eliminating fraud and duplicate identities, since individuals are no longer able to represent themselves differently to different agencies. This results in significant savings to the state exchequer.


Aadhaar is in the process of creating the largest biometric database in the world, one that can be leveraged to authenticate identities for each citizen, even on site in rural villages. A wide range of mobile devices, from cell phones to microscanners, can be used to enroll people and to authenticate their identities when a transaction is requested. People will be able to make payments at remote sites via micro-ATMs. Aadhaar ID authentication will be used to verify qualification for relief food deliveries and to provide pension payments for the elderly. Implementation of this massive digital identification system is expected to save the equivalent of millions and perhaps billions of dollars each year by thwarting efforts at fraud. While the UIDAI project will have broad benefits for Indian society as a whole, the greatest impact will be for the poorest people.

All application components are built using open source components and open standards. Aadhaar software currently runs across two data centers within India managed by UIDAI, handling 1 million enrolments a day and, at peak, performing about 600 trillion biometric matches a day. The current system already holds about 4 PB (4,000 terabytes) of raw data and continues to grow as new enrolments come in. The Aadhaar authentication service is built to handle 100 million authentications a day across both data centers in an active-active fashion and is benchmarked to provide sub-second response time. Central to the Aadhaar system is its biometric subsystem, which performs de-duplication and authentication accurately.

Figure 17.4 presents the solution architecture for the Aadhaar project. Application modules are built on a common technology platform that contains frameworks for persistence, security, messaging, etc. The platform standardizes on a technology stack based on open standards, using open source where prudent. A list of extensively used open source technology stacks is as follows:

• Spring Framework: Application container for all components and runtime
• Spring Batch: Runtime for various batch jobs
• Spring Security: For all application security needs
• Mule ESB: Runtime for loosely coupled SEDA stages of the enrolment flow
• RabbitMQ: Messaging middleware
• Hadoop Stack: HDFS, Hive, HBase, Pig, and Zookeeper
• Pentaho: Open source BI solution
• Quartz: Scheduling of batch jobs
• MongoDB: NoSQL document database
• MySQL: RDBMS for storing relational data
• Apache Solr: Index for full-text search
• Apache Tomcat: Web container
• Liferay: Portal framework
• Several other open source libraries for random number generation, hashing, advanced data structures, HTML UI components, and others.


FIGURE 17.4 Solution architecture for the Aadhaar project.


17.5 Summary

This chapter proposed big data computing as the digital transformation of primarily the availability aspects of enterprise architecture (see Chapter 9). Big data is defined as volumes of data available in varying degrees of complexity, generated at different velocities and with varying degrees of ambiguity, that cannot be processed using traditional technologies, processing methods, algorithms, or any commercial off-the-shelf solutions. This chapter explored the challenges of big data computing: managing and processing exponentially growing data volumes, significantly reducing associated data analysis cycles to support practical, timely applications, and developing new algorithms that can scale to search and process massive amounts of data. The answer to these challenges is a scalable, integrated hardware and software architecture designed for parallel processing of big data computing applications. Cloud computing provides the opportunity for organizations with limited internal resources to implement large-scale big data computing applications in a cost-effective manner. This chapter described the characteristic features of big data systems, including big data architecture, row versus column-oriented data layouts, NoSQL data management, in-memory computing, and developing big data applications.

18 Context-Aware Computing

As stated right at the beginning of Chapter 10, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of mobility aspects, predominantly through context-aware computing. This chapter explains context-aware computing.

Most mobile applications are location-aware systems. Specifically, tourist guides are based on users' locations in order to supply more information on the city attractions closest to them or the museum exhibits they are seeing. Nevertheless, recent years have seen many mobile applications trying to exploit information that characterizes the current situation of users, places, and objects in order to improve the services provided.

The principle of context-aware applications (CAA) can be explained using the metaphor of the Global Positioning System (GPS). In aircraft navigation, for example, a GPS receiver derives the speed and direction of an aircraft by recording over time the coordinates of longitude, latitude, and altitude. This contextual data is then used to derive the distance to the destination, communicate progress to date, and calculate the optimum flight path. For a GPS-based application to be used successfully, the following activities are a prerequisite:

1. The region in focus must have been GPS-mapped accurately.
2. The GPS map must be superimposed with the relevant information regarding existing landmarks, points of civic and official significance, and facilities and service points that have been of interest to people in the past—this is the context in this metaphor.
3. There must be a system available to ascertain the latest position per the GPS system.
4. The latest reported position must be mapped and transcribed onto the GPS-based map of the region.
5. This latest position relative to the context (described in the second activity) is used as the point of reference for future recommendation(s) and action(s).

It should be noted that the initial baseline of the context (described in the second activity) is compiled and collated separately and then uploaded into the system to be accessible by the CAA. However, with the passage of time, this baseline is further augmented with the details of each subsequent transaction.

We can also imagine an equivalent of the Global Positioning System for calibrating the performance of enterprises. The coordinates of longitude, latitude, and altitude might be replaced by ones of resource used, process performed, and product produced. If we designed a GPS for an enterprise, we could measure its performance (e.g., cost or quality) in the context of the resource used, the process performed, and the product delivered, as compared with its own performance in the past (last month or one year back) or, in particular cases, that of another target organization. Such an approach could help us specify our targets, communicate our performance, and signal our strategy.


18.1 Introduction to Context

Most current context-aware systems have been built following an ad hoc approach and are deeply influenced by the underlying technology infrastructure utilized to capture the context. To ease the development of context-aware ubicom (ubiquitous computing) and mobile applications, it is necessary to provide universal models and mechanisms to manage context. Even though significant efforts have been devoted to researching methods and models for capturing, representing, interpreting, and exploiting context information, we are still not close to enabling an implicit and intuitive awareness of context, nor efficient adaptation of behavior at the standards of human communication practice.

Context information can be a decisive factor in mobile applications in terms of selecting the appropriate interaction technique. Designing interactions among users and devices, as well as among devices themselves, is critical in mobile applications. The multiplicity of devices and services calls for systems that can provide various interaction techniques and the ability to switch to the most suitable one according to the user's needs and desires. Current mobile systems are not efficiently adaptable to the user's needs. The majority of ubicom and mobile applications try to incorporate the users' profiles and desires into the system's infrastructure either manually or by automatically observing their habits and history. From our perspective, the key point is to give users the ability to create their own mobile applications instead of just customizing the ones provided. Thus, mobile applications can be used not only for locating users and providing them with suitable information but also for:

• Providing them with a tool necessary for composing and creating their own mobile applications
• Supporting the system's selection of appropriate interaction techniques
• Selecting recommendation(s) and consequent action(s) that conform with the situational constraints judged via the business logic and other constraints sensed via the context
• Enabling successful closure of the interaction (answering a query, qualifying an objection, closing an order, etc.)

18.2 Context-Aware Applications

Context is understood as "the location and identities of nearby people and objects, and changes to those objects." Initially, the term context was equivalent to the location and identity of users and objects. Very soon, though, the term expanded to include a more refined view of the environment, assuming either three major components—computing, user, and physical environment—or four major dimensions: system, infrastructure, domain, and physical context. The interaction between the user and application was added by Dey (2001); according to Dey, context is "any information that can be used to characterize the situation of an entity." An entity should be treated as anything relevant to the interaction between a user and an application, such as a person, a place, or an object, including the user and the application themselves and, by extension, the environment the user and applications are embedded in.


Thus, a system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task. Context-aware features include using context to:

1. Present information and services to a user.
2. Automatically execute a service for a user.
3. Tag information to support later retrieval.

In supporting these features, context-aware applications can utilize numerous different kinds of information sources. Often, this information comes from sensors, whether they are software sensors detecting information about the networked, or virtual, world or hardware sensors detecting information about the physical world. Sensor data can be used to recognize the usage situation, for instance, from illumination, temperature, noise level, and device movements. A context-aware application can make adaptive decisions based on the context of interaction in order to modulate the information presented to the user or to carry out semantic transformation on the data, like converting text to speech for an audio device. The promise and purpose of context-awareness are to allow computing systems to take action autonomously and to enable systems to sense the situation and act appropriately. For example, in context-aware mobile applications, location is the most commonly used variable in context recognition, as it is relatively easy to detect. User activity is much more difficult to identify than location, but some aspects of this activity can be detected by placing sensors in the environment.

Location is the most commonly used piece of context information, and several different location detection techniques have been utilized in context-awareness research. The global positioning system (GPS) is a commonly used technology outdoors, utilized, for example, in car navigation systems. Network cellular ID can be used to determine location with mobile phones. Measuring the relative signal strengths of Bluetooth and WLAN hotspots and using the hotspots as beacons are frequently used techniques for outdoor and indoor positioning. Other methods used indoors include ultrasonic or infrared-based location detection. Other commonly used forms of context are time of day, day of week, identity of the user, proximity to other devices and people, and actions of the user.

Context-aware device behavior may not rely purely on the physical environment. While sensors have been used to directly provide this physical context information, sensor data often need to be interpreted to aid in the understanding of the user's goals. Information about a user's goals, preferences, and social context can be used for determining context-aware device behavior as well. Knowledge about a user's goals helps prioritize the device actions and select the most relevant information sources. A user's personal preferences can offer useful information for profiling or personalizing services or for refining information retrieval. The user may also have preferences about quality of service issues such as cost efficiency, data connection speed, and reliability, which relate closely to mobile connectivity issues dealing with handovers and alternative data transfer mediums. Finally, social context forms an important type of context, as mobile devices are commonly used to support communication between two people and are used in the presence of other people.
Challenges of context-aware systems include the following:

• A main issue regarding context-aware computing is the fear that control may be taken away from the user. Apart from control issues, privacy and security issues arise. The main parameters of context are user location and activity, which users consider as part of their privacy. Users are especially reluctant to exploit context-aware systems when they know that private information may be disclosed to others.
• There is a gap between how people understand context and what systems consider as context. The environment in which people live and work is very complex; the ability to recognize the context and determine the appropriate action requires considerable intelligence.
• A context-aware system cannot decide with certainty which actions the user may want executed; as the human context is inaccessible to sensors, we cannot model it with certainty.
• A context-aware system cannot be developed to be so robust that it will rarely fail, as ambiguous and uncertain scenarios will always occur, and exceptions may exist even for simple operations.
• A context-aware system can add more and more rules to support the decision-making process; unfortunately, this may lead to large and complex systems that are difficult to understand and use.
• A context-aware application is based on context information that may be imperfect. The ambiguity over the soundness of the context arises from the speed at which the context information changes and from the accuracy and reliability of the producers of the context, like sensors. It is a challenge for context-aware systems to handle context that may be inaccurate or ambiguous in an appropriate manner—more information is not necessarily more helpful; context information is useful only when it can be usefully interpreted.

18.2.1 Types of Context-Awareness

The type of context-awareness determines the type of contexts required. The various types of context-awareness are:

1. Selection of information that can be used as context: Here the selection of the context is based on the proximity of the service required. It could also just be the information needed rather than a choice of service. For example, nurses at a nearby location, restaurants of choice in the proximity, or printers in the vicinity are presented, and the user chooses one of them.
2. Automatic reconfiguration of the context: The system automatically reconfigures itself on the basis of the context of the user. For example, on recognizing that the user is in a cinema theatre, a mobile phone goes into silent mode and calls may be diverted to one's secretary without the need for the user to select this service. It is a system-level technique that creates an automatic binding to an available resource on the basis of the current context. It could also be a process of adding/removing components or altering connections on the basis of the context. There is also a possibility that the results of an application are modified on the basis of the context, as one has typically seen in object-oriented systems.
3. Automatic triggering of action on the basis of context: Applications that execute commands for the user on the basis of the available context use context-triggered actions. They may be in the form of simple if–then rules that specify how context-aware systems should adapt (a small sketch of such rules follows this list).
4. Tagging of context for future use: Once the context has been identified, it can be tagged for future use. A wide spectrum of applications can use such tagged context.
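A minimal sketch of context-triggered actions expressed as if–then rules, as mentioned in item 3 above; the context attributes, rule conditions, and actions are invented for illustration.

```python
def silence_phone(ctx):
    print("Phone set to silent; calls diverted")

def suggest_umbrella(ctx):
    print("Reminder: take an umbrella")

# Each rule pairs a condition on the current context with an action.
RULES = [
    (lambda ctx: ctx.get("location") == "cinema", silence_phone),
    (lambda ctx: ctx.get("weather") == "rain" and ctx.get("activity") == "leaving_home",
     suggest_umbrella),
]

def on_context_change(ctx):
    # Context-triggered actions: fire every rule whose condition holds.
    for condition, action in RULES:
        if condition(ctx):
            action(ctx)

on_context_change({"location": "cinema", "time": "19:05"})
```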


18.2.2 Types of Contexts

Various kinds of context can be examined. Context information may be permanent or temporary in nature, static or dynamic, and may be manually entered or automatically captured. On the basis of the types of sensors, categories of context can be grouped as follows:

1. Location-based context: Information in this category includes that related to location, physical environment (light conditions, noise levels, traffic conditions, or temperature), and situation (in the canteen, in a conference, in a meeting, or in a movie theater). The initial context-aware applications were built using these contexts, as they were minimal and measurable and could be used in available systems without much difficulty.
2. Time-based context: Date- and time-based information (day, week, time, season) that can be easily collected from the clock built into mobile devices. This context is useful in many applications, such as health-care applications where reminders can be given for dispensing medication at appropriate times and for requesting stocks at regular intervals.
3. Application-based context: This type of context generates application- and server-specific information. This information can be used by any service. For example, the traffic density at various locations on a particular route can be used for suggesting alternative routes when there is a traffic jam on a subscriber's route.
4. Network/device-based context: These consist of network and device parameters and variables. One possible application would be to look at the available bandwidth and reconfigure the information/services that are triggered, for instance, bandwidth utilization while using multiple applications on a mobile device.
5. User state-based context: Information in this category includes the user's biological and emotional state, which can be collected by using the physiological sensors mentioned previously. This context can be extremely useful in emergency situations. For example, alterations in the dosage/timing of automatically dispensed medications (e.g., insulin), on the basis of the user's condition, can be an excellent application for saving lives.
6. Context history: To give the user the experience of "ecstasy," the history of the user as well as the contextual information can be used for creating user profiles. These would be extremely useful in many applications. The limitations of memory and processing power will need to be addressed before this type of context becomes widely used.

The context can also be bundled into two categories, "physical" context and "logical" context. The physical context, such as location, time, temperature, and so on, is more directly obtainable, whereas the logical context is a little more difficult to gather as it needs certain processing before one can use it, for example, preferences of the user, relationships, privileges, and so on. Most work has been in the area of physical context processing.


18.2.3 Context Acquisition

As the need for context has become clear, the next issue to be addressed is how to incorporate this context information in system design. There are two standard ways of obtaining information: one is to pull the required/desired information from the environment, and the second is to have the context pushed:

1. Pulling the context: This type of information becomes available to the application only when it is requested. An obvious advantage is that the user will have complete control over the context information requested. Also, "requesting" a certain context, as per the needs of the application, could be built into the application. The user then has to be aware of the needs of the application and has to "work" to incorporate the context. If the application is going to automatically pull the required context information, bandwidth limitations and costs come into the picture. If the required bandwidth is not available, the application may suffer. Also, if the context does not change as frequently as the frequency of the "pull," bandwidth is wasted without real use. In times of emergencies, such as earthquakes or other natural disasters, a large number of consumers may pull the same information from a supplier, causing severe scalability problems.
2. Getting the context pushed: The other alternative is similar to a "broadcast" facility in networks. The context information keeps getting pushed out at regular intervals to all the users. This simplifies the life of the user (and reduces the complexity of application development), in the sense that the user/application is relieved of the job of getting the contextual information. However, the utility of the context information is not known unless the user has specifically subscribed to it. It is possible that a particular piece of context information is irrelevant to a particular user/application; in that case, it is a waste of expensive resources in mobile networks. Also, there would be an additional need for buffering this context in order to process it and check its utility, which increases the cost of the component.

The best approach would then be to combine both these approaches to address all the issues such as bandwidth limitation, cost, frequency of pull, and the utility of context information: the application would request certain specialized context information (pull), and there could be some information that keeps getting pushed out at regular intervals, as illustrated in the sketch below.
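A minimal sketch of a context provider that supports both styles: on-demand pull via get() and periodic push to subscribers via callbacks. The provider, the sensor reading, and the subscriber are invented for illustration and do not correspond to any specific middleware.

```python
import random
import time

class ContextProvider:
    def __init__(self):
        self.subscribers = []

    def read_sensor(self):
        # Stand-in for a real sensor or location service.
        return {"temperature": round(random.uniform(18, 32), 1),
                "timestamp": time.time()}

    def get(self):
        # Pull: the application asks for the context when it needs it.
        return self.read_sensor()

    def subscribe(self, callback):
        # Push: the application registers interest once...
        self.subscribers.append(callback)

    def publish(self):
        # ...and the provider pushes updates out at regular intervals.
        reading = self.read_sensor()
        for callback in self.subscribers:
            callback(reading)

provider = ContextProvider()
provider.subscribe(lambda ctx: print("pushed:", ctx))
print("pulled:", provider.get())
provider.publish()  # in practice this would run on a timer
```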


18.2.4 Context Models

Contextual information can be provided to, or obtained by, the application in two ways:

• The user feeds in the information, classified as profiled information.
• Meaningful information is extracted from raw data collected by the sensors; that is, the data need to be modeled to reflect the physical entities for interpretation.

Evaluation of the context implies the verification of proposed information generated by modules against a specific context. There are different ways in which context data are modeled; these context data models specify the way in which data structures are defined for representing and exchanging the desired context with the system.

1. Key structure models: In the simplest of data structures, the value of context information (e.g., location information, sound level, and so on) is provided to an application as an environment variable. This model is most widely used in various service frameworks and quite frequently in distributed service frameworks. In such frameworks, the services themselves are usually described with a list of simple attributes in a key–value manner. For instance, the context vocabulary may contain (Temperature, Cold) or (Temperature, Hot), where the environment variable is "temperature" and the context is "cold" or "hot." The service discovery procedure then operates an exact matching algorithm on these attributes. Note that key–value pairs are easy to manage but lack the capabilities of sophisticated structuring for enabling efficient context retrieval algorithms. Additional attributes such as timestamp, source, and confidence in the accuracy of the context are also of great importance.
2. Graphical models: The generic graphical modeling language UML, used to represent objects and their interactions, is found to be suitable for modeling the context in multiple scenarios. Another modeling approach extends object role modeling to include the context.
3. Markup scheme models: Once systems became web-based, it became natural to represent context as a data structure consisting of a hierarchical context and markup tags (e.g., location: state, city, street, building, and so on). Profiles contain typical markup scheme models. This XML-based language, or encoding in RDF/S, allows accounting for contextual information and dependencies when defining interaction patterns on a limited scale. Owing to the fact that no design criteria and only parts of the language are available to the public, the actual appropriateness and capabilities of this context modeling approach remain unknown.
4. Object-oriented models: Use of object-oriented techniques to model the context gives it all the capabilities of object orientation, such as encapsulation, polymorphism, and abstraction. Existing approaches use various objects to represent different types of context. They encapsulate the details of context processing and representation, and access to them is only through well-defined interfaces. An example is the use of objects to represent the various inputs from a sensor, where the processing of that information necessary to represent the context is carried out by the methods defined for the particular object.
5. Logic-based models: It is suitable to use a logic-based system when the contextual information shows a high degree of formality. These systems manage the facts, expressions, and rules used to define the context model. They not only permit updating or deleting existing facts/rules but also permit the addition of new ones.
6. Ontology-based models: The nature of context-aware services demands that the context information be modeled in a platform-independent manner and support interoperation. Thus, the use of ontology in a context model at a semantic level and the establishment of a common understanding of terms are imperative. The descriptions of the concepts and the relationships between them can be represented as ontologies. This would enable context sharing, reasoning, and reuse in such environments.
The simplicity of representation, flexibility, extensibility, genericity, and formal expressiveness of ontologies, together with the possibility of applying ontology reasoning techniques, have made these models the most promising instrument for modeling contextual information in many current frameworks.
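As a small illustration of the simplest of these options, the key structure model, the following sketch represents context as key–value pairs enriched with the timestamp, source, and confidence attributes noted above; the attribute names and the matching rule are illustrative assumptions.

```python
import time

def context_entry(key, value, source, confidence):
    # A key-value context item with provenance and quality metadata.
    return {"key": key, "value": value, "source": source,
            "confidence": confidence, "timestamp": time.time()}

context_store = [
    context_entry("temperature", "cold", source="outdoor_sensor", confidence=0.9),
    context_entry("location", "office", source="wlan_beacon", confidence=0.7),
]

def match(store, key, value, min_confidence=0.5):
    # Exact matching on (key, value), filtered by confidence,
    # mirroring the simple service-discovery matching described above.
    return [e for e in store
            if e["key"] == key and e["value"] == value
            and e["confidence"] >= min_confidence]

print(match(context_store, "temperature", "cold"))
```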


18.2.5 Generic Context-Aware Application Architecture

A generic architecture for a context-aware application will include components like:

1. Context providers/acquirers: These are sensors or devices responsible for collecting context. These sensors could be physical sensors (such as photodiodes, temperature sensors, GPS, and so on), virtual sensors (e.g., use of retrieved electronic calendars as estimators for position parameters), or logical sensors (some combination of physical and virtual sensor data).
2. Context aggregator: This is a module that takes inputs from various context providers/acquirers, aggregates the context, and provides it to a context interpreter. To perform the aggregation, a set of statistical models may be required, along with some learning techniques. This layer may be merged into the context inferencer/interpreter in simple designs; however, this extra layer of abstraction enhances encapsulation, eases development, and increases the flexibility available to the developer. It also helps improve network performance when the aggregator can be accessed remotely: the client application then does not need to access multiple context providers, thus reducing network traffic.
3. Context interpreter/inferencer: This module, using an inference engine, provides a "why" to the context. The gathered/interpreted context is evaluated on the basis of the needs of the application.
4. Context processor: This module, based on the repositories and the aggregated, interpreted context, generates the list of services required to provide a specific service. The simplest models embed the context processing logic in identifying the required service; however, separating the context logic module from the other components, and the architecture from the supporting service providers, makes context processing more generic and also more dynamic.
5. Service fetcher: A network may be used to fetch the desired service; the levels of abstraction and object orientation enhance network performance.
6. Service disburser: This component deploys the required/desired service on the mobile device on the basis of the context.

The functions of components 4, 5, and 6 may be carried out by the server, the network, and the actual mobile device, respectively. A minimal sketch of this pipeline, in code, is given after the application categories listed in the next subsection.

18.2.6 Illustrative Context-Aware Applications

Context-aware applications can be classified into five categories:

1. Location-aware applications: These are applications where the context is basically obtained from the location, and the application set provided is also about the location or the services relevant to that location. These applications use either pull or push techniques to gather context and may be made to respond in a specific way on the basis of the context.
2. Time-aware applications: These are applications where the context is basically obtained from time and date, and they respond to the time of day or the season. This type of context is relatively easy to obtain, from the clock embedded in the device, a simple temperature sensor, and so on.


3. Device-aware applications: In these applications, the context basically depends on the various devices themselves, or on the state of the network and of other devices that share the context they have collected.
4. Personalized applications: In these applications, the user specifies limits on the contexts that devices should respond to. This means that the user has to manually customize the conditions that the application is expected to respond to.
5. Adaptable applications: These applications/services automatically reconfigure themselves on the basis of the context in the environment and respond to the changing context. These types of systems are still in the development stage. A category of these applications responds dynamically to the emotional and physical state of the user along with the environment.
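The following is a minimal Python sketch of the generic pipeline from Section 18.2.5. All class names and the rule that maps speed to an activity are illustrative assumptions, and the "sensors" simply return canned values so that the provider, aggregator, interpreter, and processor stages can be traced end to end.

class ContextProvider:
    """Physical, virtual, or logical sensor wrapper."""
    def __init__(self, name, read_fn):
        self.name, self.read_fn = name, read_fn

    def read(self):
        return {self.name: self.read_fn()}

class ContextAggregator:
    """Merges readings from several providers into one context dictionary."""
    def __init__(self, providers):
        self.providers = providers

    def aggregate(self):
        context = {}
        for p in self.providers:
            context.update(p.read())
        return context

class ContextInterpreter:
    """Turns raw values into higher-level context (the 'why')."""
    def interpret(self, context):
        context["activity"] = "commuting" if context.get("speed_kmh", 0) > 5 else "stationary"
        return context

class ContextProcessor:
    """Selects the services to fetch/disburse for the interpreted context."""
    def select_services(self, context):
        if context["activity"] == "commuting":
            return ["traffic_alerts", "nearby_transit"]
        return ["local_weather"]

providers = [ContextProvider("speed_kmh", lambda: 22.0),
             ContextProvider("hour", lambda: 8)]
context = ContextInterpreter().interpret(ContextAggregator(providers).aggregate())
print(ContextProcessor().select_services(context))   # ['traffic_alerts', 'nearby_transit']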

18.3 Location Based Systems

Since the days of the early location tracking functionalities introduced in Japan and in some U.S. networks in 2001, location-based services (LBS) have made considerable progress. In addition to the currently emerging satellite-based systems, such as GPS (United States), GLONASS (Russia), GALILEO (EU), and COMPASS (China), which provide wide coverage to benefit LBS, location information can also be derived from sensors, radio-frequency identification (RFID), Bluetooth, WiMAX, and wireless local area networks (WLANs). These systems can be used stand-alone or to supplement satellite coverage for location tracking in indoor environments, where satellite reception is intermittent or inaccurate. Wi-Fi, especially, could be used as the basis for determining position, like an indoor form of GPS, with access points acting as satellites.

In the development of LBS standards, many organizations play significant roles, such as the Open Mobile Alliance (OMA) and the Open Geospatial Consortium (OGC). These organizations offer various location-based protocols and standards for the development of LBS. The most important specification that OMA has come up with is the Mobile Location Protocol (MLP). MLP enables LBS applications to interoperate with a wireless network regardless of its interfaces (Global System for Mobile Communications [GSM], CDMA, etc.) and positioning methods. MLP defines a common interface that facilitates the exchange of location information between the LBS application and location servers in wireless networks. It also supports the privacy of users by providing access to location information only to authorized users. Hence, the OMA is the key enabler of mobile service specification standards that support the creation of interoperable end-to-end mobile services.

The OGC is an international organization responsible for the development of standards for geospatial services and LBS. To complement the Location Interoperability Forum's (LIF) advanced MLP services, the OGC has come up with the OpenLS services, which address geospatial interoperability issues. Key services handled by the OpenLS specifications are coordinate transformation, web mapping, Geography Markup Language (GML), geoprocessing, and web integration.

18.3.1 Sources of Location Data

Location sources can take various forms, but can be classified into two basic kinds: those which determine location with respect to global coordinates, using known reference points; and those which simply determine location with respect to other nodes in the system.


18.3.1.1 Cellular Systems

A primary source of location information is proximity to a sensor. The range of the sensor may vary, but if presence is detected then the located object can be said to be within that sensor's range (or cell). In some cases, this requires contact with the sensor: pressure on a seat, etc. If cells are designed so that there is no overlap, then a cell ID translates to a region in a trivial manner. Note that the cell may broadcast its ID, or a device may broadcast a query for location information. In a system where a device will only see or be seen by one reference node (or beacon, or base station) there is little difference between the two, apart from any overheads. In some cases, this contact (or other very close connection) is substituted with some proxy for the located object, for example:

• Terminal activity on a computer
• Swipe cards on doors as proxies for user location
• RFID tags as proxies for object location

As cells become more dense, so that regions usually overlap, the location estimate can become more accurate, as the intersection of cells reduces the possible region referred to. If the cells are defined as regions around the known locations of reference nodes in a regular grid, then the average coordinate, in each axis, of the reference nodes seen at some point defines the centroid estimate of the device to be located. As the ratio of beacon range to beacon separation increases, the average error of this centroid estimate becomes a smaller multiple of the beacon separation. In real-world situations, movement of beacon positions and ranges due to environmental changes means that a regular grid may not be seen, but geometric intersection of known (or expected) signal patterns can still be used to identify a region that contains the device to be located. Where cells overlap there is a greater possibility of beacon transmissions or query responses colliding, although timing for stationary beacons may be easier to coordinate than responses to a query from a device with an unknown location.

Some location systems require inputs from multiple sources in order to arrive at a location, by triangulation or trilateration; examples include GPS, ultrasonic systems with a mesh of sensors in the room, and the use of signal strength from multiple wireless network base stations. Here geometry is applied so that multiple distance measurements (trilateration) or angle measurements (triangulation) from beacons, which do not map into a single plane, are combined to find the point of intersection. In general, using distance measurements requires (at least) D + 1 measurements, that is, three measurements for 2D and four measurements for 3D. Where the system provides extra information about the relative location of the object of interest and the transmitters, this number can be reduced; for example, in GPS the satellites are known to be above the receiver. Using angular measurements is similar, but the distance separating two sources is usually required; sometimes a constant reference such as magnetic north is used. In both cases, installation-specific knowledge can be applied to simplify the problem.

Distance is rarely measured directly, with a tape measure, but through other measures such as time of flight and signal attenuation. These measures can sometimes be obtained at little cost from existing infrastructure, for example 802.11 base stations, but may require calibration to the local environment.
Time of flight either requires precisely synchronized clocks (rather hard to achieve in practice), or two signals with different propagation speeds, commonly RF and audio, so that the time of flight is determined from the difference in their arrival times.
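As a worked illustration of the "D + 1 measurements" rule, the sketch below solves the 2D case: three distance measurements from beacons at known positions are linearized (by subtracting the circle equations) and solved for the device position. The beacon positions and the test point are made-up values.

import numpy as np

def trilaterate_2d(beacons, distances):
    """Estimate (x, y) from 3 beacon positions and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting the circle equation of beacon 1 from those of beacons 2 and 3
    # yields two linear equations in x and y.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 7.0])
distances = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate_2d(beacons, distances))   # ~[4.0, 7.0]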


18.3.1.2 Multireference Point Systems

18.3.1.2.1 Global Positioning System (GPS)

GPS uses highly accurate atomic clocks on satellites to transmit a time signal; the satellites also send data on their orbits ("ephemeris") to allow calculation of time of flight, and approximate data on other satellite positions to aid signal acquisition ("almanac"). These orbits mean that each satellite moves across the sky when seen from a stationary point on the Earth's surface, requiring a receiver to scan for new signals, but providing robustness against individual satellite failure and ensuring satellites are distributed across the sky to facilitate global coverage and accuracy. At least four satellites must be visible to achieve an accurate 3D location plus time, from a constellation of at least 24 satellites in medium Earth orbit.

A message from an individual satellite is encoded using Code Division Multiple Access (CDMA), allowing all satellites to share a frequency band. This message is decoded and its arrival time recorded using an internal clock at the receiver. This clock is not synchronized with the satellite clocks, so it cannot by itself yield a distance. However, the difference in time of flight between satellites, together with the ephemeris, can be used to compute an initial distance estimate. Because the times involved are very short, small timing errors imply large positioning errors, and the distance calculation must include corrections for relativistic effects. This distance describes a sphere in space centered on the satellite (although it is assumed the receiver is below the orbit, so a full sphere is not required). As additional satellite signals are decoded, additional spheres can be described. Receiver position is computed from the intersection of the spheres. This requires that the satellites do not lie on the same plane as each other, but their number and orbits help to ensure this condition is met.

A fourth satellite signal is usually used to correct the GPS receiver's internal clock, by measuring the distance from its sphere to the three-satellite position estimate. With an accurate clock, a new estimate of time of flight, and hence distance to each satellite, can be produced. This computation is typically an iterative estimation, improving the local clock accuracy and acquiring additional signals to refine the accuracy of the distance estimates, and hence reducing the error in the position estimate. During initial signal acquisition, movement makes this refinement of timing rather difficult and leads to delays in achieving an accurate estimate of location. The principle of lateration used in GPS is illustrated in Figure 18.1, which shows the two possible intersection points of signals emanating from three satellites.

There is some inherent uncertainty in GPS data, arising from various factors including the following:

• Satellite clock accuracy, causing drift over time; this is frequently corrected.
• Accuracy of the measured signal delay, limiting resolution.
• Accuracy of the model of real satellite position used for calculating signal delay; this is frequently updated.
• Atmospheric variation, whose effect on signal time of flight is the least easy to model and predict.
• Selective availability, introducing deliberate errors of up to 100 m into the signal for military purposes; this can be corrected for using differential GPS.
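The following sketch, with synthetic satellite positions and an arbitrary receiver clock bias, illustrates the iterative estimation described above: four pseudoranges are used to solve simultaneously for position and the receiver clock offset via a Gauss-Newton least-squares loop. Real GPS processing also involves ephemeris decoding and atmospheric and relativistic corrections, all omitted here; the constellation geometry is invented for the example.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_gps(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver position (x, y, z) and clock bias (in meters)."""
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
        predicted = ranges + x[3]
        residuals = pseudoranges - predicted
        # Jacobian of the predicted pseudorange w.r.t. (x, y, z, bias)
        J = np.hstack([(x[:3] - sat_positions) / ranges[:, None],
                       np.ones((len(pseudoranges), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3]

# Synthetic constellation (~26,600 km orbit radius) and a receiver on the surface.
sats = np.array([[26_600e3, 0, 0],
                 [0, 26_600e3, 0],
                 [0, 0, 26_600e3],
                 [15_000e3, 15_000e3, 15_000e3]])
receiver = np.array([6_371e3, 0, 0])
clock_bias_m = 1e-3 * C  # a 1 ms receiver clock error corresponds to ~300 km of range
rho = np.linalg.norm(sats - receiver, axis=1) + clock_bias_m

pos, bias = solve_gps(sats, rho)
print(np.round(pos), round(bias / C, 6))  # ~[6371000, 0, 0] and ~0.001 s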


FIGURE 18.1 Principle of lateration.

18.3.1.3 Tagging

Location can be determined by systems with tags in two ways:

• By fixing tags at known locations: when a tag is scanned, the scanner must be within scanning range (which depends on the technology) of the tag's location.
• By having tags report to some infrastructure: cellular or triangulation techniques can then be applied.

Tag systems are characterized by the following:

1. The cost and integration involved in deploying the tags, and the cost and integration involved in reading tags. There may be setup costs for a system where tags refer to an infrastructure.
2. The behavior mode of the tag–reader system, including whether the reading of a tag is automatic or manually triggered, and whether the reading is an entry or trigger event or a state of presence.
3. The range over which a reader can read a tag. This may be variable for a given technology, influenced by tag and reader design, available power, and the environment.
4. The "location" of the tag may be fixed, for example, tags that identify rooms and bus stops; or the tag may be attached to a mobile object, in which case the location only describes proximity rather than absolute position.
5. The tags may be unique, for example, Bluetooth tags with a MAC address; the tag may have duplicates, for example, identifying a shop by brand rather than location; or a given location may have multiple tags, for example, at different entrances or identifying different properties of the place. Where tags are intended to be unique, one should consider whether security or functionality issues arise from forgery, accidental reuse, or tag repositioning.

18.3.1.3.1 Bar Codes

Bar codes are commonly found on printed product packaging and provide a simple optical ID system. The tag can encode numbers, alphanumeric codes, 8-bit binary data, or kanji. The code can be read from any angle, is somewhat damage resistant, and the level of detail required depends on the amount of data being encoded. Deploying codes requires some software to encode the data into a black-and-white image, and a printer or display. Reading the code requires a simple camera and software, as available on any smart phone. The reading of such a code is a one-way process, the tag being unaware of any data access triggered by the encoded data, unless it is displayed by a device that is informed of the access. Lighting conditions (bright lights and darkness) can interfere with tag reading, particularly with glossy weatherproofing.

Bar code systems are characterized by the following:

• Cameras are built into many consumer devices; where these already exist, the cost of reading approaches zero. The monetary and power cost for additional tags is low for printed tags, higher for active displays.
• Reading a tag is typically manually triggered, as the device often has other purposes and requires re-enabling after a read, but this behavior could be changed. The reading of a tag is typically an event.
• The tags are easily duplicated, and a given location may have multiple tags, if there is space to display them.
• The range depends on lighting, printed size, and camera quality, but is usually under a meter in practical applications.

18.3.1.3.2 Bluetooth

Bluetooth-enabled devices can be used as a form of tagging. To act as a tag, a device needs to have its Bluetooth switched on and discoverable. To act as a reader, a device needs to scan for other Bluetooth devices. The tag data can simply be the tag's unique Bluetooth address, which tells the reader little about the device but is easy to acquire and could be correlated with other data. In this case, the tag is not aware of the reader detecting it, and, where tags are in fact personal devices, this may give rise to privacy concerns, particularly where the device has been given a personal name, or where the readers store scans and correlate them with other data (a network of readers in other locations, very particular reader locations, video observations, etc.).

Bluetooth systems are characterized by the following:


• Bluetooth circuits are built into many consumer devices; where these already exist, the cost approaches zero. The cost for additional tags is low. The energy cost of tag beacons and reader scanning can be quite high, as seen in reduced mobile phone battery life with Bluetooth switched on.
• Once switched on and properly coded, the scanning can be automatic, frequent, registers presence, and is invisible to the human observer, although each of these properties can be modified in code and/or device configuration.
• The tags are generally unique, but a given location may have multiple tags.
• The range depends on power and antenna design for both reader and tag. A typical configuration would allow detection over 5–10 m, although longer-range devices may extend this and substantial walls and metal structures may reduce it.

18.3.1.3.3 Radio Frequency ID (RFID)

Radio Frequency ID (RFID) tags are widely used in warehouse, retail, library and transport situations, supporting theft detection, stock identification and tracking, and account identity for payment. The tag data depends on the design of the tag, and can range from a simple numeric ID (used in a similar way to a bar code for stock control) to more complex structured records. The tag computation may range from unauthenticated, unencrypted data exchange on demand, to protocols which require an identifier and provide simple encryption; some protocols may also record tag reading events or cause tag data to be changed. The modification of tag data increases the need for security provision but allows the tag system to function where access to a database indexed by tag data is problematic. The main barrier to invisible tag reading is the need to camouflage long-range antennas; and as possession of tagged objects could reveal quite personal data, their use has been somewhat limited.

Reading is contact-less, using radio (unlike a smartcard), with range depending on available power and antenna design, typically spanning from a few centimeters to a meter. A tag consists of an aerial and some small memory and/or computing capacity plus two-way radio interface circuitry, typically printed on a label or embedded in a card. A wide range of radio frequencies and protocols (both standard and proprietary) are in use, with the choice affecting tag and reader size, power consumption, range, and data transfer speed. Different systems also have differing abilities to handle reading multiple in-range tags simultaneously. The tag and aerial are often several centimeters across; further miniaturization is constrained by the antenna design necessary for a desired range. It is possible to embed a power source into a tag, but passive RFIDs are the norm, wherein power is extracted from the radio signal of the reader in order to power the tag while it is being read.

RFID systems are characterized by the following:

• The marginal cost of a tag is very low, and tags are widely used as disposable labels for low-value goods. The costs of tag producing/writing hardware are higher. Reader costs are moderate, often requiring integration with other systems. Power costs of reading are significant, but passive RFID has no routine maintenance cost.
• Once switched on and properly coded, the scanning is automatic, frequent, registers presence, and is invisible to the human observer. Modifying this behavior is harder than in the Bluetooth case, as the tags and scanners are less likely to be easily user-programmable or to incorporate displays.
• The tags may be unique where suitable security provisions have been made, but for simpler technology should be treated as duplicatable; a given location may have multiple tags, and it may be possible to read them all in one scan.
• Tag range is from millimeters to a few meters, generally requiring clear air.
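A minimal sketch of the first tagging approach (fixed tags at known locations): the registry mapping tag identifiers to places is an assumed, application-maintained table, and scanning a tag simply constrains the reader to be within that tag's read range. The identifiers and ranges below are illustrative only.

# Hypothetical registry of fixed tags (bar codes, Bluetooth addresses, RFID EPCs).
TAG_LOCATIONS = {
    "urn:epc:id:sgtin:0614141.107346.2017": ("Warehouse A", "dock door 3"),
    "00:1A:7D:DA:71:13": ("Building 2", "lobby"),
    "room-417-qr": ("Building 2", "room 417"),
}

READ_RANGE_M = {"rfid": 1.0, "bluetooth": 10.0, "barcode": 0.5}  # rough, technology-dependent

def locate_reader(tag_id, technology):
    """Infer the reader's location (with an uncertainty radius) from a scanned fixed tag."""
    place = TAG_LOCATIONS.get(tag_id)
    if place is None:
        return None  # unknown or mobile tag: only proximity to the tag itself is known
    building, spot = place
    return {"building": building, "spot": spot, "within_m": READ_RANGE_M[technology]}

print(locate_reader("00:1A:7D:DA:71:13", "bluetooth"))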


18.3.2 Mobility Data

The number of mobile phone users worldwide was estimated at about 2 billion in 2015. While the location technologies currently used by wireless phone operators, such as GSM and UMTS, are capable of providing an increasingly better estimate of a user's location, the integration of various positioning technologies is progressing rapidly:

1. GPS-equipped mobile devices can transmit their trajectories to a service provider.
2. Wi-Fi and Bluetooth devices can be a source of data for indoor positioning.
3. WiMAX can be a source of data for outdoor positioning.

As computing and communication devices become all-pervasive, sensing of all human activity becomes possible merely as a side effect of the ubiquitous services provided to mobile users. Thus, the wireless phone network, designed to provide mobile communication, can also become an infrastructure for gathering mobility data, if used to record the location of its users at different times. We have today a chance of collecting and storing mobility data of unprecedented quantity, quality and timeliness at a very low cost: in principle, a dream for traffic engineers and urban planners, compelled until yesterday to gather data of limited size and precision only through highly expensive means such as field experiments, surveys of the travelling habits of commuting workers, and ad hoc sensors placed on streets.

18.3.2.1 Mobility Data Mining

The way people live and move, and their everyday actions and activities, leave digital traces in the information systems of the organizations that provide services through the wireless networks for mobile communication. Because of the increasing pervasiveness and positioning accuracy that can be achieved, the potential value of these traces in recording the human activities in an area is increasing tremendously every day. Each time a mobile phone is used on a given network, the phone company records real-time data about it, including time and cell location. Source data are positioning logs from mobile cellular phones, reporting users' locations with reference to the cells in the GSM network; these mobility data come as streams of raw log entries recording:

• Users entering a cell (userID, time, cellID, in)
• Users exiting a cell (userID, time, cellID, out)
• A user's absolute position, in the case of GPS/Galileo-equipped devices
• A user's position within a cell (userID, time, cellID, X, Y), as a future norm
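A minimal sketch of how such raw log entries might be turned into per-user, cell-level trajectories (anticipating the trajectory reconstruction step described below). The tuples follow the format listed above; the entries themselves are invented sample data.

from collections import defaultdict

# Raw log stream: (userID, time, cellID, event) as described above (invented sample data).
log = [
    ("u1", 100, "c12", "in"), ("u1", 460, "c12", "out"), ("u1", 465, "c15", "in"),
    ("u2", 120, "c15", "in"), ("u1", 900, "c15", "out"), ("u1", 905, "c21", "in"),
]

def reconstruct_trajectories(entries):
    """Group log entries by user and order them in time, yielding cell-level trajectories."""
    by_user = defaultdict(list)
    for user, t, cell, event in sorted(entries, key=lambda e: e[1]):
        if event == "in":                      # record each cell the user enters
            by_user[user].append((t, cell))
    return dict(by_user)

print(reconstruct_trajectories(log))
# {'u1': [(100, 'c12'), (465, 'c15'), (905, 'c21')], 'u2': [(120, 'c15')]}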

The location tracking process, i.e., the analysis of mobility with reference to a particular geographical location at appropriate scales and granularity, entails the following steps.

18.3.2.1.1 Trajectory Reconstruction

The stream of raw mobility data has to be processed to reconstruct the trajectories of individual moving objects.


The reconstruction accuracy of trajectories, as well as their level of spatio-temporal granularity, depends on the quality of the log entries, since the precision of the position may range from the granularity of a cell of varying size to the relative (approximated) position within a cell. Sophisticated reconstruction of trajectories from raw mobility data needs to take into account the spatial and temporal imperfections of the reconstruction process. Each moving object trajectory is typically represented as a set of localization points of the tracked device, called samples. This representation has intrinsic imperfection mainly due to:

1. The sampling rate, which forces the trajectory reconstruction process to approximate the movement of the object between two localization points.
2. The measurement error of the tracking device. For example, a GPS-enabled device introduces a measurement error of a few meters, whereas the imprecision introduced in a GSM/UMTS network is the dimension of a cell, which could be from less than 100 meters in urban settings to a few kilometers in rural areas.

The resulting trajectories are stored in appropriate repositories, such as a trajectory database (e.g., a moving object database) or a data warehouse. The management and querying of large volumes of mobility data and reconstructed trajectories also pose specific problems that can now be addressed by big data computing technologies.

18.3.2.1.2 Trajectory Mapping

Spatio-temporal data mining methods are needed to extract useful patterns out of trajectories, including:

1. Frequent patterns: the discovery of frequently followed (sub)paths or trajectories. Such information can be useful in urban planning, for example, by spotlighting frequently followed but inefficient vehicle paths, which can be the result of a mistake in road planning (Figure 18.2a).
2. Classification: the discovery of behavior rules, aimed at explaining the behavior of current users and predicting that of future ones. Urban traffic simulations are a straightforward example of an application for this kind of knowledge, since a classification model can represent a sophisticated alternative to the simple ad hoc behavior rules, provided by domain experts, on which actual simulators are based (Figure 18.2b).
3. Clustering: the discovery of groups of "similar" trajectories, together with a summary of each group. Knowing the main routes (represented by clusters) followed by people or vehicles during the day can represent precious information for mobility analysis. For example, trajectory clusters may highlight the presence of important routes not adequately covered by the public transportation service (Figure 18.2c).

18.3.2.1.3 Trajectory Visualization

Trajectory maps are usually devoid of the corresponding geographical information. Once suitable methods for interpreting and delivering geographic knowledge about trajectories are available, several application scenarios become possible.


FIGURE 18.2 Trajectory mapping: (a) Trajectory patterns, (b) trajectory prediction, and (c) trajectory clustering.

For instance, trajectory visualization can support and improve decision making on mobility-related issues, including:

• Timely detection of problems that emerge from the movement behavior
• Timely detection of changes that occur in the movement behavior
• Localizing new services in our towns
• Forecasting traffic-related phenomena
• Organizing postal and logistics systems
• Planning traffic and public mobility systems in metropolitan areas
• Planning physical communication networks, such as new roads or railways
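As a toy illustration of the frequent-pattern step of trajectory mapping, the sketch below counts contiguous cell-ID sub-paths of a fixed length across a set of invented trajectories; real systems use far more sophisticated spatio-temporal pattern mining, and the support threshold here is arbitrary.

from collections import Counter

trajectories = [                       # cell-ID sequences (invented)
    ["c12", "c15", "c21", "c30"],
    ["c12", "c15", "c21", "c8"],
    ["c40", "c15", "c21", "c30"],
]

def frequent_subpaths(trajs, length=2, min_support=2):
    """Count contiguous sub-paths of the given length and keep the frequent ones."""
    counts = Counter()
    for traj in trajs:
        for i in range(len(traj) - length + 1):
            counts[tuple(traj[i:i + length])] += 1
    return {path: n for path, n in counts.items() if n >= min_support}

print(frequent_subpaths(trajectories, length=2))
# {('c12', 'c15'): 2, ('c15', 'c21'): 3, ('c21', 'c30'): 2}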

18.4 Context-Aware Recommender Systems

Many existing approaches to recommender systems focus on recommending the most relevant items to individual users and do not take into consideration any contextual information, such as time, place, and the company of other people (e.g., for watching movies or dining out).


In other words, traditionally recommender systems deal with applications having only two types of entities, users and items, and do not put them into a context when providing recommendations. However, in many applications, such as recommending a vacation package, personalized content on a Web site, or a movie, it may not be sufficient to consider only users and items; it is also important to incorporate contextual information into the recommendation process in order to recommend items to users under certain circumstances. For example, using the temporal context, a travel recommender system would provide a vacation recommendation in the winter that can be very different from the one in the summer. Similarly, in the case of personalized content delivery on a Web site, it is important to determine what content needs to be delivered (recommended) to a customer and when. More specifically, on weekdays a user might prefer to read world news when she logs on in the morning and the stock market report in the evening, and on weekends to read movie reviews and do shopping.

There are three main algorithmic paradigms for incorporating contextual information into rating-based recommender systems, namely, contextual pre-filtering, post-filtering, and modeling.
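A minimal sketch of the contextual pre-filtering paradigm: ratings are first filtered by the target context, and only then fed into a (deliberately simple) rating estimator. The data and the "season" context dimension are invented for illustration.

from collections import defaultdict

# (user, item, context, rating) tuples -- invented sample data.
ratings = [
    ("ann", "beach_resort", "summer", 5), ("bob", "beach_resort", "summer", 4),
    ("ann", "beach_resort", "winter", 2), ("bob", "ski_trip", "winter", 5),
    ("cat", "ski_trip", "winter", 4),     ("cat", "beach_resort", "summer", 5),
]

def recommend(target_context, top_n=1):
    """Contextual pre-filtering: keep only ratings matching the context, then rank items by mean rating."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, context, rating in ratings:
        if context == target_context:          # the pre-filtering step
            sums[item] += rating
            counts[item] += 1
    scores = {item: sums[item] / counts[item] for item in sums}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("winter"))   # ['ski_trip']
print(recommend("summer"))   # ['beach_resort']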

18.5 Summary

This chapter proposed context-aware computing as the digital transformation of primarily the mobility aspects of enterprise architecture (see Chapter 10). Context-aware applications are envisaged to emerge as an area with high-growth business potential in the future because of the tremendous growth in mobile-based and sensor-based services and applications. Context information can be a decisive factor in mobile applications in terms of selecting the appropriate interaction technique. Designing interactions among users and devices, as well as among devices themselves, is critical in mobile applications. The multiplicity of devices and services calls for systems that can provide various interaction techniques and the ability to switch to the most suitable one according to the user's needs and desires.

Current mobile systems are not efficiently adaptable to the user's needs. Most current context-aware systems have been built in an ad hoc manner and are deeply influenced by the underlying technology infrastructure utilized to capture the context. To ease the development of context-aware ubicom (ubiquitous computing) and mobile applications, it is necessary to provide universal models and mechanisms to manage context. None of these services and applications will be viable without being enabled by cloud computing, big data and the Internet of Things (IoT). This chapter introduced the fundamentals of context-aware systems, with specific focus on mobile-based applications, which are presently witnessing the highest growth in the consumer market space.

19 Internet of Things (IoT) Computing

As stated right at the beginning of Chapter 11, to withstand the disruptive digital storms of the future, enterprise architecture effects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of the ubiquity aspects, primarily through Internet of Things (IoT) computing. This chapter explains IoT computing.

A single smart device (e.g., in a refrigerator) will communicate with a router installed in the house or with a cellular tower, and the same will happen for similar devices installed in other equipment and places. But in places where a large number of these devices are used, an aggregation point might be required to collect the data and then send it to a remote server. Examples of such deployments are industrial process control, monitoring of utility supply lines such as oil pipelines or water sewage lines, and product supply chains in a warehouse or some secured area.

IoT refers to a network of interconnected things, objects, or devices on a massive scale connected to the Internet. Being smart, these objects sense their surroundings, and gather and exchange data with other objects. Based on the gathered data, the objects make intelligent decisions to trigger an action, or send the data to a server over the Internet and wait for its decision. The most common nodes in IoT are:

• Sensors used in many areas, such as industrial process control
• Sensors used inside ovens and refrigerators
• RFID chips used as tags in many products of everyday use

Almost all of these smart devices have a short communication range and require very little power to operate. Bluetooth and ZigBee (based on IEEE 802.15.4) are the most common communication technologies used in this regard. Just like the Internet and Web connect humans, the IoT is a revolutionary way of architecting and implementing systems and services based on evolutionary changes. The Internet as we know it is transforming radically, from an academic network in the 1980s and early 1990s to a mass-market, consumer-oriented network. Now, it is set to become fully pervasive, connected, interactive, and intelligent. Real-time communication is possible not only by humans but also by things at any time and from anywhere. It is quite likely that sooner or later the majority of items connected to the Internet will not be humans, but things. IoT will primarily expand communication from the 7 billion people around the world to the estimated 50–70 billion machines. This would result in a world where everything is connected and can be accessed from anywhere; this has the potential of connecting the 100 trillion things that are deemed to exist on Earth.

With the advent of IoT, the physical world itself will become a connected information system. In the world of the IoT, sensors and actuators embedded in physical objects are linked through wired and wireless networks that connect to the Internet. These information systems churn out huge volumes of data that flow to computers for analysis. When objects can both sense the environment and communicate, they become tools for understanding the complexity of the real world and responding to it swiftly.


The increasing volume, variety, velocity, and veracity of data produced by the IoT will continue to fuel the explosion of data for the foreseeable future. With estimates ranging from 16 to 50 billion Internet-connected devices by 2020, the hardest challenge for large-scale, context-aware applications and smart environments is to tap into disparate and ever-growing data streams originating from everyday devices, and to extract hidden but relevant and meaningful information and hard-to-detect behavioral patterns from them. To reap the full benefits, any successful solution for building context-aware, data-intensive applications and services must be able to make this valuable or important information transparent and available at a much higher frequency, to substantially improve the decision-making and prediction capabilities of the applications and services.

19.1 Introduction to Internet of Things

The Internet of Things was originally introduced by the Auto-ID research center at MIT (Massachusetts Institute of Technology), where an important effort was made on the unique identification of products, named EPC (Electronic Product Code), which was then commercialized by EPCglobal. EPCglobal was created to follow the Auto-ID objectives in industry, with EAN.UCC (European Article Numbering–Uniform Code Council), now called GS1, as a partner to commercialize Auto-ID research, mainly the Electronic Product Code.

IoT aims to integrate, collect information from, and offer services to a very diverse spectrum of physical things used in different domains. "Things" are everyday objects for which IoT offers a virtual presence on the Internet, allocates a specific identity and virtual address, and adds capabilities to self-organize and communicate with other things without human intervention. To ensure a high quality of services, additional capabilities can be included such as context awareness, autonomy, and reactivity.

Very simple things, like books, can have radio frequency identification (RFID) tags that help track them without human intervention. For example, in an electronic commerce system, an RFID sensor network can detect when a thing leaves the warehouse and can trigger specific actions like an inventory update or rewarding the customer for buying a high-end product. RFIDs enable the automatic identification of things, the capture of their context (for example, the location) and the execution of corresponding actions if necessary. Sensors and actuators are used to transform real things into virtual objects with digital identities. In this way, things may communicate, interact and collaborate with each other over the Internet.

Adding part of the application logic to things transforms them into smart objects, which have additional capabilities to sense, log and understand the events occurring in the physical environment, autonomously react to context changes, and intercommunicate with other things and people. A tool endowed with such capabilities could register when and how the workers used it and produce a financial cost figure. Similarly, smart objects used in the e-health domain could continuously monitor the status of a patient and adapt the therapy according to the needs. Smart objects can also be general-purpose portable devices like smart phones and tablets, which have processing and storage capabilities and are endowed with different types of sensors for time, position, temperature, etc. Both specialized and general-purpose smart objects have the capability to interact with people.


The IoT includes a hardware, software and services infrastructure for networking things. The IoT infrastructure is event-driven and real-time, supporting context sensing, processing, and exchange with other things and the environment. The infrastructure is very complex due to the huge number (50–100 trillion) of heterogeneous, possibly mobile things that dynamically join and leave the IoT, and that generate and consume billions of parallel and simultaneous events geographically distributed all over the world. The complexity is augmented by the difficulty of representing, interpreting, processing, and predicting the diversity of possible contexts. The infrastructure must have important characteristics such as reliability, safety, survivability, security and fault tolerance. Also, it must manage the communication, storage and compute resources.

The main function of the IoT infrastructure is to support communication among things (and other entities such as people, applications, etc.). This function must be flexible and adapted to the large variety of things, from simple sensors to sophisticated smart objects. More specifically, things need a communication infrastructure that is low-data-rate, low-power, and low-complexity. Actual solutions are based on short-range radio frequency (RF) transmissions in ad hoc wireless personal area networks (WPANs). A main concern of IoT infrastructure developers is supporting heterogeneous things by adopting appropriate standards for the physical and media access control (MAC) layers, and for communication protocols. The protocol and compatible interconnection for simple wireless connectivity with relaxed throughput (2–250 kbps), low range (up to 100 m), moderate latency (10–50 ms) and low cost, adapted to devices previously not connected to the Internet, were defined in IEEE 802.15.4.

The main scope of IoT specialists is the world-wide network of interconnected virtual objects, uniquely addressable and communicating through standard protocols. The challenge here is coping with a huge number of (heterogeneous) virtual objects. Table 19.1 presents a comparison of the conventional Internet with the Internet of Things (IoT).

TABLE 19.1 Comparison of Conventional Internet with Internet of Things (IoT)

1. Internet: The end-nodes are full computers ranging from workstations to smart phones, with regular access to the public power-supply network.
   IoT: The end-nodes are very small electronic devices with low energy consumption; compared to Internet computers their functionality is limited and they cannot interact directly with humans.
2. Internet: The number of connected devices is envisaged to be in billions.
   IoT: The number of connected devices is envisaged to be in trillions.
3. Internet: Not only the long-distance connection but also the last-mile connection has become very fast (in the range of megabits/sec).
   IoT: The speed of the last mile to an RFID tag is quite slow (in the range of kilobits/sec).
4. Internet: There are globally accepted identification and address schemes (e.g., IP and MAC addresses).
   IoT: Such standards cannot be employed on the devices because they require too much energy; many vendor-specific solutions exist, but they prevent objects from being globally identified and addressed.
5. Internet: The major share of functionality is addressed to human users, like the World Wide Web (www), e-mail, chat and e-commerce.
   IoT: Devices usually interact directly, not via human intervention.
6. Internet: The Internet enabled a breakthrough in human communication and interaction.
   IoT: The IoT enables a breakthrough in sensing the physical environment; sensing enables measurement, which in turn enables management.


Characteristics of successful IoT deployments are:

• Distributivity: IoT will likely evolve in a highly distributed environment. In fact, data might be gathered from different sources and processed by several entities in a distributed manner.
• Interoperability: Devices from different vendors will have to cooperate in order to achieve common goals. In addition, systems and protocols will have to be designed in a way that allows objects (devices) from different manufacturers to exchange data and work in an interoperable way.
• Scalability: In IoT, billions of objects are expected to be part of the network. Thus, systems and applications that run on top of them will have to manage an unprecedented amount of generated data.
• Resource scarcity: Both power and computation resources will be highly scarce.
• Security: Users' feelings of helplessness and of being under some unknown external control could seriously hinder IoT's deployment.

19.1.1 IoT Building Blocks

The IoT architecture supports the integration of physical things into the Internet and the complex flow of services triggered by the occurrence of events. The main concepts that form the basis for the development of independent cooperative services and applications are:

1. Sensing technologies: Radio-frequency identification (RFID) and Wireless Sensor Networks (WSN) are the two main building blocks of sensing and communication technologies for IoT. However, these technologies suffer from different constraints (e.g., energy limitation, reliability of the wireless medium, security and privacy, etc.). In particular, the scarcity of energy resources available in the embedded devices is a sensitive issue. Consequently, to increase energy efficiency, a number of solutions have been introduced in the literature. For instance, lightweight MAC protocols, energy-efficient routing protocols, and tailored security protocols have been proposed to mitigate the impact of resource scarcity on sensing technologies. Nevertheless, their limited autonomy remains a considerable obstacle to their widespread deployment into our daily lives.
   a. Radio-frequency identification (RFID): An RFID is a tiny microchip (e.g., 0.4 × 0.4 × 0.15 mm) attached to an antenna (together called a tag), which is used for both receiving the reader's signal and transmitting the tag identity, and which can be appended to an object of our daily life. Like an electronic barcode, the data stored in these tags can automatically be used to identify and extract useful information about the object. RFID devices are classified into two categories: passive and active. Passive RFID tags are not battery powered; they use the power of the reader's interrogation signal to communicate their data. They are used in bank cards and road toll tags as a means to control access; they are also used in many retail, supply chain management, and transportation applications.
   b. Wireless Sensor Networks (WSN): WSN along with RFID systems enable better tracking of the status of things (e.g., their location, temperature, movements, etc.). Sensor networks consist of a large number of sensing nodes communicating in a wireless multi-hop fashion, reporting their sensing results to a small number of special nodes called sinks (or base stations).


The main issues of concern are:

• Energy efficiency (energy is a limited resource in WSN)
• Scalability (the number of nodes can rise significantly)
• Reliability (the system might be involved in critical applications)
• Robustness (nodes might be subject to failure)

   c. Integration: Integration of heterogeneous technologies, such as sensing technologies, into passive RFID tags would bring completely new applications into the IoT context; sensing RFID systems will allow the building of RFID sensor networks, which consist of small RFID-based sensing and computing devices. RFID readers would constitute the sinks of the data generated by sensing RFID tags; moreover, they would provide the power for the different network operations. Efficiently networking tag readers with RFID sensors would allow real-time queries on the physical world, leading to better forecasts, new business models, and improved management techniques.

2. Compute technologies: The middleware is a software interface between the physical layer (i.e., hardware) and the application layer. It provides the required abstraction to hide the heterogeneity and the complexity of the underlying technologies involved in the lower layers. Indeed, the middleware is essential to spare both users and developers from the exact knowledge of the heterogeneous set of technologies adopted by the lower layers. Service-based approaches lying on a cloud infrastructure open the door toward highly flexible and adaptive middleware for the IoT. Decoupling the application logic from the embedded devices and moving it to the cloud will allow developers to provide applications for the heterogeneous devices that will compose the future IoT environment. It is possible to create a set of sensor services to be exploited in different applications for different users through the cloud. A sensor-cloud infrastructure provides to the end user service instances based on virtual sensors in an automatic way.

3. Actuate technologies: The Internet of Things enhances the passive objects around us with communication and processing capabilities, transforming them into pervasive objects. However, this can only be realized with corresponding physical support. Cloud robotics abstracts robotic functionalities and provides a means for utilizing them. Various equipment and devices, like individual robots, sensors, and smartphones, that can measure the world or interact with people in both the physical and digital worlds, are treated uniformly. These robots are logically gathered to form a cloud of robots by networking.

Single-board computers (SBCs) are the main IoT hardware components for acquiring and presenting information from their surroundings via sensors, either embedded in them or connected to them; they consist of a single circuit board with memory and a processor. Most SBCs have I/O interfaces where different kinds of sensors can be plugged in. These computers usually do not have extension slots like regular PCs, and the processors used in them are usually of low cost. The size of these devices ranges from a matchbox to a playing card. Some of them have USB and memory card interfaces.


SBCs can run versions of embedded Linux and even Windows, while some only have programmable microprocessors, which provide output to their proprietary workbench. There are more than 20 different types of SBCs, including:

• Arduino Development Boards
• BeagleBoard
• Raspberry Pi
• Orange Pi
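A minimal sketch (standard library only) of the aggregation pattern described at the start of this chapter: a node samples a sensor, batches the readings, and prepares a JSON payload for an aggregation point or remote server. The sensor reading is simulated, the gateway URL is hypothetical, and in practice the transport would typically be MQTT, CoAP, or HTTP rather than a simple print.

import json
import random
import time

GATEWAY_URL = "http://gateway.local/ingest"   # hypothetical aggregation point

def read_temperature_c():
    """Stand-in for a real sensor driver (e.g., a probe wired to an SBC's I/O pins)."""
    return round(20.0 + random.uniform(-1.5, 1.5), 2)

def collect_batch(device_id, samples=5, interval_s=0.1):
    """Sample the sensor a few times and package the readings as a JSON payload."""
    readings = []
    for _ in range(samples):
        readings.append({"ts": time.time(), "temperature_c": read_temperature_c()})
        time.sleep(interval_s)
    return json.dumps({"device": device_id, "readings": readings})

payload = collect_batch("fridge-42")
print(f"would POST to {GATEWAY_URL}:")
print(payload)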

19.1.2 IoT Architecture

The Internet of Things connects a large number of "smart" objects, each of them possessing a unique address. This creates a huge amount of traffic and therefore the demand for large volumes of computational power and huge data storage capabilities. Additional challenges arise in the areas of scalability, interoperability, security and Quality of Service (QoS). Figure 19.1 shows the IoT architecture. The envisaged architecture consists of:

1. The perception layer is the lowest layer in the IoT architecture and its purpose is to perceive the data from the environment and identify objects or things.

FIGURE 19.1 IoT architecture.


The perception layer consists of the physical objects and all the sensor devices. These devices can be RFID, barcodes, infrared sensors, embedded sensors, as well as actuator nodes. In this layer the information is gathered and categorized. First, the physical properties of each object (like location, orientation, temperature, motion, etc.) are perceived. However, in order to successfully perceive the desired properties, most objects need to have appropriate microchips installed in them that sense the desired information and perform preliminary processing to convert it into corresponding digital signals for use by the next layer for network transmission.

2. The network layer is like the network and transport layers of the OSI model. It collects the data from the perception layer and sends it to the Internet. The network layer may only include a gateway, having one interface connected to the sensor network and another to the Internet. In some scenarios, it may include a network management center or information processing center. The main role of this layer is to securely transfer the information gathered by the sensor devices to the processing center of the system. The transmission is possible through all the common media, such as wireless networks, cable networks and even LANs (Local Area Networks). The different communication technologies and protocols enabling this transfer include 3G/4G cellular networks as well as Wi-Fi (Wireless Fidelity), Bluetooth, ZigBee, Infrared, etc. This layer also includes several protocols like IPv6 (Internet Protocol version 6), whose responsibility is the proper IP addressing of the objects.

3. The middleware layer receives data from the network layer. Its purpose is service management, storing data, performing information processing and automatically taking decisions based on the results. It then passes the output to the next layer, the application layer. The middleware layer, also called the "processing" layer, is considered the core layer of the IoT system. Initially, all the information received from the network layer is stored in the system's database. Furthermore, the information is analyzed and processed, and based on the results appropriate automated decisions are taken. This layer is also responsible for the service management of the devices. Since each IoT device implements specific types of services, this layer helps in the management of all the different services of the system, by deciding which devices should connect and communicate in order to provide the optimum requested service results.

4. The application layer performs the final presentation of data. The application layer receives information from the middleware layer and provides global management of the application presenting that information, based on the information processed by the middleware layer. According to the needs of the user, the application layer presents the data in the form of smart city, smart home, smart transportation, vehicle tracking, smart farming, smart health and many other kinds of applications. The role of this layer is very important since it practically supports the applications envisaged by different industries. The applications implemented through the IoT cover a huge area of technology fields, varying from healthcare, intelligent transportation, and smart homes to smart cities, supply chains and logistics management.


5. The business layer is all about making money from the service being provided. Data received at the application layer is molded into a meaningful service, and further services are then created from those existing services. Based on the analysis of these results, users can determine their business strategies and future actions. The business layer is responsible for managing the higher levels of the IoT system, mainly the applications and their services to the users. Unless IoT can provide efficient and effective business models to the users, their long-term use and development won't be feasible. Finally, another task of this layer is managing the issues related to the privacy and security of the users.

19.1.3 Widened Address Space with IPv6

Like the URLs (Uniform Resource Locators) used by Web applications and HTTP, the network layer protocols, such as IP, use network layer addresses, such as IP addresses like 124.65.120.43. For a network node to forward packets to their destinations, it is necessary that both source and destination computers have unique addresses. In the Internet, these addresses are known as IP addresses or Internet addresses. The Internet Corporation for Assigned Names and Numbers (ICANN), and numerous private companies/organizations that are authorized by ICANN, manage the IP addresses and domain names. The IP addresses are typically assigned in groups when the domain names are registered.

In IPv4, the source and destination IP addresses are indicated in a packet by a 4-byte (32-bit) field each. The IP address example shown above uses the dotted decimal notation, where a decimal number corresponding to each byte, rather than a binary number, is shown for human consumption only. Based on the so-called classful addressing scheme, ICANN assigns the IP addresses in "classes," varying the size of the address pool depending on the size of the organization. This is an inefficient and wasteful assignment, resulting in a shortage of IP addresses to be assigned to new entrants into the network.

IPv6 uses a 128-bit (16-byte) address field, allowing an extremely large number (2^128) of IP addresses to be assigned. IPv6 does not use dotted decimal notation; instead it relies on a hexadecimal colon notation to make the addresses more readable. An IPv6 address consists of 32 hexadecimal digits, written as eight groups of four digits separated by colons. The following is an example of an IPv6 address:

F45A:B2C4:04C5:0043:ADB0:0000:0039:FFEF

Figure 19.2 shows the formats of IPv4 and IPv6 messages. Table 19.2 shows a comparison between IPv4 and IPv6.

Once an organization obtains a range of IP addresses, it is up to this organization to assign specific IP addresses to the computers within its private network. Various techniques, including manual, centralized, automated, or a combination of them, may be used for this purpose. The most common technique is called subnetting. A group of computers on a LAN may be given a group of IP addresses with the same prefix, which is called the subnet address. Subnet addresses make it easier to separate the subnet part of the address from the host part. By applying a corresponding subnet mask, a router/gateway can easily determine whether packets stay in the same subnet or go outside of the subnet. The assignment of an IP address to an individual computer can be permanent or temporary. Typically, the computers providing server functions have permanent IP addresses. Also, ISPs assign IP addresses permanently to broadband modems.
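The notation and subnetting ideas above can be explored with Python's standard ipaddress module; the addresses and the /24 prefix below are arbitrary examples reusing the sample values from the text.

import ipaddress

# IPv6: eight groups of four hexadecimal digits; leading zeros may be abbreviated.
addr6 = ipaddress.IPv6Address("F45A:B2C4:04C5:0043:ADB0:0000:0039:FFEF")
print(addr6.compressed)   # e.g. f45a:b2c4:4c5:43:adb0:0:39:ffef
print(addr6.exploded)     # f45a:b2c4:04c5:0043:adb0:0000:0039:ffef

# IPv4 subnetting: a /24 prefix corresponds to subnet mask 255.255.255.0.
subnet = ipaddress.ip_network("124.65.120.0/24")
print(subnet.netmask)                                   # 255.255.255.0
print(ipaddress.ip_address("124.65.120.43") in subnet)  # True
print(ipaddress.ip_address("124.65.121.43") in subnet)  # False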

FIGURE 19.2 (a) Format of IPv4 messages and (b) Format of IPv6.

TABLE 19.2 Comparison of IPv4 and IPv6

Packet header
  IPv4: Variable size (20-60 bytes), complex
  IPv6: Fixed size (40 bytes), simple

Address field
  IPv4: 32 bits (4 bytes), resulting in over 10^9 possible addresses
  IPv6: 128 bits (16 octets), resulting in over 10^38 possible addresses

Address allocation
  IPv4: Classful (network classes A, B, C); CIDR as a stopgap measure to deal with address space limitation
  IPv6: Hierarchical by registry, provider, subscriber, and subnet; hierarchical by geographic region; IPv4 compatibility

Address notation (numeric)
  IPv4: Dotted decimal notation
  IPv6: Hexadecimal with colons and abbreviations; IPv4 addresses a special case

Quality of service
  IPv4: Defined but not generally used consistently
  IPv6: Support for real-time data and multimedia distribution; flow labeling; priority

Security
  IPv4: No IP-level authentication or encryption; relies on other protocols
  IPv6: Authentication (validation of packet origin); encryption (privacy of contents); requires administration of "security associations" to handle key distribution, etc.


However, almost all client computers in an enterprise do not have permanent IP addresses. Once a range of IP addresses is allocated to a subnet, the assignment of individual IP addresses to individual computers is usually performed dynamically by a Dynamic Host Configuration Protocol (DHCP) server. This eliminates the tedious task of assigning addresses manually to every client computer in the network. When a computer connected to the subnet is turned on, it communicates with a DHCP server, requesting an IP address (along with the subnet mask and gateway address). The server determines an available IP address from its pool and assigns it to the computer. When the computer is turned off, the IP address is returned to the pool and may be assigned to some other computer. There is no guarantee that the same IP address will be assigned to the same computer next time.

As discussed above, application layer protocols typically use application layer addresses involving domain names, but the network layer operates on IP addresses. Therefore, a mechanism is needed to translate between the domain-name-based application layer addresses and the IP addresses. This is called address resolution and is performed via the Domain Name System (DNS). DNS is a set of distributed and hierarchical directories that contain the mappings of IP addresses to domain names. ICANN maintains the root DNS directories for root domains such as .com and .org. Each individual organization then maintains at least one DNS directory for its domains and subdomains; smaller organizations rely on their ISPs to provide this service. Furthermore, each computer maintains a local directory of the IP addresses it has used and the corresponding application layer addresses, so that the next time the user tries to exchange information with an application, the computer can determine the corresponding IP address by looking it up locally without polling an external DNS server.
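To make the address-resolution step concrete, the minimal Python sketch below asks the local resolver (and hence DNS) for the IPv4/IPv6 addresses of a host name; the host name shown is purely an illustrative example.

    import socket

    def resolve(hostname):
        """Return the (family, address) pairs the local resolver/DNS reports for hostname."""
        results = set()
        for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(hostname, None):
            results.add((socket.AddressFamily(family).name, sockaddr[0]))
        return sorted(results)

    if __name__ == "__main__":
        for family, address in resolve("www.example.com"):
            print(family, address)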

19.2 RFID (Radio Frequency Identification)

RFID (radio frequency identification) is a form of Auto ID (automatic identification), i.e., the identification of an object with minimal human interaction. Barcodes identify items through the encoding of data in various sized bars using a variety of symbologies, or coding methodologies. The most familiar type of barcode is the UPC, or universal product code, which provides manufacturer and product identification. While barcodes have proven to be very useful, and indeed have become an accepted part of product usage and identity, the technology has limitations: barcode scanners must have line of sight in order to read barcode labels; label information can easily be compromised by dirt, dust, or rips; and barcodes take up a considerable footprint on product labels. Even the newer barcode symbologies, such as 2D (two-dimensional), which can store a significant amount of data in a very small space, remain problematic.

These limitations of barcodes are overcome through the use of RFID labeling to identify objects. Because RFID exchanges data using radio waves, line of sight between a reader and a tag is not necessary, as it is with barcodes. This permits a great deal more flexibility in the use of RFID. RFID was first used commercially in the 1960s by companies that developed security-related devices called electronic article surveillance (EAS) equipment. Although EAS could only detect the presence or absence of a tag, the tags were low cost and provided a valuable deterrent to theft. The mining industry was an early adopter of RFID technology: one use of RFID tags in mining was in validating the correct movement of haulage vehicles. These vehicles have read-only tags permanently attached to the vehicle's dump bed. As a vehicle approaches a dump area, a reader validates that it is approaching the correct dump bed, and information concerning the transaction, including the vehicle number, the weight of the vehicle before and after the dump, and the time, is recorded automatically. Using RFID obviates the need for human interaction between the scales operator and the vehicle driver.

The radio frequency transmissions in RFID travel between two primary components:

• RFID reader: can be mobile or stationary, consists of an antenna and a transceiver, is supplied with power, and generates and transmits a signal from its antenna to the tag and then reads the information reflected from the tag. The antenna is used to send and receive signals; the transceiver is used to control and interpret the signals sent and received.

• RFID tag: a transponder consisting of three components, namely an antenna, a silicon chip, and a substrate or encapsulation material. The antenna receives and transmits radio frequency waves to and from the reader. The chip contains information pertaining to the tagged item, such as part number and manufacturer. Chips can be either read-only or read-write; read-write chips cost more. There is a crucial difference between the two: read-only chips are essentially electronic barcodes; once the data has been encoded onto the chip, it cannot be modified and therefore cannot carry new information about the product as it moves through a sequence of events. The tag is affixed to the object being identified, such as an automobile, a shipping pallet, or a tagged marine mammal.

The portion of the electromagnetic spectrum used by radio frequency identification includes LF (low frequency), HF (high frequency), and UHF (ultra-high frequency), all of which are portions of the radio wave frequency bands, hence the term "radio frequency identification." An advantage of radio waves over visible light is that radio waves can penetrate many substances that would block visible light. Radio waves span frequencies from roughly 3 kHz to 300 GHz. Electromagnetic waves form a continuum that includes visible light, invisible lower-frequency emissions such as television and radio waves, and higher-frequency emissions such as x-rays and gamma rays. The range over which devices using radio waves can consistently communicate is affected by the following factors:

• Power contained in the transmitted wave
• Sensitivity of the receiving equipment
• Environment through which the wave travels
• Presence of interference


19.3 Sensor Networks

19.3.1 Wireless Networks

The main characteristics of wireless networks are as follows:

• Access for anybody, from anywhere, at any time (mobility)
• On-line/real-time access
• Relatively high communication speed
• Shared access to files and data/knowledge bases
• Exchange of pictures and voice (multimedia applications)

Table 19.3 shows the characteristics of several wireless networks.

TABLE 19.3 Characteristics of Various Wireless Networks
(each entry lists: operation frequency; data rate; operation range; characteristics)

Satellite: 2170-2200 MHz; varies (9.6 Kbps-2 Mbps); satellite coverage; relatively high cost, availability

WWAN
  GSM (2-2.5G): 824-1880 MHz; 9.6-384 Kbps (EDGE); cellular coverage; reach, quality, low cost
  3G/UMTS: 1755-2200 MHz; 2.4 Mbps; cellular coverage; speed, big attachments
  iMode (3G/FOMA): 800 MHz; 64-384 Kbps (W-CDMA); cellular coverage; always on, easy to use
  FLASH-OFDM: 450 MHz; max. 3 Mbps; cellular coverage; high speed, response time less than 50 ms

WMAN
  IEEE 802.16: 2-11 GHz; max. 70 Mbps; 3-10 km (max. 45 km); speed, high operation range

WLAN
  IEEE 802.11a: 5 GHz; 54 Mbps; 30 m; speed, limited range
  IEEE 802.11b: 2.4 GHz; 11 Mbps; 100 m; medium data rate
  IEEE 802.11g: 2.4 GHz; 54 Mbps; 100-150 m; speed, flexibility

WPAN
  Bluetooth: 2.4 GHz; 720 Kbps; 10 m; cost, convenience
  UWB: 1.5-4 GHz; 50-100 Mbps; 100-150 m; low cost, low power
  ZigBee: 2.4 GHz, 915/868 MHz; 250 Kbps; 1-75 m; reliable, low power, cost effective
  Infrared: 300 GHz; 9.6 Kbps-4 Mbps; 0.2-2 m; non-interfering, low cost
  RFID: 30-500 kHz, 850-950 MHz, 2.4-2.5 GHz; linked to bandwidth, max. 2 Mbps; 0.02-30 m; high reading speeds, responding in less than 100 ms


Wireless networks can be categorized into five groups based on their coverage range:

1. Satellite communication (SC): Handheld satellite telephones provide voice, fax, Internet access, short messaging, and remote location determination (GPS) services in the covered area. All of this is provided through geosynchronous satellites; when satellite coverage is not necessary, the handset can also access the GSM cellular network. Fax and digital data are transmitted at 9600 bps, but where users need high-speed Internet access, a 2 Mbps data transmission rate can be achieved in special cases. A satellite phone can fulfill all the requirements regarding mobile communications in many application fields.

2. Wireless Wide Area Network (WWAN): This includes the following:

a. Mobile phone: The mobile phone was the device that offered a great number of people the possibility of making contact with others from anywhere and at any time. The different mobile systems/network protocols are:

• CDMA (Code Division Multiple Access, 2G): CDMA networks incorporate spread-spectrum technology to gracefully allocate data over available cells.
• CDPD (Cellular Digital Packet Data, 2G): CDPD is a protocol built exclusively for sending wireless data over cellular networks. CDPD is built on TCP/IP standards.
• GSM (Global System for Mobile Communications, 2G): GSM networks are mainly popular in Europe.
• GPRS (General Packet Radio Service, 2.5G): GPRS technology offers significant speed improvements over existing 2G technology.
• iMode (from DoCoMo, 2.5G): iMode was developed by DoCoMo and is the standard wireless data service for Japan. iMode is known for its custom markup language, enabling multimedia applications to run on phones.
• 3G: 3G networks promise speeds rivaling wired connections. Both in Europe and North America, carriers have aggressively bid for 3G spectrum, but no single standard has yet emerged.

The introduction of WAP (Wireless Application Protocol) was a big step forward for mobile communication, as this protocol made it possible to connect mobile devices to the Internet. By enabling WAP applications, a full range of wireless devices, including mobile phones, smartphones, PDAs, and handheld PCs, gain a common method for accessing Internet information. The spread of WAP became even more intensive as the mobile phone industry actively supported WAP by installing it in new devices. WAP applications exist today to view a variety of Web content, manage e-mail from the handset, and gain better access to network operators' enhanced services. Beyond these information services, content providers have developed different mobile solutions, such as mobile e-commerce (mCommerce).

b. FLASH-OFDM: FLASH-OFDM (Fast, Low-latency Access with Seamless Handoff, Orthogonal Frequency Division Multiplexing) is a cellular, IP-based broadband technology for data services on the 450 MHz band. It has full cellular mobility, 3.2 Mbps peak data rates, 384 Kbps at the edge of the cell, and less than 20 ms of latency. The FLASH-OFDM system consists of an airlink, an integrated physical and media access control layer, and IP-based layers above the network layer (layer 3). The IP-based layers support applications using standard IP protocols. FLASH-OFDM is a wide-area technology enabling full mobility at speeds of up to 250 km/h (critical to vehicle and rail commuters). Its ability to support a large number of users over a large area, with nationwide build-outs via wireless carriers, would do for data what the cellular networks did for voice. The IP interfaces in FLASH-OFDM enable operators to offer their enterprise customers access to their LANs (Local Area Networks) and offer users the benefits of the mobile Internet. FLASH-OFDM supports packet-switched voice, not circuit-switched voice; radio routers (IP routers with radio adjuncts) would handle packet traffic and serve as the equivalent of cellular base stations. Consumers would connect to FLASH-OFDM networks via PC cards in their notebooks and via flash-memory cards in handheld devices.

3. Wireless Metropolitan Area Network (WMAN): WiMAX is considered the next step beyond Wi-Fi because it is optimized for broadband operation, fixed and later mobile, in the wide area network. It already includes numerous advances that are slated for introduction into the 802.11 standard, such as quality of service, enhanced security, higher data rates, and mesh and smart antenna technology, allowing better utilization of the spectrum. The term WiMAX (Worldwide Interoperability for Microwave Access) has become synonymous with the IEEE 802.16 Metropolitan Area Network (MAN) air interface standard. Metropolitan area networks (MANs) are large computer networks usually spanning a campus or a city; they typically use optical fiber connections to link their sites. WiMAX is the shorthand term for IEEE Standard 802.16, also known as the "Air Interface" for Fixed Broadband Wireless Access Systems. In its original release (in early 2002), the 802.16 standard addressed applications in licensed bands in the 10-66 GHz frequency range and required line-of-sight towers, called fixed wireless. Here a backbone of base stations is connected to a public network, and each base station supports hundreds of fixed subscriber stations, which can be both public Wi-Fi "hot spots" and enterprise networks with firewalls.

4. Wireless Local Area Network (WLAN): Local area wireless networking, generally called Wi-Fi (Wireless Fidelity), enables computers to send and receive data anywhere within the range of a base station at a speed several times faster than the fastest cable modem connection. Wi-Fi connects the user to others and to the Internet without the restriction of wires, cables, or fixed connections. Wi-Fi gives the user the freedom to change locations (mobility) and to have full access to files, office, and network connections wherever he or she is. In addition, Wi-Fi easily extends an established wired network. Wi-Fi networks use radio technologies based on the IEEE 802.11b or 802.11a standards to provide secure, reliable, and fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3, or Ethernet). Wi-Fi networks operate in the 2.4 GHz (802.11b) and 5 GHz (802.11a) radio bands, with an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, or with products that contain both bands (dual band), so they can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
802.11b has a range of approximately 100 m. Products based on the 802.11a standard were first introduced in late 2001.


Wi-Fi networks can work well both for homes (connecting a family's computers to share hardware and software resources such as printers and the Internet connection) and for small businesses (providing connectivity between mobile salespeople, floor staff, and "behind-the-scenes" departments). Because small businesses are dynamic, the built-in flexibility of a Wi-Fi network makes it easy and affordable for them to change and grow. Large companies and universities use enterprise-level Wi-Fi technology to extend standard wired Ethernet networks to public areas like meeting rooms, training classrooms, and large auditoriums, and also to connect buildings. Many corporations also provide wireless networks to their off-site and telecommuting workers to use at home or in remote offices.

5. Wireless Personal Area (or Pico) Network (WPAN): WPAN covers wireless personal area network technologies such as ultra-wideband (UWB), ZigBee, Bluetooth, and RFID. Designed for data and voice transmission, the low data rate standards, including ZigBee and Bluetooth (IEEE 802.15.1), enable wireless personal area networks to communicate over short distances, generating a new way of interacting with our personal and business environment.

a. Bluetooth: Bluetooth is a short-range radio technology that replaces cables with low-power radio waves to connect electronic devices, whether they are portable or fixed. It is a wireless personal area network (WPAN) technology specified by IEEE 802.15, the working group for wireless personal area networks. Bluetooth, named after Harald Bluetooth, the tenth-century Viking king, originated in a consortium of companies (3Com, Ericsson, Intel, IBM, Lucent Technologies, Motorola, Nokia, and Toshiba) that came together to form a wireless standard. A Bluetooth device uses frequency hopping to ensure a secure, quality link, and it uses ad hoc networks, meaning that it connects peer-to-peer. Devices communicating with each other form piconets, and each device is designated as a master unit or slave unit, usually depending on which one initiates the connection; however, both devices have the potential to be either a master or a slave. The Bluetooth user has the choice of point-to-point or point-to-multipoint links, whereby communication can be held between two devices or among up to eight.

b. RFID (Radio Frequency Identification): The main purpose of RFID technology is the automated identification of objects with electromagnetic fields. An RFID system has three basic components: transponders (tags), readers (scanners), and application systems for further processing of the acquired data. There is a large variety of RFID systems: they can use low, high, or ultra-high frequencies; tags may emit only a fixed identifier or may have significant memory and processing capabilities; and transponders can contain effective security protocols or no security features at all. Most tags are passive, powered by the radio field emitted by the reader, but there are also active tags with a separate power supply. RFID systems can be distinguished according to their frequency ranges. Low-frequency (30-500 kHz) systems have short reading ranges and lower system costs; they are typically used in applications such as security access and animal identification. High-frequency (850-950 MHz and 2.4-2.5 GHz) systems, offering long read ranges (greater than 25 m) and high reading speeds, are used for applications such as railroad car tracking and automated toll collection. However, the higher performance of high-frequency RFID systems generates higher system costs.


19.3.2 Sensor

The objective of a sensor is to measure a physical quantity from the environment and transform it into a signal (either analog or digital) that drives outputs/actuators or feeds a data analysis tool (see Figure 19.3). The choice of a sensor in a WSN is application dependent and strictly related to the physical phenomenon that is the object of the monitoring action. The wide and heterogeneous range of physical phenomena to be monitored is well reflected by the complexity and heterogeneity of the sensors that have been designed and developed to acquire data from them. Figure 19.3 shows the measurement chain of sensors. Table 19.4 presents examples of possible applications, physical phenomena to be monitored, and the corresponding sensors of interest, and Table 19.5 presents types of sensors for various measurement objectives.

FIGURE 19.3 The measurement chain of sensors.

TABLE 19.4 Examples of Possible Applications, Physical Phenomena to Be Monitored, and the Corresponding Sensors

Environmental monitoring
  Temperature: thermometers, thermocouples, thermistors
  Light: photodiodes/phototransistors, CCD cameras
  Humidity: humidity sensors
  Pressure: switches, strain gauges

Visual and audio processing
  Visual: CCD and other cameras
  Audio: microphones

Motion/acceleration analysis
  Motion/acceleration: accelerometers, motion detectors, magnetic field sensors, angular sensors

Location
  Indoor: tags, badges
  Outdoor: GPS, GSM cells

Biological and chemical analysis
  Biological: ECG, heart rate sensors, blood pressure sensors, pulse oximetry, skin resistance, DNA analyzers
  Chemical: contaminant sensors, solid/liquid/gas sensors, electrochemical cells


TABLE 19.5 Types of Sensors for Various Measurement Objectives

Linear/rotational sensors (linear/rotational variable differential transducers (LVDT/RVDT), optical encoders, electrical tachometers, Hall effect sensors, capacitive transducers, strain gauge elements, interferometers, magnetic pickups, gyroscopes, Inductosyn):
High resolution with wide range capability; very stable in static and quasi-static applications. Simple, reliable, and low-cost solutions, good for both absolute and incremental measurements. Resolution depends on type, such as generator or magnetic pickups. High accuracy over a small to medium range. Very high resolution with high sensitivity and low power requirements. Very high accuracy in small ranges, providing high resolution at low noise levels. Laser systems provide extremely high resolution in large ranges. Very reliable and expensive. Output is sinusoidal. Very high resolution over small ranges.

Acceleration sensors (seismic and piezoelectric accelerometers):
Good for measuring frequencies up to 40% of the sensor's natural frequency. High sensitivity, compact, and rugged; very high natural frequency (100 kHz typical).

Force, torque, and pressure sensors (strain gauges, dynamometers/load cells, piezoelectric load cells, tactile sensors, and ultrasonic stress sensors):
Good for both static and dynamic measurements; also available as micro- and nanosensors. Good for high-precision dynamic force measurements; compact, with wide dynamic range. Good for small force measurements.

Proximity sensors (inductance, eddy current, Hall effect, photoelectric, capacitance, etc.):
Robust noncontact switching action; the digital outputs are often directly fed to the digital controller.

Light sensors (photoresistors, photodiodes, phototransistors, photoconductors, charge-coupled devices):
Measure light intensity with high sensitivity; inexpensive, reliable, and noncontact. A charge-coupled device compares a digital image of a field of vision.

Temperature sensors (thermocouples, thermostats, thermodiodes and thermotransistors, RTDs (resistance temperature detectors), infrared types and infrared thermographs):
Thermocouples are the cheapest and most versatile sensors, applicable over wide temperature ranges (-200°C to 1200°C typical). Thermostats offer very high sensitivity in medium ranges (up to 100°C typical) and are compact but nonlinear in nature. Thermodiodes and thermotransistors are ideally suited for chip temperature measurements, with minimized self-heating. RTDs are more stable over long periods of time than thermocouples and linear over a wide range. Infrared types are noncontact point sensors with resolution limited by wavelength; infrared thermographs measure whole-field temperature distribution.

Flow sensors (Pitot tubes; orifice plates, flow nozzles, and venturi tubes; rotameters; ultrasonic types; turbine flow meters; electromagnetic flow meters):
Widely used as flow rate sensors to determine speed in aircraft. Least expensive, with limited range. Accurate over a wide range of flow, but more complex and expensive. Good for upstream flow measurements; used in conjunction with a variable inductance sensor. Good for very high flow rates; can be used for upstream and downstream flow measurements. Not suited to fluids containing abrasive particles; the relationship between flow rate and angular velocity is linear. Least intrusive, as it is a noncontact type; can be used with fluids that are corrosive, contaminated, etc., provided the fluid is electrically conductive.

Smart material sensors (optical fiber as strain, level, force, and temperature sensor; piezoelectric as strain and force sensor and as accelerometer; magnetostrictive as force and torque sensor):
Optical fiber sensors are an alternative to strain gauges, with very high accuracy and bandwidth, though sensitive to the reflecting surface's orientation and status; they are reliable and accurate, with high resolution in wide ranges and high resolution and range (up to 2000°C). Piezoelectric sensors provide distributed sensing with high resolution and bandwidth, are most suitable for dynamic applications, and offer the least hysteresis and good set point accuracy. Magnetostrictive sensors are compact force sensors with high resolution and bandwidth, good for distributed and noncontact sensing applications; accurate, high-bandwidth, noncontact sensors.

Micro- and nano-sensors (micro CCD image sensors, fiberscopes, micro-ultrasonic sensors, microtactile sensors):
Small size, full-field image sensors. Small (0.2 mm diameter) field vision scope using SMA coil actuators. Detect flaws in small pipes. Detect proximity between the end of a catheter and blood vessels.

The measurement of a physical quantity and the subsequent conversion into signals are performed by sensors through a measurement chain that is generally composed of three different stages:

• Transducer stage: transforms one form of energy into another, here converting the monitored physical quantity into an electric signal.
• Conditioning circuit stage: conditions the signal provided by the transducer to reduce the effect of noise, amplify the sensitivity of the sensor, and adapt the electrical properties of the signal to the needs of the next stage.
• Output formatting stage: transforms the conditioned signal into a signal ready to be used to drive outputs or actuators and to be processed by the data analysis tool.

The measurement chain of sensors is characterized by:

1. Functional attributes specifying the functional behavior. Functional attributes refer to specific functions or behaviors the sensor must exhibit; examples are the measurement range (i.e., the range of measurements the sensor can acquire), the duty-cycle sampling rates (i.e., the range of sampling rates the sensor can support), and the conditioning stage parameters (e.g., the filter parameters).
2. Nonfunctional attributes specifying the requirements/properties of the sensor itself. Nonfunctional attributes refer to properties characterizing how the sensor provides its functionalities; examples are the size/weight of the sensor, the accuracy and precision of the measurement chain, and the power consumption.
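As a rough illustration of the three-stage measurement chain described above, the following Python sketch models a temperature reading passing through the transducer, conditioning, and output formatting stages; the sensitivity, gain, and timestamp are assumed example values, not parameters given in the text.

    from dataclasses import dataclass

    @dataclass
    class Sample:
        timestamp: float   # when the measurement was acquired
        value: float       # conditioned, formatted value handed to the data analysis tool

    def transducer(temperature_c: float) -> float:
        """Transducer stage: physical quantity -> electric signal (assume 10 mV per degree C)."""
        return temperature_c * 0.010

    def conditioning(voltage: float, gain: float = 100.0, offset: float = 0.0) -> float:
        """Conditioning stage: remove offset and amplify to suit the next stage."""
        return (voltage - offset) * gain

    def output_formatting(conditioned: float, timestamp: float) -> Sample:
        """Output formatting stage: package the signal for actuators or the data analysis tool."""
        return Sample(timestamp=timestamp, value=conditioned)

    reading = output_formatting(conditioning(transducer(23.5)), timestamp=0.0)
    print(reading)   # Sample(timestamp=0.0, value=23.5)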

A sensor does not necessarily refer only to a physical hardware device but to every source of data providing information of interest. Hence, in addition to physical hardware sensors (which are, however, the most frequently considered source of information in WSNs), virtual and logical sensors are often employed. Virtual sensors refer to nonphysical sources of information; for example, the location of a person can be determined by means of tracking systems through Global Positioning System (GPS) location (physical devices) as well as by looking at his or her electronic calendar or by analyzing travel bookings or e-mails (nonphysical sources of information). Electronic calendars, travel bookings, or e-mails do not require a real measurement of the person's position and are referred to as virtual sensors. Logical sensors rely on two or more sources of information to provide high-level data; for example, a logical sensor could aggregate data coming from both a physical sensor and a virtual sensor, as well as information coming from a database system.

Regardless of the kind of sensor (physical, virtual, or logical) or the object of the sensing activity (places, people, or objects), four essential characteristics of the sensing activity can be defined:

• Identity refers to the ability to univocally identify the entity which is the object of the sensing, e.g., by assigning unique identifiers in the namespace used by the application.
• Location encompasses all the aspects related to the position of the sensed entity. It can include not only a position in space but also orientation, elevation, and proximity to relevant locations.
• Status refers to the physical (or virtual/logical) quantity that is the object of the sensing. In the case of a temperature sensor, the status refers to the temperature value measured at a specific time instant.
• Time indicates the temporal information associated with the sensed measurements. The temporal information often refers to a timestamp specifying the exact time instant at which the measurement was acquired. A wide range of time coarseness can be considered (e.g., from nanoseconds to decades) depending on the application needs.

19.3.3 Wireless Sensor Networks

Wireless Sensor Networks (WSNs) have been widely considered one of the most important technologies for the twenty-first century. Enabled by recent advances in microelectromechanical systems (MEMS) and wireless communication technologies, tiny, cheap, and smart sensors, deployed in a physical area and networked through wireless links and the Internet, provide unprecedented opportunities for a variety of applications such as environmental monitoring and industrial process control. In contrast with traditional wireless communication networks such as cellular systems and mobile ad hoc networks (MANETs), WSNs pose many challenges in their development and application because of their unique characteristics, such as a denser level of node deployment, higher unreliability of sensor nodes, and severe energy, computation, and storage constraints.


A sensor network consists of a large number of sensor nodes that are densely deployed in a sensing region and collaborate to accomplish a sensing task. It requires a suite of network protocols to implement various network control and management functions, for example, synchronization, self-configuration, medium access control, routing, data aggregation, node localization, and network security. However, existing network protocols for traditional wireless networks, for example, cellular systems and mobile ad hoc networks (MANETs), cannot be applied directly to sensor networks because they do not consider the energy, computation, and storage constraints of sensor nodes. Moreover, most sensor networks are application specific and have differing application requirements. For these reasons, a new suite of network protocols is required that takes into account not only the resource constraints of sensor nodes but also the requirements of different network applications, and it is therefore important to define a protocol stack to facilitate protocol design for WSNs.

As discussed in Section 19.1.1, WSN hardware platforms are composed of:

• The sensing module, which contains one or more sensors, each of which is devoted to measuring a physical quantity of interest and generating the associated codeword (i.e., the digital data).
• The processing module, which is generally equipped with a small memory/storage unit, manages the unit and carries out the processing tasks according to the application needs.
• The transmission module, which provides the mechanisms to communicate with the other units of the WSN or with a remote control room.

Hardware platforms for WSNs are presented in Table 19.6.

Operating systems (OSs) represent the intermediate layer between the hardware of the WSN unit and WSN applications. They aim to control the hardware resources and to provide services and functionalities to the application software (e.g., hardware abstraction, scheduling of tasks, memory management, and the file system). Simplified versions of desktop/mainframe OSs can be considered only for high-performance hardware platforms (e.g., the StarGate platform is able to run a simplified version of Linux). The constraints on the hardware and energy of WSN units (as pointed out in Table 19.6) led to the development of OSs specifically devoted to networked embedded systems. Examples of OSs for WSNs are TinyOS, Contiki, and Mantis (see Table 19.7).

TinyOS is an open-source, flexible, and component-based OS for WSN units. It is characterized by a very small footprint (400 bytes), which makes it particularly suitable for low-power hardware platforms (such as the MICAz). TinyOS relies on a monolithic architecture where software components (including those providing hardware abstraction) are connected together through interfaces. It supports non-preemptive first-in-first-out scheduling (making it unsuitable for computation-intensive or real-time applications). TinyOS applications must be written in NesC (a dialect of the C language) and cannot be updated or modified at runtime (applications are compiled together with the OS kernel as a single image).

Contiki is an open-source OS for networked embedded systems. It is built around a modular architecture composed of an event-driven microkernel, OS libraries, a program loader, and application processes. Notably, Contiki supports preemptive multithreading and remote reprogramming (through the program loader).

TABLE 19.6 Hardware Platforms for WSN

MICAz: processor ATMega128 at 8 MHz; RAM (int./ext.) 4K/n.a.; flash (int./ext.) 128K/n.a.; power consumption (sleep/run) 0.036/60 mW; radio CC2420, TX power up to 0 dB; sensors: n.a.
BTnode: processor ATMega128 at 7.38 MHz; RAM 4K/n.a.; flash 128K/n.a.; power 0.036/60 mW; radio CC1000, TX power up to +10 dB; sensors: n.a.
Mulle: processor M16C/62P at up to 24 MHz; RAM 20K/n.a.; flash 128K/n.a.; power 0.015/75 mW; radio Bluetooth, TX power up to +4 dB; sensors: temperature.
Shimmer: processor MSP430 at 8 MHz; RAM 10K/n.a.; flash 48K/μSD; power 0.006/4 mW; radio CC2420, TX power up to 0 dB; sensors: MEMS.
TelosB: processor MSP430 at 8 MHz; RAM 10K/n.a.; flash 48K/1 Mb; power 0.006/4 mW; radio CC2420, TX power up to 0 dB; sensors: temperature, humidity, light.
Imote2: processor PXA271 at 13-416 MHz; RAM 256K/32 Mb; flash 32 Mb/n.a.; power 0.39/up to 992 mW; radio CC2420, TX power up to 0 dB; sensors: n.a.
StarGate: processor PXA255 at 400 MHz; RAM 64K/64 Mb; flash 32 Mb/ext.; power 15/1455 mW; radio n.a.; sensors: n.a.
NetBrick: processor STM32F103 at 8-72 MHz; RAM 64K/1 Mb; flash 512 Kb/128 Mb; power 0.083/226 mW; radio JN5148, TX power up to +20 dB; sensors: temperature, humidity, MEMS.

(n.a. = not available; ext. = external; RAM and flash are given in bytes, internal/external.)


TABLE 19.7 Operating Systems for WSN

TinyOS: architecture monolithic; preemption no; multithreading partial; remote reprogramming no; language NesC.
Contiki: architecture modular; preemption yes; multithreading yes; remote reprogramming yes; language C.
Mantis: architecture layered; preemption yes; multithreading yes; remote reprogramming yes; language C.
The memory occupation is 2 KB of RAM and 4 KB of read-only memory (ROM). Both the OS and the applications are written in C.

MANTIS is a multithreaded OS for WSNs that is based on a traditional layered architecture; each layer provides services to upper layers, from the hardware to the application threads. MANTIS supports cross-platform design by preserving the programming interface across different platforms. The scheduling activity is based on a preemptive priority-based scheduler, while remote reprogramming is supported by the dynamic loading of threads. In this case too, both the OS and the application threads are written in C.

19.3.3.1 WSN Characteristics

A WSN typically consists of a large number of low-cost, low-power, and multifunctional sensor nodes that are deployed in a region of interest. These sensor nodes are small in size but are equipped with sensors, embedded microprocessors, and radio transceivers, and therefore have not only sensing capability but also data processing and communication capabilities. They communicate over short distances via a wireless medium and collaborate to accomplish a common task. WSNs have the following characteristics:

1. Application specific: Sensor networks are application specific; a network is usually designed and deployed for a specific application, and the design requirements of a network change with its application.
2. No global identification: Due to the large number of sensor nodes, it is usually not possible to build a global addressing scheme for a sensor network because it would introduce a high overhead for identification maintenance.
3. Dense node deployment: Sensor nodes are usually densely deployed in a field of interest. The number of sensor nodes in a sensor network can be several orders of magnitude higher than that in a MANET.
4. Frequent topology change: Network topology changes frequently due to node failure, damage, addition, energy depletion, or channel fading.
5. Many-to-one traffic pattern: In most sensor network applications, the data sensed by sensor nodes flow from multiple source sensor nodes to a particular sink, exhibiting a many-to-one traffic pattern.
6. Battery-powered sensor nodes: Sensor nodes are usually powered by batteries. In most situations, they are deployed in a harsh or hostile environment, where it is very difficult or even impossible to change or recharge the batteries.
7. Severe energy, computation, and storage constraints: Sensor nodes are highly limited in energy, computation, and storage capacities.


8. Self-configurable: Sensor nodes are usually randomly deployed without careful planning and engineering. Once deployed, sensor nodes have to autonomously configure themselves into a communication network.
9. Unreliable sensor nodes: Sensor nodes are usually deployed in harsh or hostile environments and operate without attendance. They are prone to physical damage or failures.
10. Data redundancy: In most sensor network applications, sensor nodes are densely deployed in a region of interest and collaborate to accomplish a common sensing task. Thus, the data sensed by multiple sensor nodes typically have a certain level of correlation or redundancy.

19.4 Summary

This chapter presented Internet of Things (IoT) computing as the digital transformation of primarily the ubiquity aspects of enterprise architecture (see Chapter 11). IoT refers to a network of interconnected things, objects, or devices on a massive scale connected to the Internet. Being smart, these objects sense their surroundings and gather and exchange data with other objects. Based on the gathered data, an object makes an intelligent decision to trigger an action, or sends the data to a server over the Internet and waits for its decision. IoT aims to integrate, collect information from, and offer services to a very diverse spectrum of physical things used in different domains. "Things" are everyday objects for which IoT offers a virtual presence on the Internet, allocates a specific identity and virtual address, and adds capabilities to self-organize and communicate with other things without human intervention. To ensure a high quality of service, additional capabilities can be included, such as context awareness, autonomy, and reactivity.

The chapter began with an introduction to IoT, enabled primarily by the availability of cloud and big data technologies. It then described radio frequency identification (RFID), a wireless automatic identification technology whereby an object with an attached RFID tag is identified by an RFID reader. This was followed by the principal concepts of wireless sensor network technology, enabled by the availability of smaller, cheaper, and intelligent sensors, which has important applications such as remote environmental monitoring. The last part of the chapter discussed the processing and storage of sensor data.

20 Blockchain Computing

As stated at the beginning of Chapter 12, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of security aspects, primarily through blockchain computing. This chapter explains blockchain computing.

Blockchain refers to a distributed, encrypted database: a public depository of information that cannot be reversed and is incorruptible. In other words, a blockchain can be defined as a distributed public ledger or database of records of every transaction that has been carried out and shared among those participating in the network. Every transaction or digital event in the public ledger has to be authenticated via the agreement of more than half of those participating in the network. This implies that no individual participant or user can modify any data within a blockchain without the consent of the other users (participants). It can be clearly observed that the technological concept behind the blockchain is almost identical to that of a database. The blockchain, however, makes it possible, for the first time, for participants to reach an agreement on how a specific transaction or digital event can occur without requiring any controlling authority. This technology thereby eliminates the role of a trusted third party or middleman.
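To make the "chain of records" idea concrete, here is a minimal, illustrative Python sketch of a hash-linked chain of blocks. It is not the author's reference design: it deliberately omits consensus, proof-of-work, and networking, and the block field names are assumptions for illustration only.

    import hashlib
    import json
    import time

    def block_hash(block: dict) -> str:
        """Hash the block's contents (including the previous block's hash) with SHA-256."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def new_block(prev_hash: str, transactions: list) -> dict:
        block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
        block["hash"] = block_hash(block)
        return block

    def chain_is_valid(chain: list) -> bool:
        """Any tampering with a block changes its hash and breaks every later link."""
        for prev, curr in zip(chain, chain[1:]):
            expected = block_hash({k: v for k, v in curr.items() if k != "hash"})
            if curr["hash"] != expected or curr["prev_hash"] != prev["hash"]:
                return False
        return True

    genesis = new_block("0" * 64, [])
    b1 = new_block(genesis["hash"], [{"from": "A", "to": "B", "amount": 5}])
    print(chain_is_valid([genesis, b1]))   # True
    b1["transactions"][0]["amount"] = 500  # tamper with a recorded transaction
    print(chain_is_valid([genesis, b1]))   # False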

20.1 Peer-to-Peer Systems

During the past few decades, the vast number of independent data sources and the high rate of data generation have rendered the central assembly of data at one location untenable. As a consequence, data management and storage have become increasingly distributed. Although the client-server architecture is still a popular communication model for application development, traditional client-server solutions are prone to bottleneck risks and therefore do not scale unproblematically as the number of participating sources increases. Peer-to-peer (P2P) systems comprise a scalable collaborative architecture of autonomous participants (called peers), where each peer serves both as a client and as a server. Attractive features of P2P networks include:

1. Scalability to large numbers of peers
2. Resilience to failures
3. Loose coupling
4. Peer autonomy

However, the high degree of distribution comes at a cost: Efficient search and query processing mechanisms need to be established to decrease the cost of data retrieval.


P2P systems refer to distributed computer architectures that are designed for sharing resources, by direct exchange, rather than requiring a central coordinating server. In P2P systems, the interconnected computers called peers or nodes are organized in a network in a distributed and self-organizing way, and share resources while respecting the autonomy of peers. The main characteristic of a P2P system is the ability to adapt to peer failures (fault-tolerance) and accommodate a large number of participating peers (scalability), while keeping the performance of the network at an acceptable level and maintaining the peer connectivity. Each peer has a collection of files or data to share. Two peers that maintain an open connection between them are called neighbors. The number of neighbors of a node defines its outdegree or degree. Any peer can pose a query, in order to retrieve interesting content; this peer is called querying peer. When a peer receives a query, first the query is evaluated against its local data collection and thereafter, if necessary other peers are contacted through its neighbors. Query messages are forwarded only between open connections, i.e., neighboring peers. If a message has to be exchanged between two non-neighboring peers, more than one message is required. The cost of this message is estimated considering that it is proportional to the length of the path, also called number of hops. Usually query results are forwarded back to the querying peer through the reverse path of the query message. Alternatively, some approaches allow the establishment of a temporary connection, in order to transfer the query results directly to the querying peer. P2P systems are classified into unstructured and structured P2P systems, based on the way the content is located in the network, with respect to the network topology. Unstructured P2P systems are loosely-connected and do not impose any constraints on the way peers are connected, whereas in structured P2P systems peers are organized in a more rigid topology: 1. In structured P2P systems, a hash function is used in order to couple keys with objects. Then a distributed hash table (DHT) is used to route key-based queries efficiently to peers that hold the relevant objects. In this way, object access is guaranteed within a bounded number of hops. Examples of popular structured P2P systems are Chord, CAN, Pastry and Tapestry. Structured P2P systems assume a relation between the peer content and the network topology. The  topology of the network is strongly controlled and data is stored or indexed by specific peers. Therefore, there is a mapping between the data objects and the location, i.e., the peer that stores or indexes it, so that queries can be routed easily to peers that store relevant data objects. Usually the mapping is realized as a distributed hash table (DHT). 2. In unstructured P2P, each peer maintains a limited number of connections (also called links) to other neighboring peers in the network. Searching in an unstructured P2P environment usually leads to either flooding queries in the network using a time-to-live (TTL) or query forwarding based on constructed routing indices that give a direction for the search. Examples of such unstructured P2P networks include Gnutella and Freenet. In unstructured P2P networks peers connect to other peers in a random way and there is no relation of the placement of the data objects with the network topology. 
Therefore, peers have no or limited information about data objects stored by other peers, thus searching in unstructured P2P networks may lead to querying all neighbors for data items that match the query.
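As a sketch of how a structured P2P system couples keys to peers, the following Python snippet implements a bare-bones consistent-hashing ring in the spirit of DHTs such as Chord; it is a simplification (no finger tables, no O(log N) routing, and hypothetical peer names), not the actual Chord protocol.

    import hashlib
    from bisect import bisect_right

    def h(value: str, bits: int = 32) -> int:
        """Map a peer id or object key onto a circular identifier space of 2**bits points."""
        return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** bits)

    class HashRing:
        """Chord-style rule: a key is stored at the first peer whose id follows the key's id."""
        def __init__(self, peers):
            self.ring = sorted((h(p), p) for p in peers)

        def lookup(self, key: str) -> str:
            key_id = h(key)
            ids = [pid for pid, _ in self.ring]
            index = bisect_right(ids, key_id) % len(self.ring)   # wrap around the ring
            return self.ring[index][1]

    ring = HashRing(["peer-1", "peer-2", "peer-3", "peer-4"])
    print(ring.lookup("song.mp3"), ring.lookup("report.pdf"))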


Deployment of a P2P system requires an underlying network, which the peers or nodes use as a communication base, so forming an overlay. A popular class of underlying networks is IP-based networks, e.g., P2P nodes located at hosts distributed over the Internet; one node can contact another node using the communication primitives and infrastructure of IP. Based on the degree of centralization, P2P systems are subdivided into:

• Purely decentralized
• Hybrid
• Hybrid centralized
• Hybrid decentralized

In pure P2P systems all peers perform the same tasks without any central coordination, whereas hybrid systems include some form of centralization, either by one peer (hybrid centralized) or by a set of peers (hybrid decentralized).

Peer-to-peer systems represent a paradigm for the construction of distributed systems and applications in which data and computational resources are contributed by many hosts on the Internet, all of which participate in the provision of a uniform service. Peer-to-peer applications have been used to provide file sharing, web caching, information distribution, and other services, exploiting the resources of tens of thousands of machines across the Internet. Three generations of peer-to-peer system and application development can be identified. The first generation was launched by the Napster music exchange service. A second generation of file sharing applications offering greater scalability, anonymity, and fault tolerance quickly followed, including Freenet, Gnutella, Kazaa, and BitTorrent. The third generation is characterized by the emergence of middleware layers for the application-independent management of distributed resources on a global scale; the best-known and most fully developed examples include Pastry, Tapestry, CAN, Chord, and Kademlia.

A key problem for peer-to-peer systems is the placement of data objects across many hosts, and the subsequent provision of access to them, in a manner that balances the workload and ensures availability without adding undue overheads. Peer-to-peer middleware systems enable sharing of the computing resources, storage, and data present in computers "at the edges of the Internet" on a global scale. They exploit existing naming, routing, data replication, and security techniques in new ways to build a reliable resource-sharing layer over an unreliable and untrusted collection of computers and networks.

The use of peer-to-peer systems for applications that demand a high level of availability for the stored objects requires careful application design to avoid situations in which all of the replicas of an object are simultaneously unavailable. There is a risk of this for objects stored on computers with the same ownership, geographic location, administration, network connectivity, country, or jurisdiction. The use of randomly distributed globally unique identifiers (GUIDs) assists by distributing the object replicas to randomly located nodes in the underlying network; if the underlying network spans many organizations across the globe, the risk of simultaneous unavailability is much reduced. Because a GUID is typically derived from a secure hash of the object's content, it also protects the object against tampering by untrusted nodes on which it may be stored; this technique, however, requires that the states of resources are immutable, since a change to the state would result in a different hash value. Hence peer-to-peer storage systems are inherently best suited to the storage of immutable objects (such as music or video files).
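The following small Python sketch illustrates the point about content-derived GUIDs and tamper detection; SHA-256 is used here as an assumed example of the hash function, not necessarily the one a given P2P system employs.

    import hashlib

    def guid_for(content: bytes) -> str:
        """Derive an object's GUID from a secure hash of its (immutable) content."""
        return hashlib.sha256(content).hexdigest()

    original = b"immutable object, e.g. a music file"
    guid = guid_for(original)

    # A client that retrieves data for this GUID can detect tampering by re-hashing:
    retrieved = b"immutable object, e.g. a music file"      # honest replica
    tampered  = b"immutable object, e.g. a music file!!!"   # modified by an untrusted node
    print(guid_for(retrieved) == guid)   # True
    print(guid_for(tampered) == guid)    # False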


TABLE 20.1 Distinction between IP and Overlay Routing for Peer-to-Peer Systems

Scale
  IP: IPv4 is limited to 2^32 addressable nodes. The IPv6 namespace is much more generous (2^128), but addresses in both versions are hierarchically structured and much of the space is preallocated according to administrative requirements.
  Application-level routing overlay: Peer-to-peer systems can address more objects. The GUID namespace is very large and flat (>2^128), allowing it to be much more fully occupied.

Load balancing
  IP: Loads on routers are determined by network topology and associated traffic patterns.
  Application-level routing overlay: Object locations can be randomized and hence traffic patterns are divorced from the network topology.

Network dynamics (addition/deletion of objects/nodes)
  IP: IP routing tables are updated asynchronously on a best-effort basis with time constants on the order of 1 hour.
  Application-level routing overlay: Routing tables can be updated synchronously or asynchronously with fractions-of-a-second delays.

Fault tolerance
  IP: Redundancy is designed into the IP network by its managers, ensuring tolerance of a single router or network connectivity failure. n-fold replication is costly.
  Application-level routing overlay: Routes and object references can be replicated n-fold, ensuring tolerance of n failures of nodes or connections.

Target identification
  IP: Each IP address maps to exactly one target node.
  Application-level routing overlay: Messages can be routed to the nearest replica of a target object.

Security and anonymity
  IP: Addressing is only secure when all nodes are trusted. Anonymity for the owners of addresses is not achievable.
  Application-level routing overlay: Security can be achieved even in environments with limited trust. A limited degree of anonymity can be provided.

These platforms are designed to place resources (data objects, files) on a set of computers that are widely distributed throughout the Internet and to route messages to them on behalf of clients, relieving clients of any need to make decisions about placing resources or to hold information about the whereabouts of the resources they require. Unlike the second-generation systems, they provide guarantees of delivery for requests in a bounded number of network hops. They place replicas of resources on available host computers in a structured manner, taking account of their volatile availability, their variable trustworthiness, and requirements for load balancing and locality of information storage and use.

20.1.1 Overlay Routing versus IP Routing

Routing overlays share many characteristics with the IP packet routing infrastructure that constitutes the primary communication mechanism of the Internet (Table 20.1). As mentioned in the preceding subsection, a key problem in the design of peer-to-peer applications is providing a mechanism that enables clients to access data resources quickly and dependably wherever they are located throughout the network. Peer-to-peer middleware systems are designed specifically to meet the need for the automatic placement and subsequent location of the distributed objects managed by peer-to-peer systems and applications:


1. Functional requirements: The  function of peer-to-peer system is to simplify the construction of services that are implemented across many hosts in a widely distributed network. To achieve this it must enable clients to locate and communicate with any individual resource made available to a service, even though the resources are widely distributed amongst the hosts. Other important requirements include the ability to add new resources and to remove them at will and to add hosts to the service and remove them. Like other middleware, peer-to-peer systems should offer a simple programming interface to application programmers that is independent of the types of distributed resource that the application manipulates. 2. Nonfunctional requirements include the following. a. Accommodating highly dynamic host availability: Most peer-to-peer systems are constructed from host computers that are free to join or leave the system at any time. The  hosts and network segments used in peer-to-peer systems are not  owned or managed by any single authority; neither their reliability nor their continuous participation in the provision of a service is guaranteed. A major challenge for peer-to-peer systems is to provide a dependable service despite these facts. As hosts join the system, they must be integrated into the system and the load must be redistributed to exploit their resources. When they leave the system whether voluntarily or involuntarily, the system must detect their departure and redistribute their load and resources. b. Global scalability: One of the aims of peer-to-peer applications is to exploit the hardware resources of very large numbers of hosts connected to the Internet. Peer-to-peer middleware must therefore be designed to support applications that access millions of objects on tens of thousands or hundreds of thousands of hosts. c. Load balancing: The  performance of any system designed to exploit a large number of computers depends upon the balanced distribution of workload across them. For  the systems we are considering, this will be achieved by a random placement of resources together with the use of replicas of heavily used resources. d. Optimization for local interactions between neighboring peers: The  “network distance” between nodes that interact has a substantial impact on the latency of individual interactions, such as client requests for access to resources; network traffic loadings are also impacted by it. The middleware should aim to place resources close to the nodes that access them the most. e. Security of data in an environment with heterogeneous trust: In  global-scale systems with participating hosts of diverse ownership, trust must be built up by the use of authentication and encryption mechanisms to ensure the integrity and privacy of information. f. Anonymity, deniability and resistance to censorship: Anonymity for the holders and recipients of data is a legitimate concern in many situations demanding resistance to censorship. A related requirement is that the hosts that hold data should be able to plausibly deny responsibility for holding or supplying it. The use of large numbers of hosts in peer-to-peer systems can be helpful in achieving these properties.


Knowledge of the locations of objects must be partitioned and distributed throughout the network. Each node is made responsible for maintaining detailed knowledge of the locations of nodes and objects in a portion of the namespace, as well as general knowledge of the topology of the entire namespace. A high degree of replication of this knowledge is necessary to ensure dependability in the face of the volatile availability of hosts and intermittent network connectivity.

In peer-to-peer systems, a distributed algorithm known as a routing overlay takes responsibility for locating nodes and objects. The name denotes the fact that the middleware takes the form of a layer that is responsible for routing requests from any client to a host that holds the object to which the request is addressed. The objects of interest may be placed at, and subsequently relocated to, any node in the network without client involvement. It is termed an overlay since it implements a routing mechanism in the application layer that is quite separate from any other routing mechanisms deployed at the network level, such as IP routing. The routing overlay ensures that any node can access any object by routing each request through a sequence of nodes, exploiting knowledge at each of them to locate the destination object. Peer-to-peer systems usually store multiple replicas of objects to ensure availability. In that case, the routing overlay maintains knowledge of the location of all the available replicas and delivers requests to the nearest "live" node (i.e., one that has not failed) that has a copy of the relevant object.

The main tasks of a routing overlay are the following:

• Routing of requests to objects: A client wishing to invoke an operation on an object submits a request including the object's GUID to the routing overlay, which routes the request to a node at which a replica of the object resides.
• Insertion of objects: A node wishing to make a new object available to a peer-to-peer service computes a GUID for the object and announces it to the routing overlay, which then ensures that the object is reachable by all other clients.
• Deletion of objects: When clients request the removal of objects from the service, the routing overlay must make them unavailable.
• Node addition and removal: Nodes (i.e., computers) may join and leave the service. When a node joins the service, the routing overlay arranges for it to assume some of the responsibilities of other nodes. When a node leaves (either voluntarily or as a result of a system or network fault), its responsibilities are distributed amongst the other nodes.

An object's GUID is computed from all or part of the state of the object using a function that delivers a value that is, with very high probability, unique. Uniqueness is verified by searching for another object with the same GUID. A hash function is used to generate the GUID from the object's value. Because these randomly distributed identifiers are used to determine the placement of objects and to retrieve them, overlay routing systems are sometimes described as distributed hash tables (DHT). This is reflected by the simplest form of API used to access them:

put(GUID, data): Stores data in replicas at all nodes responsible for the object identified by GUID.
remove(GUID): Deletes all references to GUID and the associated data.
value = get(GUID): Retrieves the data associated with GUID from one of the nodes responsible for it.

A slightly more flexible form of API is provided by a distributed object location and routing (DOLR) layer, as shown below:

publish(GUID): GUID can be computed from the object (or some part of it, e.g., its name). This operation makes the node performing it the host for the object corresponding to GUID.
unpublish(GUID): Makes the object corresponding to GUID inaccessible.
sendToObj(msg, GUID, [n]): Following the object-oriented paradigm, an invocation message is sent to an object in order to access it. This might be a request to open a TCP connection for data transfer or to return a message containing all or part of the object's state. The final optional parameter [n], if present, requests the delivery of the same message to n replicas of the object.

With this interface, objects can be stored anywhere and the DOLR layer is responsible for maintaining a mapping between object identifiers (GUIDs) and the addresses of the nodes at which replicas of the objects are located. Objects may be replicated and stored with the same GUID at different hosts, and the routing overlay takes responsibility for routing requests to the nearest available replica.

Pastry and Tapestry employ a routing mechanism known as prefix routing to determine routes for the delivery of messages based on the values of the GUIDs to which they are addressed. Prefix routing narrows the search for the next node along the route by applying a binary mask that selects an increasing number of hexadecimal digits from the destination GUID after each hop. Other routing schemes have been developed that exploit different measures of distance to narrow the search for the next hop destination. Chord bases the choice on the numerical difference between the GUIDs of the selected node and the destination node. CAN uses distance in a d-dimensional hyperspace into which nodes are placed. Kademlia uses the XOR of pairs of GUIDs as a metric for distance between nodes. Because XOR is symmetric, Kademlia can maintain participants' routing tables very simply; they always receive requests from the same nodes contained in their routing tables.
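To make the DHT idea above concrete, here is a minimal sketch, assuming a toy in-memory network rather than any real middleware. It stores each value at the few nodes whose identifiers are closest to the GUID under Kademlia's XOR distance metric; the node identifiers, replica count, and class names are illustrative, not part of any actual DHT library.

```python
import hashlib

def guid_for(data: bytes) -> int:
    """Derive a GUID from the object's value with a hash function (SHA-1 sized)."""
    return int(hashlib.sha1(data).hexdigest(), 16)

def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance between two identifiers."""
    return a ^ b

class ToyDHT:
    """A toy distributed hash table: every 'node' is just a local dict here."""
    def __init__(self, node_ids, replicas=3):
        self.nodes = {nid: {} for nid in node_ids}
        self.replicas = replicas

    def _closest_nodes(self, guid: int):
        return sorted(self.nodes, key=lambda nid: xor_distance(nid, guid))[: self.replicas]

    def put(self, data: bytes) -> int:
        guid = guid_for(data)
        for nid in self._closest_nodes(guid):   # store replicas at the responsible nodes
            self.nodes[nid][guid] = data
        return guid

    def get(self, guid: int) -> bytes:
        for nid in self._closest_nodes(guid):   # try the nearest replica first
            if guid in self.nodes[nid]:
                return self.nodes[nid][guid]
        raise KeyError(guid)

    def remove(self, guid: int) -> None:
        for store in self.nodes.values():       # delete all references to the GUID
            store.pop(guid, None)

# Usage: three nodes with hash-derived identifiers
dht = ToyDHT(node_ids=[guid_for(b"node-A"), guid_for(b"node-B"), guid_for(b"node-C")])
g = dht.put(b"hello overlay")
assert dht.get(g) == b"hello overlay"
```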

20.2 Blockchain

A blockchain user operates on a blockchain from a node in a blockchain network; the user owns that node and must also own a unique key pair in public key cryptography. The public key identifies a blockchain user.


An operation on a blockchain is a transaction initiated by a blockchain user. A transaction creates a record, for example, about the transfer of Bitcoins, data, physical property, or some other asset from one blockchain user to another user of the same blockchain. The transaction record is signed by the blockchain user who initiated the transaction and is sent to all blockchain network nodes. Each blockchain network node tries to validate the received transaction record with the public key of the initiator of the related transaction. Transaction records that cannot be validated by all the blockchain network nodes are considered invalid and get discarded.

Blockchain technology was introduced in Nakamoto (2008) as a platform for the Bitcoin cryptocurrency. A blockchain is a distributed database for storing a continuously growing list of records called blocks. A blockchain is replicated in a peer-to-peer network of nodes, where each node stores a copy of the entire database. The topology of a blockchain is a chain of blocks, since each block except the first block, the so-called Genesis Block, contains a link to the preceding block implemented as a hash of the preceding block. Each block in a blockchain is also time stamped. The basic structure of a blockchain is depicted in Figure 20.1.

FIGURE 20.1 Basic structure of blockchain.

Special blockchain network nodes called mining nodes collect validated transaction records and store them as lists in time-stamped candidate blocks. A mining node performs a distributed computational review process, called mining, on its candidate block before it can be linked to the blockchain. There are several implementations of mining for blockchains:

1. Proof-of-Work (PoW): In the Bitcoin blockchain, mining is based on PoW, wherein each mining node repeatedly creates a hash of the concatenation of the last blockchain block and a new nonce (a randomly chosen value) until a hash of the required difficulty is created. The created hashes differ because the nonce is different in each of them. The mining node that first creates a hash of the required difficulty can link its candidate block to the blockchain. The required difficulty is defined by the number of leading zero bits in a created hash. For example, if the difficulty is 10 zero bits, then on average 2^10 attempts are needed until a hash of the required difficulty is created. Thus, the mining node with the highest processing power will link its candidate block to the blockchain with the highest probability. In the Bitcoin blockchain, the PoW difficulty is adjusted approximately every two weeks to control the blockchain growth rate.
2. Proof-of-Stake (PoS): This serves as an alternative to the proof-of-work scheme and is built on less costly computations. Rather than depending on a scarce resource in the form of costly computation, the proof-of-stake scheme depends on the entities that hold a stake within the network (a proof of stake holding). In other words, the resource on which network security depends is ownership of the coin itself, which implies a proof of ownership that is also scarce. For a miner to authenticate and receive a transaction (whether transaction fees or new coins), the miner must own some of the coin. Under proof of stake, the probability that a miner succeeds in creating a new block depends on the amount of coin the miner owns and not on computational power; the energy cost of such a transaction is therefore very small. To dent the reliability of the system, one would have to own over 50% of the coin presently being staked, which would be very costly.

A blockchain can be of various types:

1. Public: Public blockchains are not owned by anyone; they are open to the public, and anyone can participate as a node in the decision-making process. Users may or may not be rewarded for their participation. All users of these permissionless or unpermissioned ledgers maintain a copy of the ledger on their local nodes and use a distributed consensus mechanism to decide the eventual state of the ledger. Bitcoin and Ethereum are both considered public blockchains. All software of a public blockchain is open source. Anyone can use a public blockchain, for example the Bitcoin blockchain, by joining the blockchain network. A user joins a blockchain network by copying the entire blockchain and installing the blockchain software on their own node. Any blockchain user can also own a mining node, for example by installing the mining software on their own node in a blockchain network.
2. Private: Private blockchains are open only to a consortium or group of individuals or organizations who have decided to share the ledger among themselves. There are various blockchains now available in this category, such as HydraChain and Quorum. Optionally, both of these blockchains can also run in public mode if required, but their primary purpose is to provide a private blockchain. Semiprivate blockchains, though only at a conceptual stage currently, are envisaged to be partly private and partly public; the private part is controlled by a group of individuals, while the public part is open for participation by anyone. This hybrid model can be used in scenarios where the private part of the blockchain remains internal and shared among known participants, while the public part of the blockchain can still be used by anyone, optionally allowing mining to secure the blockchain. This way, the blockchain as a whole can be secured using PoW, thus providing consistency and validity for both the private and public parts. This type of blockchain can also be called a semi-decentralized model, where it is controlled by a single entity but still allows multiple users to join the network by following appropriate procedures.

More recent blockchain implementations that extend the functionality offered by the Bitcoin blockchain are denoted as Blockchain 2.0.


One of the interesting features of Blockchain 2.0 is support for smart contracts. A smart contract is a computer program that encompasses contractual terms and conditions and enables the verification, negotiation, or enforcement of a contract. An example of a blockchain platform focused on smart contracts is Ethereum, whose cryptocurrency is called Ether. A smart contract application in Ethereum is called a DApp (Decentralized App), which can be executed on a server or directly on an Ethereum node.

Blockchain was developed as the main authentication and verification technology behind Bitcoin, the first decentralized crypto digital currency. In Bitcoin, a transaction is initiated when the future owner of the coins (or digital tokens) sends his/her public key to the original owner. The coins are transferred by the digital signature of a hash. Public keys are cryptographically generated addresses stored in the blockchain. Every coin is associated with an address, and a transaction in the crypto-economy is simply a trade of coins from one address to another. In a blockchain, the data used in the transactions is stored in an immutable public record, or giant spreadsheet, that is secured by concerned members who participate in a peer-to-peer network and act as verifiers of its authenticity and credibility.

20.2.1 Blockchain Transaction

Transactions occur when one or more members of a blockchain network authorize the transfer of digital currencies to another member within the same blockchain network or another network. The authorization of a transfer of digital currencies may also be seen as the authorization of a transfer of ownership. Blockchain transactions are similar to entries in a standard double-entry ledger: every transaction contains at least one input (a debit request) and at least one output (a credit request). Transaction operations move digital currencies, or values of digital currencies, from input to output, that is, from sender to receiver. When an owner authorizes a change of ownership of digital currencies, the transaction output receives this message and assigns a new owner (the receiver) to the digital currency by associating it with a key. Credits from one transaction can be propagated as inputs of another transaction, thereby creating a transaction chain or a chain of ownership.

A blockchain transaction between two parties starts when one of the participants signals a message to the network about the terms and conditions governing the transactions between the two stakeholders. Then the other participant broadcasts its acceptance to the network, which by default triggers a request for the network participants to authenticate and verify the transaction. Consequently, network members automatically play the role of authenticators that validate and guard the transaction against double spending through a validation system called "proof-of-work," which represents a competition among network members to validate the transaction. Before the invention of Bitcoin and its blockchain, digital currencies were perceived not to be practicable because of the comparative ease with which digital currency could be replicated. This was referred to as the "double-spend" problem, in which every transaction bears the risk that the holder sends a copy of the digital transaction to the merchant while keeping the original copy. This risk was conventionally guarded against by deploying a trusted centralized intermediary, which is missing in the case of blockchain.
When the transaction is validated, the public ledger (blockchain record) as well as the users of the network are collectively updated with the status of the recently added transaction. This mechanism helps in establishing trust between the concerned stakeholders through the use of a decentralized public ledger as well as cryptographic algorithms that guarantee that approved transactions cannot be altered after being validated.
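As a minimal sketch of the signing and validation mechanism just described (and of steps 1 and 2 in the list that follows), the snippet below signs a transaction record with a private key and lets any node verify it against the initiator's public key. It assumes the third-party python-ecdsa package; the field names and helper functions are illustrative, not part of any real blockchain implementation.

```python
import hashlib
import json
from ecdsa import SigningKey, VerifyingKey, SECP256k1, BadSignatureError  # pip install ecdsa

def make_transaction(sender_sk, receiver_pub_hex: str, amount: float) -> dict:
    """Transaction definition: build a record and sign its hash with the sender's key."""
    record = {
        "sender": sender_sk.get_verifying_key().to_string().hex(),  # public key identifies the user
        "receiver": receiver_pub_hex,
        "amount": amount,
    }
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()
    record["signature"] = sender_sk.sign(digest).hex()
    return record

def validate_transaction(record: dict) -> bool:
    """Transaction authentication: a node checks the signature against the
    initiator's public key before holding the record for inclusion in a block."""
    payload = {k: v for k, v in record.items() if k != "signature"}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).digest()
    vk = VerifyingKey.from_string(bytes.fromhex(record["sender"]), curve=SECP256k1)
    try:
        return vk.verify(bytes.fromhex(record["signature"]), digest)
    except BadSignatureError:
        return False  # records that cannot be validated are discarded by the network

# Usage
alice = SigningKey.generate(curve=SECP256k1)
bob = SigningKey.generate(curve=SECP256k1)
tx = make_transaction(alice, bob.get_verifying_key().to_string().hex(), 2.5)
assert validate_transaction(tx)
```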


The steps involved are listed below:

1. Transaction definition: The sender creates a transaction that holds information about the receiver's public address, the value of the transaction, and a cryptographic digital signature that verifies the transaction's validity and credibility.
2. Transaction authentication: When the nodes in the network receive the transaction, they first validate the message by decrypting the digital signature; the message is then held temporarily until it is used to create a block.
3. Block creation: One of the nodes in the network uses the pending transactions to update the ledger, that is, the block. At a specific time interval, the updated block is broadcast to the other nodes for validation.
4. Block validation: When the nodes responsible for validation in the network receive a request to validate an updated block, they go through an iterative process that requires agreement from the other nodes in order to authenticate the block.
5. Block chaining: When all the transactions in a block are approved, the new block is attached, or "chained," to the current blockchain, and the new state of the block is broadcast to the rest of the network.

The blockchain is a distributed ledger of blocks that hold valid transactions that have been executed in a network. Each block contains a timestamp of creation and a hash or pointer to the previous block, which links it to the previous block. The continuous linking of blocks forms a chain. In theory, the blockchain behaves like a conventional database except that the information stored is publicly visible but only privately accessible. Every peer in a decentralized peer-to-peer network has a copy of the latest version of the transaction ledger. Blockchain systems have an algorithm for scoring different versions of the transaction ledger so that the highest scoring version takes precedence. Peers in the network keep the highest scoring version they currently know of at all times and overwrite their old version when a new version has a higher score. Each block is identified with a key hash produced by a cryptographic hash algorithm. It also has a reference to the previous block, called its parent block. Although each block can be linked to just one parent block and parent key hash, a parent block can have many children, and all its children bear the same parent key hash.

20.2.2 Blockchain Features

In summation, the salient features of blockchain are:

1. Blockchain is distributed and decentralized in nature. Every computer that joins the blockchain network gets a copy of the blockchain immediately; each of these computer nodes is then used to validate and relay transactions.
2. Blockchain is immutable in nature. This means that a blockchain transaction cannot be edited or deleted. To modify or undo a transaction, a new transaction must be proposed, which goes through the same process of approval: only when the majority of nodes approve it will it be accepted.
3. Blockchain is fully secure in nature as it uses hash cryptography for handling identities.


4. Every record added to the blockchain ledger has a unique key associated with it.
5. Every record added is trusted and stamped by the party that added that record.
6. When the next record is written, everything from the first record, including its key, together with the content of the second record, is put into the formula that generates the key for the second record.

Blockchain differs from traditional transaction systems with respect to how it irreversibly stores transaction data in a distributed ledger. Once data is verified and stored, there is no way to manipulate it on the blockchain, as changes are immediately reflected in all active copies of the ledger across the network. Economic transactions (e.g., payments) are tracked and combined into blocks, each of them with a unique block header that cryptographically commits to the contents of the block, a timestamp, and the previous block header. Together with the previous block headers, they form a chain. Each block also contains a Merkle root, or "hash of all hashes," of its transactions, which removes the need to download the entire chain in order to verify the validity of each transaction. As transactions come in, the chain builds a digital ledger that is distributed, sequential, and digitally signed and that contains validated records of ownership. Just as the decentralization of communication led to the creation of the Internet, the blockchain is believed to decentralize the way we manage information.

20.2.3 Blockchain Benefits

Blockchain is a great way to store and organize data without the need for a trusted authority. The benefits of using blockchain are:

• All changes on a public blockchain can be viewed by all parties, thereby creating transparent systems.
• Transactions are immutable, which means they cannot be altered or deleted; in order to modify existing transactions, a new transaction needs to be proposed.
• Blockchain transactions are processed 24/7, which can help reduce transaction times to minutes.
• Provides a secure way to avoid cybersecurity and fraud risk by using trusted, secure algorithms.
• By eliminating third-party intermediaries and overhead costs, blockchain has great potential to reduce transaction fees.
• Provides an alternative to trust based on centralized systems.
• Provides ways for identification and verification.
• Improves the efficiency of the system.

20.2.4 Blockchain Architecture

A block is a data structure that records transactions to be included in a public ledger. A block comprises a header, which contains metadata about the block, a list of valid transactions, the key hash of the previous block, and its own key hash. A single block can contain more than 500 transactions; the position of a block in the chain is termed its "block height" (see below). The block header is 80 bytes and the average size of one transaction is 250 bytes.
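To illustrate the block structure just described, here is a minimal sketch of a hash-linked chain of blocks, each committing to its transactions through a Merkle root. The field layout is a simplification for illustration and does not follow Bitcoin's exact 80-byte header format.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(tx_hashes):
    """Pairwise-hash transaction hashes up to a single root (the "hash of all hashes")."""
    if not tx_hashes:
        return sha256(b"")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the last hash on odd levels
        level = [sha256((a + b).encode()) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

def make_block(prev_hash: str, transactions) -> dict:
    header = {
        "prev_hash": prev_hash,              # link to the parent block's key hash
        "merkle_root": merkle_root([sha256(json.dumps(t).encode()) for t in transactions]),
        "timestamp": time.time(),
    }
    return {"header": header, "transactions": transactions,
            "hash": sha256(json.dumps(header, sort_keys=True).encode())}

# Build a tiny chain: a genesis block followed by two linked blocks
genesis = make_block("0" * 64, [{"msg": "genesis"}])
b1 = make_block(genesis["hash"], [{"from": "A", "to": "B", "amount": 1}])
b2 = make_block(b1["hash"], [{"from": "B", "to": "C", "amount": 1}])
assert b2["header"]["prev_hash"] == b1["hash"]    # chaining via parent key hashes
```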


As for the block header, if a block is altered, the hash of that block changes and the hashes of all subsequent blocks in the chain change as well. This cascading model of key hashes ensures that a block cannot be modified without forcing a change in every block that follows it in the chain. The huge computation required for a recalculation of every such block makes the blockchain immutable, which is a major factor in blockchain security. As more blocks are added to a chain, the more secure it becomes, because it becomes more difficult to alter a block: as the length of the blockchain increases, the probability of a block being altered decreases.

As for block identifiers, the block header contains three sets of metadata:

• A reference to the previous block
• Information relating to the mining competition for the block, namely the timestamp, the difficulty, and a proof of work
• A summary of all transactions or entries in the block

The genesis block refers to the first block in any blockchain (the Bitcoin genesis block was established in 2009). It is the ancestor of all blocks in a chain. The genesis block acts as a secure root for every node in a blockchain network, and every node knows the key hash and block structure of the genesis block. Each new block added to a blockchain is placed on top of the previous one and is one position higher than the previous block. Therefore, every block in a blockchain can be identified by:

• Its cryptographic hash
• Its block height

Although the primary unique identifier of a block is its cryptographic hash created by the SHA256 algorithm, the position of the block in the chain, or its distance from the genesis block, can also be used to identify a block. This is, however, not unique due to a concept called "blockchain forking": two or more blocks can have the same height, which may also be stored in the block's metadata or an indexed database.

The blockchain is designed to run as a peer-to-peer network on top of the Internet. A peer-to-peer network means that computers talk to each other directly without the need for a central server for information exchange. In this model there are no special nodes or hierarchy, and each node requests information directly from its respective peers. Peer-to-peer networks are therefore decentralized and open. The blockchain network is thus simply a collection of nodes running a blockchain system protocol with decentralization of control as its core principle. On the physical layer of cyberspace, the wireless and wired connections form the infrastructure that supports communication on the Internet. The network layer manages node addressing and routing between different nodes in the network. The transport layer manages transmissions, connection states, and protocols, e.g., TCP, HTTP, HTTPS, SSH. The blockchain network works in the same fashion. However, unlike in a standard Internet network, where central nodes may be assigned specifically to perform these tasks, in a blockchain every participating node performs all of the functions needed to keep the network running. Thus, nodes in a peer-to-peer network may be decentralized, but in a traditional blockchain system they take up different roles depending on their respective functions in the blockchain. A blockchain network mainly has to route information across nodes, manage a database of stored information, perform mining tasks, and maintain a service for user interfaces, e.g., a wallet service. To this effect, each node in a blockchain may act as a miner, a database, a wallet, and a routing node. All nodes perform the routing functionality in order to participate effectively in the blockchain.

20.3 Blockchain Security

The security of a blockchain is based on the hash interdependence between successive blocks, in combination with the distribution of copies of the entire blockchain to all nodes in a blockchain network. A blockchain is in practice tamperproof, that is, resistant to modification attempts, since a block cannot be altered without alteration of all the subsequent blocks and participation of the entire network to verify and log the change. Moreover, a blockchain is not controlled by any single centralized authority, which could be an attack target, since complete blockchain copies are stored in all nodes of a peer-to-peer network. However, if an attacker can achieve control of a sufficient number of nodes in a peer-to-peer blockchain network, including some mining nodes, then there can be data losses and/or the insertion of corrupted data in the attacked blockchain. Examples of security attacks against blockchains are:

• The selfish mining attack: In a selfish mining attack, malicious mining nodes do not transmit all mined new blocks for validation in the blockchain network.
• The history-revision attack: In a history-revision attack, a malicious miner having more than twice as much computational power as all honest mining nodes put together inserts corrupted blocks in a blockchain.
• The eclipse attack: In an eclipse attack, all incoming and outgoing connections of a target node in a blockchain network are controlled.
• The stubborn mining attack: A stubborn mining attack combines selfish mining with an eclipse attack.

Aspects of blockchain security are described below:

1. Information propagation is the distribution of a transaction or block throughout the network. It resembles a broadcast or replication model. Node1 may send a message or transaction to Node2, which is any of the other nodes in the network; Node1 does not have to be directly connected to Node2. Any network node that receives a new transaction or message forwards it to all other nodes it is directly connected to, thus propagating the new transaction across all nodes in the network.
2. Consensus: A consensus algorithm allows users to securely update states using predefined state transition rules, where the right to make state transitions is distributed to all nodes in a securely decentralized network. Consensus provides a protocol by which new blocks are allowed to be added to the ledger. For the concept of consensus to be effective, three things are required:
   a. Common acceptance of laws, rules, transitions, and states in the blockchain


   b. Common acceptance of nodes, methods, and stakeholders that apply these laws and rules
   c. A sense of identity, such that members feel that all members are equal under the consensus laws

The basic parameters of a consensus mechanism are as follows:

• Decentralized governance: No single central entity or authority can finalize any transaction or process.
• Quorum structure: Nodes in a consensus mechanism exchange messages in a predefined set of stages or steps.
• Integrity: It enforces the validation and verification of process integrity.
• Nonrepudiation: This provides means to verify that the supposed sender really sent the message.
• Authentication: This protocol provides means to verify the participants' identities.
• Privacy: This protocol ensures that only the intended recipients of a message have access to and can read the message.
• Fault tolerance: The speed and efficiency of network operations are maintained in such a way that they are not dependent on the non-failure of any specific node or server.

Consequently, the types of consensus are:

   a. Proof-of-Work consensus: The miners create a proposed block with transactions and calculate the hash of the block header. Then the miners match this hash against the intended target for the desired blockchain. If the hash does not match, the calculation is repeated with an adjusted cryptographic pseudo-random number called a "nonce." Nonces are used to vary the input of the cryptographic function until a match is achieved. A nonce can only be used once and is usually updated by simply incrementing it by one. The new block may be added to the chain when a match is found.
   b. Proof-of-Stake consensus: The stakeholders with the highest incentives in the system are identified, and only these stakeholders participate in mining. Active participation in the blockchain network gives participants the right to generate new blocks for the chain. Blocks are generated similarly to the Proof-of-Work methodology except that the hashing operation is done in a limited search space rather than the computationally intensive unlimited search space.

3. Mining refers to the distributed computation performed on each block of data in a chain that allows for the creation and addition of new blocks to the chain. Besides the creation of new blocks, miners also serve the following purposes:
   a. They secure the blockchain against fraudulent or unverified usage.
   b. They provide processing power to the blockchain network.
   c. In the case of digital currencies, miners are solely in charge of validating new transactions and adding them to a global ledger.
To complete a task, miners have to solve a difficult mathematical problem based on a hashing algorithm. The solution is called the "proof-of-work" and is included in the new block created. Since each miner has to work to solve mathematical problems before rewards are given, a competition is created. The mining process creates new currency in each block. Transactions are propagated through the network but are not added to the blockchain until verification has occurred. For their mining activities, miners receive new currency and transaction fees as rewards. Mining ensures trust by ensuring that sufficient computational effort is devoted to the creation of each block. Therefore, the number of blocks, the complexity of computation, and the level of trust are all directly correlated. A transaction usually carries all the information necessary for processing itself; therefore, it does not matter where on the network it is transmitted.
4. Immutability refers to something that cannot be changed over time or whose value remains the same over a specific period of time. In the context of blockchains, data already written into data blocks cannot be changed by anyone, not even an administrator. The process of a single rewrite is tedious and would require consensus from every member of the chain: every single participating member would have to be persuaded to make a single change, that is, a collaborative consensus of every participating member is needed. Moreover, any attempt to modify the contents of any single block or transaction in a block would also require a recalculation of the block's key hash. A recalculation of any block's key hash would also lead to a break in the entire chain, as blocks are linked via their key hashes.
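As a minimal sketch of the proof-of-work nonce search described above (both in Section 20.2 and in the consensus discussion here), the snippet below increments a nonce until the block hash has the required number of leading zero bits. The difficulty value and block encoding are illustrative only.

```python
import hashlib
import json

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def mine(prev_block_hash: str, transactions, difficulty_bits: int = 16):
    """Repeatedly hash (previous block hash + candidate contents + nonce)
    until the digest has the required number of leading zero bits."""
    nonce = 0
    body = json.dumps(transactions, sort_keys=True)
    while True:
        digest = hashlib.sha256(f"{prev_block_hash}{body}{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return nonce, digest.hex()       # this candidate block can be linked to the chain
        nonce += 1                           # a nonce is used once, then incremented

# Usage: roughly 2**16 attempts are expected for a 16-bit difficulty
nonce, block_hash = mine("0" * 64, [{"from": "A", "to": "B", "amount": 1}])
print(nonce, block_hash)
```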

20.4 Blockchain Security Architecture for Internet of Things

Despite the many economic and societal benefits of IoT, the technology has raised major concerns on account of security issues that are also hampering its widespread adoption. The crosslinking of various types of objects in IoT enables each object to influence the behavior of others, and the exchange of vast amounts of information that occurs automatically, without the users' awareness, has the potential to result in unprecedented levels of security, privacy, and integrity issues that would hinder the proper functioning of critical systems as well as data protection rights at the individual level. IoT ecosystems have characteristic vulnerabilities such as the following:

• The low computing power and battery life of many devices make them unfit for computation-intensive cryptographic operations.
• Many of the current IoT devices lack basic security requirements.
• There is a lack of standardization of IoT standards and protocols, which invariably gives rise to security loopholes.
• There are scalability problems associated with the sizes of machine-to-machine (M2M) networks.

The challenge of conceiving a secure IoT ecosystem arises fundamentally from the centralized architecture of typical IoT ecosystems, which necessitates authentication of participating devices through trusted third parties (TTP) with high processing power in the cloud.


While this may be adequate in the short term, it may fall short of the envisaged requirements of IoT networks of the future, which may need security solutions that:

1. Are energy-efficient
2. Are easy to operate across a variety of IoT devices
3. Are scalable for huge device networks
4. Provide real-time authentication and data integrity verification
5. Ensure end-to-end security of disseminated data in line with the protection rights of individuals

It is possible to propose an architecture that is based on the application of blockchain technology to IoT in order to provide secure device authentication and to protect and verify the integrity of data collected by sensors and other devices in an IoT network.

20.4.1 Hierarchical Blockchain Architecture

As explained in the earlier subsection, blockchain provides an immutable record of data secured by a peer-to-peer (P2P) network of participants, who validate all transactions against the ever-growing database, using the cryptographic hash of each block of data in the chain that links it to its predecessor. Database updates to include new transaction records occur by broadcasting the transaction to the entire network and running a distributed consensus algorithm, typically involving the solution to a cryptographic puzzle (proof of work) by special participants of the network (miners). The whole network has a consistent copy of the ledger, which provides transaction transparency.

One important feature of blockchain that makes it an effective technique for data security at scale is that it does not rely on trust between the participants in the network, and there is no central trusted authority involved. Upon a new transaction, the blockchain is validated across the distributed network before the transaction is included as the next block in the chain (i.e., it relies on a distributed consensus mechanism). Blocks are made by miners (utilizing special equipment/software) coming to agreement on which transactions fit in each block and which block will be the next in the chain. The security of blockchain relies purely on cryptography and distributed consensus in the network. In order to be able to write an alternative transaction history or replace any transaction in the chain, an attacker would have to invest in significant computational resources, which serves as a deterrent for adversaries. The transparency feature (i.e., everyone participating in the network has full visibility of all transactions) is a significant aspect for verifiability of all data transactions by all parties involved.

In light of the storage, processing, and energy limitations of IoT devices, a hierarchical blockchain architecture can be an appropriate solution for data security in IoT. In the proposed architecture, the resource-limited IoT devices are connected to an upper layer of "data collectors" that are powerful devices with larger storage and fewer energy constraints (e.g., cloud servers). Blockchain provides a secure distributed database and eliminates the necessity of a central server. The blockchain at the upper layer, having powerful servers, maintains the ledger of all transactions from the various blockchains at the layer below. This enables the device-level blockchains to be reset periodically to handle resource limitations, while still maintaining a transparent, data integrity-preserving complete history of transactions replicated at multiple sites.


The model ensures that the computation required for data verification is securely offloaded by resource-constrained devices to nearby edge/cloud servers in an adaptable manner, considering:

• Device capacity
• Battery level
• Available storage space

This will lead to optimized resource utilization through a context-aware hybrid security architecture, with computation-intensive work assigned to devices of greater capacity. The envisaged model addresses IoT-specific concerns as follows:

1. Once a reading enters the blockchain, it will not be possible to change it, as the alteration would require rebuilding the whole chain from the beginning, relying on the utilization of expensive computational resources that would outweigh the benefit to the adversary of changing the meter reading or trade transaction. Additionally, transactions that are not possible (such as negative meter readings) will not be validated by the blockchain network.
2. As the whole transaction history will be replicated at multiple sites in a transparent manner, there will be no single point of failure.
3. It will not be possible for malicious devices to spoof others, as transactions will require digital signatures, validated by all network participants.
4. Due to the transparency of transactions and the validation process, fraudulent transactions will not be possible.

20.5 Summary

This chapter proposed blockchain computing as the digital transformation of primarily the security aspects of enterprise architecture (see Chapter 12). Blockchain refers to a distributed, encrypted database that serves as a public repository of information which cannot be reversed and is incorruptible. In other words, a blockchain can be defined as a distributed public ledger or database of records of every transaction that has been carried out and shared among those participating in the network. Every transaction or digital event in the public ledger has to be authenticated via the agreement of more than half of those participating in the network. This implies that no individual participant or user can modify any data within a blockchain without the consent of the other users (participants).

Blockchain is an immutable distributed ledger that can be used to record everything of value. Blockchain has until now been associated with financial transactions, but eventually blockchain can be used to record and manage anything that has value. Blockchain technology delivers remarkable traceability that is autonomous and trusted, by creating permanent records of data and transactions. It thereby brings a single source of truth and no single point of failure to all stakeholders, as everything related to data and transactions is always visible on the network. Blockchain replaces the need to trust accredited third parties with trust in the system as a whole.

After introducing peer-to-peer networks, the chapter introduced the blockchain concept, architecture, benefits, and operations. It then discussed aspects of blockchain that highlight its potential for the provisioning of security. In the last part of the chapter, it explained a blockchain-enabled security solution and architecture for Internet of Things (IoT) systems.

21 Soft Computing

As stated right at the beginning of Chapter 13, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of analyticity aspects, predominantly through soft computing. This chapter explains soft computing.

The primary considerations of traditional hard computing are precision, certainty, and rigor. In contrast, the principal notion in soft computing is that precision and certainty carry a cost, and that computation, reasoning, and decision making should exploit (wherever possible) the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to obtain low-cost solutions. The corresponding facility in humans leads to the remarkable human ability to understand distorted speech, decipher sloppy handwriting, comprehend the nuances of natural language, summarize text, recognize and classify images, drive a vehicle in dense traffic, and, more generally, make rational decisions in an environment of uncertainty and imprecision. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation that lead to an acceptable solution at low cost.

Soft computing is a consortium of methodologies that work synergistically and provide, in one form or another, flexible information processing capability for handling real-life ambiguous situations. Its aim is to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions. The guiding principle is to devise methods of computation that lead to an acceptable solution at low cost, by seeking an approximate solution to an imprecisely or precisely formulated problem. Unlike soft computing, traditional hard computing deals with precise computation. The rules of hard computing are strict and binding; as inputs, outputs, and procedures are all clearly defined, it generates the same precise answers, without any degree of uncertainty, every time the procedure is applied. Unless the rules or procedures are changed, the output result would never change. Table 21.1 compares the characteristics of soft computing with those of traditional hard computing.

The main constituents of soft computing include the following:

• Artificial neural networks (ANNs)
• Fuzzy logic and fuzzy inference systems
• Evolutionary and genetic algorithms
• Rough sets
• Signal processing tools such as wavelets

Though each of them contributes a distinct methodology for addressing problems in its domain, they are complementary to each other and can be blended effectively. The result is a more intelligent and robust system providing a human-interpretable, low-cost, approximate solution, as compared to traditional techniques. There is no universally best soft computing method; choosing particular soft computing tool(s), or some combination with traditional methods, is entirely dependent on the particular application, and it requires human interaction to decide on the suitability of a blended approach.

TABLE 21.1 Characteristics of Soft Computing Compared with Traditional Hard Computing

Traditional Hard Computing | Soft Computing
Requires a precisely stated analytical model | Tolerant of imprecision
Often requires a lot of computation time | Can solve some real-world problems in reasonably less time
Not suited for real-world problems for which an ideal model is not present | Suitable for real-world problems
Requires full truth | Can work with partial truth
Precise and accurate | Imprecise
High cost for solution | Low cost for solution

Fuzzy sets provide a natural framework for dealing with uncertainty or imprecise data. Generally, they are suitable for handling issues related to the understandability of patterns, incomplete and noisy data, mixed-media information, and human interaction, and they can provide approximate solutions faster. ANNs are nonparametric and robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms (GAs) provide efficient search algorithms to optimally select a model from mixed-media data based on some preference criterion or objective function. Rough sets are suitable for handling different types of uncertainty in data. Neural networks and rough sets are widely used for classification and rule generation. The application of wavelet-based signal processing techniques is new in the area of soft computing. Wavelet transformation of a signal results in decomposition of the original signal into different multiresolution subbands. This is useful in dealing with compression and retrieval of data, particularly images. Other approaches like case-based reasoning and decision trees are also widely used to solve data mining problems.

21.1 Artificial Neural Networks

The human brain is composed of an ensemble of millions of small cells or processing units called neurons that work in parallel. Neurons are connected to each other via connections called synapses. A particular neuron takes its input from a set of neurons, processes these inputs, and passes on the output to another set of neurons. The brain as a whole is a complex network of such neurons in which connections are established and broken continuously.

Artificial Neural Networks (ANNs) have resulted from efforts to imitate the functioning of the human brain. Like the human brain, ANNs are able to learn from historical or sample data; they learn by repeating the learning process for a number of iterations, and performance improves with every completed iteration. Once trained, ANNs can reproduce the same output whenever the same input is applied. The precision and correctness of the answer depend on the learning and the nature of the data given: sometimes an ANN may be able to learn even complex data very quickly, while at other times it may refuse to learn from another set of data. The precision depends upon how well the ANN was able to learn from the presented data.


Once ANNs have learned from historical or sample data, they have another extraordinary capability that enables them to predict the outputs of unknown inputs with quite high precision. This capability, known as generalization, results from the fact that ANNs can emulate any type of simple or complex function. This gives ANNs the power to model almost any problem that is encountered in the real world. The functioning of ANNs is highly resistant to disruption or perturbation by noise; even if there is noise in the input data, ANNs are still able to perform their normal functions appreciably well.

A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Neural networks resemble the human brain in the following two ways:

• A neural network acquires knowledge through learning.
• A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.

21.1.1 The Biological Neuron

A neuron's dendritic tree is connected to a thousand neighboring neurons. When one of those neurons fires, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through two processes:

• Spatial summation, which occurs when several weak signals are converted into a single large one
• Temporal summation, which converts a rapid series of weak pulses from one source into one large signal

The aggregate input is then passed to the soma (cell body). The part of the soma that is concerned with the signal is the axon hillock (Figure 21.1a). If the aggregate input is greater than the axon hillock's threshold value, then the neuron fires and an output signal is transmitted down the axon. The strength of the output is constant, regardless of whether the input was just above the threshold or a hundred times as great. The output strength reaches each terminal button with the same intensity it had at the axon hillock. This uniformity is critical in an analogue device such as the brain, where small errors can snowball and where error corrections are more difficult than in a digital system. Each terminal button is connected to other neurons across a small gap called a synapse (Figure 21.1b). The physical and neurochemical characteristics of each synapse determine the strength and polarity of the new input signal.

21.1.2 Model of an Artificial Neuron

The basic model of an artificial neuron is founded upon the functionality of a biological neuron, where "neurons are the basic signaling units of the nervous system" and "each neuron is a discrete cell whose several processes arise from its cell body."


FIGURE 21.1 (a) Schematic of biological neuron and (b) Schematic of synapse.

While creating a functional model of the biological neuron, the three basic components are:

1. The synapses of the neuron are modeled as weights. The strength of the connection between an input and a neuron is denoted by the value of the weight. Negative weight values reflect inhibitory connections, while positive values designate excitatory connections. The next two components model the actual activity within the neuron cell.
2. An adder sums up all the inputs modified by their respective weights. This activity is referred to as linear combination.
3. An activation function controls the amplitude of the output of the neuron. An acceptable range of output is usually between 0 and 1, or between +1 and −1.


FIGURE 21.2 Model of a biological neuron.

Figure 21.2 shows the model of a biological neuron. The internal activity of the neuron can be expressed as

v_k = \sum_{j=1}^{p} w_{kj} x_j

The output of the neuron, y_k, would therefore be the outcome of some activation function applied to the value of v_k. It is to be noted here that applying a bias b_k to the neuron has the effect of raising or lowering the net input of the activation function, depending on whether the bias is positive or negative, respectively.

21.1.2.1 Activation Function

An activation function in a neural network is defined as the function that describes the output behavior of a neuron. Most neural network architectures start by computing the weighted sum of the inputs (that is, the sum of the product of each input with the weight associated with that input). The total net input is then usually transformed by an activation function, also called a squashing function in the sense that it squashes the permissible amplitude range of the output signal to some finite values (i.e., 0 and 1, or −1 and +1). The different types of activation functions are:

1. Step function: This function takes on a value of 0 if the total net input is less than a certain threshold value, and 1 if the total net input is greater than or equal to the threshold value.

f(v) = \begin{cases} 1, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0 \end{cases}

2. Sigmoid function: This function can range between 0 and 1, but it is also sometimes useful to use the −1 to +1 range. An example of the sigmoid family is the hyperbolic tangent function.

f(v) = \tanh\left(\frac{v}{2}\right) = \frac{1 - \exp(-v)}{1 + \exp(-v)}


3. Piecewise-linear function: This function again can take on the values 0 or 1, but it can also take on values in between, depending on the amplification factor in a certain region of linear operation.

f(v) = \begin{cases} 1, & v \ge \tfrac{1}{2} \\ v, & -\tfrac{1}{2} < v < \tfrac{1}{2} \\ 0, & v \le -\tfrac{1}{2} \end{cases}

The use of a linear activation function removes the nonlinear behavior from the artificial neuron. Without nonlinearities, a neural network cannot model nonlinear phenomena. The above activation functions are shown in Figure 21.3.

21.1.3 Artificial Neural Network

An artificial neural network (ANN) is an adaptive, most often nonlinear, system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase, the ANN parameters are fixed and the system is deployed to solve the problem at hand during the testing phase. The ANN is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., ANNs are universal mappers. ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems ANNs provide performance that is difficult to match with other technologies.

21.1.3.1 Processing Units

Each processing unit performs the relatively simple job of receiving input from neighbors or external sources and using this to compute an output signal, which is propagated to other units. Apart from this processing, a second task is the adjustment of the weights. The system is inherently parallel in the sense that many units can carry out their computations at the same time. The units are of three types:

1. Input units (indicated by an index i), which receive data from outside the neural network
2. Output units (indicated by an index o), which send data out of the neural network
3. Hidden units (indicated by an index h), whose input and output signals remain within the neural network


FIGURE 21.3 Commonly used activation function for synaptic inhibition. Hard nonlinearity: (a) Step function (b) Signum function; Soft nonlinearity: (c) Sigmoid function and (d) tanh function.
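As a complement to Figure 21.3 and the model of Section 21.1.2, here is a minimal sketch of an artificial neuron computing v_k = Σ w_kj x_j + b_k and applying the activation functions discussed above. It uses NumPy, and the example inputs, weights, and bias are arbitrary illustrative values.

```python
import numpy as np

def step(v):                      # hard nonlinearity: 1 if v >= 0, else 0
    return np.where(v >= 0, 1.0, 0.0)

def tanh_sigmoid(v):              # soft nonlinearity in the -1..+1 range
    return np.tanh(v / 2.0)       # equals (1 - exp(-v)) / (1 + exp(-v))

def piecewise_linear(v):          # 1 above +1/2, linear in between, 0 below -1/2
    return np.where(v >= 0.5, 1.0, np.where(v <= -0.5, 0.0, v))

def neuron_output(x, w, b, activation=step):
    """Weighted sum of inputs plus bias (the internal activity v_k),
    squashed by an activation function."""
    v = np.dot(w, x) + b
    return activation(v)

# Usage: a single neuron with three inputs
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, 0.1])
print(neuron_output(x, w, b=-0.2, activation=tanh_sigmoid))
```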

During operation, processing units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously, whereas in the case of asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a given time.

21.1.3.2 ANN Processing

The computational style of an ANN is illustrated in Figure 21.4. An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case, the training is called supervised). An error is computed from the difference between the desired response and the system output. This error information is fed back to the system and adjusts the system parameters (the weights w_kj) in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable.


FIGURE 21.4 Supervised type ANN processing.
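To illustrate the supervised error-correction loop of Figure 21.4, here is a minimal sketch of training a single sigmoid neuron with a delta-rule style update; the learning rate, toy data set, and number of epochs are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy supervised data: inputs X and desired (target) responses d (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # synaptic weights
b = 0.0                             # bias
eta = 0.5                           # learning rate

for epoch in range(2000):
    for x, target in zip(X, d):
        y = sigmoid(np.dot(w, x) + b)          # forward pass: system output
        error = target - y                     # compare with the desired response
        grad = error * y * (1.0 - y)           # error signal fed back to the parameters
        w += eta * grad * x
        b += eta * grad

print(np.round(sigmoid(X @ w + b), 2))  # outputs approach the targets after training
```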

21.1.3.3 ANN Topologies

Based on the pattern of connections, ANNs can be categorized as:

1. Feedforward neural networks (FFNN), where the data flow from input to output units is strictly feedforward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers. Commonly, successive layers are totally interconnected, where:
   • The first layer has no input connections, and so consists of input units; it is termed the input layer.
   • The last layer has no output connections, and so consists of output units; it is termed the output layer.
   • The layers between the input and output layers are termed hidden layers and consist of hidden units.
   Examples of FFNNs are the Perceptron and the Adaptive Linear Element (Adaline).
2. Recurrent neural networks, which do contain feedback connections. In comparison to feedforward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network evolves to a stable state in which these activations do not change any more. In other applications, the change of the activation values of the output neurons is significant, such that the dynamical behavior constitutes the output of the neural network. Examples of recurrent neural networks (RNNs) are the Kohonen self-organizing map (SOM) and the Hopfield neural network.
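Here is a minimal sketch of a feedforward pass through a fully interconnected network with one hidden layer, matching the layered topology described above; the layer sizes and random weights are arbitrary illustrative values.

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input strictly forward through successive, fully connected layers."""
    activation = x
    for W, b in zip(weights, biases):
        activation = np.tanh(W @ activation + b)   # no feedback connections anywhere
    return activation

rng = np.random.default_rng(1)
# Input layer of 3 units, one hidden layer of 4 units, output layer of 2 units
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]

print(forward(np.array([0.2, -0.7, 1.0]), weights, biases))
```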


21.1.3.4 Training of ANN

A neural network has to be configured such that the application of a set of inputs produces (either directly or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to "train" the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule (Figure 21.5). Learning algorithms can be categorized as:

1. Supervised learning or associative learning is a kind of machine learning where the algorithm is provided with a set of inputs along with the corresponding correct outputs; learning involves the algorithm comparing its current actual output with the correct or target outputs, so that it knows what its error is, and modifying its parameters accordingly.
2. Unsupervised learning signifies a mode of machine learning where the system is not told the "right answer"; for example, it is not trained on pairs consisting of an input and desired output. Instead the system is given the input patterns and is left to find interesting patterns, regularities, or clusterings among them.
3. Reinforcement learning may be considered an intermediate form of the above two types of learning. Here, the learning algorithm performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good or bad based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there are no more changes to its parameters.

21.1.3.5 ANN Applications

1. Neural networks and control: Neural networks have been applied very successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron have made it a popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear controllers. A fixed stabilizing controller is shown in Figure 21.6.

FIGURE 21.5 Learning process of an ANN.


FIGURE 21.6 Stabilizing controller.

This scheme has been applied to the control of robot arm trajectory, where a proportional controller with gain was used as the stabilizing feedback controller. The model uses the desired trajectory as the input and the feedback control as an error signal. As the neural network training advances, this error input will converge to zero, and the neural network controller will learn to take over from the feedback controller. From Figure 21.6, it can be seen that the total input that enters the plant is the sum of the feedback control signal and the feedforward control signal, which is calculated from the inverse dynamics model (the neural network).
The neurocontrol architectures discussed above have their own advantages and disadvantages. For example, the feedback linearization technique can only be applied to systems with nonlinearity. The stable direct adaptive control technique requires that the unknown nonlinearities appear in the same way as the control input in a state space representation. The model reference adaptive control has no guarantee of stability. The adaptive inverse control technique requires the existence of a stable plant inverse.

2. Neural networks and pattern recognition: ANN models have the ability of self-organization and can learn to recognize patterns. Many of them are hierarchical networks consisting of layers of neuron-like cells. The ability to process information increases in proportion to the number of layers in the network. Layered neural networks involve a set of input cells connected to a collection of output cells by means of one or more layers of modifiable intermediate connections. The most natural use of this architecture is to implement associativity by mapping an input pattern, the representation of a concept or a situation, with another item, either of the same kind or totally different. Such multiple-layered networks are termed associative networks and are used in pattern recognition applications. Pattern recognition typically involves:
   • Acquisition, which is concerned with digitizing the data coming from a sensor such as a camera, scanner, or radar detector
   • Localization, which involves extracting the object from its background
   • Representation, which is finding a set of significant features of the object
   • Mapping, in which the object is then mapped to a real number, a word within a grammar, a graph, or any element from a set of representations
   • The decision stage, which consists of dividing the set of object representations into a number of classes

21.2 Fuzzy Systems

The concept of fuzzy sets was first proposed by Zadeh (1965) as a method for modeling the uncertainty in human reasoning. In fuzzy logic, the knowledge of experts is modeled by linguistic rules represented in the form of IF-THEN logic. Fuzzy reasoning is a straightforward formalism for encoding human knowledge or common sense in a numerical framework. Fuzzy Inference Systems (FISs) can be used in many areas where neural networks are applicable, such as data analysis, control, and signal processing.

Fuzzy logic first found popular applications in control systems. In fuzzy control, human knowledge is codified by means of linguistic IF-THEN rules, which build up an FIS. An exact model is not needed for controller design. Since its first reported industrial application in 1982, Japanese industry has produced numerous consumer appliances using fuzzy controllers. This has aroused global interest in the industrial and scientific community, and fuzzy logic has also been widely applied in data analysis, regression, and signal and image processing.

In fuzzy sets, a member of the set does not necessarily have full membership of the set but rather has a degree of belongingness to the set. This degree of membership is termed the membership degree, and the function that determines this membership degree or belongingness is called the membership function (MF). This function associates each member of the set with a degree or probability of membership: the higher this degree, the more strongly the member is a part of the set. Rather than binary logic, fuzzy logic uses this notion of graded membership. Fuzzy logic is most suitable for the representation of vague data and concepts on an intuitive basis, such as human linguistic description, e.g., the expressions approximately, good, tall. The conventional or crisp set can be treated as a special case of the concept of a fuzzy set. A fuzzy set is uniquely determined by its MF, and it is also associated with a corresponding linguistically meaningful term.

Fuzzy logic is based on three core concepts, namely, fuzzy sets, linguistic variables, and possibility distributions. A fuzzy set is an effective means to represent linguistic variables. A linguistic variable is a variable whose value can be described qualitatively using a linguistic expression and quantitatively using an MF. Linguistic expressions are useful for communicating concepts and knowledge with human beings, whereas MFs are useful for processing numeric input data. When a fuzzy set is assigned to a linguistic variable, it imposes an elastic constraint, called a possibility distribution, on the possible values of the variable.

Fuzzy systems are implemented by a set of rules, called fuzzy rules, which may be defined on the set. A nonfuzzy system has a very discrete way of dealing with rules: either a rule fires fully or it does not fire at all, depending on the truth of the expression in the condition specified. In the case of fuzzy rules, however, since the rule is true or false only to a degree, the rule fires to this degree of trueness or falseness. The outputs of all the rules are aggregated to get the system's final output.
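The ideas of membership degree and graded rule firing can be made concrete with a short Python sketch; the triangular membership functions and the heights used here are illustrative assumptions, not taken from the text.

```python
def triangular(a, b, c):
    """Return a triangular membership function with feet at a and c, peak at b."""
    def mf(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mf

# Illustrative linguistic terms for the variable "height" (in cm)
medium = triangular(150, 170, 190)
tall = triangular(170, 190, 210)

for height in (160, 175, 185):
    mu_med, mu_tall = medium(height), tall(height)
    # A fuzzy rule "IF height is tall THEN ..." fires to degree mu_tall,
    # rather than firing fully or not at all as a crisp rule would.
    print(height, "medium:", round(mu_med, 2), "tall:", round(mu_tall, 2))
```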


FIGURE 21.7 Architecture of fuzzy inference system.

The general procedure of working with fuzzy inference systems consists of the following:

1. Crisp inputs are modeled.
2. Membership functions are applied to obtain the fuzzified inputs.
3. Rules are applied over these inputs to generate the fuzzy outputs.
4. The various outputs are aggregated.
5. The aggregated output is defuzzified to produce the crisp output.

This procedure is presented in Figure 21.7.

21.2.1 Fuzzy Controller

In control systems, the inputs to the system are the error and the change in the error of the feedback loop, while the output is the control action. Fuzzy Controllers (FCs) are used where an exact mathematical formulation of a problem is not possible or is very difficult. These difficulties are due to nonlinearities, the time-varying nature of the process, large unpredictable environmental disturbances, and so on. The general architecture of a fuzzy controller is depicted in Figure 21.8.

1. Measurements (inputs) are taken of all variables that represent relevant conditions of the controlled process.
2. These measurements are converted into appropriate fuzzy sets to express measurement uncertainties. This step is called fuzzification.
3. The fuzzified measurements are then used by the inference engine to evaluate the control rules stored in the fuzzy rule base. The result of this evaluation is a fuzzy set (or several fuzzy sets) defined on the universe of possible actions.
4. This output fuzzy set is then converted into a single (crisp) value (or a vector of values). This final step is called defuzzification. The defuzzified values represent the actions to be taken by the fuzzy controller.

FISs are universal approximators capable of performing nonlinear mappings between inputs and outputs.


FIGURE 21.8 Architecture of a fuzzy controller.

The interpretation of a certain rule and of the rule base depends on the FIS model. The Mamdani model is a nonadditive fuzzy model that aggregates the output of fuzzy rules using the maximum operator, while the TSK model is an additive fuzzy model that aggregates the output of rules using the addition operator. All these models can be derived from a fuzzy graph, and all are universal approximators.

21.2.1.1 Example of a Fuzzy Logic Controller

Consider the control of navigation of a robot in the presence of a number of moving objects. To keep the problem simple, consider only four moving objects, each of equal size and moving with the same speed (Misra and Pal 2017).

1. Input parameters: D, the distance from the robot to an object, with D in [0.1, …, 2.2] meters; and θ, the angle of motion of an object with respect to the most critical object, with θ in [−90, …, 0, …, 90] degrees. These inputs decide an output called deviation (δ).
2. Linguistic states: After identifying the relevant input and output variables of the controller and their range of values, we adopt the Mamdani approach to select some meaningful states, called "linguistic states," for each variable and express them by appropriate fuzzy sets. For the current example, we consider the following linguistic states for the three parameters. Distance is represented by four linguistic states: VN = very near; NR = near; VF = very far; and FR = far. Angles for direction (θ) and deviation (δ) are represented by five linguistic states: LT = left; AL = ahead left; AA = ahead; AR = ahead right; and RT = right. The fuzzy sets for the three parameters are given in Figure 21.9a.
3. Fuzzy rule base: Once the fuzzy sets of all parameters are worked out, the next step in FC design is to decide the fuzzy rule base of the FC, which defines 20 rules for all possible instances. These rules are simple and are of the form:
   Rule 1: If (distance is VN) and (angle is LT), then (deviation is AA).


FIGURE 21.9 (a) Fuzzification of inputs, (b) fuzzified output, and (c) aggregate fuzzified output.

4. Fuzzification of inputs: Let us consider that, at some instant, the object O3 is critical to the mobile robot, with distance D = 1.04 m and angle θ = 30° (Figure 21.9a). For this input, we have to decide the deviation δ of the robot as output. From the given fuzzy sets and input parameter values, we see that distance D = 1.04 m may be called NR (near) or FR (far), and input angle θ = 30° can be declared either AA (ahead) or AR (ahead right). Hence, the membership values corresponding to these inputs are determined as follows:
   x = 1.04 m: µNR(x) = 0.6571, µFR(x) = 0.3429
   y = 30°: µAA(y) = 0.3333, µAR(y) = 0.6667


5. Rule strength computation: There are many rules in the rule base and not all rules may be applicable. For the given x = 1.04 and θ = 30°, only the following four rules out of 20 rules are possible.

   α(R1) = min(µNR(x), µAA(y)) = min(0.6571, 0.3333) = 0.3333
   α(R2) = min(µNR(x), µAR(y)) = min(0.6571, 0.6667) = 0.6571
   α(R3) = min(µFR(x), µAA(y)) = min(0.3429, 0.3333) = 0.3333
   α(R4) = min(µFR(x), µAR(y)) = min(0.3429, 0.6667) = 0.3429

6. Fuzzy output: The next step is to determine the fuzzified outputs corresponding to all fired rules:
   R1: IF (s1 is A1) AND (s2 is B1), THEN (f is C1).
   R2: IF (s1 is A2) AND (s2 is B2), THEN (f is C2).
   Suppose s1* and s2* are the inputs for the fuzzy variables s1 and s2, and µA1, µA2, µB1, µB2, µC1, and µC2 are the membership values for the different fuzzy sets (Figure 21.9b). We take the min of the membership function values for each rule. The output membership function is obtained by aggregating the membership functions of the results of each rule. The fuzzy output needs to be defuzzified, and its crisp value has to be determined for the output to make a decision. From the combined fuzzified output for all four fired rules (Figure 21.9c), we get the crisp value using the center-of-sums method as

   v = (12.5 × 71 + 25 × 45 + 25.56 × 0 + 25.56 × 0) / (12.5 + 39.79 + 25 + 25.56) = 19.59

7. Conclusion: The robot should deviate by 19.58089° toward the right with respect to its direction of motion in order to avoid collision with the obstacle O3.

Both neural networks and fuzzy logic can be used to approximate an unknown control function. Neural networks achieve a solution using a learning process, while FISs apply a vague interpolation technique. FISs are appropriate for modeling nonlinear systems whose mathematical models are not available. Unlike neural networks and other numerical models, fuzzy models operate at the level of information granules, that is, fuzzy sets.
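Returning to the robot example, the rule-strength computation of step 5 can be reproduced directly from the membership values given above; the center-of-sums step is shown only generically, with placeholder areas and centroids, since the actual values come from the clipped output fuzzy sets of Figure 21.9c.

```python
# Membership values from the fuzzification step above (as given in the text)
mu = {"NR": 0.6571, "FR": 0.3429, "AA": 0.3333, "AR": 0.6667}

# Firing strength of each fired rule: min over its antecedent memberships
rules = [("NR", "AA"), ("NR", "AR"), ("FR", "AA"), ("FR", "AR")]
alphas = [min(mu[dist], mu[angle]) for dist, angle in rules]
print(alphas)  # [0.3333, 0.6571, 0.3333, 0.3429]

def center_of_sums(areas, centroids):
    """Crisp output = sum(area_i * centroid_i) / sum(area_i)."""
    return sum(a * c for a, c in zip(areas, centroids)) / sum(areas)

# Purely illustrative areas/centroids (NOT the book's values, which come
# from the clipped output fuzzy sets in Figure 21.9c):
print(round(center_of_sums([2.0, 1.0], [10.0, 30.0]), 2))  # 16.67
```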

21.3 Evolutionary Algorithms

Evolutionary algorithms (EAs) are inspired by the success of Darwin's theory of natural selection, summarized as survival of the fittest. Natural evolution results from the fusion of male and female chromosomes to generate one or more offspring that have a blend of characteristics from both the male and the female counterpart.


The offspring may be weaker or fitter than the participating parents and will survive to the degree of their fitness to the surrounding conditions and environment. This process of evolution leads to improvement from one generation to the next. EAs work in a similar way. The process is initiated with the generation of a random set of solutions to a given problem, resulting in a population of such solutions. The individual solutions constituting the population are made to participate in an evolutionary process: from these solutions, a few individuals with high fitness are chosen based on a predetermined method termed selection. These pairs of individuals are then made to generate offspring, guided by a process termed "crossover." The system then randomly shortlists some of the newly generated solutions and adds new characteristics to them through another predefined process termed mutation; further similar operations are performed on the shortlisted solutions. The fitness of any one of these mutated solutions is adjudged based on another function, termed the fitness function. As the process cycles through newer and newer generations, the quality of the selected solutions continues to improve. This improvement is very rapid over the first few generations but slows down in later generations.

The idea of mimicking evolution as an optimization tool for engineering problems appeared in the late 1950s and early 1960s. The concept was to evolve a population of candidate solutions to solve a problem, using operators inspired by natural selection. In the 1960s and early 1970s, I. Rechenberg introduced "Evolution Strategies" (ES) and applied them to airfoil design. Genetic algorithms (GAs) were invented by J. Holland in the late 1960s and developed by Holland and his students (D. Goldberg among many others) at the University of Michigan to study within the computer the phenomenon of adaptation as it occurs in nature. Holland's book Adaptation in Natural and Artificial Systems presented the genetic algorithm as an abstraction of biological evolution, which was a major innovation owing to its use of the biological concepts of population, selection, crossover, and mutation. The theoretical foundation of GAs was built on the notions of "schemas" and "building blocks." In the last decade there has been widespread interaction among researchers studying evolutionary computation methods, and the GA, Evolution Strategies, Evolutionary Programming, and other evolutionary approaches were finally unified in the late 1990s under the umbrella named Evolutionary Algorithms (EA).

21.3.1 Fundamentals of Evolutionary Algorithms

Evolutionary Algorithms (EAs) are decision-making algorithms that mimic the natural principle of "survival of the fittest" introduced by Charles Darwin's famous book Origin of Species in 1859. Broadly speaking, they operate simply through the iterated mapping of one population of solutions to another population of solutions. This IT-based approach contrasts with existing conventional deterministic search techniques such as the simplex method, the conjugate gradient method, and others, which proceed from one given sub-optimal solution to another until an optimum solution is reached. EAs are not deterministic, so for identical problems and identical starting conditions, the evolution of the solution will not follow the same path on repeated simulations. It is for this reason that EAs fall into the category of stochastic (randomized) optimization methods.
Some other stochastic methods that are used are the Monte Carlo approach (MC), the directed random walk, and simulated annealing (SA).


However, the process of evolution in EAs is of course not completely random (in particular during the selection procedure), because in that case the performance of the algorithm would be no better than simple guessing, and at worst would be equivalent to complete enumeration of the parameter search space. Evolutionary algorithms work by exploiting population statistics to a greater or lesser extent, so that when newer individual solutions or offspring are generated from parents, some will have inferior genetic characteristics while others will have superior genetic characteristics. The general working Darwinian principle of the iterated mapping then reduces to generating an offspring population, removing a certain number of "below average" evaluated individuals, and obtaining the subsequent population. This can be summarized as the repeated application of two EA operators on the population:

1. Variation operator (the generation of offspring)
2. Selection operator (the survival of the fittest)

Some variant approaches to EAs in the literature differ only in the operation of these two operators. The origin of evolutionary algorithms for parameter optimization seems to have appeared independently in two separate streams: genetic algorithms (GAs) and Evolution Strategies (ESs). Some of the advantages of EAs are that they require no derivatives (or gradients) of the objective function, have the capability of finding globally optimal solutions amongst many local optima, are easily run on parallel computers, and can be adapted to arbitrary solver codes without major modification. Another major advantage of EAs is that they can tackle multi-objective problems simultaneously (by considering a fitness vector and not the traditional weighted aggregation of several fitness criteria).

21.3.2 Genetic Algorithms

Genetic algorithms (GAs), a particular class of EAs, were introduced by J. Holland, who explained the adaptive procedure of natural systems and laid down the two main principles of GAs: the ability of a simple bit-string representation to encode complicated structures, and the power of simple transformations to improve such structures. The original GA technique revolved around a single binary string (in base 2) encoding the DNA of chromosomes, which is the genetic material that each individual carries. The binary-coded GA's variation operator comprises two parts, crossover and mutation. Crossover interchanges genetic portions of parental chromosomes, while mutation involves the random switching of DNA letters in the chromosome. The selection operator has taken many forms, the most basic being the stochastic/deterministic fitness-proportionate (or roulette wheel) method.

21.3.2.1 Benefits of GA

One of the main advantages of GAs is robustness: they are computationally simple and powerful in their search for improvement, and are not limited by restrictive assumptions about the search space (continuity, existence of derivatives, uni-modality). Furthermore, they accommodate discontinuous environments well. GAs are search procedures that use semi-random search but allow a wider exploration of the search space compared to classical optimization methods, which are not so robust but work nicely in a narrow search domain.


The benefits of GA are:

• GAs are indifferent to problem specifics: an example in airfoil shape optimization is the value of the drag of an airfoil, which represents the so-called fitness function.
• GAs use codings of the decision variables, adapting artificial DNA chromosomes or individuals rather than adapting the parameters themselves. In practice, the designer will encode the candidate solutions using binary-coded GAs with a finite-length string genotype.
• GAs process populations via evolving generations, compared to point-by-point conventional methods, which use local information and can frequently be trapped in a local minimum.
• GAs use randomized operators instead of strictly deterministic gradient information.

21.3.2.2 Description of a Simple GA

For the optimization problem P dealing with the minimization of a fitness function f(x), a simple binary-coded GA can be run in the computer according to the following step-by-step procedure:

Step 1. Generate randomly a population of N individuals.
Step 2. Evaluate the fitness function of each individual phenotype.
Step 3. Select a pair of parents with a probability of selection depending on the value of the fitness function. One individual can be selected several times.
Step 4. Cross over the selected pair at a randomly selected cutting point with probability Pc to form new children.
Step 5. Mutate the two offspring by flipping a bit with probability Pm.
Step 6. Repeat steps 3, 4, and 5 until a new population has been generated.
Step 7. Go to step 2 until convergence.

After several generations, one or several highly fit individuals in the population represent the solution of the problem P. The main parameters to adjust for convergence are the size N of the population, the length L of the bit string, and the probabilities Pc and Pm of crossover and mutation, respectively.

21.3.2.3 General Presentation of GA Using Binary Coding

GAs are different from the conventional search procedures encountered in engineering optimization. To understand the mechanism of GAs, consider a minimization problem with a fitness function index J = f(x), where the design parameter is x. The first step of the optimization process is to encode x as a finite-length string. The length of the binary string is chosen according to the required accuracy. For a binary string of length l = 8 bits, the lower bound xmin is mapped to 00000000, the upper bound xmax is mapped to 11111111, with a linear mapping in between. Then for any given string the corresponding value x can be calculated according to:

x = xmin + b / (2^l − 1) · (xmax − xmin)

where b is the base-10 integer value encoded by the binary string.
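As a quick check of this decoding, the following sketch maps an l-bit string linearly onto [xmin, xmax]; the bounds used in the example calls are illustrative.

```python
def decode(bits, x_min, x_max):
    """Map a binary string of length l linearly onto [x_min, x_max]:
    x = x_min + b / (2**l - 1) * (x_max - x_min), where b is the
    base-10 value of the string."""
    l = len(bits)
    b = int(bits, 2)
    return x_min + b / (2 ** l - 1) * (x_max - x_min)

print(decode("00000000", -1.0, 1.0))  # -1.0 (lower bound)
print(decode("11111111", -1.0, 1.0))  #  1.0 (upper bound)
print(decode("10000000", -1.0, 1.0))  #  0.0039... (just above the midpoint)
```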


With this coding, the initial population is constituted of N individuals, each of which is a potential solution to the optimization problem. We must now define a set of GA operators that use the initial population and then create a new population at every generation:

1. Selection consists of choosing the solutions that are going to form a new generation. The main idea is that selection should depend on the value of the fitness function: the higher the fitness, the higher the probability of the individual being chosen (akin to the concept of survival of the fittest). But this selection remains probabilistic, which means it is not a deterministic choice: even solutions with a comparatively low fitness may be chosen, and they may prove to be very good in the course of events (e.g., if the optimization is trapped in a local minimum). Two well-known selection techniques are Roulette Wheel and Tournament.
2. Reproduction is a process by which a string is copied into the following generation. It may be copied with no change, but it may also undergo a mutation, according to a fixed mutation probability Pm. However, filling up the next generation is achieved through the operator called crossover:

A 001 / 01110 → B111 / 10010

B′ 11101110

   where "/" denotes the cutting site. First, two strings are randomly selected and sent to the mating pool. Second, a position along the two strings is selected according to a uniform random law. Finally, based on the crossover probability Pc, the paired strings exchange all genetic characters following the cross site. Clearly, the crossover randomly exchanges structured information between parents A and B to produce two offspring A′ and B′, which are expected to combine the best characters of their parents.
3. Mutation is a random alteration of a binary (0–1) bit at a string position and is based on a mutation probability Pm. In the present case, a mutation means flipping a bit 0 into 1 and vice versa. The mutation operator enhances population diversity and enables the optimization to get out of local minima of the search space.
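The selection, crossover, and mutation operators described above can be sketched compactly as follows; the population size, probabilities Pc and Pm, and the fitness function are illustrative assumptions, and tournament selection is used here in place of the roulette wheel.

```python
import random

random.seed(1)
L, N, PC, PM = 8, 6, 0.9, 0.05          # string length, population size, Pc, Pm

def fitness(bits):                       # illustrative: minimize f(x) = x^2 on [-1, 1]
    x = -1.0 + int(bits, 2) / (2 ** L - 1) * 2.0
    return -x * x                        # higher fitness = smaller x^2

def tournament(pop):                     # pick the fitter of two random individuals
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):                     # one-point crossover with probability Pc
    if random.random() < PC:
        cut = random.randint(1, L - 1)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a, b

def mutate(bits):                        # flip each bit with probability Pm
    return "".join(c if random.random() >= PM else "10"[int(c)] for c in bits)

pop = ["".join(random.choice("01") for _ in range(L)) for _ in range(N)]
for gen in range(30):                    # iterate generations (fixed budget here)
    nxt = []
    while len(nxt) < N:
        c1, c2 = crossover(tournament(pop), tournament(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:N]

print(max(pop, key=fitness), max(fitness(s) for s in pop))
```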

21.4 Rough Sets

The purpose of rough sets is to discover knowledge in the form of business rules from imprecise and uncertain data sources. Rough set theory is based on the notion of indiscernibility and the inability to distinguish between objects, and provides an approximation of sets or concepts by means of binary relations, typically constructed from empirical data. As an approach to handling imperfect data, rough set analysis complements other more traditional theories such as probability theory, evidence theory, and fuzzy set theory.


The intuition behind the rough set approach is the fact that in real life, when dealing with sets, we often have no means of precisely distinguishing individual set elements from each other, due to limited resolution (lack of complete and detailed knowledge) and the uncertainty associated with their measurable characteristics.

The rough set philosophy is founded on the assumption that we associate some information (data and knowledge) with every object of the universe of discourse. Objects that are characterized by the same information are indiscernible in view of the available information about them. The indiscernibility relation generated in this way is the mathematical basis for rough set theory. Any set of all indiscernible objects is called an elementary set, and forms a basic granule of knowledge about the universe. Any set of objects that is a union of some elementary sets is referred to as a crisp or precise set; otherwise the set is rough (imprecise or vague). Consequently, each rough set has boundary-line cases (i.e., objects) which cannot be classified with complete certainty as members of the set.

The general procedure for conducting rough set analysis consists of the following:

1. Data preprocessing
2. Data partitioning
3. Discretization
4. Reduct generation
5. Rule generation and rule filtering
6. Applying the discretization cuts to the test dataset
7. Scoring the test dataset on the generated rule set (and measuring the prediction accuracy)
8. Deploying the rules in a production system

This procedure is presented in Figure 21.10.

FIGURE 21.10 Architecture of a rough sets system.
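The notions of elementary sets (indiscernibility classes) and of lower and upper approximations can be illustrated with a small sketch; the toy decision table is an assumption made purely for illustration.

```python
# Toy decision table (illustrative): condition attributes -> decision
table = [
    ({"temp": "high", "cough": "yes"}, "flu"),
    ({"temp": "high", "cough": "yes"}, "flu"),
    ({"temp": "high", "cough": "no"},  "flu"),
    ({"temp": "low",  "cough": "yes"}, "cold"),
    ({"temp": "high", "cough": "no"},  "cold"),   # conflicts with row 3
]

def elementary_sets(rows, attrs):
    """Group objects that are indiscernible w.r.t. the chosen attributes."""
    classes = {}
    for i, (cond, _) in enumerate(rows):
        key = tuple(cond[a] for a in attrs)
        classes.setdefault(key, set()).add(i)
    return list(classes.values())

def approximations(rows, attrs, concept):
    """Lower/upper approximation of the set of objects with decision == concept."""
    target = {i for i, (_, d) in enumerate(rows) if d == concept}
    lower, upper = set(), set()
    for granule in elementary_sets(rows, attrs):
        if granule <= target:       # entirely inside the concept: certain members
            lower |= granule
        if granule & target:        # overlaps the concept: possible members
            upper |= granule
    return lower, upper

low, up = approximations(table, ["temp", "cough"], "flu")
print("lower:", low)                 # {0, 1}: certainly flu
print("upper:", up)                  # {0, 1, 2, 4}: possibly flu
print("boundary:", up - low)         # {2, 4}: cannot be classified with certainty
```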


Despite their modeling power and widespread use in complex prediction modeling tasks, artificial neural networks have been criticized for their lack of explanatory power. In other words, it is difficult to trace and explain how the reasoning is derived from the input variables, due to the complexity and nonlinear nature of the data transformation conducted within the algorithm. In contrast to neural networks, decision trees and rough sets present their models in the form of business rules, which are intuitively explainable. They connect the input variables (factors) to output variables (conclusions) using an IF-THEN structure.

21.5 Summary

This chapter proposed soft computing as the digital transformation of primarily the analyticity aspects of enterprise architecture (see Chapter 13). The primary considerations of traditional hard computing are precision, certainty, and rigor. In contrast, the principal notion in soft computing is that precision and certainty carry a cost, and that computation, reasoning, and decision making should exploit (wherever possible) the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth for obtaining low-cost solutions. Soft computing is a consortium of methodologies that works synergistically and provides, in one form or another, flexible information processing capability for handling real-life ambiguous situations. Its aim is to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions. There is no universally best soft computing method; choosing particular soft computing tool(s), or some combination with traditional methods, is entirely dependent on the particular application, and it requires human interaction to decide on the suitability of a blended approach.

The area of soft computing involves three main aspects: fuzzy systems, which are ideally suited for problem representations and user interactions; neural networks, for building models; and evolutionary programming, for finding a solution or making an inference. The chapter ended with an introduction to rough sets, which discover knowledge in the form of business rules from imprecise and uncertain data sources.

22 Interactive Computing

As stated right at the beginning of Chapter 14, to withstand the disruptive digital storms of the future, enterprise architecture affects digital transformation, i.e., exponential change (amplification or attenuation) in any performance measure of the usability aspects, primarily through interactive computing. This chapter explains interactive computing. Customer conversation or dialog systems are a technology aimed at sustaining conversations with users that could be considered natural and human-like. Proactiveness is necessary for computers to stop being considered a tool and become real conversational partners. Proactive systems have the capability of engaging in a conversation with the user even when the user has not explicitly requested the system's intervention: this is a key aspect in the development of ubiquitous computing architectures in which the system is embedded in the user's environment, so that the user is not aware that he is interacting with a computer, but rather perceives that he is interacting with the environment. To achieve this goal, it is necessary to provide the systems with problem-solving capabilities and context-awareness.

The chapter starts with an introduction to the approach of Human Interaction Management (HIM), which was developed to deal with human behavior in the organization, drawing ideas not only from process theory but also from biology, psychology, social systems theory, and learning theory. An interaction or interactive pattern is a sequence of actions, references, and reactions where each reference or reaction has a certain, ex-ante intended and ex-post recognizable, interrelation with preceding event(s) in terms of timing and content. Based on this comprehensive understanding of interactions, an effective customer interaction system is envisaged to possess subsystems for automatic speech recognition, spoken language understanding, dialog management, natural language generation, and text-to-speech synthesis. This chapter explains all these components of customer interaction systems.

22.1 Business Processes and Human Interactions

It has been observed that jobs that involve participating in interactions, rather than extracting raw materials or making finished goods, are becoming pervasive in developed economies. Raising the productivity of the workers who perform complex, non-routine tasks can provide a company competitive advantages that are much harder for competitors to copy than the advantages created by simply reengineering, automating, or outsourcing clerical jobs (Han et al. 2005, Harrison-Broninski 2005).

Businesses that want to stay competitive must put in place efficient systems for managing their processes. Most often these are computer systems for Business Process Management. However, the existing process languages and tools are not applicable to complex processes involving human interaction as a core element, as they do not capture the human element crucial for these processes.


The traditional process modeling approaches of BPM, such as BPMN or BPEL, are designed specifically for the description of regulated, routine, largely automatable activity with only occasional human involvement. However, when it comes to processes driven by humans, such techniques are no longer applicable. The approach of Human Interaction Management (HIM) was developed to deal with human behavior in the organization, drawing ideas not only from process theory but also from biology, psychology, social systems theory, and learning theory. HIM as an approach facilitates the five main stages of human work, namely, research, evaluate, analyze, constrain, and task.

22.1.1 Human Interaction Management

Per HIM, there are two kinds of business processes:

1. Mechanistic business processes are for the most part implemented by machines, and human involvement is limited to key decision and data entry points. They are routinized and often semi- or fully automated. Examples of mechanistic business processes include logistics, invoicing, purchase order approval, and stock level maintenance.
2. Human-driven processes differ from these in that they are fundamentally collaborative, dynamic, and innovative. These processes depend on interaction and are dynamically shaped by the participants. Examples of human-driven processes are product design, research, sales, marketing, and company growth/merger/divestment processes.

Human-driven processes are characterized by the following observations:

1. Intentionality: The intention of each process participant is important. These persons may have different goals, and their responsibilities may prevent certain actions from being taken.
2. Data: Data is typically maintained privately, and it may be expressed informally throughout the course of the process. There may also be some informal metadata attached to the shared information.
3. Mental work: Human-driven processes include activities that are investigative and analytic, and that may not move the process on in a visible way.
4. People are not automata: People do not operate like a conventional program that follows a procedural workflow. Activities can be controlled by pre- and post-conditions, but any number of them can be true at once, i.e., the persons participating in a process may freely choose the tasks to execute as long as the preconditions are valid.
5. Process dynamism: A large part of the process is about defining what will happen next and thus agreeing on the rest of the process. For process management this implies that how agreement is gained, described, and shared needs to be determined.

The five main features of HIM are:

1. Connection visibility: A human process creates meaningful connections between participants with varying skills, and each participant in the process must know who the other participants are and what they do in order to work with them efficiently. A representation of the process participants is thus needed, showing the roles they play and the private information resources they have access to.


2. Structured messaging: Messaging is an essential part of human-computer interaction, but it typically results in both efficiency gains and losses; consider, for example, the large amount of email received by office workers and the difficulties related to determining the relevance and priorities of the messages. The interactions between process participants must thus be structured and under process control.
3. Mental work: A large part of human work does not have a concrete output that could be measured by existing management techniques or computer systems. However, the amount of time and mental effort invested in this non-measurable work, e.g., researching, comparing, making decisions, is a critical part of the job of an interaction worker and should thus be supported by the systems they work with.
4. Supportive rather than descriptive activity management: Humans do not sequence their activities in the way computer programs do ("after doing x, I either do y or z, depending on the outcome of x"). People work differently on different days, depending on their co-workers, the resources to which they have access, and their own mood on a specific day. However, activities do have preconditions (a state in which the activity can be performed) and postconditions (a state in which the activity is seen as completed). People should be allowed to carry out any activity for which the precondition is true at any time, disregarding what activity they have completed previously. Similarly, any activity that does not fulfill its postconditions should be regarded as completely undone, so as to prevent it from derailing the entire process.
5. Processes change processes: Human activities are often concerned with solving problems or making something happen. Before such an activity is started, some planning is needed on how to carry out the activity: which methodology to use, which tools are required, which people should be consulted, and so on. Process definition is in fact a part of the process itself. Furthermore, this definition does not happen only the first time the process is carried out but continually throughout the life of the process. In other words, human-driven processes are continuously changed by the actions and interactions of the process. They effectively separate management control (day-to-day facilitation of human activity, ongoing resourcing, monitoring, and process re-design) from executive control (authority over the process, determination of its primary roles, interactions, and deliverables). Process evolution can be implemented under management control with the agreement of the related process participants, and these agreements are documented and shared within the organization.

22.1.2 Human Interaction Management System

The concept of a Human Interaction Management System (HIMS) as a process modeling and enactment system supports the five main elements of HIM in the following ways:

• Connection visibility: Role and User objects, both instances and types, each with its own properties and responsibilities.
• Structured messaging: Interaction objects in which there are multiple asynchronous channels, each for a different purpose.
• Support for mental work: Entity objects that can be created, versioned, and shared in a structured way.


• Supportive rather than prescriptive activity management: State objects that can both enable and validate Activity objects, along with the Roles that contain them.
• Processes change processes: The ability to manipulate not only objects but also user interfaces and integration mechanisms via the process that contains them.

The HIMS is used to manage the work and separate it into "levels of control" to support the organizational strategy. A process requiring human knowledge, judgment, and experience is divided into Roles and assigned by the HIMS to an individual or group that is responsible for a certain part of the process. A certain role may be responsible for several activities and tasks, and the roles have interactions with each other. A business process is modeled using Role Activity Diagrams (RADs); the activities and other roles related to a certain role are depicted in the RAD.

22.1.3 Comparing HIM and BPM

There is a marked difference between the approaches of BPM and HIM. HIM as a concept offers an alternative perspective on business processes, but it does not represent a contrarian view to BPM; rather, it represents a complementary view of process management. While both BPM and HIM have a way to model a process and the rules it should follow, BPM assumes that a process can be fully planned in advance and controlled during execution, whereas HIM admits that unexpected events can occur and the process may not go exactly as planned (though it will stay within the guidelines). Some processes follow the rules and conditions set for them without any variation at all, whereas in others it is impossible to avoid the process straying away from the set rules and conditions.

BPM recognizes human participation in a process, but gives humans a role more akin to that of a machine; it assumes that humans have perfect knowledge and make rational decisions. BPM also assumes that humans are only a part of the automated process, taking on the tasks that are too complex for machines to perform, yet it does not recognize the fact that there may be processes which are primarily driven by humans.

Ultimately, both BPM and HIM aim to improve business processes. In BPM, the goal of "improving the business processes" implies optimizing the activities conducted within the process to increase customer value, whereas in HIM "improving the business processes" is regarded as facilitating and supporting the human interactions within the process to increase customer value. In other words, BPM focuses on improving the outcomes of the actions, whereas HIM focuses on improving the way the actions are carried out. HIM does not address how a process is or should be carried out, but rather what types of interaction and information exchange are essential for the process and which actors participate in them. In other words, HIM does not focus on optimizing business processes, i.e., completing activities in an optimal way, but rather on optimizing the environment for enabling the execution of these processes by facilitating the exchange of information.

22.1.4 Humanedj Human Interaction Management Systems

Internet tools have made basic communication quicker, but they have not made collaboration more efficient; human work is the only competitive differentiator left, and such work depends fundamentally on collaboration. This is why people need to collaborate better; they need to adopt a simple, general approach that meets both individual and organizational needs.


Humanedj is a software application founded directly on the principles of Human Interaction Management (HIM). It helps the user carry out all the work activities in which they are engaged, facilitating complex tasks and interactions. Furthermore, Humanedj aims to help users structure their activities, interactions, and resources so that they can manage them more efficiently. Humanedj is a complete replacement for the user's standard desktop. It provides access to all computing resources that the user may need (programs, files, network access, etc.) in a structured way. Humanedj is a personal desktop tool with the objective of helping collaboration among humans through support for business rules, multi-agent system functionality, speech acts, XML schemas, ontology, Web services, scripting languages, Web browsing, and external document access. It is available free of charge for all standard computing platforms and devices. It runs on the client machine(s) of each process participant and does not require any server installations (Humanedj web page). To use Humanedj, the user has to first create a Book.

1. The Book is the view onto one's entire working life. The user can add as many Books as he wants. The Book files are encrypted and may be opened only with a password defined by the user. A Book consists of Stories and Identities.
2. Stories describe collaborations that the user is involved in, and an Identity includes the user's personal name and email account. The Identity may have several messaging accounts attached (one for chatting, one for mobile phone, etc.), and the user may have several Identities.
3. The main objects inside a Story are the Roles that represent participants in the work process; in a development project, for example, the roles could be Project Manager, Designer, Quality Assurance, and Business Analyst. Other types of collaborative work process will include different Roles.
4. In addition to Roles, a Story may contain Users and Interactions. Users are the people who will play the Roles. The ability to define User types allows control over the sort of people that are assigned to particular Roles. Interactions describe how Role instances send information to one another. This information may be, for example, a document, structured data, or a Web link.

An individual Humanedj user usually goes through three stages:

1. Humanedj for messaging is to use the software to simplify and structure one's communications with colleagues. Humanedj offers an email account with full capability for sending and receiving messages. However, the beta version of Humanedj that was explored in this study does not have all these email capabilities; it is only able to pick up messages directly from an email server.
2. Humanedj for work. At this stage the user uses the software to coordinate and automate all his working tasks: document creation, Web searching, use of other enterprise systems, data maintenance, calculations, and so on, even tasks that are not themselves conducted using a computer.
3. Humanedj for Human Interaction Management is to use the software to organize and manage one's work (and that of one's colleagues) via standard principles and patterns. The developers of Humanedj emphasize that the software on its own is not enough to fully transform one's working life.


As an example, having a computer-aided design tool will help to plan an office layout, but it does not qualify anybody to design an aircraft. However, by gradually adopting the use of Humanedj, one could massively reduce the daily effort expended on work, resulting in greater personal efficiency. But this can happen only when the user adopts the principles of HIM.

22.2 Interaction Architectures

The interaction-oriented software architectures decompose the system into three major partitions: the data module, the control module, and the view presentation module. Each module has its own responsibilities:

1. The data module provides the data abstraction and all core business logic for data processing.
2. The view presentation module is responsible for visual or audio data output presentation and may also provide a user input interface when necessary.
3. The control module determines the flow of control involving view selections, communications between modules, job dispatching, and certain data initialization and system configuration actions.

The key point of interaction architecture is its separation of user interactions from data abstraction and business data processing. Since there may be many view presentations in different formats, multiple views may be supported for the same data set. Even for a specific view presentation, the interfaces or views may need to change often, so loose coupling between data abstractions and their presentations is helpful and is supported by this style. The loose coupling connections can be implemented in a variety of ways, such as explicit method invocation or implicit registration/notification method invocation. The control module plays a central role that mediates between the data module and the view presentation modules. All three modules may be fully connected.

22.2.1 Presentation Abstraction Controller (PAC)

PAC was developed to support the application requirement of multiple agents in addition to interactive requirements. In PAC, the system is decomposed into a hierarchy of cooperating agents. Each agent has three components (Presentation, Abstraction, and Control). The Control component in each agent is in charge of communications with other agents. The top-level agent provides core data and business logic. The bottom-level agents provide detailed data and presentations. A middle-level agent may coordinate low-level agents. Within each agent, there are no direct connections between the Abstraction component and the Presentation component (Figure 22.1).

The Presentation is the visual representation of a particular abstraction within the application; it is responsible for defining how the user interacts with the system. The Abstraction is the business domain functionality within the application. The Control is a component which maintains consistency between the abstractions within the system and their presentation to the user, in addition to communicating with other Controls within the system.


FIGURE 22.1 Presentation Abstraction Controller (PAC) architecture.

PAC is suitable for any distributed system where all the agents are distantly distributed and each agent has its own functionality with data and an interactive interface. In such a system, all agents need to communicate with other agents in a well-structured manner. PAC is also used in applications with rich GUI components, where each component keeps its own current data and interactive interface and needs to communicate with other components. The PAC architecture is the right choice for distributed applications in wireless mobile communication systems, since each device needs to have its own data and its own interactive interface, and also needs to communicate with other devices. A typical PAC software design is a networked traffic control management system that requires many mobile agents to monitor traffic and gather data, display the analyzed data in graphics, coordinate the data from agents, and perform many other management tasks.

22.2.2 Model View Controller (MVC)

The MVC pattern was first described in 1979 by Trygve Reenskaug, then working on Smalltalk at Xerox research labs. The original implementation is described in depth in the influential paper "Applications Programming in Smalltalk-80(TM): How to use Model-View-Controller."

The Model module provides all core functional services and encapsulates all data details. The Model module does not depend on other modules, and it does not know which views are registered with or attached to it. The View module is responsible for displaying the data provided by the Model module and updating the interfaces whenever data changes are notified.


FIGURE 22.2 Model View Controller (MVC) architecture.
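A minimal sketch of the MVC triad (Python, with illustrative class and method names not tied to any particular framework) shows a Model that does not know its Views, a View that only renders, and the Controller, whose responsibilities are detailed next, mediating between them.

```python
class Model:
    """Core data and business logic; knows nothing about views."""
    def __init__(self):
        self._items, self._observers = [], []

    def register(self, view):              # loose coupling via registration/notification
        self._observers.append(view)

    def add_item(self, item):
        self._items.append(item)
        for view in self._observers:
            view.refresh(self._items)      # notify registered views of the data change

class View:
    """Presentation only: renders whatever data it is given."""
    def refresh(self, items):
        print("Current items:", ", ".join(items))

class Controller:
    """Handles user input and mediates between model and view."""
    def __init__(self, model, view):
        self.model = model
        model.register(view)

    def on_user_input(self, text):
        self.model.add_item(text.strip())

# Wiring the triad together (illustrative usage)
controller = Controller(Model(), View())
controller.on_user_input("first entry")
controller.on_user_input("second entry")
```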

The Controller manages the user input requests, controls the sequence of user interactions, selects desired views for output displays, and manages all initialization, instantiation, and registration of other modules in the MVC system. Figure 22.2 shows the MVC architecture.

JSP Model 2 (or MVC 2) is Sun's attempt to wrap Java Server Pages (JSP) within the MVC paradigm. It is not so much a product offering (or even an API) as it is a set of guidelines that go along with Sun's packaging of Java-based components and services under the umbrella of J2EE. Most web developers are familiar with the MVC architecture because it is widely adopted for web server site interactive application design such as online shopping, surveys, student registration, and many other interactive service systems. MVC architecture is specifically used in applications where user interfaces are prone to data changes. MVC also typically supports "look and feel" features in GUI systems.

22.2.3 Data Context Interaction (DCI)

While MVC separates the parts of a program that are responsible for representing the information in the system from the parts that are responsible for interaction with the user, DCI minimizes any gap that might exist between the programmer's mental model of her program and the program that is actually stored and executed in the computer. In particular, it concretizes how the system realizes system operations as networks of communicating objects. The goal of DCI is to separate the code that represents the system state from the code that represents system behavior. This separation is related to but different from MVC's split between data representation and user interaction.

Ordinary object-oriented programming lumps the what-the-system-is and what-the-system-does interfaces together. However, these two interfaces change at different rates and are often managed by different groups of people. DCI separates the architecture into a data part (the domain part, or what the system is) and an interaction, or feature, part (what the system does) (Figure 22.3). The interaction part becomes connected to the data part on an event-by-event basis by an object called the Context. The architecture can be viewed as Data and Interaction code dynamically connected together by a Context: hence the name Data, Context, and Interaction, or DCI. Object roles are collections of related responsibilities that accomplish their work through each other's responsibilities.


FIGURE 22.3 Data Context Interaction (DCI) architecture.

When a system event occurs, code in the environment (often in the MVC Controller object) finds a Context object that understands the object-role-to-object mapping for the use case that corresponds to the event. That allows each domain object to play an object role. The Controller passes control to the Context object, which "wires up" the object roles to the objects and then kicks off the execution by invoking the object role method that initiates the use case; this is called the trigger. In general, each object may play several object roles, and a given object role may be played by a combination of several objects together.

22.2.4 Micro-Service Architecture (MSA)

The shortening of product cycles, coupled with the personalization of customer preferences, has led to a surge in virtualization technology applications enabling rapid changes and continuous delivery of business applications. This has been possible only because of the concomitant transformation of the traditional monolithic architecture to the micro-service architecture (MSA), which can cater to an increasing number of, and changes to, services. The MSA is an architectural pattern that advocates the partitioning of a single application into a small set of services that provide the ultimate value to the user by coordinating and cooperating with each other. Each service runs in its own process, with lightweight communication mechanisms between services, usually via RESTful APIs based on the HTTP protocol. Each service is built around a specific business capability and can be deployed independently to production environments, deployment environments, and so on (Figure 22.4).

The core benefits of the MSA include: small and focused service implementations; independent processes; lightweight communication mechanisms; loosely coupled distributed data management; and independent deployment.


FIGURE 22.4 Micro-Service Architecture (MSA): (a) unbundling the user interface, (b) from a rich client to a classic Web application, and (c) micro-services architecture.
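A minimal sketch of one such micro-service, using only the Python standard library (the endpoint path and payload are illustrative), exposes a single business capability over a lightweight JSON-over-HTTP interface and runs in its own process, so it can be deployed and scaled independently.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderStatusService(BaseHTTPRequestHandler):
    """A single, narrowly focused service: report the status of an order."""
    def do_GET(self):
        if self.path.startswith("/orders/"):
            order_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"order": order_id, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)          # lightweight JSON-over-HTTP response
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each micro-service runs in its own process and owns its own data;
    # other services would call it via HTTP rather than share its database.
    HTTPServer(("localhost", 8080), OrderStatusService).serve_forever()
```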


22.3 Customer Interaction Systems

Traditional graphical interfaces might not be appropriate for all users and/or applications; for this reason, spoken dialog systems are becoming a strong alternative. Speech and natural language technologies allow users to communicate in a flexible and efficient way, while also enabling access to applications when traditional input and output interfaces cannot be used (e.g., in-car applications, access for disabled persons, etc.). Also, speech-based interfaces work seamlessly with small devices (e.g., smart phones, tablets, and PCs) and allow users to easily invoke local applications or access remote information (Griol et al. 2017).

Spoken dialog systems are computer programs that receive speech as input and generate synthesized speech as output, engaging the user in a dialog that aims to be similar to that between humans:

1. Automatic Speech Recognizer (ASR): The goal of speech recognition is to obtain the sequence of words uttered by a speaker. Once the speech recognizer has provided an output, the system must understand what the user said.
2. Spoken Language Understanding (SLU) module: The goal of spoken language understanding is to obtain the semantics from the recognized sentence. This process generally requires morphological, lexical, syntactical, semantic, discourse, and pragmatic knowledge.
3. Dialog Manager (DM) module: The dialog manager decides the next action of the system, interpreting the incoming semantic representation of the user input in the context of the dialog. In addition, it resolves ellipsis and anaphora, evaluates the relevance and completeness of user requests, identifies and recovers from recognition and understanding errors, retrieves information from data repositories, and decides about the system's response.
4. Natural Language Generation (NLG) module: Natural language generation is the process of obtaining sentences in natural language from the non-linguistic, internal representation of information handled by the dialog system.
5. Text-to-Speech (TTS) module: Text-to-speech transforms the generated sentences into synthesized speech.

Figure 22.5 shows the schematic of a spoken dialog system.

22.3.1 Spoken Language Recognition

Speech recognition is the process of obtaining the text string corresponding to an acoustic input. Since the output of the ASR is the starting point of the other modules in a spoken dialog system, it is important to try to detect and correct errors generated during the ASR process. The complexity of the recognition task arises from factors such as:

• Acoustic variability (each person pronounces sounds differently when speaking)
• Acoustic confusion (many words sound similar)
• The coarticulation problem (the characteristics of spoken sounds may vary depending on neighboring sounds)


FIGURE 22.5 Customer conversation systems.

• Out-of-vocabulary words and spontaneous speech (interjections, pauses, doubts, false starts, repetitions of words, self-corrections, etc.)
• Environmental conditions (noise, channel distortion, bandwidth limitations, etc.)

Consequently, ASR systems are classified according to the kind of users supported (user-independent or user-dependent systems), the style of speech supported (recognizers of isolated words, connected words, or continuous speech), the vocabulary size (small, medium, or large vocabulary), and so on. The field of automatic speech recognition has progressed from the recognition of isolated words in reduced vocabularies to continuous speech recognition with increasingly large vocabulary sets. These advances have made communication with dialog systems increasingly natural.

Among the variety of techniques used to develop ASR systems, the data-based approach is currently the most widely used. In this approach, the speech recognition problem can be stated as follows: given a sequence of acoustic data A, find the word sequence W uttered by the user, determined by

\[
\hat{W} = \arg\max_{W} P(W \mid A)
\]

Using Bayes' rule,

\[
P(W \mid A) = \frac{P(A \mid W)\,P(W)}{P(A)}
\]


where

P(A|W) is the probability of the acoustic sequence A when the word sequence W has been uttered (the acoustic model)
P(W) is the probability of the word sequence W (the language model)

The probabilities of the rules in these models are learned from the training data. Since P(A) does not depend on the word sequence W, maximizing P(W|A) is equivalent to

\[
\hat{W} = \arg\max_{W} P(A \mid W)\,P(W)
\]
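As a purely illustrative sketch (the hypotheses and probabilities below are toy numbers, not from the text), the decoder's search can be read as: among the candidate word sequences, pick the one whose combined acoustic-model and language-model score is highest.

```python
import math

# Toy decode over a tiny hypothesis list (illustrative numbers only).
# Each hypothesis W carries an acoustic-model score P(A|W) and a
# language-model score P(W); the decoder picks argmax P(A|W) * P(W),
# computed in log space for numerical stability.
hypotheses = {
    "recognize speech":   {"p_a_given_w": 0.020, "p_w": 0.0005},
    "wreck a nice beach": {"p_a_given_w": 0.025, "p_w": 0.00001},
}

def log_score(h):
    return math.log(h["p_a_given_w"]) + math.log(h["p_w"])

best = max(hypotheses, key=lambda w: log_score(hypotheses[w]))
print(best)  # "recognize speech": lower acoustic score, but a far likelier word sequence
```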

The acoustic model is created by taking audio recordings of speech and their transcriptions and compiling them into statistical representations of the sounds that make up the different words. The acoustic model has mostly been implemented by means of Hidden Markov Models (HMMs). The success of the HMM is based mainly on the use of machine learning algorithms to learn the parameters of the model, as well as on its ability to represent speech as a sequential phenomenon over time. Multiple variants have been studied, such as discrete, semicontinuous, and continuous models, as well as a variety of model topologies. Deep Neural Networks (DNNs) have since replaced HMM models; DNNs are now used extensively in industrial and academic research as well as in most commercially deployed ASR systems, and various studies have shown that DNNs outperform HMM models in terms of recognition accuracy. Deep learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. A key benefit of deep learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for big data analytics, where raw data is largely unlabeled and uncategorized.

Learning a language model requires transcriptions of sentences related to the application domain of the system. The language model is one of the essential components required to develop a recognizer of continuous speech. The most widely used language models are based on regular or context-free grammars and on N-grams. Grammars are usually suitable for small tasks, providing more precision through the restrictions they impose; however, they are not able to represent the great variability of natural speech. N-gram models capture the different concatenations among words more easily when a sufficient number of training samples is available.
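The following toy sketch illustrates the N-gram idea on an invented three-sentence "domain corpus" (the corpus, smoothing constants, and vocabulary size are assumptions for illustration only): P(W) is approximated as a product of bigram probabilities estimated from counts.

```python
from collections import Counter

# Illustrative bigram language-model sketch (toy corpus, not from the text):
# P(W) is approximated as the product of bigram probabilities P(w_i | w_{i-1})
# estimated from counts over domain transcriptions.
corpus = [
    "book a flight to boston",
    "book a hotel in boston",
    "cancel my flight",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words[:-1])
    bigrams.update(zip(words[:-1], words[1:]))

def p_bigram(prev, word, vocab_size=50, alpha=1.0):
    # Add-alpha smoothing so unseen word pairs still receive a small probability.
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

def p_sentence(sentence):
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, word in zip(words[:-1], words[1:]):
        p *= p_bigram(prev, word)
    return p

print(p_sentence("book a flight"))   # relatively likely in this toy domain
print(p_sentence("flight a book"))   # much less likely word order
```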

22.3.2 Spoken Language Understanding

Once the spoken dialog system has recognized what the user uttered, it is necessary to understand what was said. Natural language processing is a method of obtaining the semantics of a text string and generally involves:

1. Lexical and morphological: Lexical and morphological knowledge divides words into their constituents by distinguishing between lexemes and morphemes: lexemes are the parts of words that indicate their semantics, and morphemes are the different infixes and suffixes that provide the different word classes.
2. Syntactical: Syntactic analysis yields the hierarchical structure of the sentences. However, in spoken language, phrases are frequently affected by difficulties associated with the so-called disfluency phenomena: filled pauses, repetitions, syntactic incompleteness, and repairs.

3. Semantic: Semantic analysis extracts the meaning of a complex syntactic structure from the meaning of its constituent parts.
4. Discourse and pragmatical knowledge: In the pragmatic and discourse processing stage, the sentences are interpreted in the context of the whole dialog. The main complexity of this stage is the resolution of anaphora and of ambiguities derived from phenomena such as irony, sarcasm, or double entendre.

The process of understanding can be seen as a change in language representation, from natural language to a semantic language, such that the meaning of the message is not changed. As in the speech recognizer, the spoken language understanding module can work with several hypotheses (both for recognition and for understanding) and confidence measures. The major approaches to tackling the problem of understanding are:

• Rule-based approaches extract semantic information based on a syntactic-semantic analysis of the sentences, using grammars defined for the task, or by means of the detection of keywords with semantic meanings. In order to improve the robustness of the analysis, some analyzers combine syntactic and semantic aspects of the specific task. Other techniques are based on an analysis at two levels, in which grammars are used to carry out a detailed analysis of the sentence and extract relevant semantic information. In addition, there are systems that use rule-based analyzers learned automatically from a training corpus using natural language processing techniques.
• Statistical models learned from a data corpus: Statistical methods are based on the definition of linguistic units with semantic content and on obtaining models from labeled samples. This type of analysis uses a probabilistic model to identify concepts, case markers, and case values, to represent the relationship between case markers and their values, and to decode the user's utterances semantically. The model is generated during a training (learning) phase, in which its parameters capture the correspondences between text inputs and semantic representations. Once the model has been trained, it is used as a decoder to generate the best representation.

22.3.3 Dialog Management

The core logic of the conversational application is encapsulated within dialog management, which mainly consists of tasks such as:

• Updating the dialog context
• Providing a context for sentence interpretation
• Coordinating the invocation of other modules
• Deciding what information to convey to the user, and when to do it
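To make the keyword-based (rule-based) understanding approach and the dialog-management tasks above concrete, here is a deliberately simplified sketch; the intents, slots, and system actions are invented for illustration and are not a real implementation.

```python
import re

# Toy keyword/rule-based SLU plus a minimal dialog manager (illustrative only).
def understand(utterance):
    """Map a recognized sentence to a semantic frame (intent + slots)."""
    text = utterance.lower()
    if "flight" in text:
        city = re.search(r"to (\w+)", text)
        return {"intent": "book_flight",
                "destination": city.group(1) if city else None}
    if "cancel" in text:
        return {"intent": "cancel_booking"}
    return {"intent": "unknown"}

def dialog_manager(frame, context):
    """Update the dialog context and decide the next system action."""
    context.update({k: v for k, v in frame.items() if v is not None})
    if context.get("intent") == "book_flight" and not context.get("destination"):
        return "request_destination"      # missing slot: ask the user for it
    if context.get("intent") == "book_flight":
        return "confirm_booking"
    if context.get("intent") == "cancel_booking":
        return "confirm_cancellation"
    return "ask_rephrase"                 # recover from an understanding error

context = {}
print(dialog_manager(understand("I want a flight to Boston"), context))  # confirm_booking
print(dialog_manager(understand("blah blah"), {}))                       # ask_rephrase
```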

The design of an appropriate dialog management strategy is at the core of dialog system engineering. Users are diverse, which makes it difficult to foresee which form of system behavior will lead to quick and successful dialog completion, and speech recognition errors may introduce uncertainty about their intention.


The selection of a specific system action depends on multiple factors, such as the output of the speech recognizer (e.g., measures that define the reliability of the recognized information), the dialog interaction and previous dialog history (e.g., the number of repairs carried out so far), the application domain (e.g., guidelines for customer service), knowledge about the users, and the responses and status of external back-ends, devices, and data repositories.

Statistical approaches to dialog management present several important advantages with regard to traditional rule-based methodologies. Rather than maintaining a single hypothesis for the dialog state, they maintain a distribution over many hypotheses for the correct dialog state. In addition, statistical methodologies choose actions using an optimization process, in which a developer specifies high-level goals and the optimization works out the detailed dialog plan. The main trend in this area is an increased use of data for automatically improving the performance of the system. Statistical models can be trained with corpora of human-computer dialogs with the goal of explicitly modeling the variance in user behavior that can be difficult to address by means of hand-written rules. Additionally, if it is necessary to satisfy certain deterministic behaviors, it is possible to extend the strategy learned from the training corpus with handcrafted rules that include expert knowledge or specifications about the task.

The success of statistical approaches depends on the quality and coverage of the models and data used for training. Moreover, the training data must be correctly labeled for the learning process. The size of currently available annotated dialog corpora is usually too small to sufficiently explore the vast space of possible dialog states and strategies, and collecting a corpus with real users and annotating it requires considerable time and effort. Automating dialog management is useful for developing, deploying, and re-deploying applications, and it also reduces the time-consuming process of handcrafted design. In fact, the application of machine learning approaches to dialog management strategy design is a rapidly growing research area. Machine-learning approaches to dialog management attempt to learn optimal strategies from corpora of real human-computer dialog data using automated "trial-and-error" methods instead of relying on empirical design principles. The most widespread methodology for machine learning of dialog strategies consists of modeling human-computer interaction as an optimization problem using Markov Decision Processes (MDPs) and reinforcement learning methods. The main drawback of this approach is that the large state space of practical spoken dialog systems makes its direct representation intractable. Partially Observable MDPs (POMDPs) outperform MDP-based dialog strategies since they provide an explicit representation of uncertainty; this enables the dialog manager to avoid and recover from recognition errors by sharing and shifting probability mass between multiple hypotheses of the current dialog state.

22.3.4 Natural Language Generation

Natural language generation is the process of obtaining texts in natural language from a non-linguistic representation.
It is important to obtain legible messages, optimizing the text by using referring expressions and linking words, and adapting the vocabulary and the complexity of the syntactic structures to the user's linguistic expertise. Traditional natural language generation consists of five steps:

1. Content organization
2. Content distribution in sentences

3. Lexicalization
4. Generation of referential expressions
5. Linguistic realization

The approaches for natural language generation are:

• Default text generation, in which predefined text messages are used, e.g., error messages and warnings.
• Template-based generation, in which the same message structure is produced with slight alterations. The template approach is used mainly for multi-sentence generation, particularly in applications whose texts are fairly regular in structure, such as business reports.
• Phrase-based systems, which employ what can be considered generalized templates at the sentence level (in which case the phrases resemble phrase-structure grammar rules) or at the discourse level (in which case they are often called text plans). In such systems, a pattern is first selected to match the top level of the input, and then each part of the pattern is expanded into a more specific one that matches some portion of the input. The cascading process stops when every pattern has been replaced by one or more words.
• Feature-based systems, which represent the maximum level of generalization and flexibility. In feature-based systems, each possible minimal alternative of expression is represented by a single feature; for example, whether the sentence is positive or negative, whether it is a question, an imperative, or a statement, or what its tense is. To arrange the features it is necessary to employ linguistic knowledge.
• Corpus-based natural language generation, which stochastically generates system utterances.

Natural language generation has traditionally been modeled as a planning pipeline involving content selection, surface realization, and presentation. Early NLG systems were rule-based and consequently quickly grew complex to maintain and adapt to new domains and constraints. Some related important aspects are given below:

1. Collaboration: In an interaction, no participant's contributions stand alone: the process of constructing the discourse is a collaboration. This is true even if the participants are disagreeing or attempting to deceive each other; collaboration on an underlying task is not necessary for collaboration in the interaction process. Each contribution is shaped by the need to fit into what has come before, and to maximize the chances of success of what will follow. These constraints influence content selection, surface realization, and production. In addition, the collaboration causes additional language behaviors not present in noninteractive situations; these include interaction-maintenance behaviors such as turn-taking.
2. Reference: A problem that is key to any interactive system is the understanding and production of references to entities, states, and activities in the world or in the conversation. Reference influences, or is influenced by, every dialogue task, from the recognition of user input through the production of user output. Referring expression generation, in particular, is the core of the NLG problem, for everything that a dialogue participant does is aimed at describing the world as it is or as the participant wishes it to be.

3. Uncertainty: Grounding and alignment behaviors help minimize uncertainty related to each conversational participant's model of the interaction so far, but they do not drive the interaction further. There is another type of uncertainty relevant to NLG, and that is the construction of a "right" contribution to drive the next step of the interaction.
4. Engagement: Any agent that produces language, whether for interactive or noninteractive contexts, will be more successful if it is engaging. An agent will engage a user if it is informative, relevant, clear, consistent, and concise. However, it is easier to be engaging for a particular (known) audience than for a general audience, and in a conversational interaction the participants are constantly giving each other information about themselves (what will best inform and engage them) and feedback about the success of the interaction so far. In particular, conversational participants give evidence of their culture and personality. Another area in which engagement affects the performance of dialog systems is when they are used to provide assistive and augmentative communication services. As communication aids become increasingly multimodal, combining speech with image and video input, natural language generation is also evolving to produce multimodal contributions. One such multimodal generation mechanism is to use an avatar or a virtual agent. In the realm of multimodal communication, the intent of the message is carried not only by words, but also by the prosody of speech and by nonverbal means including body posture, eye gaze, and gestures. These para-linguistic means of communication take on a different dimension when contextualized by the cultural backgrounds of the conversational participants.
5. Evaluation: With the increasingly multimodal nature of human–machine conversation, where humans interact with devices endowed with cameras, the efficacy of conversation can be hugely improved by taking into account factors beyond just the speech input/output to/from the system.

22.3.5 Text-to-Speech Synthesis

Text-to-speech synthesizers transform a text into an acoustic signal. A text-to-speech system is composed of two parts:

1. Front-end:
a. It converts raw text containing symbols such as numbers and abbreviations into their equivalent words. This process is often called text normalization, pre-processing, or tokenization.
b. It assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, i.e., phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. The output of the front-end is the symbolic representation constituted by the phonetic transcriptions and the prosody information.

2. Back-end (often referred to as the synthesizer): It converts the symbolic linguistic representation into sound in one of two ways:
a. Speech synthesis based on a model of human speech production, in which parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This is further subdivided into:
• Parametric synthesis, which simulates the physiological parameters of the vocal tract
• Formant-based synthesis, which models the vibration of the vocal cords
• Articulatory synthesis, which is based on physiological models and refers to computational techniques for synthesizing speech from models of the human vocal tract and the articulation processes
b. Concatenative synthesis, which employs pre-recorded units of human voice. Concatenative synthesis is based on stringing together segments of recorded speech. It generally produces the most natural-sounding synthesized speech; however, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. The quality of the synthesized speech depends on the size of the synthesis unit employed. Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, syllables, morphemes, words, phrases, and sentences. Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing to the recorded speech. There is a trade-off between the intelligibility and naturalness of the voice output on the one hand and the automation of the synthesis procedure on the other. For example, synthesis based on whole words is more intelligible than phone-based synthesis, but for each new word it is necessary to obtain a new recording, whereas phones allow any new word to be built.
• Domain-specific synthesis concatenates pre-recorded words and phrases to create complete utterances. It is used in applications in which the variety of texts the system will produce is limited to a particular domain, such as transit schedule announcements or weather reports.
• Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones and German about 2,500. In diphone synthesis, only one example of each diphone is contained in the speech database.
• HMM-based synthesis is a method in which the frequency spectrum (vocal tract), fundamental frequency (vocal source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves, based on the maximum likelihood criterion.
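Returning to the front-end step described earlier (text normalization), the following minimal sketch illustrates the idea; the abbreviation table and digit-by-digit number reading are invented simplifications, not a real front-end.

```python
import re

# Illustrative sketch of the TTS front-end's text-normalization step
# (expansion rules and abbreviations are invented for illustration).
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "etc.": "et cetera"}
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def expand_number(token):
    # Naive digit-by-digit reading; a real front-end also handles dates, currency, etc.
    return " ".join(DIGITS[int(d)] for d in token)

def normalize(text):
    tokens = []
    for token in text.lower().split():
        if token in ABBREVIATIONS:
            tokens.append(ABBREVIATIONS[token])
        elif re.fullmatch(r"\d+", token):
            tokens.append(expand_number(token))
        else:
            tokens.append(token)
    return " ".join(tokens)

print(normalize("Dr. Smith lives at 221 Baker St."))
# -> "doctor smith lives at two two one baker street"
```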


22.4 Implementing Customer Interaction Systems

Several modules cooperate to perform the interaction with the user:

• Automatic Speech Recognizer (ASR)
• Spoken Language Understanding module (SLU)
• Dialog Manager (DM)
• Natural Language Generation module (NLG)
• Text-to-Speech Synthesizer (TTS)

Each of these modules has its own characteristics, and the selection of the most suitable model for each varies depending on certain factors: the goal of the module, the possibility of manually defining its behavior, or the capability of obtaining models automatically from training samples.
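As a purely illustrative sketch of how these five modules could be chained for a single dialog turn (every function body is a stand-in; the utterance, intent, and templates are invented, and the NLG step uses a simple template-based approach for brevity):

```python
# Skeleton of one dialog turn through the five modules (illustrative stubs only).
def asr(audio):
    # A real recognizer would decode a word sequence from the audio signal.
    return "book a flight to boston"

def slu(text):
    # Keyword-style understanding: map the text to a semantic frame.
    return {"intent": "book_flight", "destination": "boston"}

def dialog_manager(frame, context):
    # Update the dialog context and pick the next system action.
    context.update(frame)
    return ("confirm_booking", context["destination"])

def nlg(action):
    # Template-based generation: one template per system action.
    templates = {"confirm_booking": "You want to fly to {0}. Shall I book it?"}
    name, arg = action
    return templates[name].format(arg.capitalize())

def tts(sentence):
    # A real back-end would synthesize audio; here we just return a marker string.
    return f"<synthesized speech: '{sentence}'>"

def dialog_turn(audio, context):
    return tts(nlg(dialog_manager(slu(asr(audio)), context)))

print(dialog_turn(b"...raw audio...", {}))
# <synthesized speech: 'You want to fly to Boston. Shall I book it?'>
```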

22.5 Summary

This chapter proposed interactive computing as the digital transformation of primarily the usability aspects of enterprise architecture (see Chapter 14). The chapter started with an introduction to Human Interaction Management (HIM), an approach developed to deal with human behavior in the organization that draws ideas not only from process theory but also from biology, psychology, social systems theory, and learning theory. The chapter then introduced the concept of an interaction or interactive pattern as a sequence of actions, references, and reactions, where each reference or reaction has a certain, ex-ante intended and ex-post recognizable, interrelation with preceding event(s) in terms of timing and content. The chapter then covered interaction architectures, namely Presentation Abstraction Controller (PAC), Model View Controller (MVC), Data Context Interaction (DCI), and Micro-Service Architecture (MSA). The latter half of the chapter presented the components of an effective customer interaction system, namely automatic speech recognition, spoken language understanding, dialog management, natural language generation, and text-to-speech synthesis.

Epilogue: Blown to Bits

This was the title of the book authored by Philip Evans and Thomas S. Wurster in 1999. Boston Consulting Group vice presidents Evans and Wurster won a 1997 McKinsey Award for an article they wrote for the Harvard Business Review that became the basis of this book. Starting with a detailed account of the near-demise of the Encyclopaedia Britannica as an example, Evans and Wurster show how "the new economics of information will precipitate changes in the structure of entire industries and in the way companies compete." They emphasize the role information plays in every business, and they demonstrate that companies will no longer be forced to choose between richness (the quality of information) and reach (the number of people who share that information) in their marketing mix. Trade-offs between those two factors were no longer necessary because of increasing connectivity and growing standardization. Consequently, they concluded, organizational supply chains and value chains (the increments by which value is added to products and services) were being blown to bits and reconstituted into separate and new businesses.

Twenty years on, history seems to be repeating itself, but this time for Enterprise Architecture (EA). EA, which manifests itself through its attributes, has been blown to bits by the advent of a set of technologies that emerged in response to the digital storms of the twenty-first century. The traditional trade-offs between the various attributes have been blown to bits by a set of emerging technologies led primarily by service-oriented, cloud, and big data computing.

This book focused on the digital transformation of enterprise architecture. It proposed that it is the perennial quest for interoperability and portability, scalability, availability, and so on that has directed and driven the evolution of the IT/IS industry over the past 50 years. It is this very quest that has led to the emergence of service orientation, cloud, and big data computing. If we want to anticipate future developments in the area of digital transformation of enterprises, it is vital to make the connection between digital transformations and enterprise architecture; this book attempted to identify and establish such a connection.


Bibliography Albers, M. J. and B. Still (Eds.), Usability of Complex Information Systems: Evaluation of User Interaction (CRC Press, Boca Raton, FL, 2011). Alexander, C., The Timeless Way of Building (Oxford University Press, New York, 1979). Alexander, C. et al., A Pattern Language (Oxford University Press, New York, 1977). Alon, U., An Introduction to Systems Biology: Design Principles of Biological Circuits (CRC Press, Boca Raton, FL, 2019). Anthony, T. J., Systems Programming: Designing and Developing Distributed Applications (Elsevier, Amsterdam, the Netherlands, 2016). Bass, L., P. Clements and R. Kazman, Software Architecture in Practice (Addison-Wesley, Upper Saddle River, NJ, 2013). Bernard, S. A., An Introduction to Enterprise Architecture: Linking Strategy, Business, and Technology (AuthorHouse, Bloomington, IN, 2nd ed., 2012). Bernus, P., K. Mertins and G. Schmidt (Eds.), Handbook on Architectures of Information Systems, 2nd ed. (Springer, Berlin, Germany, 2006). Bessis, N. and C. Dobre (Eds.), Big Data and Internet of Things: A  Roadmap for Smart Environments (Springer, Cham, Switzerland, 2014). Buckminster Fuller, R., Critical Path, 2nd ed. (St. Martin’s Griffin, New York, 1982). Chen, S., Y. Shi, B. Hu and M. Ai, Mobility Management: Principle, Technology and Applications (Springer, Berlin, Germany, 2016). Chung, L., B. Nixon, E. Yu, J. Mylopoulos, (Eds.), Non-Functional Requirements in Software Engineering (Kluwer Academic Publishers, Boston, MA, 2000). Cortellessa, V., A. Di Marco and P. Inverardi, Model-Based Software Performance Analysis (Springer, Berlin, Germany, 2011). Coulouris, G., J. Dollimore, T. Kindberg, and G. Blair, Distributed Systems: Concept and Design (Addison-Wesley, Boston, MA, 2011). Coyne, R., Logic Models of Design (University College of London, London, UK, 1988). Critchley, T., High Availability IT Services (CRC Press, Boca Raton, FL, 2015). Critchley, T., High Performance IT Services (CRC Press, Boca Raton, FL, 2017). Dasgupta, S., Design Theory and Computer Science (Cambridge University Press, Cambridge, London, UK, 2009). Delgado, J. C., Improving data and service interoperability with structure, compliance, conformance and context awareness. In: N. Bessis and C. Dobre (Eds.), Big Data and Internet of Things: A Roadmap for Smart Environments (Springer, Cham, Switzerland, 2014). Delgado, J. C., Frameworks for distributed interoperability. In: M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (IGI Global, Hershey, PA, 2015). Delgado, J. C. M., Interoperability in the Internet of Things with asymmetric schema matching. In: Z. Mahamood (Ed.), Connected Environments for the Internet of Things: Challenges and Solutions (Springer, Cham, Switzerland, 2017). Dey, A. K., Understanding and using context. Personal and Ubiquitous Computing Journal, 1, 4–7, 2001. Doerr, J., Elicitation of a Complete Set of Non-Functional Requirements (Fraunhofer Verlag, Stuttgart, Germany, 2011). Domingue, J., D. Fensel, and J. A. Hendler (Eds.), Handbook of Semantic Web Technologies (Springer, Berlin, Germany, 2011). Finkelstein, C., Information Engineering: Strategic System Development (Addison-Wesley, Reading, MA, 1992). Finklestein, C., Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies (Artech House, Boston, MA, 2006). Fox, M. and M. Kemp, Interactive Architecture (Princeton Architectural Press, New York, 2009).


Garland, J. and R. Anthony, Large Scale Software Architecture: A Practical Guide Using UML (Wiley, Chichester, UK, 2003). Gorton, I., Essential Software Architecture, 2nd ed. (Springer, Berlin, Germany, 2011). Griol, D., J. M. Molina, and Z. Callejas, Big data for conversational interfaces: Current opportunities and prospects. In: Pedro, F., Marquez, G., and Lev, B. (Eds.), Big Data Management (Springer, Cham, Switzerland, 2017). Gumzej, R. and W. A. Halang, Real-time Systems’ Quality of Service: Introducing Quality of Service Considerations in the Life-cycle of Real-time Systems (Springer, London, UK, 2010). Gunther, N. J., Guerrilla Capacity Planning: Tactical Approach to Planning for Highly Scalable Applications and Services (Springer, Berlin, Germany, 2007). Han, Y., A. Kauranen, E. Kristola, and J. Merinen, Human Interaction Management – Adding Human Factors into Business Processes Management (Helsinki University of Technology, Espoo, Finland, 2005). Harrison-Broninski, K., Human Interactions: The Heart and Soul of Business Process Management (Meghan Kiffer Press, Tampa, FL, 2005). Hartshorne, C. and P. Weiss (Eds.), Collected papers of Charles Sanders Peirce (Harvard University Press, Cambridge, MA, 1931). Hemmerling, M. and L. Cocchiarella (Eds.), Informed Architecture: Computational Strategies in Architectural Design (Springer, Cham, Switzerland, 2018). Hevner, A. and S. Chatterjee, Design Research in Information Systems: Theory and Practice (Springer, New York, 2010). Hevner, A., S. March, J. Park, and S. Ram, 2004. Design science in information systems research. MIS Quarterly, 28(1), 75–105. Hwang, K. et  al., Distributed and Cloud Computing: From Parallel Processing to the Internet of Things (Morgan-Kaufmann, Amsterdam, the Netherlands, 2011). Johannesson, P. and E. Perjons, An Introduction to Design Science (Springer, Cham, Switzerland, 2014). Kale, V., Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications (Auerbach Publication, London, UK, 2015). Kale, V., Big Data Computing: A Guide for Business and Technology Managers (CRC Press, Boca Raton, FL, 2016). Kale, V., Agile Network Businesses: Collaboration, Coordination, and Competitive Advantage (CRC Press, Boca Raton, FL, 2017). Kale, V., Enterprise Process Management Systems: Engineering Process-Centric Enterprise Systems using BPMN 2.0 (CRC Press, Boca Raton, FL, 2018). Kaleppmann, M., Designing Data-Intensive Applications: The  Big Ideas Behind Reliable, Scalable and Maintainable Systems (O’Reilly, Sebastopol, CA, 2017). Kolarevic, B. and V. Parlac, Building Dynamics: Exploring Architecture of Change (Routledge, New York, 2015). Konar, A. and S. Saha, Gesture Recognition: Principles, Techniques and Applications (Springer, Cham, Switzerland, 2018). Kuhn, T. S., Logic of discovery or psychology of research? In: Lakatos I., Musgrave A. (Eds.), Criticism and the Growth of Knowledge (Cambridge University Press, Cambridge, pp. 1–23, 1970a). Kuhn, T. S., The  Structure of Scientific Revolutions, 2nd ed. (Chicago University Press, Chicago, IL, 1970b). Lattanze, A. J., Architecting Software Intensive Systems: A Practitioner’s Guide (CRC Press, Boca Raton, FL, 2009). Lee, G. G., H. K. Kim, M. Jeong, and J.-H. Kim (Eds.), Natural Language Dialog Systems and Intelligent Assistants (Springer, Cham, Switzerland, 2015). Liu, D., Systems Engineering: Design Principles and Models (CRC Press, Boca Raton, FL, 2016). Liu, H. 
H., Software Performance and Scalability: A Quantitative Approach (Wiley, Hoboken, NJ, 2009). Loke, S., Context-Aware Pervasive Systems: Architectures for a New Breed of Applications (Auerbach Publications, Boca Raton, FL, 2006). Maier, M. W. and E. Rechtin, The  Art of Systems Architecting, 3rd ed. (CRC Press, Boca Raton, FL, 2009).


Marinescu, D., Cloud Computing: Theory and Practice (Morgan Kaufmann, Boston, MA, 2013). McGovern, J., O. Sims, A. Jain, and M. Little, Enterprise Service Oriented Architectures: Concepts, Challenges, Recommendations (Springer, Dordrecht, the Netherlands, 2006). McTear, M., Z. Callejas, and D. Griol, The Conversational Interface: Talking to Smart Devices (Springer, Cham, Switzerland, 2016). Meng, X. and Z. Ding, and J. Xu, Moving Objects Management: Models, Techniques and Applications, 2nd ed. (Springer, Heidelberg, Germany, 2014). Milanovic, N., Non-functional Properties in Service Oriented Architecture: Requirements, Models, and Methods (IGI Global, Hershey, PA, 2011). Minoli, D., Enterprise Architecture A to Z: Frameworks, Business Process Modeling, SOA and Infrastructure Technology (CRC Press, Boca Raton, FL, 2008). Misra, S. and S. K. Pal, Soft Computing Applications in Sensor Networks (CRC Press, Boca Raton, FL, 2017). Mistrik, I., R. Bahsoon, N. Ali, M. Hiesel, and B. Maxim (Eds.), Software Architecture for Big Data and the Cloud (Morgan Kauffman, Cambridge, MA, 2017). Mistrik, I., R. Bahsoon, P. Eeles, R. Eishandel, and Stal (Eds.), Relating System Quality and Software Architecture (Morgan Kauffman, Burlington, MA, 2014). Nakamoto, S., “Bitcoin: A Peer-to-Peer Electronic Cash System,” Bitcoin.org, October 31, 2008. Nielsen, J., Usability Engineering (Academic Press, Boston, MA, 1993). Oppliger, R., Contemporary Cryptography (Artech House, Boston, MA, 2005). Osterwalder, A., The business model ontology: A proposition in a design science approach, Doctoral Dissertation, University of Lausanne, Lausanne, Switzerland (2004). Osterwalder, A. and Y. Pigneur, Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers (Wiley, Hoboken, NJ, 2010). Palfrey, J. and U. Gasser, Interop: The Promise and Perils of Highly Interconnected Systems (Basic Books, New York, 2012). Paradkar, S., Mastering Non-Functional Requirements: Analysis, Architecture, and Assessment (Packt Publishing, Birmingham, UK, 2017). Park, C. W., Designing Across Senses: A  Multimodal Approach to Product Design (O’Reilly, Beijing, China, 2018). Peffers, K., T. Tuunanen, M. Rothenberger, and S. Chatterjee (2008) A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77. Pfeiffer, T., Gaze-based assistive technologies. In: Smart Technologies: Breakthroughs in Research and Practice (IGI Global, Hershey, PA, 2018). Popper, K., The Logic of Scientific Discovery (Routledge, London, UK, 2002). Portnoy, M., Virtualization Essentials, 2nd ed. (Sybex, Indianapolis, IN, 2016). Purhonen, A., Performance evaluation approaches for software architects. In: C. Atkinson, C. Bunse, Hans-Gerhard Gross, C. Peper (Eds.), Component-Based Software Development for Embedded Systems: An Overview of Current Research Trends (Springer, Berlin, Germany, 2005). Ramachandran, J., Designing Security Architecture Solutions (Wiley, New York, 2002). Rozanzski, N. and E. Woods, Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives, 2nd ed. (Addison-Wesley, Upper Saddle River, NJ, 2012). Salvador, V. F., J. S. de Oliveira Neto, and M. de Paiva Guimarães, Evaluating voice user interfaces in ubiquitous applications. In: Designing Solutions-Based Ubiquitous and Pervasive Computing: New Issues and Trends (IGI Global, Hershey, PA, 2010). Schumacher, P., The Autopoiesis of Architecture Vol. 
I: A New Framework for Architecture (Wiley, Chichester, UK, 2011). Schumacher, P., The Autopoiesis of Architecture, Vol. II: A New Agenda for Architecture (Wiley, Chichester, UK, 2012). Shackelford, D., Virtualization Security: Protecting Virtualized Environments (Wiley, New York, 2013). Sherif, M. H. (Ed.), Handbook of Enterprise Integration (CRC Press, Boca Raton, FL, 2010).


Simon, D., K. Fischbach and D. Schoder, Enterprise architecture management and its role in corporate strategic management. Information Systems and e-Business Management 12(1), 5–42, 2014. Sommerer, C., L. C. Jain, and Mignonneau (Eds.), The Art and Science of Interface and Interaction Design (Vol. 1) (Springer, Berlin, Germany, 2008). Veríssimo, P. and L. Rodrigues, Distributed Systems for System Architects (Springer, New York, 2001). Vissers, C. A., L. F. Pires, D. A. C. Quartel, and M. van Sinderen, Architectural Design: Conception and Specification of Interactive Systems (Springer, Cham, Switzerland, 2016). Walford, R. B., Business Process Implementation for IT Professionals and Managers (Artech House, Boston, MA, 1999). WiBotzki, M., Capability Management Guide: Method Support for Enterprise Architectures Management (Springer, Wiesbaden, Germany, 2018). Zadeh, L. A., Fuzzy sets. Information and Control 8, 338–353, 1965. Zadeh, L. A., The concept of a linguistic variable and its application to approximate reasoning–I, II, III. Information Sciences 8:199–249, 301–357; 9:43–80, 1975.

Index Note: Page numbers in italic and bold refer to figures and tables, respectively. A Aadhaar project, 390–392, 392 ABAS (attribute-based architecture style), 132 access control, 267–269 access control lists (ACLs), 268 accountability, 269–270 ACID (Atomicity, Consistency, Isolation, and Durability) properties, 210, 212 acoustic model, 491 active RFID tags, 246 adaptable applications, 403 address resolution, 422 affinity groupings, 302 agile EA, 13–15 agility, aspects, 16–17 built-for-change systems, principles, 17–18 change proficiency, framework for, 18–19 industrial generations, 6 from products to services to experiences, 11–12, 12 stability versus agility, 15–16 transformation, age, 1–4 agility, 1 aspects, 16–17 enterprise, 13, 41 stability versus, 15–16 agricultural economy, 1 Alexander, C., 17 Amazon Dynamo, 388 ambiguity, 5–6 Amdahl’s law, 171 AMM (availability maturity model), 192–194 amplification, 247 analysis techniques versus tasks, 291 analyticity, 20 data science techniques, 285–291 overview, 283–284 ANNs, see Artificial Neural Networks (ANNs) ant-colony algorithm, 175 Apache Cassandra, 387–388 API (application programming interface), 182 applicability, 58 availability perspective, 65 change perspective, 58–59 scalability perspective, 61–62

application architecture, 37–38 application-based context, 399 application binary interface (ABI), 182 application layer, 74 application programming interface (API), 182 application SLA, 200, 201 AR (association rules), 293, 302 AR (augmented reality), 327 architectural element, 34, 48 architectural versus detailed design, 30–31, 31 architecture style, 52 Architecture Trade-off Analysis Method (ATAM), 128, 132 arrival process, 137–138 Artificial Neural Networks (ANNs), 462–467 activation function, 461–462, 463 applications, 465–467 artificial neuron, model, 459–461 biological neuron, 459, 460 learning process, 465 overview, 458–459 processing units, 462–463, 464 topologies, 464 training, 465 artificial neuron, model, 459–461 ASR (Automatic Speech Recognizer), 489 assessments, 271–272 association rules (AR), 293, 302 associative learning, 465 associative networks, 466 asymmetric cryptosystem, 276 asymmetric encryption, 280–281 asymmetric interoperability, 159–160 asynchronous communication, 81, 163 ATAM (Architecture Trade-off Analysis Method), 128, 132 Atomicity, Consistency, Isolation, and Durability (ACID) properties, 210, 212 attacks, types, 261 attribute(s) business, 51 data, 230 implementation, 50–51 runtime, 51 tradeoffs, 51–52


attribute-based architecture style (ABAS), 132 augmented reality (AR), 327 authentication, 265–266 authorization, 265, 267 automatic clustering technique, 293 Automatic Speech Recognizer (ASR), 489 automatic text summarization, 288 autonomy, 86 availability, 19, 85, 99 AMM, 192–194 cloud computing, 260, 363 and downtime per year, 98 overview, 189–192 perspective, 64–67 and repair, 195–199 service and downtime ratings, 201 services, 199–203 steady-state, 98 system classes, 192 availability maturity model (AMM), 192–194 B B2B (business-to-business) integration, 166 balking, 139 bar codes, 407 Basically Available, Soft State, Eventual Consistency (BASE), 382–383 batch sequential style, 76 Bayesian approach, 286 Bayes’ rule, 490 BCP (business continuity plan), 194 BDSC (big data stream computing), 373 behavioral research (BR), 44, 45 behavioral specifications, 257 benchmarking, 253 big data, 10–11 computing systems, 379–380 4V characteristics, 375 overview, 373–375 processing approach, 380–382 system architecture, 382–384 big data stream computing (BDSC), 373 biological neuron, 459, 460, 461 biometric systems, 266 bipolar technology, 255 bitemporal database, 233 bivariate analysis, 300 blackboard systems, 76 blockchain, 437 architecture, 448–450 benefits, 448 features, 447–448


forking, 449 for IoT, 452–453 overview, 443–446 P2P systems, 437–440 security, 450–452 structure, 444 transaction, 446–447 block ciphers, 279–280 block height, 448 Bluetooth device, 407–408, 427 Bohrbug, 105 bottleneck elimination optimization, 176 bottom up algorithm, 173 BPM (business process management), 339 BR (behavioral research), 44, 45 broad network access, 351 brute-force attack, 278 built-for-change systems, principles, 17–18 business architecture, components, 36 business continuity plan (BCP), 194 business domain, 369 business-driven elements, xviii business layer, 420 business model innovation, 41–42 design thinking for, 44–46 and EA, 46–47, 47 Osterwalder and Pigneur’s canvas, 42–44 business model trends, 9–10 business processes, 39, 369–370 integration, 166 business process management (BPM), 339 business services, 338, 370 business systems planning, 34 business-to-business (B2B) integration, 166 C call and return architectures, 77–78 calling population, 137 candidate architecture, 52–53 CAP (consistency availability partition) theorem, 211–212, 362 case-based reasoning (CBR), 292 CA (consistent/available) systems, 362 CDMA (Code Division Multiple Access), 405, 425 Cellular Digital Packet Data (CDPD), 425 cellular systems, 404 centralized data storage, 358 changeable systems, 27–29 change perspective, 58–60 change proficiency, framework for, 16, 18–19 channels, 43 CIM (computation independent model), 155, 157


507

consistent, partition-tolerant (CP) systems, 362 consistent/available (CA) systems, 362 content-related system influence factors, 313 context acquisition, 400 history, 399 influence factors, 315–316 models, 400–401 types, 399 context-aware applications generic, 402 illustrative, 402–403 context-aware computing applications, 396–403 location based systems, 403–408 overview, 395–396 recommender systems, 411–412 context-awareness, types, 398–399 Contiki, OS, 432, 434 continuous mobility, 217–218 controls, 262–263 access, 275 module, 484 plane, 218, 222 convergence, 12 coordinated interactions, 249 correlation analysis, 292 CPN (Colored Petri Nets) approach, 131–132 CP (consistent, partition-tolerant) systems, 362 criticality analysis (CA), 116 critical systems thinking (CST), 25 crossover, 472 cryptogram, 275 cryptographic algorithm, 273 cryptographic primitive, 273, 274 cryptographic protocol, 274 cryptography, 272–274 cryptosystem, 274 key lengths/keyspaces, 277 model, 274–276, 275 public-key, 280 relationship between keys and, 277 types, 276–277 customer conversation systems, 490 customer interaction systems ASR, 489 DM, 489, 492–493 implementing, 497 NLG, 489, 493–495 SLU, 489, 491–492 TTS, 489, 495–496 customer interface pillar, 43

508

D data analysis techniques/tasks, 291–294 -centered architectures, 75–76 distribution process, 91 -flow architectures, 76–77 integration, 165–166 integrity, 100 plane, 218–222, 220 scale, 376 variety, 377–378, 378 velocity, 377 veracity, 378–379 visualization, 287 volume, 376 database access middleware, 163 database interoperability, 148 database management system (DBMS), 83–84 database systems, 76, 285 Data Context Interaction (DCI), 486–487, 487 data mining, 287, 294–295 analysis, 300–303 applications, 298–300, 299 benefits, 297–298 overview, 295–296 data replication, 127 and allocation, 90–91 data science techniques database systems, 285 data mining/machine learning, 287 data visualization, 287 regression/classification, 286–287 statistical inference, 285–286 text analytics, 288–289 time series/market research models, 289–291 Date, C. J., 92 DBMS (database management system), 83–84 DCI (Data Context Interaction), 486–487, 487 DDBMS (distributed database management system), 83–84 deadlock, 87–88 decision trees (DT), 293 decryption algorithm, 276 decryption key, 275–276 Deep Neural Networks (DNNs), 491 defense in depths, 263, 263–264, 264 defuzzification, 468 denial of service (DoS) attack, 260 denying access, 268 dependability, 98 attributes, 98–100 errors, 100–102, 105–106

Index

failures, 100–102, 106–107 faults, 100–105 dependency analysis, 293 descriptive analytics, 284 design science method (DSM), 46 design science research (DSR), 45, 45 design technologies, 257 device-aware applications, 403 device-related system influence factors, 314–315 DHCP (Dynamic Host Control Protocol) server, 422 Dialog Manager (DM) module, 489, 492–493 digital economy, 2, 3 digital inclusion, 7 digital nomads, 9 digital signature, 281–282, 282 digital transformation, xvii EA attributes, 19–20 enterprises, 41–42 digitization process, 41 diphone synthesis, 496 directed/supervised models, 296 direct marketing campaigns, 297–298 disaster recovery (DR), 192, 194 discourse/pragmatical knowledge, 492 discrete event simulations, 174–175 discrete mobility, 217–218 discretion, 307 discriminant analysis, 292 distributed computing, 73–74, 80–83 distributed database, 83–84 advantages, 86–87 autonomy, 86 availability and reliability, 85 concurrency control and recovery in, 91–92 disadvantages, 87–90 query processing, 88 recovery measures, 88–89 rules for, 92–93 scalability and partition tolerance, 86 transparency, 84–85 distributed database management system (DDBMS), 83–84 distributed hash table (DHT), 438 Distributed Information and Communication Technology (DICT), 241 distributed object technology (DOT), 163–164 distributed recovery process, 92 distributed systems, 71–73 software architectural styles, 75–80 system architectural styles, 74–75 distribution transparency, 84–85

509

Index

DNNs (Deep Neural Networks), 491 document databases, 387–389 document-oriented databases, 388–389 domain-specific synthesis, 496 DoS (denial of service) attack, 260 DOT (distributed object technology), 163–164 dotted decimal notation, 420 doublespend problem, 446 DR (disaster recovery), 192, 194 DT (decision trees), 293 dwell time, 324 Dynamic Host Control Protocol (DHCP) server, 422 dynamic load balancing, 173, 178 cloud computing, 176 discrete event simulation, 174–175 stream based scenarios, 175 dynamic organizational interaction, 249 dynamic priority scheduling, 126 E EAFs, see enterprise architecture frameworks (EAFs) EAs, see enterprise architectures (EAs); evolutionary algorithms (EAs) EAS (electronic article surveillance), 423 ease of learning/use, 306 eavesdropping, 275 economic context, 315 electronic article surveillance (EAS), 423 elementary sets, 476 emancipatory approaches, 25 embedded systems (ES), 251–252 characteristics, 252 design metrics, 252–253 implementation, 253–257 emotional gestures, 325 encapsulation, 366 encryption, 275 algorithm, 276, 278, 279 key, 275 enterprise agility, 13, 41 enterprise application integration (EAI), 338 enterprise architecture frameworks (EAFs), xviii, 35, 67–68, 68 Enterprise Architecture Planning (EAP), xvii–xviii enterprise architectures (EAs), xvii, 34–35, 48, 49 application architecture, 37–38 attributes, 19 business architecture, 35–36

business model design and, 46–47, 47 candidate architecture, 52–53 in commercial sector, xix–xx digital transformation, 19–20 element, 48 genesis, xvii goals, 48 in government sector, xix information architecture, 36–37 role, xx stakeholder, 53 system structures, 49–52 technical architecture, 38 enterprise interoperability maturity model, 149–150 standards for, 150–151, 152 system interaction model, 151–154, 153 enterprise interoperability framework (EIF), 149 enterprise processes, 39 Enterprise Service Bus (ESB), 346–348, 347 Environment Layer (EL), 241 Environment Management Layer (EML), 241 Erector Set kits, 14 Erlang, A. K., 136 error handling adequacy, 322 error propagation, 101, 101 errors, 100–102, 105–106 ES, see embedded systems (ES) ESB (Enterprise Service Bus), 346–348, 347 Euclidean- (Eu) based modeling, 224 Euclidean-Motion-Vector-Based Location Update Mechanisms (Eu-MVLU), 224 event systems, 80 evolutionary algorithms (EAs), 471–472 fundamentals, 472–473 GA, 473–475 executable agent, 332–333 experience economy, 2 exploratory analysis, 300 external preference mapping process, 316 F fabric area network (FAN), 245 fabrication attacks, 261 failover process, 66 Failure Mode Effect Analysis (FMEA), 110, 116, 116, 117 failure rate, 196, 196–197 failures, 100–102, 106, 106–107 FAN (fabric area network), 245 Fault Containment Unit (FCU), 102

510

faults, 100–105 classification, 102 containment, 102, 105 detection, 105 elimination, 103–104 forecasting, 104 location, 105 masking, 104–105 prevention, 103 recovery, 105 space, 102 time, 102 tolerance, 100, 104, 107–109 fault-tolerant computing platforms, 65 Faulty Tree Analysis (FTA), 110, 117–119, 118 FCs, see Fuzzy Controllers (FCs) FCU (Fault Containment Unit), 102 feed forward neural networks (FFNN), 464 Field Programmable Device (FPD), 255 field-programmable gate array (FPGA), 248–249, 255 FIFO (first-in-first-out), 126, 138 financial aspects pillar, 43 first-in-first-out (FIFO), 126, 138 FISs, see Fuzzy Inference Systems (FISs) fitness function, 472 fixations, 323 Fixed-Distance Location Update (FDLU) mechanisms, 224 fixed-priority scheduling, 126 Fixed-Time Location Update (FTLU) mechanisms, 224 FLASH-OFDM (Fast, Low-latency Access with seamless Handoff—Orthogonal Frequency Division Multiplexing), 425–426 flexibility, 1, 253 FMEA, see Failure Mode Effect Analysis (FMEA) forward-looking infrared radar, 248 FPD (Field Programmable Device), 255 FPGA (field-programmable gate array), 248–249, 255 fragmentation transparency, 85 fragments, 84 Framework for Enterprise Interoperability (FEI), 148–149 fraud detection, 300 frequency, 306 FTA, see Faulty Tree Analysis (FTA) full-custom design, 256 full virtualization, 185–186, 365


functional architecture requirements, 32–33 functional attributes, 430 functional decomposition, 383 functional integration, 164–165 functionalist approach, 24 fuzzification, 468 Fuzzy Controllers (FCs), 468–469, 469 Fuzzy Inference Systems (FISs), 467, 468 fuzzy logic controller, example, 469–471, 470 fuzzy rules, 467 G GA, see Genetic algorithms (GAs) Gate Array (GA), 256 gaze interfaces, 323–324 gaze-typing system, 323 generalized linear models, 286 General Packet Radio Service (GPRS), 425 general-purpose processors, 253–254 generic context-aware application architecture, 402 genesis block, 444, 449 Genetic algorithms (GAs), 173, 473 benefits, 473–474 simple description, 474 using binary coding, 474–475 geodatabase data model, 230 georelational data model, 229–230 gesture interfaces, 324 gesture recognition, 325 gestures, semantic classification, 325 global deadlock, 88 globalization, 9 global load balancing, 179 global positioning system (GPS), 397, 405 Global System for Mobile Communications (GSM), 425 Google, 97 Government Accountability Office (GAO), xix government sector, EAs in, xix GPRS (General Packet Radio Service), 425 GPS (global positioning system), 397, 405 graceful degradation, 105 graph databases, 387 graphical models, 401 graphical user interface (GUI), 148, 321 graph stores/database, 389–390 greedy algorithms, 173 GUI (graphical user interface), 148, 321


511

iMode, 425 industrial economy, 1–2, 3 industrial generations, 6 influence factors (IFs) context, 315–316 human, 312–313 system, 313–315 information system (IS), 37, 44 information technology (IT) block, 38 Infrastructure as a Service (IaaS), 353 infrastructure management pillar, 43 infrastructure services, 370 infrastructure SLA, 200, 200 input recognition adequacy, 322 instruction set architecture (ISA), 182 integrated circuits (ICs) technologies, 254–255 Integrated Development Environment (IDE), 254 integration B2B, 166 basics, 161–163 business process, 166 data, 165–166 EA, 161 functional, 164–165 hub–spoke, 167 message-oriented, 167 methods, 162 models, 163–166 patterns, 166–167 point-to-point, 166–167 presentation, 164 integrity, 100, 260 interaction architectures DCI, 486–487, 487 MSA, 487, 488 MVC, 485–486, 486 PAC, 484–485, 485 interactive computing architectures, 484–488 business processes/human interactions, 479–484 customer interaction systems, 489–496 overview, 479 interactive interfaces, 319–320 interactive systems, usability, 317–318 interception attacks, 261 interceptor, 276 Internet of Things (IoT) architecture, 418, 418–420 building blocks, 416–418 interoperability aspects for, 157–160, 158

Internet of Things (IoT) (Continued)
  overview, 414–416, 415
  sensor networks, 424–431, 428, 428, 429–430
internet protocol (IP), 97, 420
interoperability, 19, 147
  asymmetric, 159–160
  IoT aspects, 157–160
  model based, 154–157
  overview, 147–148
  standards, 150–151, 151
interpreter style, 79
interpretive approach, 24
interruption attacks, 261
intrinsic gestures, 324
intrusion detection systems (IDSes), 270
intrusion prevention systems (IPSes), 270
intrusive technique, 326
IoT, see Internet of Things (IoT)
IP (internet protocol), 97, 420
IPv4 versus IPv6, 421
IS (information system), 37, 44
ISA (instruction set architecture), 182
isolated systems, 26
isolation, 366

J
jockeying, 138

K
kernel mode, 183
key establishment mechanism, 280
key structure models, 401
key–value pairs, 387
Key–Value Stores (K–V Stores), 387–388
Kinect sensor, 325–326
knowledge-based economy, 2
Kuhn, T. S., 44

L
LAN (local area network), 84
latency, 205
lateration principle, 405, 406
layered architecture style, 77–78
Layered Queuing Network (LQN) model, 131
LEGO kits, 14–15
Levelized Weight Tuning (LWT) algorithm, 173
lexical/morphological analysis, 491
limited-purpose processors, 254
limiting access, 267
linear combination, 460

linear predictor, 286–287
linear regression, 286–287
linguistic interactions, 250
linguistic states, 469
LM (location management), 218
load balancing schemes, 172
  dynamic, 173–176, 178
  in P2P systems, 176–177
  performing, 179
  for scalability, 170–172
  static, 173, 177
load balancing technique, 66
local area network (LAN), 84
local deadlock, 88
local load balancing, 179
location-aware applications, 402
location-based context, 399
location based systems, 403–411
location data, sources, 403
  cellular systems, 404
  multireference point systems, 405
  tagging, 406–408
location management (LM), 218
logging, 271
logical controls, 262
logical outage, 189
logic-based models, 401
logic blocks, 248
logic specifications, 257
lossy compression, 314
low-level processing, 313
LQN (Layered Queuing Network) model, 131
LWT (Levelized Weight Tuning) algorithm, 173

M
machine learning algorithms, 173, 287
maintainability, 100
management information base (MIB), 223
management plane, 218, 222–223
Mantis, OS, 434
MapReduce, 97, 367–368
market basket analysis, 298
market research model, 289–291
Markov Decision Process (MDP), 493
markup scheme models, 401
master mode, 183
master–slave replication, 384
MC (microcontroller devices), 248
MDP (Markov Decision Processes), 493
Mean Time Between Failures (MTBF), 99, 99, 112, 199
mean time to a failure (MTTF), 99, 99, 197–198

Mean Time to Repair (MTTR), 99, 99, 103, 198–199
measurement chain of sensors, 430
mechanistic business processes, 480
media-related system influence factors, 313–314
medical diagnosis, 299
membership degree, 467
membership function (MF), 467
memory management unit (MMU), 184
message broker approach, 167
message-oriented integration, 167
Message Oriented Middleware (MOM), 163–164
metamodel-based systems, 60
MIB (management information base), 223
microcontroller devices (MC), 248
microelectromechanical systems (MEMS), 240
micro-mobility, 215
Micro-Service Architecture (MSA), 487
middleware layer, 73–74, 80–81, 163, 419
mining nodes, 444
mixed reality (MR), 327
Mobile Location Protocol (MLP), 403
mobility, 19, 215
  data, 409
  discrete and continuous, 217–218
  objects, 216–217
  support level, 216
  wireless access types, 216
mobility data mining, 409
  trajectory mapping, 410, 411
  trajectory reconstruction, 409–410
  trajectory visualization, 410–411
mobility management
  at different layers, 220
  network reference model, 223
  protocol reference model, 218–223, 219, 221
MOD, see moving object database (MOD)
Model Builder’s kit, 14–15
Model Driven Architecture (MDA), 154
  layers and implementation, 156, 156
Model Driven Interoperability (MDI), 157
Model View Controller (MVC), 485–486, 486
modification attacks, 261
MOM (Message Oriented Middleware), 163–164
monitoring, 271
moving object database (MOD), 223
  indexes, 225
  location tracking, 224–225
  modeling, 224
  querying, 225–226
  statistical analysis and data mining, 226
  uncertainty management, 225
moving objects spatio-temporal (MOST) model, 224
MTBF, see Mean Time Between Failures (MTBF)
MTTF, see mean time to a failure (MTTF)
MTTR, see Mean Time to Repair (MTTR)
multichannel access, 338
multidatabase system (MDBS), 84
multi-factor authentication, 267
multimodal systems thinking (MST), 25
multiple regression, 286
multiple server models, 143–145
multireference point systems, 405
multisensor data fusion, 247
multi-tenancy model, 351, 356, 362–363
multi-tier architecture, 53
multiuser scaleup, 172
multivariate analysis, 300
mutation process, 472
mutual authentication, 266–267

N
NameNode controller, 176
Natural Language Generation (NLG) module, 489, 493–495
net-based tracking, 225
network
  device-based context, 399
  interoperability, 148
  mobility, 216
  reference model, 223
  -related system influence factors, 314
  virtualization, 186–187
neural networks (NNs), 292
  and control, 465–466
  and pattern recognition, 466–467
  recurrent, 464
neurons, 458
NLG (Natural Language Generation) module, 489, 493–495
NN, see neural networks (NNs)
nomadic mobility, 216
non-emotional gesture, 325
nonfunctional architecture requirements, 33–34
nonfunctional attributes, 430–431
non-functional property, 55
nonrelational databases, 285
nonrepudiation, 269
Not only SQL (NoSQL), 383
  databases, 387–390
  data management, 385–387
nounces, 451
n-tier architecture, 74–75

O
object interoperability, 148
objective metrics, 318
Object Management Group (OMG), 154
object-oriented architecture style, 78
object-oriented models, 401
Object-Request-Broker (ORB), 148
Office of Management and Budget (OMB), xix
on-demand self-service, 351
ontology-based models, 401
Open Group, xx
Open Mobile Alliance (OMA), 403
openness, 306
open systems, 26
operating systems (OSs), 371, 432
  layer, 73
  virtualization, 185
opinion mining approach, 289
Order Processing System (OPS), 165
ordinary least-squares regression, 286
organic light-emitting diode (OLED), 244
organizational trends, 8–9
OSs, see operating systems (OSs)
Osterwalder and Pigneur’s business model canvas, 42–43, 44
output phrasing adequacy, 322
output voice quality, 322
overfitting, 293
overlay routing versus IP routing, 440, 440–443

P
P2P systems, see peer-to-peer (P2P) systems
parallel architectures, 381
parallel computing system, 72
parallel processing, 170–171
parallel programming, 72, 97
parallel systems, 72
paravirtualization, 186, 365
parent block, 447
partition tolerance, 86
Partnership for Research in Information Systems Management (PRISM), xviii–xix
PASA (Performance Assessment of Software Architectures), 130–131
passive RFID tags, 246
peer-to-peer (P2P) systems, 75, 176–177, 437–440, 442, 449
Peirce, S., 44
perception layer, 418
perceptual dimensions, 316

performability, 121–124
  managing resources, 125–127
  performance evaluation, 127–135
  resource demand, 124–125
Performance Assessment of Software Architectures (PASA), 130–131
performance evaluation, 127
  framework, 128–130
  methods, 130–133
  results, 133–135, 134
  solution techniques, 134
personalized applications, 403
personal mobility, 216–217
perspective, 55
  advantages, 56
  availability, 64–67
  benefits, 56–57
  change, 58–60
  in detail, 57
  scalability, 60–64
  views versus applicable, 57
Petri nets, 127
physical context, 315
physical controls, 262
physical outage, 189
piconets, 427
piecewise-linear function, 462
pipe-and-filter style, 77
plaintext, 275
Platform as a Service (PaaS), 353, 354
platform independent model (PIM), 155, 157
platform interoperability, 148
platform specific model (PSM), 155, 157
platform virtualization, 185–186
PLDs (Programmable Logic Devices), 255
point-to-point integration, 166–167
Poisson queuing model, 140
Poisson regression, 286
policy/convention-based interaction, 249
PoS consensus, 451
possibility distribution, 467
Potential Influence Factors (IFs), 312
PoW consensus, 451
power consumption, 253
pragmatic interoperability, 150
predictive analytics models, 284
predictors, 296
prescriptive analytics, 284
Presentation Abstraction Controller (PAC), 484–485, 485
presentation integration, 164
privacy intrusion, 321
private blockchain, 444

private clouds, 355
probability theory, 195
process algebras, 127
processing approach, big data, 380–382
product pillar, 42
Programmable Logic Devices (PLDs), 255
proof of stake (PoS) scheme, 444–445
Proof-of-Work (PoW), 444
protocol reference model, 218, 221
  control plane, 222
  data plane, 218–222
  management plane, 222–223
  for mobility management, 219
public blockchains, 445
public clouds, 356
public-key cryptosystems, 276, 280

Q
quality of experience
  factors influencing, 312–316
  features, 316–317
  interactions and, 309–312, 310
Quality of Service (QoS), 201–202
Queuing Network Models (QNMs), 127
queuing systems, 135–136
  configuration, 138
  discipline, 138
  multiple server models, 143–145
  operations versus process capacity, 137
  Poisson queuing model, 140
  process, 137–138
  pure birth model, 139
  pure death model, 139
  single server models, 140–143
queuing theory, 136

R
radio frequency identification (RFID), 246, 408, 416, 422–423, 427
range partitioning, 384–385
rapid application integration, 338
rapid elasticity/scalability, 351
Rate-Monotonic Analysis (RMA), 130
rate monotonic scheduling (RMS) theory, 130
records, admissibility, 270
recurrent NN, 464
redundant software components, 109
reference architectures, 67
reference models, 68
Register Transfer (RT) specifications, 257
regression analysis, 292
regression method, 286–287
reinforcement learning, 465
relational database management systems (RDBMS), 74
relational databases, 285
relational marketing, 299
reliability, 85, 99, 110
  analysis tools, 115–116
  characteristics, 110–111
  with independent failure event, 113, 113–115, 114
  understanding, 111–113
reliability, availability, and serviceability (RAS), 193
remote procedure calls (RPCs), 77, 163–164
reneging, 139
repair, availability and
  failure rate, 196, 196–197
  MTBF, 199
  MTTF, 197–198
  MTTR, 198–199
  probability, 195
replication, 205
  data, 88, 205
  data management, 208–210
  transparency, 85
  types, 205–208
REpresentational State Transfer (REST), 344–345
resource management, 180
resource pooling, 351
resource virtualization, 364
response time, 121
revenue streams, 43
RFID, see radio frequency identification (RFID)
risk, 262
risk evaluation, 300
rollback database, 232–233
rough sets, 458, 475–477, 476
round-robin partitioning, 384
round-robin scheduling, 126
row partitioning, 384–385
row versus column-oriented data layouts, 385
RPCs (remote procedure calls), 77, 163–164
rule-based approaches, 492
rule-based systems, 79

S
SaaS (Software as a Service), 354–355
saccades, 323
safety, 100

sampling rate, 410
satellite communication (SC), 425
scalability, 19, 86
  cloud computing, 361–362
  load balancing for, 170–172
  overview, 169–170
  perspective, 60–64
scale-out/-up systems, 170
scaleup capacity, 171
scheduling resources, 125–126
seamless mobility, 216
search algorithms, 173
secret key cryptosystems, 276
security, 19
  access control, 267–268
  accountability, 269–270
  audit, 270–272
  authentication, 265–266
  authorization, 267
  identification, 264–265
  overview, 259
  technology, 259
segmentation, 297
semantic analysis, 492
semantic interactions, 250
semantic interoperability, 150
semicustom ASIC, 255
sensing system, 246–248
sensor networks, see Wireless Sensor Networks (WSN)
sentiment analysis, 289
serpent, block ciphers, 280
service access point, 218
service channels, 136
service-level agreements (SLAs), 122, 199–201, 336
service mobility, 216–217
service-oriented computing (SOA), 368–369
  advantages, 337
  applications, 337–339
  benefits, 334–335
  characteristics, 335–336
  definition, 332–333
  ingredients, 339–341
  layers in, 369–371
  objects, 339–340
  overview, 331–332
  resources, 340–341
  and RESTful services, 344–345
  services, 333–334, 340
  and web services, 341–344
service realizations, 370–371

services, availability, 199
  cloud service provider, 202–203
  QoS, 201–202
  SLAs, 199–201
7-Eleven company, 42
sharding technique, 384–385
shared nothing approach, 380, 382
sharing and prosuming, 6–7
sigmoid function, 461, 463
signature algorithm, 282
Simple Object Access Protocol (SOAP), 342–343
simulation modeling, systems dynamics via, 27
single-board computers (SBCs), 417
single-purpose processor, 254
single server models, 140–143
SLAs, see service-level agreements (SLAs)
smart devices, 238, 243–245
smart dust, 244
smart environment, 238, 245
  controlling, 248–249
  sensing, 246–248
  tagging, 245–246
smart interaction, 239, 249–250
snapshot database, 232
SOA, see service-oriented computing (SOA)
SOAP (Simple Object Access Protocol), 342–343
SoC (Systems-on-a-Chip), 252
social context, 315
social trends, 6–7
soft computing
  ANN, 458–467
  EA, 471–475
  fuzzy systems, 467–471
  overview, 457–458
  with traditional hard computing, 458
soft systems thinking (SST), 24
software architectural styles, 75–80
Software as a Service (SaaS), 354–355
software-intensive system, 48
Software Performance Engineering (SPE), 130–131
space redundancy, 109
spam filtering, 288
spatial data, 227
  characteristics, 227
  raster, 228
  types, 228
  vector, 227–228
spatial database systems, 230, 231
spatio-temporal data, 233–234
  database systems, 236
  model, 234–235

  modeling layers, 235
  preliminaries, 234
spatio-temporal objects, 233–234
special-purpose processor, 254
speedup, 171
spoke–hub integration, 167
spoken language dialogue systems (SLDSs), 317
spoken language recognition, 489–491
Spoken Language Understanding (SLU) module, 489, 491–492
stability versus agility, 15–16
stabilizing controller, 465, 466
stage priority optimization, 176
stakeholders, 53
Standard-Cells Based Integrated Circuits (CBIC), 256
static load balancing, 173, 177
static scheduling, 126–127
statistical inference, 285–286
statistical models, 492
steady-state availability, 98
steganography, 274
stochastic process algebras (SPAs), 127
storage virtualization, 186
Straight-Through Processing (STP), 165
strategic adjustments and reorientation, 16
stream based scenarios, 175
stream ciphers, 279
stream computing, 374
strengths, weaknesses, opportunities and threats (SWOT) analysis, 15
structured P2P systems, 438
Structured Query Language (SQL), 148
subjective metrics, 318
subnetting technique, 420
supervised analysis
  classification, 300–301
  exploratory analysis, 300
  regression, 301–302
  time series, 302
supervised learning, 465
switchover, 191
SWOT (strengths, weaknesses, opportunities and threats) analysis, 15
symmetric cryptosystems, 276
symmetric encryption, 278–280
symmetric trust, 280
synchronization, 165
synchronous communication, 81, 162–163
syntactical analysis, 491–492
syntactic interoperability, 150
synthesizer, 496
systems, 23
  architecting/architecture, 29–34
  architectural style, 74–75
  changeable, 27–29
  decision problems, 28
  design, 31
  dynamics via simulation modeling, 27
  elements, 23
  engineering, 26–27
  influence factors, 313–315
  integrity, 100
  properties, 25
  science, 24–26
  specification, 257
  thinking, 23–24
  types, 26
systems architecting/architecture, 29–32
  functional requirements, 32–33
  nonfunctional requirements, 33–34
Systems-on-a-Chip (SoC), 252

T
tangible user interface (TUI), 245
task context, 315
Task Management Layer (TML), 241
task match, 306
TCP (transmission control protocol), 97
Technical Architecture Framework for Information Management (TAFIM), xx
technical controls, 262
technical/information context, 315–316
technology implementation styles, 256
technology trends, 10–11
telerobotics, 327
temporal context, 315
temporal data, 231–233
terminal mobility, 216
Tesco Lotus company, 42
testing phase, 462
test statistic, 285
text analytics, 288–289
text annotation, 288
text mining, 299
text normalization, 495
Text-to-Speech (TTS) module, 489, 495–496
The Open Group Architecture Framework (TOGAF), xx
threats, 261
three-dimensional (3-D) approaches, 326
3D interfaces, 326–327
threshold value method, 178

time-aware applications, 402
time-based context, 399
time dependent/independent communication, 81
time-of-flight principle, 326
time redundancy, 109
time series regression, 289–291
time-to-market, 253
TinyOS, 432
top-down architecture style, 77
TPM (transaction processing monitor), 163–164
traditional statistical analysis, 295
training phase, 462
trajectory
  mapping, 410, 411
  reconstruction, 409–410
  visualization, 410–411
transaction processing monitor (TPM), 163–164
transaction time, 231
transformation layer, 332
transmission control protocol (TCP), 97
transparency, 84–85
trends
  business model, 9–10
  organizational, 8–9
  social, 6–7
  technology, 10–11
trigger, 487
true outage, 189–190
TTS (Text-to-Speech) module, 489, 495–496
two-dimensional (2-D) approaches, 326
twofish, block ciphers, 280

U
ubiquitous computing (ubicom), 237, 327
  core properties, 241–243
  fundamentals, 240
ubiquitous technology, 237
ubiquity, 19
  computing core properties, 241–243
  overview, 237–239
  ubicom fundamentals, 240
UDDI (Universal Description, Discovery, and Integration) specification, 343–344
UIDAI (Unique Identification Authority of India), 390
UML (Unified Modeling Language), 128, 130
uncertainty, 5
uncertainty management, 225
undirected models, 296
Unified Modeling Language (UML), 128, 130
Uniform Resource Locators (URLs), 420

Unique Identification Authority of India (UIDAI), 390
unit selection synthesis, 496
univariate analysis, 300
Universal Description, Discovery, and Integration (UDDI) specification, 343–344
unstructured P2P, 438
unsupervised analysis
  association rules, 302
  clustering, 302
  description/visualization, 303
unsupervised learning, 465
unsupervised models, 296
URLs (Uniform Resource Locator), 420
usability, 20
  concept, 306–307
  interactive systems, 317–318
  metrics, 318
  overview, 305–306
  principles, 307
  specifications, 307–309
use cases model system, 33
user-centered approach, 305
user-defined time, 231
user-friendly technology, 11
user state-based context, 399

V
value proposition, 42
verification algorithm, 282
vertical mobility, 217
vertical scaling, 361
viewpoints and views, 54–55
virtual architectures, 78–79
virtual environment (VE), 327
virtualization, 180–181, 182
  advantages, 366–367
  benefits, 367–368
  challenges, 368
  layering and, 181–183
  network, 186–187
  OS, 185
  platform, 185–186
  storage, 186
  VM, 183–185
virtualized environment, characteristics, 365–368
virtual local area networks (VLANs), 187
virtual machine (VM), 181, 183–185
virtual machine monitor (VMM), 181, 184–185

virtual reality (VR), 327
virtual sensors, 431
VLANs (virtual local area networks), 187
VM (virtual machine), 181, 183–185
VMM (virtual machine monitor), 181, 184–185
voice interfaces, 320–322
Voice User Interfaces (VUIs), 320–322
volatility, 4
VR (virtual reality), 327
VUCA (Volatility, Uncertainty, Complexity and Ambiguity) ecosystem, 4–6
vulnerabilities, 262

W
web-based IT landscape, 190, 190
web-based system, 122
web mining, 299
web services, 37, 341–344, 342
Web Services Description Language (WSDL), 342
wide area network (WAN), 84
widened address space with IPv6, 420–422, 421
WiMAX (Worldwide Interoperability for Microwave Access), 426
Wireless Application Protocol (WAP), 425
Wireless Local Area Networks (WLANs), 426
Wireless Metropolitan Area Network (WMAN), 426
wireless networks, characteristics, 424
Wireless Personal Area Network (WPAN), 426
Wireless Sensor Networks (WSN), 416–417, 431–432
  characteristics, 434–435
  hardware platforms, 433
  OSs, 434
Wireless Wide Area Network (WWAN), 425
workflow priority optimization, 176
workloads, 172

Z
Zachman, J., xviii, 34