Cloud Transformation: How the Public Cloud is changing businesses
ISBN 9783658388225, 9783658388232

English, 279 pages, 2023


Table of contents :
Foreword
Preface
Contents
List of Figures
List of Tables
1: Do You Remember Daimler, RTL and Siemens?
1.1 Introduction
1.2 The Innovator’s Dilemma Can Affect Any Company
1.3 Disruptive Technology – Public Cloud
1.4 The Aim of This Book: Surfboard Instead of a Lifebelt
1.4.1 Methodological Approach
1.4.2 Guide through the Book
References
2: Everything Becomes Digital
2.1 Technical Digitization
2.2 The Consequences of Digitisation: Decentralisation, Communication, Convergence
2.2.1 Digitization as a Condition of Decentralized Working
2.2.2 Digitisation as a Social (Communication) Phenomenon
2.2.3 Convergence of the Media
2.3 The Complete Digitalisation of Value Creation
2.4 The Platform Economy – Data Is the New Oil
2.5 Conditions for the Successful Operation of Digital Platforms
2.5.1 Big Data
2.5.2 Data Leveraging
2.5.3 Winner Takes All
2.6 Success Factors for the Use of Digital Platforms
2.7 Conclusion
References
3: The Road to a Zero Marginal Cost Economy
3.1 Big Is Beautiful
3.1.1 Economies of Scale and Experience Curves
3.1.2 Marginal Cost Analysis
3.2 Zero Marginal Cost Business Models
3.2.1 Comparison of Zero Marginal Cost Business Models and Classical Business Models
3.2.2 The Model for Analysing Disruptive Market Changes Towards Zero Marginal Cost Business Models
3.3 When Is It Worthwhile to Start Using Digital Technologies?
3.4 Big Stays Beautiful
3.5 Artificial Intelligence for Editing
References
4: Cloud – The Automated IT Value Chain
4.1 It’s the Software
4.2 The Classic IT Process
4.2.1 Creating Software
4.2.2 Operating Software
4.2.3 Scaling Software
4.3 The Stack – IT and Its Value Chain
4.3.1 The Levels of the Stack
4.3.2 Variety of Components Creates Numerous Dependencies
4.4 The Cloud Transformation in IT
4.4.1 The Cloud as a Trend Term
4.4.2 What Is the Cloud?
4.4.3 The API as a Game-Changer
4.4.4 Not All Clouds Are Created Equal
4.5 The Cloud-Based IT Process
4.5.1 Creating Software
4.5.2 Operating Software
4.5.3 Scaling Software
4.6 Public Cloud vs. Private Cloud
4.7 Security in the Public Cloud
4.7.1 Fraud Groups and Examples of Threats
4.7.2 Countermeasures by Cloud Providers
4.7.3 Shared Responsibility Between Customer and Cloud Provider
4.8 Case Study: A Misunderstanding in the IT Purchasing Department of a Major Corporation
4.9 From Traditional IT to the Cloud – Explained on One Page and in One Picture
References
5: Cloud IT vs. Classic IT – Calculation for Controllers
5.1 A Practical Example: Outsourcing Invoice Management
5.2 Features of the Classic Application
5.2.1 Architecture and Fixed Operating Costs
5.2.2 Structure and Expansion of the Application
5.2.3 Average and Marginal Costs
5.3 The Cloud Transformation of the Application
5.4 Features of the Cloud-Based Application
5.4.1 Fixed Operating Costs and Total Costs
5.4.2 Structure and Expansion of the Application
5.4.3 Average and Total Costs
5.5 An Overview of the Advantages and Disadvantages of Transformation
5.5.1 Comparison of Financial Factors
5.5.2 Comparison of Functional Factors
5.6 Conclusion
References
6: Mastering Software as a Core Competence
6.1 Everything Becomes Software
6.2 Why Software Is Such a Challenge for Managers
6.3 Virtualization Layers
6.4 Sourcing Options
6.5 Software Architecture
6.5.1 Monolithic Architectures
6.5.2 Distributed Systems
6.5.3 Cloud-Native Architectures
6.5.4 Comparison of Monoliths and Cloud-Native Architectures
6.6 Process Flows
6.6.1 Agile from the Idea to the Development of the Code
6.6.2 Agile Software Development with Scrum
6.6.3 Automated Software Testing and Deployment with CI/CD
6.6.4 Covering the Entire Process with DevOps and Feature Teams
6.7 People and Organization
6.7.1 Employee Management
6.7.2 Corporate Culture
6.7.3 Employment Situation
6.8 Practical Example ING DiBa
6.9 Conclusion
References
7: Falling Transaction Costs and the New Network Economy
7.1 Transaction Costs Hold Traditional Value Chains Together
7.2 Excursus: Internal Transaction Costs Slow Down Economies of Scale in Production
7.3 Integrators, Orchestrators, and Layer Players – How Transaction Costs Influence Economic Structures
7.4 Fast Communication and Simple Automation – The Transaction Cost Levers of Digitization
7.4.1 Decreasing Communication Costs
7.4.2 Automation of Business Processes
7.5 New Make-Or-Buy Decisions Through Digitalisation
7.6 The Impact of the Cloud Revolution on the Transaction Costs of Software Use
7.7 Practical Example: How Software Purchasing Via the Cloud Reduces Transaction Costs
7.7.1 The Purchasing Process of a Classic CRM System in Your Own Data Center
7.7.2 Use of Software Services (SaaS) for CRM
7.8 Towards the Network Economy
7.9 Conclusion
References
8: The Cloud Transformation
8.1 Scientific Models for Digital Transformation
8.1.1 McKinsey’s Three Horizons of Growth
8.1.2 Zone to Win by Geoffrey Moore
8.2 The Three Levels of Cloud Transformation
8.3 Transforming the Infrastructure Model
8.3.1 The Typical Migration Scenarios for Applications
8.3.2 Plan – Analyze Applications and Obtain Commitments
8.3.3 Building – Preparing the New Landscape
8.3.4 Performing Migrations
8.3.5 Further Development – Keeping the Landscape Up to Date and Safeguarding It
8.3.6 Summary – Cloud and Modern Software Approaches
8.4 Changing the Operating Model
8.4.1 Focus on Business-Relevant Applications
8.4.2 Resilient Handling of Errors
8.4.3 Customer Focus and Data Analysis
8.4.4 Machine Learning and Artificial Intelligence
8.4.5 People and Culture
8.5 Changing the Business Model
8.5.1 Transformation as the First Management Task
8.5.2 Zone Offense – Acting as a Disruptor
8.5.3 Zone Defense – Countering Disruption
8.6 The Impact of Cloud Transformation on Potential Employees
8.6.1 Developers – The New Paradise
8.6.2 Cloud Architects – The Scarcest Resource on the Market
8.6.3 Traditional IT Specialists – Real Threats and Great Opportunities
8.6.4 Middle Management – Pressure and Fear of Loss
8.6.5 Specialist Departments – Freedom, Chaos and Responsibility
8.6.6 Top Management – Financial Ratios, Threats of Disruption and New Ways of Doing Things
8.7 A Successful Cloud Transformation – Explained in One Picture
References
9: Cloud Transformation – How the Public Cloud Is Changing Businesses
9.1 Businesses Fail – Even When Managers Seem to Do Everything Right
9.2 Digitalisation as a Defining Trend in the Economy
9.3 Marginal Costs Determine Competitiveness
9.4 Cloud as a Key Technology of Digitization
9.5 Classic Applications Can Be Migrated to Cloud Technologies
9.6 Becoming Competitive for the Digital World with Software and Cloud Skills
9.7 Sinking Transaction Costs Lead to More Outsourcing and Change the Economy
9.8 Cloud Transformation Affects All Companies with Digital Value Creation – On Three Different Levels
9.9 Conclusion
Index


Roland Frank · Gregor Schumacher · Andreas Tamm

Cloud Transformation: How the Public Cloud is changing businesses

Roland Frank
Mediadesign Hochschule München
München, Germany

Gregor Schumacher
Berlin, Germany

Andreas Tamm
München, Bayern, Germany

ISBN 978-3-658-38822-5    ISBN 978-3-658-38823-2 (eBook)
https://doi.org/10.1007/978-3-658-38823-2

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer Gabler imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany

Foreword

Anyone who thinks that Cloud Transformation: How the Public Cloud is Changing Business (Cloud-Transformation: Wie die Public Cloud Unternehmen verändert) is just another attempt to get CIOs to use cloud infrastructure services instead of their own data centers is completely wrong. The three authors combine a rare blend of economics, corporate strategy experience, and modern IT architecture in one book. It is therefore particularly aimed at interdisciplinary readers, both in established companies and in start-ups.

At the beginning of the cloud computing era, many IT leaders, particularly in Europe, thought of public clouds as a continuous evolution of traditional IT operations and outsourcing offerings. However, the “Infrastructure as Code” approach, which offers fully automated procurement of infrastructure via an API in a matter of minutes or even seconds, showed how radical and innovative public clouds were. Although their IT infrastructure was comparable to traditional IT services at the beginning, the business model is the real disruptive innovation. Thus, “cloud transformation” has become the key enabler of innovative digital business models in most industries.

The current work by Frank, Schumacher, and Tamm helps corporate strategists to navigate the standard American literature of the tech industry and to find the sensible direction for their digital strategy in the context of their industry. This is not just about new digital products, which – detached from the core business – will hardly convince any conservative investor or owner in Europe. In established companies, the digitization of existing products or services is the key to success. In the process, existing products can take on a different “digital intensity.” Especially when an industry is seriously transformed by offerings of a very high digital intensity, as the retail industry has experienced with online retailers, companies need to radically question their value creation.
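The “Infrastructure as Code” approach mentioned above can be made concrete with a minimal sketch: instead of ordering hardware, a developer describes the desired resources in code and submits that description to a provisioner's API. The field names and the notion of a virtual-machine request document below are illustrative assumptions, not the API of any specific cloud provider.

```python
import json

def declare_server(name: str, cpu_cores: int, memory_gb: int, region: str) -> str:
    """Build a declarative infrastructure request as a JSON document.

    In an Infrastructure-as-Code workflow, a document like this, rather than
    a hardware order, is sent to a cloud provider's provisioning API, and the
    machine is running within minutes or seconds. All field names here are
    illustrative.
    """
    spec = {
        "resource": "virtual_machine",
        "name": name,
        "cpu_cores": cpu_cores,
        "memory_gb": memory_gb,
        "region": region,
    }
    return json.dumps(spec, sort_keys=True)

# The declaration is ordinary text: it can be versioned, reviewed, and
# replayed like any other source code, which is what makes the approach
# so radical compared with classic procurement.
request_body = declare_server("web-01", cpu_cores=4, memory_gb=16, region="eu-central")
print(request_body)
```

Real-world tooling (for example, provider APIs or configuration languages built on this idea) follows the same pattern; the point of the sketch is only that infrastructure becomes a reviewable text artifact rather than a purchase order.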
If necessary, an established company must cannibalize itself to avoid disruption by a newcomer. That is why, for example, competing automotive giants like Daimler and BMW are jointly driving forward digital mobility solutions. Even if fewer cars are sold as a result, a new user group is addressed that prefers to rent vehicles on a short-term basis rather than buy them. Books like Cloud Transformation, which bridge technology and business, are essential for realigning any industry. The fundamentals of digital business models are the same everywhere, whether the product is computer infrastructure or cars. The general availability of cloud-native technologies, with machine learning and IoT backends, is accelerating market transformation. This makes a digital strategy not just a corporate strategy but a survival strategy – even for a current market leader!

Dr. Ried

Dr. Ried is a principal analyst at the independent analyst firm Crisp Research and conducts market research around cloud computing and IoT. In addition to leading roles at software vendors, Dr. Ried was responsible for global market research and strategy consulting for cloud providers at Forrester Research for over seven years. Images: https://www.stefan-ried.de/#publicimages.

Preface

In 2010, the animated film Cloudy with a Chance of Meatballs was released in its German version: Wolkig mit Aussicht auf Fleischbällchen. The film is about an ingenious inventor whose ideas have so far gone unappreciated, be it spray-on shoes or devices that can read the minds of monkeys. Only when he invents a machine that turns water into food – a machine that suddenly takes on a life of its own and eventually disappears into the clouds (the “Cloud”) – does he gain the public’s attention. So that everyone can reap the benefits of the “Cloud,” every resident of the city gets an account with which they can trigger food orders. Pizzas, hamburgers, spaghetti with tomato sauce, and meatballs are now available to everyone, at any time.

It is a similar story with the public cloud: software and IT infrastructure from the public cloud are an incredible gain because they are available to everyone at all times – and that applies both to private users of these services and to companies. Dropbox, Lieferheld, Spotify, and Co. have become an indispensable part of our everyday lives. All of these cloud applications have integrated themselves inconspicuously but firmly into people’s lives. The same applies to numerous enterprise applications available via the public cloud: the gains in efficiency and convenience are so great that many companies that have already dared to move to the public cloud can no longer imagine going back.

Surprisingly, however, these advantages have not yet led companies to make intensive use of the cloud. According to a survey by Bitkom, 73% of companies in Germany already use cloud computing, but so far that use is generally limited to data storage (61%), e-mail applications (48%), or office applications (34%). The real benefits that can be realized with the cloud still play only a minor role: for example, agile software development, digital business models, new work processes, and flexible cost structures.
So it is no wonder that, for many managers, the public cloud is still more of an obligation than a passion in their daily work. In many cases, individual technology-savvy employees have to pull the rest of the company along behind them. Cloud Transformation: How the Public Cloud is Changing Businesses aims to help cloud experts better communicate the benefits of the new technology – and help managers get to grips with the new issues.

Expecting a sales employee to understand and judge all legal contract details would be asking too much. Similarly, it makes no sense to have the accountant create the UX design of their new analytics software themselves. The specialization benefits in these sub-areas are so great that implementing the activities behind them professionally is a full-time job. The starting position for managers, however, is different: while they cannot be expected to develop software themselves or to evaluate their developers’ code, they should know and be able to correctly classify the major economic and business contexts of the IT architectures in use. These contexts drive decisions that strongly influence the competitiveness of their digital business models – and ultimately the continuity of all other tasks and activities in a company depends on this. To take on a responsibility of this scope, fundamental knowledge in the areas of digitalization of business models, cloud computing, and software and organizational development will increasingly be expected in the future. If managers succeed in gaining this overview, they will be able to mediate between the different departments as contact persons – and, last but not least, make the right decisions.

This book is dedicated to these four areas – digital business models, cloud computing, software, and organizational development. It provides managers with a tool that enables them to (re)start the dialog with the business departments and drive the cloud transformation within the company. The book is written in such a way that IT laymen can follow the explanations – while IT-savvy readers can also take away new ideas and concepts for managing and realigning companies in their daily work.

Several people were involved in the creation of this book, and we would like to take this opportunity to express our sincere thanks to them:

• Many thanks to Ms. Wiegmann, our editor at Springer, who intensively accompanied the creation of this book from the first idea to publication.
• In addition, a big thank you to the employees and managers of Arvato Systems, who provided the template for this book with the cloud transformation in their company.
• We would like to thank our editor and proofreader, Dolores Omann and Jan-Erik Strasser, on whom we put a lot of pressure due to our tight deadlines.

Finally, we express our heartfelt gratitude to the people in our immediate circle who have supported us in the last few months; the book would not have been possible without their cooperation and encouragement. Fabiola, Valentin, Daniela, Tina, and Jutta: thank you very much. This book is dedicated to you.

Munich, Germany
Berlin, Germany
July 2019

Roland Frank Gregor Schumacher Andreas Tamm


List of Figures

Fig. 1.1  Comparison of market leaders in the computer value chain – mainframe vs. PC  2
Fig. 1.2  Innovator’s dilemma according to Clayton Christensen  4
Fig. 1.3  Transformative effect of cloud technologies  6
Fig. 1.4  Objective of the book  8
Fig. 1.5  Procedure of the book  9
Fig. 1.6  Gartner Hype Cycle 2018 (Panetta 2018)  11
Fig. 2.1  The technical process of digitization  16
Fig. 2.2  Effects of digitalisation  18
Fig. 2.3  Digitization is changing economic paradigms  22
Fig. 2.4  Data collection leads to knowledge-based competitive advantages  23
Fig. 2.5  The value-added relationship between customer and supplier  25
Fig. 2.6  Digital platforms change the value creation relationships between suppliers and customers  26
Fig. 2.7  The economic power of digital platforms  26
Fig. 2.8  Projected increase in global data volume (Reinsel et al. 2018)  27
Fig. 2.9  The loyalty loop at Tesla  29
Fig. 2.10  Netflix doubles the number of Emmys won (Loesche 2017)  30
Fig. 2.11  The network effect  31
Fig. 2.12  Two-sided markets  32
Fig. 2.13  The success factors for digital platforms  32
Fig. 2.14  Portfolio theory according to Markowitz  36
Fig. 3.1  Boston Consulting Group portfolio analysis  49
Fig. 3.2  Physical vs. digital products – average costs and marginal costs in comparison (Clement and Schreiber 2016)  51
Fig. 3.3  Basic elements of the value chain  58
Fig. 3.4  Business model analysis according to Gassmann  61
Fig. 3.5  Launch of zero marginal cost business models  63
Fig. 3.6  Digital markets – market shares and number of employees  65
Fig. 3.7  Actual distribution of market shares in digital markets  67

Fig. 3.8  Book value chain  68
Fig. 3.9  Artificial neural networks  69
Fig. 3.10  Demo showcase on proofreading AI. (Source: Arvato Systems S4M GmbH)  71
Fig. 4.1  The cycle of a software project  77
Fig. 4.2  Network of known and unknown dependencies in the IT value chain  79
Fig. 4.3  Irregular load curve of an application  81
Fig. 4.4  The IT value chain (stack)  83
Fig. 4.5  Interest in the search term “Cloud” over time since 2004 according to Google Trends  86
Fig. 4.6  Schematic flow of the use of an application programming interface (API) using the example of OpenWeatherMap.org  89
Fig. 4.7  Example of an API call of a database  92
Fig. 4.8  IT value creation becomes a network  92
Fig. 4.9  Different levels of abstraction of cloud-based IT value creation  93
Fig. 4.10  Cloud enables simple focus on the core business  95
Fig. 4.11  Creating software using cloud services  97
Fig. 4.12  Operating software in the cloud  98
Fig. 4.13  Scaling software in the cloud  99
Fig. 4.14  Basic scope of services of the private cloud  100
Fig. 4.15  Private cloud and public cloud in comparison  101
Fig. 4.16  Differentiation of compliance and security  102
Fig. 4.17  Levels of technical safety  103
Fig. 4.18  Areas of responsibility for security when outsourcing to the cloud  106
Fig. 4.19  Infrastructure components are no longer recognizable for platform services  107
Fig. 4.20  From traditional IT to cloud-based IT value creation  109
Fig. 5.1  Business model of a service provider for “process management for purchase invoices”  116
Fig. 5.2  Challenges of global outsourcing of purchasing invoice management  117
Fig. 5.3  Logical architecture of the application  118
Fig. 5.4  Number of transactions and monthly costs for IT  120
Fig. 5.5  Average cost per month and capacity limits of the old application  121
Fig. 5.6  Marginal costs in the old application  122
Fig. 5.7  Migration concept for the outsourcing application  123
Fig. 5.8  Monthly costs of the new application  125
Fig. 5.9  Monthly total operating costs including projects  127
Fig. 5.10  Average costs as a function of transactions  128
Fig. 5.11  Summed costs over the entire application life cycle  129
Fig. 5.12  Relevant price and quality differences in one image  131
Fig. 6.1  The software process accesses IT value creation  134
Fig. 6.2  Monolithic software contains many dependencies  136

List of Figures

Fig. 6.3 Monolithic software – the Hydra in the enterprise 137
Fig. 6.4 Dependencies within the software slow down the organization 137
Fig. 6.5 Factors influencing the performance of software 138
Fig. 6.6 Decoupling enables scaling effects 139
Fig. 6.7 Virtualization of IT value creation using the example of a container service 139
Fig. 6.8 Example calculation of a website with fluctuating usage pattern 140
Fig. 6.9 Make-or-buy question in overview 141
Fig. 6.10 The most important sourcing options 142
Fig. 6.11 Factors in the individual weighing of sourcing options 143
Fig. 6.12 Client-server architecture – Downloading a photo from the FTP server 144
Fig. 6.13 Multi-layer architecture 145
Fig. 6.14 Service Oriented Architecture (SOA) 145
Fig. 6.15 Microservices architecture 146
Fig. 6.16 Challenges with distributed systems 146
Fig. 6.17 Reduction of dependencies and complexity 149
Fig. 6.18 From monolith to microservices – economic impact 150
Fig. 6.19 Process flows from the idea to operation 151
Fig. 6.20 Agile as incremental and collaborative software development 152
Fig. 6.21 Agile approach is particularly well suited for non-material goods 153
Fig. 6.22 Scrum as a successful model at the beginning of software value creation 155
Fig. 6.23 Classical process of software deployment with high communication and administration efforts 155
Fig. 6.24 DevOps as an agile form of IT delivery 157
Fig. 6.25 DevOps – the four characterizing terms 158
Fig. 6.26 Agile working with squads, tribes and chapters 164
Fig. 7.1 The types of transaction costs – example engine production 171
Fig. 7.2 High transaction costs hold the value chain together 172
Fig. 7.3 Comparison of additional costs incurred in the context of the product 173
Fig. 7.4 Internal transaction costs increase with rising output volume 173
Fig. 7.5 Falling communication costs according to Philip Evans (Evans 2013) 179
Fig. 7.6 Reduced transaction costs through digitization – example of share trading 180
Fig. 7.7 Core business between digitization and automation 182
Fig. 7.8 Exploiting the benefits of the cloud with low transaction costs 185
Fig. 7.9 Core competencies and core assets 187
Fig. 7.10 The digital make-or-buy decision: Exemplary positioning for a company with physical products and increasing relevance of digital business models 187
Fig. 7.11 High internal transaction costs in traditional IT 190
Fig. 7.12 Low external transaction costs in the use of software services 192


Fig. 7.13 On the way to the network economy: exemplary representation of the dissolution of classic value chains through falling transaction costs in the digital economy 195
Fig. 7.14 From traditional to network value creation 197
Fig. 7.15 Transaction costs falling due to cloud technologies are changing the corporate world 198
Fig. 8.1 Three Horizons of Growth according to McKinsey and Baghai, Coley & White 204
Fig. 8.2 Zone to win according to Geoffrey Moore 206
Fig. 8.3 Levels of disruption according to Geoffrey Moore 207
Fig. 8.4 Preparation phase of the cloud transformation 208
Fig. 8.5 Cloud strategy team: Organizational structure and expected scope of change per level 210
Fig. 8.6 The four steps of cloud transformation in the infrastructure model 210
Fig. 8.7 The 5Rs according to Gartner: Application migration scenarios 211
Fig. 8.8 Analysis of the application landscape before a cloud migration to Microsoft 212
Fig. 8.9 Evaluation and prioritization of the application landscape according to Briggs/Kassner 214
Fig. 8.10 Required skills per migration type 214
Fig. 8.11 Framework for Governance, Risk and Compliance according to Michael South (AWS), simplified and translated 216
Fig. 8.12 Procedure for application migration to Microsoft 218
Fig. 8.13 Implementation of the cloud transformation in the infrastructure model 219
Fig. 8.14 Transforming the infrastructure model – the basic steps of cloud transformation 220
Fig. 8.15 Using the cloud to improve business models 221
Fig. 8.16 Migration type and opportunities for the business model 222
Fig. 8.17 Migration effort and benefit factors 223
Fig. 8.18 Costs versus benefits of a cloud migration 225
Fig. 8.19 Consistent focus on the customer 225
Fig. 8.20 Levels of enterprise applications according to Ravi Kalakota 227
Fig. 8.21 Feature team for "Detecting hate messages" – exemplary composition of a team 228
Fig. 8.22 Modernizing the operating model with cloud technology requires real changes in the company 232
Fig. 8.23 Transformation as a management task according to Geoffrey Moore 233
Fig. 8.24 Zone Offense – Acting as a Disruptor according to Geoffrey Moore 234
Fig. 8.25 Zone Defense – The Disruption Encounter according to Geoffrey Moore 235
Fig. 8.26 Public cloud transformation – summary of the most important points per level 241
Fig. 9.1 Cloud transformation – How the public cloud is changing companies 248


Fig. 9.2 Innovator's Dilemma according to Clayton Christensen using the example of "Mainframe Computer versus Personal Computer" 249
Fig. 9.3 Cloud as digitization of IT 249
Fig. 9.4 Cloud technologies as a multiple disruptive factor 250
Fig. 9.5 Digitalisation is changing economic paradigms 250
Fig. 9.6 Success factors in the digital world 251
Fig. 9.7 The emergence of zero marginal cost business models with simultaneous digitization of production, product and sales 252
Fig. 9.8 Digitization of software processes through the cloud 253
Fig. 9.9 Concrete economic effects of the cloud transformation 254
Fig. 9.10 Conversion of a simple application to Cloud Native 255
Fig. 9.11 Cloud transformation of a business model-relevant application 256
Fig. 9.12 Software value creation as a factor in digital competition 256
Fig. 9.13 Relevant factors influencing the performance of software value creation 258
Fig. 9.14 Transaction costs are the glue that holds companies together 258
Fig. 9.15 Falling transaction costs are changing the corporate world 259
Fig. 9.16 Levels of disruption according to Geoffrey Moore 260
Fig. 9.17 Change infrastructure model 260
Fig. 9.18 Modernize operating model 261
Fig. 9.19 Transform total portfolio 262

List of Tables

Table 5.1 Calculation of the old application 119
Table 5.2 Changed cost structures in the new application 124
Table 5.3 Overview of migration efforts 125
Table 5.4 Calculation of the new application for three million transactions 126
Table 5.5 Comparison of the most important cost components 130

1 Do You Remember Daimler, RTL and Siemens?

Abstract

Infrastructural revolutions usually have a major impact on companies: The steam engine, electric power, and the personal computer radically changed the way business was done. Today, companies are facing such a revolution again with cloud technology. The difference from previous upheavals is the speed with which the changes can and must be adopted today. And this affects not only software and IT companies but almost all companies and industries. This puts those involved in a tricky situation: On the one hand, many company leaders have understood that they have to deal with the topic of cloud transformation – and do so actively. At the same time, many managers do not know how to approach this process. The goal of this book is to provide managers with a guide to cloud transformation. The first chapter explains the topic of corporate upheaval through so-called disruptions and puts it in a historical context. Additionally, it provides an overview of the most important topics of the book.

1.1 Introduction

Do you still remember Daimler, RTL, and Siemens? At first glance, this question makes little sense. After all, all three companies are doing well – at least for the moment. But in just a few years, this question could well be justified. And if not for these three, then for many other companies that have not dared to change. In 2018, General Electric, the last remaining founding member, had to leave the American Dow Jones stock index. Since 1976, the composition of the Dow Jones has changed almost

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_1


completely.1 The most expensive companies in the world today operate digital business models: Apple, Amazon, Alphabet (Steinharter 2018). Companies like Kodak, Motorola, Netscape, and Nokia, on the other hand, are cautionary examples: They show how quickly companies that long held large market shares in innovative industries can disappear from the market. Now, a critical reader will object that serious management errors were committed in the companies mentioned and that their exit from the market was therefore unavoidable. That is correct. But it is also true that these companies were run by experienced managers who brought all their knowledge to bear to keep their organizations on track for success. The starting situation for many of these companies was quite comfortable: they knew their markets, they knew their customers, and they knew which products customers would want in the future. This accumulated knowledge was precisely the reason why they ultimately vanished. Sounds paradoxical? It is.

1.2 The Innovator's Dilemma Can Affect Any Company

The American economist Clayton Christensen described this phenomenon as early as 1997 in his book "The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail" (Christensen 1997). Christensen calls the innovator's dilemma the trap into which companies fall when they respond too precisely to customer wishes – because this very approach makes them victims of progress. IBM is a prime example of this dilemma: For more than two decades, IBM dominated the market for large-scale computing systems (the so-called mainframes). The company covered the entire value chain, from the manufacture of the processors through design and production to the creation of the software and the distribution of the mainframe systems (see Fig. 1.1). During IBM's period of dominance in mainframe computing, demand for data storage continued to grow. As an example, Christensen cites the IBM 305 RAMAC mainframe computer,

Fig. 1.1 Comparison of market leaders in the computer value chain – mainframe vs. PC

1 Exxon Mobil is the only company that has been around longer. The oil company has been listed in the Dow Jones since 1928.


which had 4.38 Mbytes of memory and cost about $200,000 (USD) in 1956 (THOCP 2011), equivalent to about $1.9 million today.2 IBM planned to produce 1800 units of this model. Highly skilled staff were needed to set up, run, and operate such a computer, and additional investments in premises, cooling systems, and power supply were required. Few firms could afford this, and vendor companies – such as IBM – were able to generate large, low-risk revenues and high profits with their business model.

So what sealed IBM's demise in the computing business? The answer: the appearance of a disruptive product. In "The Innovator's Dilemma", Clayton Christensen distinguishes between two product categories: sustaining technologies and disruptive technologies. The difference between the two is that a disruptive technology is initially ridiculed when it first appears on the market, because market participants – and above all potential customers – underestimate its market-changing power at the beginning. However, the assessment of customers changes over time. With increasing success, disruptive technologies change the market logic of entire industries (Fleig 2017). Players leave the market, new players enter. Production, distribution, and the product itself change. In the end, everything in the affected industry has changed.

The process in which disruptive products replace the prevailing (sustaining) products always follows the same pattern: Initially, the new (disruptive) products are hardly marketable. They are discovered by so-called "early adopters", i.e. customers who are enthusiastic about new products and technologies or who use them for specific application areas. At this early stage, the performance and ease of use of the new product are still far below those of the prevailing product. In this so-called dilemma zone (see Fig. 1.2), it is simply not rational for the previously successful company to enter such a market (Harrison 2018). This is the reason why established companies miss their opportunity.

This is exactly what happened to IBM when personal computers (PCs) appeared on the market. The technology- and market-changing power of a computer that fits on a desk was simply underestimated. IBM was generous enough (or, from today's perspective, insane enough) to give the marketing rights for the freshly developed PC operating system MS-DOS to a fledgling startup. For $75,000 this startup had previously bought the QDOS operating system from Seattle Computer Products and renamed it MS-DOS (Borchers 2011). The owner of this company was Bill Gates, and the company is called Microsoft. Largely because of this mistake by IBM, Bill Gates went on to become one of the richest people on earth.

The first PCs still had to be soldered together by the customers themselves. Steve Jobs and Steve Wozniak – the two founders of Apple – initially refused to supply ready-assembled PCs to stores (Vollmer 2018). From their point of view, pre-assembled computers violated the principle of the product: They wanted to offer a tech-savvy fan community a new toy to play with.

2 The calculation was made with the help of the inflation calculator fxtop.com.
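The inflation adjustment in footnote 2 can be sanity-checked with a few lines of Python. The ~9.5× price factor comes from the figures in the text; the derived average rate and the 63-year span (1956 to roughly 2019) are our own illustrative assumptions, not quoted statistics.

```python
def implied_annual_rate(price_then: float, price_now: float, years: int) -> float:
    """Average annual inflation rate implied by two price levels."""
    return (price_now / price_then) ** (1 / years) - 1

# IBM 305 RAMAC: ~$200,000 in 1956, ~$1.9 million in today's money (per the text)
rate = implied_annual_rate(200_000, 1_900_000, 2019 - 1956)
print(f"implied average inflation: {rate:.2%}")  # about 3.6% per year
```

An average of roughly 3.6% per year over six decades is in the plausible range for US consumer-price inflation, so the footnote's adjustment is internally consistent.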


Fig. 1.2 Innovator's dilemma according to Clayton Christensen

After discussions with a local dealer, Apple finally offered the "Apple I" in 1976 for 666 USD. The following year, the Apple II followed at a price of 1300 USD (equivalent to about 2600 USD in today's purchasing power). The computer was inexpensive, pre-assembled, and much easier to use. The Apple II cost only about one to two percent of the price of a mainframe. Margins were only 34% in the PC business compared to 56% in the mainframe business (Christensen 1997). As early as 1979, Apple sold 35,000 units of the new model, and just a few years later, millions of PCs populated users' offices and desks around the world. According to research by market research firm Gartner, there were nearly 1.5 billion installed PCs worldwide in 2016 (Gartner 2016). At the same time, the sales model changed from B2B to B2C, investments in a new type of marketing and sales became necessary, and customer-specific consulting became superfluous. Virtually overnight, IBM lost its business model. Why should a company still afford large-scale computer systems when data processing could just as easily take place in a decentralized fashion on desktop PCs?

IBM has long since emerged from the deep valley it had to walk through after partially losing its business model. In addition to the traditional business with large-scale computing systems, consulting and services became increasingly important. IBM said goodbye to the unprofitable PC business in 2004 and sold the division to Lenovo (Windeck 2014). At the same time, IBM became one of the largest digital consulting firms and one of the most important providers of external data centers in the world. The company did not succeed because it stuck to sustaining technologies, but because it reinvented itself


and its products. However, the danger of misjudging developments is not banished even once a company has extricated itself from the innovator's dilemma. Particularly in the area of data center services, IBM is once again caught up in an innovator's dilemma thanks to disruptive cloud technologies and is trying to escape it by buying Red Hat (Grüner 2018).

The product history of recent decades is full of examples of how disruptive technologies can replace and supersede established technologies in markets. When the first mobile phones came onto the market in the early 1980s, they were ridiculed around the world. The old (sustaining) technology "telephone" seemed to suffice for daily needs. After all, if the user wanted to make a call while on the move, there were telephone booths. In Germany, an anti-cell-phone sentiment virtually broke out; users of mobile communication devices were vilified for years as "yuppies" (Hackmann and Bremmer 2012). Young companies at the time, such as Nokia, quickly succeeded in replacing the bulky devices of the early days with much smaller mobile phones that were suitable for the mass market. In 1994, just 4.6% of Germans had a mobile phone contract. Ten years later, the share had risen to 86.4% (Bundesnetzagentur 2018). The irony of the story is that Nokia itself became a victim of the innovator's dilemma a few years later. With Apple's iPhone, a new product category appeared on the consumer market in 2007 that revolutionized the way mobile devices were used. Today, smartphones from Apple, Samsung, and Huawei dominate the mobile markets, with 5.8 billion mobile phones forecast worldwide by 2025 (GSMA 2019). Do you still remember Nokia?
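The margin and price figures quoted above make the economics of IBM's hesitation tangible as simple arithmetic. The prices and margins below come from the text; the resulting unit comparison is our own illustrative calculation, not Christensen's.

```python
def gross_profit_per_unit(price: float, margin: float) -> float:
    """Gross profit contributed by a single unit sold."""
    return price * margin

mainframe = gross_profit_per_unit(200_000, 0.56)  # mainframe business, 56% margin
pc = gross_profit_per_unit(1_300, 0.34)           # early PC business, 34% margin

# How many PCs would have to be sold to replace one mainframe's gross profit?
units = mainframe / pc
print(round(units))  # roughly 250
```

Seen through this lens, ignoring the PC looked perfectly rational for an incumbent: one mainframe contributed as much gross profit as roughly 250 PCs. That is exactly the trap Christensen describes.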

1.3 Disruptive Technology – Public Cloud

Numerous established companies are currently falling into the trap of the innovator's dilemma. This time, the disruptive – and thus initially easy-to-underestimate – technology is called the "cloud". This refers to a technology that enables the decentralized, automated provision of computing power, storage, and other IT components. As in the scenarios described above, cloud technology has the disruptive power to fundamentally change markets and business models. Cloud approaches – similar to other infrastructural innovations of the past centuries – will revolutionize business due to their ease of use. To give away the thesis of this book up front: The cloud – if used appropriately – is the entry point for companies into the digital age. And this applies at all levels: from the company's fundamental business model, to the manufacture and distribution of the product, to the internal collaboration of employees (see Fig. 1.3).

The disruptive possibilities of cloud technology were recognized as early as the 1950s by Herb Grosch, an employee of IBM (Hühn 2018). He dreamed of no longer keeping computing power in one stationary location, but of handing it over to huge external computing facilities. Technically, the construction of a decentralized computing network was already possible when Herb Grosch had the idea. However, the necessary technical

Fig. 1.3 Transformative effect of cloud technologies

bandwidths were lacking to transmit the data at an acceptable speed via the telephone network and thus to use the distributed computing power efficiently. Only when broadband technology became widespread did it become possible to create a decentralized network in which every provider could feed computing power into the network and make it available for a fee (Lauchenauer 2016). This laid the foundation for the economic success of cloud technology.

An important driver behind this development was the company Salesforce, which succeeded in offering business software as packages on the net (so-called software-as-a-service). Services such as software for managing customer data (CRM) no longer had to be planned and ordered in a lengthy process but were available online with a few clicks – just as smoothly as customers use Spotify today. Since then, internet giants such as Amazon, Microsoft, and Google – the so-called hyperscalers – have been building huge data centers at various locations around the world. On sites the size of several football fields, thousands upon thousands of computers are networked together and connected to the grid. Similar to a power grid, the capacities of these data centers can be ramped up or down depending on the workload.

A current example of the application scenarios that cloud technology makes possible is provided by Alphabet, Google's parent company. In 2019, Alphabet entered the highly competitive market for game consoles with a streaming service called "Stadia". Stadia illustrates the disruptive potential of cloud technology: Users of this service no longer need to purchase a powerful gaming console – all they need is a fast internet connection, a monitor, and an input device. The computing power comes from the network. This allows gamers to access high-quality games with lavish graphics anywhere in the world. With just a few clicks, the service is ordered, and provisioning and billing are automated (Heuzeroth 2019).

Despite all these advantages, cloud technology has not yet fully taken hold in Europe. The recommendation by consultants to invest more resources in the implementation of cloud technology still triggers trepidation in boardrooms here (Holland 2016). On the one hand, managers intuitively sense that this trend will play an important role in the future. At the same time, they prefer to leave this field to IT managers and CIOs, technical directors (CTOs), or simply the "shadow IT". Shadow IT is the hidden IT organization that develops


in companies when the official IT department does not meet the business departments' needs for new functions and solutions. Users from specialist departments such as marketing, production, or logistics then use the cloud offerings available on the internet – usually bypassing internal regulations and information obligations – to solve their problems themselves (Manhart 2015). It is therefore no surprise that the potential of cloud technology has so far been lying fallow in many places in Europe. However, this is not a specifically European problem – on other continents, too, only a few companies have recognized the possibilities of the technology and pushed them forward with vigor. The pioneers in this field are the USA, and here in particular the large internet companies on the American West Coast such as Google, Amazon, and Salesforce. But China, too, is vigorously pushing the "cloudification" of domestic companies from the government side (see Chap. 2).

Metaphorically speaking, with cloud technology a wave of digitization is rolling towards companies, and like non-swimmers, they are waiting with their eyes wide open for the impact. They have understood that the wave is big and will carry them along, but they do not know how to deal with this fact. The same scenario can be observed with drowning victims: they intuitively behave the wrong way in this situation. Instead of reaching for the life preserver, they push it away, hold their breath, and stop swimming.
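The power-grid analogy used in this chapter – capacity that is ramped up or down with the workload – can be sketched as a toy cost model. The hourly price, instance capacity, and load curve below are invented numbers for illustration, not actual provider prices.

```python
HOURLY_RATE = 0.10  # assumed price per instance-hour (illustrative, not a real tariff)
CAPACITY = 100      # requests per hour one instance can serve (assumed)

def instances_needed(load: int) -> int:
    """Smallest number of instances that can serve the given hourly load."""
    return -(-load // CAPACITY)  # ceiling division

def fixed_cost(loads: list[int]) -> float:
    """Classic data center: provision for the peak, around the clock."""
    return instances_needed(max(loads)) * HOURLY_RATE * len(loads)

def elastic_cost(loads: list[int]) -> float:
    """Cloud: scale the fleet up and down with the hourly load."""
    return sum(instances_needed(load) * HOURLY_RATE for load in loads)

# One day with a quiet baseline and a pronounced six-hour peak
day = [50] * 18 + [900] * 6
print(f"fixed:   {fixed_cost(day):.2f} USD")   # pay for peak capacity all day
print(f"elastic: {elastic_cost(day):.2f} USD") # pay only for what is used
```

With these assumed numbers, the elastic variant costs only a third of peak-sized fixed provisioning for the same day of traffic – the economic core of the "power grid" comparison.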

1.4 The Aim of This Book: Surfboard Instead of a Lifebelt

This book is aimed at those who want to ride the wave of digitization. Therefore, everything in this book revolves around the opportunities of cloud technology. It holds the potential to adapt the product range to the conditions of digitization. It helps in thinking about the development of new digital business models in the shortest possible time. And it also answers the question of how to put them online – without major investment risks.

The book "Cloud Transformation – How the Public Cloud is Changing Businesses" helps companies help themselves. It enables those responsible to recognize the potential of cloud transformation and to use it for their own company. It describes the most important methods and tools of software development and analyzes their effect on transaction costs and marginal costs. Based on these findings, a concrete guide for cloud transformation is developed that helps corporate leaders initiate the right steps to digitize their business. In this way, the book provides the foundations for catching the approaching wave of digitization (see Fig. 1.4).

In this sense, this book does not function as a life preserver. Rather, its contents resemble a surfboard that helps companies ride the oncoming wave. Is surfing easy? No, it has to be learned. The best surfers in the world train hard and often. Only if a company is willing to engage in the process of cloud transformation and has the necessary enthusiasm and perseverance can the process ultimately be successful.


Fig. 1.4 Objective of the book

1.4.1 Methodological Approach

Consultants everywhere are currently promising the all-round digitization of companies in the shortest possible time – for example, in just 18 months. These promises leave IT experts and CIOs somewhat perplexed. How is such a concept supposed to work? Will all departments be turned upside down for a few months, and on the deadline date the board of directors will press the "all-digital" button?3

If you want your business to go digital in no time, try this: Just take out your credit card and visit the website of Azure or AWS, the cloud offerings from Microsoft and Amazon, respectively. After selecting the free trial month, you can configure and deploy your own server within minutes. You don't need any training to do this; just follow the instructions. With about 20 clicks you have digitized your IT processes – in 30 minutes.4

In many cases, the suggestions for digitization from consultants resemble the diet tips found in fitness magazines. There, suggestions are made ("The new pineapple diet") without developing a basic understanding of the mechanisms of human metabolism in those interested in dieting. This book does not provide simple recipes and does not prescribe which goals companies must achieve in which timeframe. Rather, it is about understanding the fundamental interrelationships of the digitalized economy. Or, to stay with the diet analogy: The reader should understand how the digital metabolism works and what the cloud means for the fitness of a company. First of all, companies have to accept digitization as a driving force for the development of their future business models. Then they have to understand the underlying mechanisms of their IT systems. From that point on, companies can successfully lose fat and build muscle.

3 On August 25, 1967, during a live broadcast from the Berlin Radio Exhibition, the then German Foreign Minister and later Chancellor Willy Brandt pressed a red button to start color television in Germany. However, the technicians responsible for the switchover were a bit overzealous, so the new hues were visible on the screens seconds before the button was pressed.
4 For instructions on how to perform a free cloud server installation in 30 minutes, see (Tamm and Frank 2019).

1.4  The Aim of This Book: Surfboard Instead of a Lifebelt

Therefore, this book starts with the presentation of the theoretical basics, from which practical solution scenarios are subsequently derived. The reader is not told whether he or she should choose cloud provider A or B. Rather, the reader should understand where the advantages of the cloud lie and what to look out for in a cloud transformation.

With this approach, the book addresses two target groups. On the one hand, it is intended as a guide for practitioners entrusted with the task of cloud transformation – managing directors, IT managers, and HR staff. It does not matter whether the company is a medium-sized enterprise or a large global corporation; the basic measures derived from the theoretical preliminary considerations apply to all company sizes and across all industries. On the other hand, the book is aimed at researchers and scientists who deal with the topic of cloud transformation. For them, the book offers numerous original models and computational examples, which enable a scientific classification of the topic. At the same time, the book connects already established ideas and models, enabling interested researchers to take up the topics and develop them further.

1.4.2 Guide through the Book

The structure of the book is based on the "Golden Circle" by the British-American author Simon Sinek. His theory is that the basis of every strategic decision should be the answers to the three questions "Why do we do something?", "How do we do it?" and "What do we do?" (Sinek 2014). Only when you have understood "why" you should engage with cloud technology can you deal with the "how" – i.e., the measures to be taken – and the "what", i.e., the processes that need to be initiated (see Fig. 1.5).

Fig. 1.5  Procedure of the book

Why do we need to act?
• Digitization as the dominant megatrend
• The strategic impact of the cloud on marginal costs in digital business models

How should we proceed?
• The cloud as automation of IT
• Reducing global marginal costs to near zero through cloud transformation
• Cloud and software expertise as key to competitiveness in the digital age
• Reduction of transaction costs and network-like value chains

What should we do?
• Proven process models depending on the intensity of the cloud transformation
• The most frequently asked questions about cloud transformation

1  Do You Remember Daimler, RTL and Siemens?

Chapter 2 deals with the question of why: Why is digitization such a powerful force sweeping over companies? In short, digitization is disrupting the world of business. All products that can be digitized will be digitized in the coming years – or already have been. Where digitization of the core product or service is not possible, all process steps around the creation and distribution of the product will be digitized. Chapter 3 sheds light on the marginal cost analysis. Whereas for product categories such as cars, marginal costs generally decrease only with very large production volumes and very slowly, digital "zero marginal cost products" enable almost infinite scaling.

Chapters 4, 5, 6 and 7 answer the question "How should we proceed?". Digital business models require software, and this software must be created, operated, and scaled. With the help of the cloud, such models can be turned into globally scalable zero marginal cost businesses. Chapter 4 introduces the challenges of classic IT value creation and explains how the cloud is revolutionizing this very value creation. The most important virtualization stages of the cloud are presented, and terms such as API and microservices are explained in a way that is understandable for the technical "noob". In Chap. 5, the cloud transformation is illustrated concretely using a classic application. The accompanying cost analysis shows how a fixed-cost-intensive, monolithic application can become a globally scaling cloud application with marginal costs close to zero. Chapter 6 describes the most important competencies that companies should have if they want to successfully use the software at the core of their digital business model. Chapter 7 describes how cloud technologies and methods significantly reduce transaction costs. Outsourcing thus becomes easier and less risky, and the trend towards outsourcing secondary IT value creation becomes mandatory. The focus of companies on specialized but globally scalable services leads to a network economy in which even small companies can survive.

Chapter 8 illuminates the necessary processes in cloud transformation. The implementation principles are only touched upon here, as more concrete explanations of the details of possible cloud transformations would go beyond the scope of this book. Chapter 9 concludes by presenting the theses and findings of the book.

Thus, this book integrates three perspectives that are usually described separately: economics, technology, and organizational development. The decision whether and when a far-reaching transformation of a business model is necessary can be identified by a marginal cost analysis of the distribution and production of a product – it therefore lies in the realm of economics. Whether a company is capable of offering its product at a competitive marginal cost depends largely on how well it can develop, operate, and scale software – this is where technology comes into play. Finally, aligning the company's people, leadership, processes, culture, and collaboration with the new digital business models is an organizational development issue.

Every year, the market research company Gartner publishes a report on the development status of disruptive technologies. In doing so, Gartner uses an interesting approach:

It combines the growing economic relevance of a new technology over time with the interest shown in the technology by the public. The analysts' insight is that disruptive technologies develop in five phases – a pattern called the "hype cycle". After the interested public has become aware of the technology ("technology trigger"), expectations rise rapidly, and the technology quickly reaches the "peak of inflated expectations". At this point, however, the economic viability of the technology is not yet assured. It must therefore wander through the "trough of disillusionment" for a while before finally reaching the "slope of enlightenment" and the "plateau of productivity".

In the Gartner Hype Cycle for Emerging Technologies from 2018, the new cloud technologies ("Serverless PaaS") are still in the first stage (see Fig. 1.6). This means that many companies are not yet aware of the opportunities hidden behind the application of the new technologies. Gartner estimates that this technology will be adopted as a mainstream technology in as little as two to five years.

German companies have been given a second chance with the current wave of cloud technology. The second wave of cloud development, with the new "as-a-service" products, offers them the chance to change from being hunted to becoming hunters. Those responsible in the companies must seize this opportunity themselves. The following chapters should enable you to write the story of the cloud winners yourself instead of getting lost in the wave. There are three prerequisites for a successful cloud transformation:

1. Understanding the groundbreaking power of digitization
2. Understanding your own business models
3. The entrepreneurial acquisition of the skills necessary for transformation in the areas of modern software, intelligent outsourcing, and agile organization

Fig. 1.6  Gartner Hype Cycle 2018 (Panetta 2018): "Serverless PaaS" sits at the technology trigger stage, with two to five years to general adoption in the marketplace

Starting from this approach, the actual transformation can begin. After all, no one should have to remember your company like an exhibit in a business museum. Your company should also play a role in the network economy of the future – because it dared to constantly renew itself.

References

Borchers, Detlef (2011): 30 Jahre MS-DOS, heise.de, https://www.heise.de/newsticker/meldung/30-Jahre-MS-DOS-1286525.html.
Bundesnetzagentur (2018): Jahresbericht, bundesnetzagentur.de, https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Allgemeines/Bundesnetzagentur/Publikationen/Berichte/2019/JB2018.pdf?__blob=publicationFile&v=5, retrieved June 2019.
Christensen, Clayton M. (1997): The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, Harvard Business Review Press, Boston.
Fleig, Jürgen (2017): 3 Beispiele für eine disruptive Innovation, business-wissen.de, https://www.business-wissen.de/artikel/innovationen-3-beispiele-fuer-eine-disruptive-innovation/, retrieved May 2019.
Gartner (2016): Installed base of personal computers (PCs) worldwide from 2013 to 2019, statista.com, https://www.statista.com/statistics/610271/worldwide-personal-computers-installed-base/, retrieved June 2019.
Grüner, Sebastian (2018): Die riskante Wette auf die Cloud, golem.de, https://www.golem.de/news/ibm-kauft-red-hat-die-riesige-verzweifelte-wette-auf-die-cloud-1810-137384.html, retrieved May 2019.
GSMA (2019): The Mobile Economy, gsmaintelligence.com, https://www.gsmaintelligence.com/research/?file=b9a6e6202ee1d5f787cfebb95d3639c5&download, retrieved June 2019.
Hackmann, Joachim and Manfred Bremmer (2012): Funkstille war einmal, computerwoche.de, https://www.computerwoche.de/a/funkstille-war-einmal,2516580.
Harrison, Chris (2018): The HCI Innovator's Dilemma, interactions.acm.org, https://interactions.acm.org/archive/view/november-december-2018/the-hci-innovators-dilemma#R10.
Heuzeroth, Thomas (2019): Google läutet das Ende der Spielkonsole ein, welt.de, https://www.welt.de/wirtschaft/webwelt/article190597597/Google-Playstation-Xbox-Mit-Stadia-gegen-Konsolen.html.
Holland, Martin (2016): Umfrage: Deutsche Firmen scheuen „Cloud" auch wegen „Bauchgefühls", heise.de, https://www.heise.de/newsticker/meldung/Umfrage-Deutsche-Firmen-scheuen-Cloud-auch-wegen-Bauchgefuehls-3358260.html, retrieved May 2019.
Hühn, Silvia (2018): Cloud Computing, ingenieur.de, https://www.ingenieur.de/technik/fachbereiche/cloud/cloud-computing/, retrieved May 2019.
Lauchenauer, David (2016): Die Entwicklung von Cloud ERP – Nutzung & Fakten, myfactory.com, http://www.myfactory.com/blogbeitrag/infografik-cloud-entwicklung.aspx, retrieved May 2019.
Manhart, Klaus (2015): IT-Services in der Besenkammer, computerwoche.de, https://www.computerwoche.de/a/it-services-in-der-besenkammer,3214123.
Panetta, Kasey (2018): 5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies, gartner.com, https://www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/, retrieved May 2019.


Sinek, Simon (2014): Frag immer erst: warum: Wie Top-Firmen und Führungskräfte zum Erfolg inspirieren, Redline Verlag, München.
Steinharter, Hannah (2018): Das sind die zehn wertvollsten Unternehmen der Welt, handelsblatt.com, https://www.handelsblatt.com/finanzen/anlagestrategie/trends/apple-google-amazon-das-sind-die-zehn-wertvollsten-unternehmen-der-welt/22856326.html?ticket=ST-3067435-Tu7M2KViYcSsmoUvbDh1-ap4, retrieved May 2019.
Tamm, Andreas and Roland Frank (2019): In 30 Minuten in die Cloud, cloud-blog.arvato.com, https://cloud-blog.arvato.com/in-30-minuten-in-die-cloud, not yet available at the time of going to press.
THOCP (2011): Mainframe, thocp.net, https://www.thocp.net/hardware/mainframe.htm.
Vollmer, Jan (2018): Apple-Chronik: Von der Beinahe-Pleite zur wertvollsten Firma der Welt, t3n.de, https://t3n.de/news/apple-chronik-von-der-beinahe-pleite-zur-wertvollsten-firma-der-welt-1099260/, retrieved May 2019.
Windeck, Christof (2014): IBM-PC-Sparte ist seit zehn Jahren bei Lenovo, heise.de, https://www.heise.de/newsticker/meldung/IBM-PC-Sparte-ist-seit-zehn-Jahren-bei-Lenovo-2482096.html, retrieved May 2019.

2  Everything Becomes Digital

Abstract

Technological progress is one of the main drivers of social and economic change. Both interpersonal communication and value chains in companies – and consequently the life of every individual in a connected society – are affected by the changes brought by digitization. This change is demanding. Companies must adapt their value creation processes in order not to emerge as losers from the disruptive competition. Digital platforms that efficiently use modern technologies are adapted to this change; they displace classic business models. At the same time, the importance of physical production is declining. As a result, digitization is becoming a catalyst for the modernization of companies, and many companies are therefore facing painful cuts. Implementing this change requires courage and foresight (BMWi – Bundesministerium für Wirtschaft und Energie (2019): Nationale Industriestrategie 2030. Strategische Leitlinien für eine deutsche und europäische Industriepolitik. Download at https://www.bmwi.de/Redaktion/DE/Publikationen/Industrie/nationale-industriestrategie-2030.pdf?__blob=publicationFile&v=24, retrieved May 2019). In this respect, European industry can learn from both the digitization pioneers in Silicon Valley and the digital hunters from the Far East.

2.1 Technical Digitization

Anyone who opens the LinkedIn newsfeed these days is "spammed" with the buzzwords of modern business management: Blockchain, Metaverse, Internet of Things, and Industry 4.0 dominate the debate (Bergmann 2017).

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_2


Fig. 2.1  The technical process of digitization: analog signal → digital signal (storage) → analog signal

Digitization has become the holy grail of corporate management. Hardly any meeting or session concludes without a commitment to the new conditions and goals of the market. The term digitization has long since lost its sharpness and has become, to a certain extent, arbitrary and interchangeable. If you want to make fun of it, play a round of bullshit bingo[1] – it is guaranteed to make the next strategy meeting more fun.[2]

[1] One provider of bullshit bingo on the topic of the cloud, for example, is Bullshitbingo.net.
[2] Bullshit bingo is a variation of the classic bingo game. Terms that could come up in the course of a business meeting are listed in the columns and rows of a table. When a term has been used, the player may mark it. The first player to fill an entire row or column with marked terms wins and gets to shout "Bingo" out loud. Whether a game of bullshit bingo is actually recommended from an in-house perspective is left to the reader.

Where does the importance attributed to digitization come from, and what consequences does this have for corporate management? Numerous trend researchers have proclaimed digitization to be the most important social trend of the twenty-first century (Schmidt and Drews 2016). In terms of its importance for the (further) development of society, digitization has long been equated with the industrialization of the eighteenth and nineteenth centuries and the electrification of the twentieth century. The term digitization occupies different levels of meaning: in addition to the technical term, digitization has long been a synonym for the social and economic change that accompanies the technical revolution.

In its original meaning, digitization denotes a technical process that converts an analog signal into a digital signal. Analog signals constantly surround us; we perceive them acoustically and visually with our senses. The process of digitization takes place in three stages (see Fig. 2.1):


1. First, the analog signals are recorded by an analog-to-digital converter and converted into a data stream of zeros and ones.
2. These digital signals can be stored.
3. A digital-to-analog converter can be used to convert the digital signals back into an analog signal if required. Filter systems help to "smooth" the signals.

For a long time, the development of digitization processes was hardly pushed forward – there was simply no need for it. If a person speaks into a microphone, for example, there is initially no reason to convert the signal digitally. An amplifier transfers the analog signal of the sound waves picked up by the microphone to a loudspeaker. The result: the human voice is acoustically amplified without the need for an intermediate digital step. The technology of acoustic sound carriers such as records is likewise based on analog signals. Radio, too, used an analog signal system for a long time, whereby acoustic signals were converted into electromagnetic waves for transmission. These in turn could be played back by receivers in audible form. The digitization of the music industry only began to gain momentum with the age of the compact disc (CD). For the first time, analog music signals were converted into digital signals and stored on digital media (e.g., CDs and computer systems with hard disks).
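The three stages can be made concrete in a few lines of Python. The sample rate, bit depth, and the 440 Hz test tone below are illustrative assumptions, not values from the text:

```python
import math

SAMPLE_RATE = 8_000  # samples per second (illustrative)
BIT_DEPTH = 8        # 2**8 = 256 quantization levels

def analog_signal(t):
    """Stand-in for a continuous analog signal, e.g. a 440 Hz tone."""
    return math.sin(2 * math.pi * 440 * t)

# Stage 1: analog-to-digital conversion (sampling + quantization).
def digitize(duration_s):
    levels = 2 ** BIT_DEPTH
    samples = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        value = analog_signal(n / SAMPLE_RATE)                 # amplitude in [-1, 1]
        samples.append(round((value + 1) / 2 * (levels - 1)))  # integer 0..255
    return samples

# Stage 2: storage - the quantized samples can be written to disk as raw bytes.
digital = digitize(0.01)
stored = bytes(digital)

# Stage 3: digital-to-analog conversion (map the levels back to amplitudes).
def reconstruct(stored_bytes):
    levels = 2 ** BIT_DEPTH
    return [b / (levels - 1) * 2 - 1 for b in stored_bytes]

analog_again = reconstruct(stored)
```

Each reconstructed amplitude differs from the original by at most half a quantization step; a real converter would additionally apply the "smoothing" filter mentioned above.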

2.2 The Consequences of Digitisation: Decentralisation, Communication, Convergence

The advantages that digitization offered for music production and distribution quickly became clear to everyone involved: in compressed form, music files could be stored, exchanged, and edited on ever-smaller systems. At the same time, digital signals could be processed much more efficiently. Today, software applications on PCs and mobile devices do the job of music studios: recording and editing music. With the MP3 format, digitization in the music industry reached a temporary peak. A four-minute piece of music, for example, requires only about 4 MB of storage space instead of about 50 MB. This made it possible to compress music files to such an extent that they could be exchanged millions of times on online platforms such as Napster in the early 2000s (Fritz 2018).

From an economic perspective, this development initially posed major challenges to the music industry, as neither the artists nor the music publishers were involved in this new distribution channel. The music industry had lost its business model "overnight". The financial impact was enormous, and the traditional business model was on the verge of collapse: fewer and fewer end consumers were using the classic distribution channels to buy a piece of music on a data carrier. Only Apple succeeded in developing a new business model for the music industry and thereby helped steer the industry into "calmer waters". The launch of the iPod MP3 player in 2001 and the iTunes platform in the same year convinced consumers to spend money on buying and consuming music again.
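The storage saving can be checked with a back-of-the-envelope calculation. The figures below assume an uncompressed CD-quality source (44,100 samples/s, 16 bit, stereo) and a typical MP3 bitrate of 128 kbit/s; these are common values, not numbers taken from the text:

```python
# Uncompressed CD-quality audio
sample_rate = 44_100    # samples per second
bytes_per_sample = 2    # 16 bit
channels = 2            # stereo
duration_s = 4 * 60     # a four-minute track

raw_mb = sample_rate * bytes_per_sample * channels * duration_s / 1_000_000
# about 42 MB uncompressed

# The same track as MP3 at 128 kbit/s
mp3_mb = 128_000 / 8 * duration_s / 1_000_000
# about 3.8 MB, roughly a tenfold reduction

print(f"uncompressed: {raw_mb:.1f} MB, MP3: {mp3_mb:.1f} MB")
```

The result is in the same ballpark as the figures in the text; the exact ratio depends on the chosen bitrate.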

Fig. 2.2  Effects of digitalisation: compression → personal computer → decentralized work; transferability → Internet → new social forms of communication; processability → smartphone → convergence of media

Meanwhile, the music industry has been disrupted a second time within a decade. Today, streaming services are replacing the purchase of music on digital distribution channels: customers sign up for subscriptions and in return freely access large libraries of music. With this second disruption of the music market, new players again entered the scene and the industry's revenue models were overturned once more – a digital value chain replaced another value chain because of a better user experience.

The advantages of digitization, which changed the entire music industry in a very short time, were also taken up by other industries. Triggered by the improved storage, transferability, and processability of digital data, numerous social and technological processes changed as a result. Those companies that were the first to adopt the new technology were at the forefront of change, because they had recognized that digitization could increase productivity to an extreme degree. Of course, this first wave of digitization did not remain without consequences – in essence, it accelerated three developments (see Fig. 2.2):

1. The compression of data, as well as the higher performance of devices, enabled decentralized work via desktop PCs.
2. The easier transferability of data abruptly empowered the Internet, which had evolved out of the institutional sphere since the 1980s. This massively changed many forms of social communication.
3. The ease with which digital data can be processed and combined by software has led to the so-called "convergence of media", which finds its physical expression in smartphones.

2.2.1 Digitization as a Condition of Decentralized Working

Before desktop PCs became widespread, corporate data processing was controlled centrally in large computer systems. At that time, digital work would not have been possible in any other way: data storage and computing systems were physically too big for offices. Instead, they filled entire basement rooms.


If an employee wanted to have a data record calculated, she or he had to address the request to the managers of the central computer centers. For decades, this internal administrative act blocked the possibilities offered by digitization in almost every workplace. It is therefore all the more astonishing that the dominant data processing companies – first and foremost IBM – did not recognize in time the advantages offered by a decentralized data processing structure (Chap. 1).

The advantages of compression, transferability, and processability of data revolutionized work – but only once storage devices became handy enough to fit on an average office desk. From the mid-1980s, programs such as Microsoft's Word conquered offices thanks to their massive productivity benefits (Hülsbömer and Dirscherl 2019). Instead of having to painstakingly type a text on a typewriter and get it right the first time, the text could be prepared, checked, improved, or completely rewritten on the PC. Only when the document was available in its final version ("first copy") was it printed. Paper thus lost its significance as a storage medium, as the data was stored on floppy disks and hard disks. However, it was a long way to the paperless company – and in some companies and institutions, it still is today.

Some readers will remember the time before the digitization of office work – a time when typists corrected errors on the finished document with white Tipp-Ex correction fluid instead of writing a new document. To younger readers, these accounts may sound like stories from another era. The exciting thing: that other era was just 30 years ago.

2.2.2 Digitisation as a Social (Communication) Phenomenon

The compression of data made its transmission in communication networks much easier. Before anyone else, military and university institutions were thinking about exchanging data via telecommunications networks. In 1969, the first four universities joined together to form the so-called ARPA network (Rouse 2015). In 1989, a team around Tim Berners-Lee, who worked for the European Organization for Nuclear Research (CERN) in Geneva, finally invented the technical standards for the transmission (HTTP) and presentation (HTML) of data, which quickly established themselves on the market in the following period (Podbregar 2019).

This invention was a revolution. It changed the way people communicated: more and more servers were interconnected, creating a communication network that enabled data exchange via telephone lines with the most remote regions. Thus, the global development of e-mail communication systems started from the mid-1990s. The company Netscape, founded by Marc Andreessen and James H. Clark, brought one of the first commercial web browsers onto the market as early as 1994. This made it possible for end consumers to visit websites and distribute their own content. The first popular areas of application for the new technical possibilities were chat rooms, in which strangers scattered all over the world exchanged information on all possible and impossible topics. From the end of the 1990s, chat rooms were supplemented by messaging services such as ICQ and MSN.

In 2004, Mark Zuckerberg finally founded Facebook. Although other social media platforms existed at the time, Facebook quickly emerged as the global communication tool that today connects 2.38 billion people (Facebook 2019). Although new platforms have continued to emerge, Facebook dominates global communication behavior with its online messaging services (WhatsApp, Facebook Messenger) and social media channels (Facebook and Instagram).

With the digitization of content and the new communication platforms, social communication processes have also changed. Marshall McLuhan recognized the consequences of digitization for communication relations as early as 1962, when he predicted the emergence of a new global communication structure through digital networking in his book "The Gutenberg Galaxy" (McLuhan 1962). McLuhan predicted that the new means of communication would enable people to communicate with each other at any time and from any place in the world. This would create a "global village" in which the laws of communication processes known from small gatherings of people would be transferred to the digital age.

This development had far-reaching social implications. For example, the free sharing of content has encouraged oppressed social groups around the world to inform each other and jointly advocate for their own interests. The Arab Spring in 2011 and the #MeToo movement in 2018 are just two of many examples of the democratizing effects that have resulted from the digitization of content (Kneuer and Demmelhuber 2012). At the same time, however, these communication tools make it much easier to spread misinformation (fake news), and they open the door to excessive surveillance – further reading on this topic can be found in the bibliography at the end of this chapter.

2.2.3 Convergence of the Media

Convergence refers to the technical merging of previously separate media or content ecosystems (Wagler 2018). Whereas before technical convergence various media forms were consumed and distributed separately, the number of devices has shrunk rapidly in recent years. Think about your own media consumption patterns: years ago, you might have carried around an MP3 player, a digital camera, books, and a calendar. Today, all of these functions converge in the smartphone. With about 5 billion registered mobile connections worldwide (GSMA 2019), roughly 67% of the world's citizens now own this communication tool.

For media consumers, this change is extremely convenient. From an economic perspective, however, it is notable that new digital products and distribution channels for media content have emerged with the smartphone. In 2008, Google launched the "Android Market" application platform, which was renamed the "Google Play Store" in 2012. On this platform, users can purchase and download eBooks, movies, and music files in addition to applications. Apple had already launched its competing products iTunes and iPod seven years earlier and was thus a pioneer of digital convergence. The iPod and later the iPhone revolutionized the way people consume media content.

The increasing convergence also increases the processability of media content. With the corresponding applications, users can combine media content and produce their own content. In this way, the Internet is turning passive consumers into so-called "prosumers" who independently create, edit, and share content on social networks – be it selfies, short video clips, or memes. Everything we need is in our pocket. We can instantly share private and public events with the whole world – and sometimes even coordinate revolutions.

2.3 The Complete Digitalisation of Value Creation

The changes described so far are only the first indications of a much more serious impact of digitization. In condensed form, the thesis is: "Everything is becoming digital." Of course, this is an exaggeration – not all physical products can be "digitized". There is no doubt that industry is a pioneer in this context: digitization makes it possible for companies to map their entire manufacturing and sales process in digital form and in real time. This creates so-called "digital twins", which provide the basis for cross-manufacturing exchange and analysis of the (meta)data mapped in them (Sauer 2019). Figuratively speaking, a digital mirror image of the real production situation is created, which permanently "runs" alongside the real world, mapping the processes and forwarding them to information systems and digital databases.

Think of local public transport. Just a few years ago, it was unthinkable that a public bus in Germany would leave a digital trail. In the meantime, buses are connected to the Internet through mobile data transmission. Numerous sensors are installed in the buses, enabling a representation of the vehicles in a digital parallel world. GPS data is used to determine the exact position of the bus, and sensors record when passengers board and exit. Meanwhile, the oil level and tire pressure can also be recorded and monitored by sensors. All in all, a digital twin of the bus is created that maps all of the vehicle's value-adding functionalities in the digital world.

This digital mirror image makes new application scenarios for the monitoring and control of urban bus fleets conceivable, some of which have already been implemented. For example, the arrival times of buses can be predicted using the analyzed data and communicated to passengers via app or digital display boards at bus stops. Long-term analyses of passenger usage behavior can help optimize schedules and thus save costs. Services will also no longer be billed by people or ticket machines, but by cashless payment systems that use sensors to detect when a passenger gets on and off the bus and what fare they have to pay.
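The bus example can be sketched as a small state object that mirrors incoming telemetry. All field names, message formats, and the maintenance rule below are hypothetical, chosen only to illustrate the idea of a twin that "runs alongside" the physical vehicle:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BusTwin:
    """Illustrative digital twin of one bus; all fields are hypothetical."""
    bus_id: str
    position: Tuple[float, float] = (0.0, 0.0)  # GPS latitude/longitude
    passenger_count: int = 0
    oil_level_pct: float = 100.0
    event_log: List[str] = field(default_factory=list)

    def apply_sensor_reading(self, reading: dict) -> None:
        """Mirror an incoming sensor message into the twin's state."""
        kind = reading["kind"]
        if kind == "gps":
            self.position = (reading["lat"], reading["lon"])
        elif kind == "door":
            self.passenger_count += reading["boarded"] - reading["exited"]
        elif kind == "oil":
            self.oil_level_pct = reading["pct"]
        self.event_log.append(kind)

    def needs_maintenance(self) -> bool:
        # A hypothetical fleet-monitoring rule derived from the twin's state.
        return self.oil_level_pct < 20.0

# The twin is updated once per telemetry message from the real bus:
twin = BusTwin(bus_id="line-42")
twin.apply_sensor_reading({"kind": "door", "boarded": 5, "exited": 1})
twin.apply_sensor_reading({"kind": "gps", "lat": 52.52, "lon": 13.40})
```

Fleet-wide analyses such as arrival-time prediction would then query many such twin objects rather than the physical vehicles themselves.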


Fig. 2.3  Digitization is changing economic paradigms: powerful digital end devices, an omnipresent Internet, and mass data-based individualization drive the digitization of products and processes – leading to platform-based business models, increasing economic attractiveness, winner-takes-all markets, and a changed risk profile of investments

In the end, only the physical product remains, embedded in a digital process of creation and distribution. For value creation in companies, this means that in the future all stages of value creation will be subject to the digitization process. The formula for the limits of digitization is therefore: if the product can be digitized, then it will be digitized. This is already the case today for products such as insurance or banking services. If the product cannot be digitized, then all upstream and downstream production steps will be digitized. If you go to the hairdresser in 2029, the washing, cutting, and blow-drying will probably still be done by a human. But all the value-added steps around the service will be digital: you will have bought the service on a digital platform, paid for it digitally, and rated it there. Even the information on how the cut should look will have been displayed to the hairdresser on the screen of a digital device.

In summary, the combination of powerful digital devices, an omnipresent Internet, and the accompanying individualization of consumer products opens up a new perspective on economic paradigms (see Fig. 2.3). This new form of the digital economy is also changing the value creation processes of companies. The beneficiaries of this development are digital platforms, which are combining ever larger areas of value creation (see Fig. 2.4). The big losers are the producers of physical goods and services.

2.4 The Platform Economy – Data Is the New Oil

The German trade journal for marketing "Horizont" ran the headline "Everyone wants to be a platform" on March 21, 2019 (Horizont 2019). The article refers to the transformation of the German e-commerce provider Otto, one of the few German companies able to hold its own in the highly competitive e-commerce market. At the time, management announced a new strategy: from 2019, Otto was to become a digital platform for brand sellers. The ambitious plan envisaged around 3000 brand manufacturers using the new platform by 2022 (Campillo-Lundbeck 2019). This transformation is just one recent example of many that show how great the economic importance of digital platforms has become. Why are digital platforms so attractive from an economic perspective?

Fig. 2.4  Data collection leads to knowledge-based competitive advantages. (The figure shows the progression from characters to data, which platforms collect; analysis gives the data meaning and turns it into information; combined with historical information and context, knowledge is created – and this knowledge helps platforms gain a significant competitive advantage.)

The value creation of digital platforms differs fundamentally from the value creation processes of manufacturing companies. It is based on bringing together suppliers and customers in a common marketplace. The business model, therefore, consists of collecting data from customers and providers and bringing them together ("matching"). The physical production of products or services or the physical ownership of goods and business premises is not a prerequisite for the success of platforms. Airbnb, as a digital platform, is the largest provider of hotel services worldwide – without operating its own hotels. Uber is the largest global provider of taxi services – without owning its own taxis. This list could be continued endlessly (Urbach and Ahlemann 2016). The advantage that value creation models based on data collection have over business models based on physical production was described by Marc Uri Porat as early as 1977 in his book "The Information Economy" (Porat 1977): data containing context about production processes can be converted into information through cognitive processes (see Fig. 2.4). Linked with historical information, this information becomes knowledge about the production of products and services. Manufacturing processes cannot be improved unless knowledge about them increases. But if platforms accumulate knowledge about production processes, then they are also the bearers and transmitters of possible efficiency gains. Data and information collected on platforms thus become the efficiency drivers of production, and the productivity gains increasingly flow into the pockets of the platform operators. This is why digital platforms have such great economic appeal: a platform like booking.com knows more about the usage and booking behavior of guests than the hotel operators do. Booking.com uses this advantage to provide customers with the right offers at the right time. The platform knows better which guest wants which hotel, and its operators make the hotel providers pay dearly for this advantage.

What Are Digital Platforms?
The use of digital platforms is advantageous for three sides of the market: For users or end consumers, platforms are simply convenient. Instead of having to go to different


providers' websites for information, a visit to one central platform is sufficient to gain an overview of the market. The providers of the goods or services also have advantages: they can make their offers accessible to a much larger group of users. The third side of the market is the operators of the platform, who usually finance the operation through commissions, subscriptions or advertising. The result is a win-win-win situation. One example of this success is provided by the company Valve with its "Steam" platform, founded in 2003, on which computer and video game producers can make their games available for download. Gamers from all over the world can create a user account with just a few clicks – at the end of 2018, around 90 million people were using this service, giving them access to a library of over 30,000 games (Computerbild 2019). Valve, as the operator of the platform, receives about 30% of the revenue; the remaining 70% goes to the game producers (Börner 2017).

Digital platforms can be differentiated into three models:

1. Digital Marketplaces
Physical goods or services are offered to companies or end customers. Successful digital marketplaces currently include Alibaba from China, Amazon, eBay and Airbnb.

2. Streaming and Download Platforms
Providers such as Spotify, Netflix or Steam bundle the media offerings of different producers in a common digital marketplace. Users have the advantage that the most important offers are available in one central location without much effort, while they retain freedom of choice. The forms of refinancing are divided into subscription models (e.g. Spotify) and individual purchases (e.g. Steam).

3. Digital Service Platforms
Digital service platforms are mostly aimed at companies. They combine offerings on the platform that can be integrated into the digital value chain. For example, the German mechanical engineering company Trumpf founded the digital platform AXOOM in 2017, on which manufacturing companies can integrate their value chain and expand it with digital offerings (Automationspraxis 2018). Other providers of digital service platforms are cloud providers such as Microsoft or Salesforce, which enable access to digital services.

Digital platforms would be inconceivable without the production of physical products and services; they depend on real production and provision as the basis for their own economic value creation. This creates symbiotic relationships between the digital


platforms and the manufacturers. One example of the changing relationships is provided by the Munich-based company Tado. Founded in 2011 by Christian Deilmann, Johannes Schwarz and Valentin Sawadski, the startup offers household appliances that can be connected to the Internet via WLAN. A central product in the range is the digitally controllable thermostat (Gat 2014). Before Tado and other Internet of Things (IoT) companies entered the residential market with their digital home control offerings, there was a direct relationship between the heating user and the heating provider (see Fig. 2.5). To change the settings of the heating system, the customer used to have to go down to the basement and communicate with the heating system via a user interface that was usually anything but intuitive. If customers had a question or a problem, they had to contact the telephone customer service of the manufacturing company. With digital thermostat platforms such as Tado's, the principle of value creation is changing: a digital system has now interposed itself between the manufacturer of the heating equipment and the end consumer to operate the product (see Fig. 2.6). From a technical point of view, this "interconnection" takes place through a digital control element that must be attached to the user's heating system. Once the control element is installed, the heating system can be managed with digital end devices. Specifically, as an end user, you can now turn your heating up and down as you wish – from anywhere in the world, provided your smartphone is connected to the Internet. Through machine learning algorithms, the thermostat software from Tado is also able to learn the behavior of the consumer. The system remembers when the user leaves or re-enters the home. As soon as it has detected patterns in the consumption habits, the system independently starts to control the heating.
Half an hour before the apartment owner returns home, the software has turned up the heating and preheated the apartment. Economically speaking, the manufacturer of the system is decoupled from the end consumer through the intuitive usability of the platform software. Direct communication between the two market sides is eliminated and replaced by the digital platform from Tado. For heating manufacturers, this decoupling has drastic economic consequences. It is quite conceivable that other services and transactions will be processed via the Tado platform in the future. For example, the system could check the fill level of a gas tank and at the same time make a price comparison among regional gas providers. The next time the heating system needs to be replaced, the end consumer may decide to go with a different provider because of the better compatibility with the Tado platform.

Fig. 2.5  The value-added relationship between customer and supplier (a direct link between the company's value creation and the customer)

Fig. 2.6  Digital platforms change the value creation relationships between suppliers and customers (the platform now sits between the company's value creation and the customer)

Fig. 2.7  The economic power of digital platforms (Uber versus incumbents such as NYC Taxi and Yellow Cab in transportation; Netflix versus AT&T and Comcast in media; Airbnb versus Hilton and Marriott in lodging)

In recent years, numerous companies have managed to occupy such platform positions in previously “analog” markets. Digital platforms thus have an explosive economic power that can disrupt entire industries and markets (see Fig. 2.7). For example, the taxi industry has been disrupted by Uber, the classic television broadcasters and cinemas by Netflix, and the hotel industry by Airbnb and Booking.com. So, digitization ensures that value creation processes shift from the real to the digital world. Digital platforms assume a hinge position between real production and the consumer. In this way, digital platforms are succeeding in claiming ever larger parts of value creation for themselves.

2.5 Conditions for the Successful Operation of Digital Platforms

As tempting as the business model may look at first, it is difficult to establish a digital platform successfully in the consumer market. The reasons are numerous: as a prerequisite, there must be a sufficient amount of data ("Big Data"); then it must be considered how the advantages created by the data are to be exploited ("Data Leveraging"); and of course, the difficult market situation must also be taken into account ("Winner takes all").

Fig. 2.8  Projected increase in global data volume, from 16 zettabytes in 2016 to 163 zettabytes in 2025, corresponding to approx. 30% growth per year (Reinsel et al. 2018)

2.5.1 Big Data

Data is the raw material that (digital) platforms need to operate their business models. For a long time, however, this data was not available or was simply not transferred from analog to digital. As described in the technical foundations of digitization (Sect. 2.2), data must first be digitized before it can be further processed in digital systems. In the past two decades, the amount of digitized data has exploded. Current forecasts predict that additional sensors and digitization processes will increase the volume of data generated annually more than tenfold, from 16.1 zettabytes in 2016 to 163 zettabytes in 2025 (see Fig. 2.8). Such numbers are almost inconceivable to the human mind: one zettabyte is equal to 10^21 bytes, i.e. a 1 followed by 21 zeros: 1,000,000,000,000,000,000,000 bytes. The former CEO of Google, Eric Schmidt, commented on this development back in 2010. He noted at the Techonomy conference in Lake Tahoe, California, that humanity currently generates 5 exabytes of information every two days – as much as was produced from the beginning of mankind until the year 2003 (Siegler 2010). However, data is not only the raw material that can be used to run digital platforms efficiently. AI and machine learning systems also need large amounts of data to improve their performance. This gives an advantage to all those companies that recognized this trend early on and started storing data. Today these companies use this data to further improve their digital services.
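The roughly 30% annual growth rate implied by these forecast figures can be verified with a short compound-growth calculation (a back-of-the-envelope sketch; the function name is ours):

```python
# Check the growth rate implied by Fig. 2.8:
# 16.1 zettabytes in 2016 growing to 163 zettabytes in 2025.
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate (CAGR) between two data volumes."""
    return (end / start) ** (1 / years) - 1

rate = implied_annual_growth(16.1, 163.0, 2025 - 2016)
print(f"Implied annual growth: {rate:.1%}")  # roughly 29% per year
```

This confirms the "approx. 30% growth per year" annotation in the figure.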


2.5.2 Data Leveraging

Digital platforms have a decisive advantage over their analog competitors: they can use customer data to improve their own service quality and business models. For years, this valuable resource lay dormant and unused in many companies. In the meantime, many companies have become aware of the importance of internally generated customer data and are making targeted attempts to use this data to improve the service quality of their products. A precursor of this trend were customer relationship management (CRM) systems, in which companies' customer data was stored centrally and could then be accessed and processed decentrally. These CRM software systems began to gain popularity in the 1990s (Reissmann 2013). They are usually based on connecting employees' digital work interfaces to a database that provides an overview of all customer relationships and contacts. With the support of CRM systems, synergies in the joint care of customers could be exploited and redundancies in the customer approach avoided. The Internet companies from Silicon Valley have long since gone one step further and have become true specialists in monetizing the treasure trove of data they are sitting on. To this end, they build a circuit of continuous data collection, data evaluation and product improvement: the "loyalty loop". This process aims to continuously improve products and services and to align them with the wishes of customers. The data is used to directly trigger changes to the product or service and thus continuously adapt it. One example of how the loyalty loop works is Tesla (see Fig. 2.9).
• In 2018, Tesla installed a free trial version of its Autopilot software in its vehicles via the "over-the-air" feature (step 1) (Pillau 2019).
• Social media evaluation tools at Tesla showed that occupants rated the speed at which Autopilot steered the vehicles through sharp turns as too high (step 2).
• These findings were confirmed by the evaluation of the telemetry data that the cars permanently deliver to the company. It became clear that drivers often had to intervene manually in sharp bends (so-called "overriding" – step 3).
• As a result, Tesla decided to adapt Autopilot and integrate a new "curve speed adaptation" into the newer versions. The system updates could again be installed "over the air" free of charge and in parallel on all Tesla vehicles (step 5).
In sum, a cycle was created between the product used by the customers, increasingly precise knowledge about the functionalities of the product, and the possibility of improving the product on the basis of the available data. Many other companies also use data leveraging. Amazon, for example, uses customer data for a suggestion system that shows customers what other customers with similar interests have bought. Meanwhile, Amazon is even able to predict consumer buying behavior with the data at hand. In what is called preemptive shipping, machine-learning algorithms create a prediction of how likely a purchase from the consumer's

Fig. 2.9  The loyalty loop at Tesla: (1) Tesla releases the new Autopilot to existing customers on a trial basis; (2) complaints appear on social media about dangerous driving on sharp turns; (3) remote analysis of Autopilot reveals that drivers often override it; (4) usage data from the vehicle fleet is collected to further improve the Autopilot; (5) Tesla develops "Curve Speed Adaptation" to better control handling on curves; (6) the software of cars already delivered is updated with the new features over the air.

Fig. 2.10  Netflix doubles the number of Emmys won (Loesche 2017). (The chart shows Netflix's Emmy nominations and wins per year from 2013 to 2017.)

shopping cart is. This information can help improve internal logistics and, for example, bring a product that the customer is highly likely to purchase to the right region – even before the purchase has been triggered by the consumer. Netflix also makes use of the loyalty loop. Comments, ratings and cancellation times are constantly recorded for the various series and shows.3 This data is used to better adapt the produced formats to customer preferences. As a result, Netflix succeeded in increasing the number of Emmy Awards – the "Oscars" for television series – more than sixfold between 2013 and 2017 (see Fig. 2.10).
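Amazon's actual recommendation system is proprietary; a minimal sketch of the underlying idea – suggesting items that customers with overlapping purchase histories have bought – might look like this (all names and purchase data are hypothetical):

```python
from collections import Counter

# Hypothetical purchase histories: customer -> set of purchased items.
purchases = {
    "anna":  {"book", "lamp", "kettle"},
    "ben":   {"book", "lamp", "poster"},
    "clara": {"kettle", "poster"},
}

def recommend(customer: str, histories: dict) -> list:
    """Rank items bought by customers with overlapping baskets,
    excluding items the customer already owns."""
    own = histories[customer]
    scores = Counter()
    for other, basket in histories.items():
        if other == customer or not (own & basket):
            continue  # skip customers with no shared purchases
        for item in basket - own:
            scores[item] += 1  # one vote per overlapping customer
    return [item for item, _ in scores.most_common()]

print(recommend("anna", purchases))  # -> ['poster']
```

Production systems replace the simple co-occurrence count with weighted similarity measures and vastly larger data, but the loop is the same: every purchase feeds back into the next recommendation.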

2.5.3 Winner Takes All

A decisive advantage of digital platforms lies in network effects and the associated economies of scale. Network effects occur wherever in the economy the value of a good increases as it becomes more widespread. An example of this relationship is provided by the rate at which the telephone spread. In 1881, the first telephone directory appeared in Berlin – with just 187 entries. By 1885, little had happened: on average, only one in 306 Berliners had registered a telephone connection by then (Schwender 2019). Mainly, the telephone was used for real-time exchanges between institutions such as banks or ministries. Only as more and more households were connected to the telephone network did the network effect set in, as the derived utility of the product increased with each additional user. In 1900, there were just under 35,000 connections in Berlin – communication between end users had thus become possible and meaningful.

3  Netflix is discussed in more detail in Chapter 8 with regard to highly scalable service provision.


Robert Metcalfe formulated the network effect formula around 1980. At that time, Metcalfe was a developer at the Xerox PARC research center in Palo Alto and was involved, among other things, in the development of Ethernet, the technical LAN standard. His theory: in contrast to normal products, whose benefit increases linearly, the benefit of products with a network effect grows quadratically, because the number of possible connections between n participants is

n × (n − 1) / 2

Transferred to telephone technology, this means that with 5 telephone connections there are 10 connection options. With 105 telephone connections, there are already 5460 (see Fig. 2.11). The profitability of digital platforms follows the network effect, because the benefit of a platform grows disproportionately with the number of consumers. Just imagine if WhatsApp didn't have millions of users, but just five. Wouldn't it be too much effort even to call up the application's homepage? The growth in size of a platform increases its attractiveness – and that in turn attracts new users. What follows is cut-throat competition between the platforms: the platform with the most providers attracts more demanders, which in turn increases the attractiveness of the platform for providers. This dualism of a digital platform is called a "two-sided market". The assumption behind this is that the platform is only attractive for one side of the market if a critical mass has already been reached on the other side (see Fig. 2.12). In the end, the winner takes it all: more suppliers lead to more demanders, which lead to more suppliers. If there is no intervention on the part of state market supervision, digital platforms run the risk of forming monopolies. Once a platform has managed to conquer the top spot, other platforms have a much harder time entering the competitive market. The digital platform eBay is a good example of this: founded in 1995 by Pierre Omidyar, the online auction platform quickly managed to capture a large share of the market volume for private auctions on the Internet. To this day, other competitors find it difficult to gain share in this market.
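The connection counts quoted in the text follow directly from Metcalfe's pairwise-connection formula; a minimal check in Python:

```python
def connection_options(n: int) -> int:
    """Number of possible pairwise connections among n participants,
    per Metcalfe's formula n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(connection_options(5))    # -> 10
print(connection_options(105))  # -> 5460
```

The quadratic growth is visible immediately: a 21-fold increase in connections (5 to 105) multiplies the number of possible links by 546.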

Fig. 2.11  The network effect

Fig. 2.12  Two-sided markets (suppliers benefit from more customers on the platform, and customers benefit from more providers)

Fig. 2.13  The success factors for digital platforms: usefulness, ease of use, customizability, speed, digital ecosystems, willingness to take risks, agile working methods, change mentality, and social conditions

The disadvantage for customers is immense: once the monopoly exists, the monopolist can dictate prices and conditions – and this situation is indeed exploited by monopoly providers. For example, the fees for placing an auction on eBay increased over time to 10% of the selling price (Höwelkröger 2014). Overall, the battle for dominance in the digital marketplaces is leading to a dangerous situation, because in the long term, successful platform providers can determine the market conditions under which the manufacturers of products and services have to offer their goods.
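The self-reinforcing dynamic behind "winner takes all" can be illustrated with a toy simulation (all numbers, and the assumption that a platform's pull grows with the square of its user base, are ours, not from the source):

```python
import random

random.seed(7)

# Toy "winner takes all" model: each new user joins one of two platforms
# with probability proportional to the *square* of its current user base,
# so attractiveness grows super-linearly and a small early lead compounds.
def simulate(new_users: int, start_a: int = 55, start_b: int = 45):
    a, b = start_a, start_b
    for _ in range(new_users):
        if random.random() < a**2 / (a**2 + b**2):
            a += 1
        else:
            b += 1
    return a, b

a, b = simulate(10_000)
print(f"Platform A: {a} users ({a / (a + b):.0%} market share)")
print(f"Platform B: {b} users")
```

With linear attractiveness the market shares would merely drift; with super-linear feedback the platform that starts slightly ahead almost always ends up dominating, which is the mechanism the text describes.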

2.6 Success Factors for the Use of Digital Platforms

At the same time, the difficulties in successfully establishing a digital platform on the market point to the critical success factors (Szczutkowski 2018) that decide whether a platform strategy succeeds. The nine success factors for the use of digital platforms are summarised in Fig. 2.13:


1. Utility
The cornerstone of any successful digital platform is finding a way to solve an existing customer problem. It doesn't matter whether the customer is having trouble finding a parking space, scheduling a doctor's appointment, ordering a taxi, or listening to their favorite music. For engineers and technicians, problem-solving is at the center of their own thinking and actions. Graduates of these fields often want their ideas and work to be judged by the value they bring to society. In this, they differ from graduates of business programs, for whom the profitability of a business model is often the primary consideration. So it's no wonder that many of the successful digital platforms were founded by engineers. They are looking for solutions that improve people's lives, and many of them have realized that digitization and digital platforms have this potential. Jeff Bezos, the founder of Amazon, studied electrical engineering and computer science at Princeton University, and the founders of Google (now Alphabet) – Larry Page and Sergey Brin – studied computer science at Stanford (Frank 2017).

2. Simplicity
Digital platforms make consumers' lives easier by linking data records and information from providers and customers in such a way that customers are offered exactly the service or product they need at that moment – and all this with just a few clicks. Ease of use includes the factors of time (can be accessed at any time), location (can be accessed from anywhere) and devices (can be accessed with all devices). Unobtrusively, digital platforms thus integrate themselves into people's lives without their use being perceived as a hurdle. Think about your own consumer behavior: Do you still order a pizza over the phone? No, you prefer to use a digital platform service in the browser or in a mobile app. The cut-throat competition between platforms is therefore not only a competition for the most users (network effect) but also a competition for the simplest and most intuitive usability and user experience (UX). At the same time, this development is changing consumer usage behavior. Modern customers do not just expect intuitive UX, they actively demand it. From a historical perspective, the offerings of platform providers have therefore changed from push to pull markets. While in the early days the gain in convenience came from the fact that the platforms were launched in the first place, today technicians and graphic designers take care of the most intuitive usability possible. If the platform providers do not succeed in convincing the customer of the ease of use in the shortest possible time, the customer can switch to the next provider with a click. "Convenience" and "UX" have thus become an integral part of the consumption process as a result of digitization. The battle for the most intuitively usable product was described by Eric Ries in his 2011 book "The Lean Startup" (Ries 2011). Ries argues that a young company should


concentrate its resources on creating a "Minimum Viable Product" (MVP): working prototypes are built for a small market, continuously improved based on customer feedback, and eventually developed into a broadly marketable product. Ries' thesis is that it is not decisive whether a product is already perfect when it enters the market. It is much more important that the product fulfils its main function for the user when it enters the market. The experience gained in the market can then be used to develop the product further, "iteratively". Eric Ries' recommendations to platform founders are:
• Test your products in the market before looking at the market cap.
• Don't listen to market analysts and focus groups.
• Look at what your customers who use the product every day are saying.

3. Customizability
Digital platforms also directly influence companies' production decisions. Before digital platforms existed, companies had to conduct extensive and costly capacity planning for production – today, with the help of platforms, they can scale supply volumes much more easily and quickly. Through the rapid exchange of information with consumers, manufacturers can respond directly to ordering processes and only have to start the manufacturing process once a transaction has been made. This makes the production process more and more individual. Books and digital photo albums are only printed once they have been ordered. Cars only roll off the production line once the customer has defined the color and equipment. As early as the 1990s, Dell developed its own business model from this: the so-called negative cash conversion cycle ("negative cash flow"). Here, a company agrees different payment terms with suppliers and customers. While the customer usually pays directly and the money – for example via credit card or PayPal – flows into the manufacturer's account in a matter of seconds, the manufacturer negotiates the longest possible payment terms with its suppliers. As a result, there is a time gap between the incoming payment from the purchase and the outgoing payment for production. During this time, the manufacturer can work productively with the acquired capital, make new investments or invest the money profitably.

4. Digital ecosystems
Successful digital platforms usually do not stop at a single digital offering, but rather expand their range of services over time so that the user receives more and more additional benefits through new functions. This creates digital usage ecosystems: the goal is to create a landscape of digital products so that there is no longer any incentive for the customer to leave the provider's


website. Apple and Google are such providers, but Amazon and Microsoft are also trying to offer ever-larger shares of digital services via their platforms. For example, an Amazon customer can already use the digital assistant Alexa to book airline tickets, plan trips and manage appointment calendars. The system warns of traffic jams on the way to work and helps solve technical problems in the household. Google Maps has also been upgraded over the years. It has developed into a multifunctional application that has been integrated into numerous systems and applications. Google Maps can now be used as a digital travel guide, event calendar and emergency communication system, for example in the event of floods or earthquakes. All of these functions can also be accessed via the Google Now digital assistant.

5. Speed
Innovations on digital platforms are coming to market at ever shorter intervals – this short "time-to-market" is a key factor in the success of a digital platform model. Today's customers expect their applications to be permanently up to date with the latest technical developments. To meet these demands, software updates are developed in rapid succession and automatically installed on customers' end devices. Kenneth Kaufman of Stanford University sums up this idea (quoted from Keese 2016): "Three months in Silicon Valley is an eternity." For providers of digital software solutions, it is therefore important not only to create useful and simple products, but also to make them available on the market as quickly as possible. The advantage is that the high speed brings companies quick feedback from the market. Whether a new functionality of a platform application is accepted or not can already be seen in the first few days after installation. This allows companies to change their approach to production: instead of checking the degree of perfection of a product in extensive test runs, the market itself is left to do the checking. The rule is therefore: fail fast, learn fast. Companies that operate according to this new paradigm develop a new error culture. Errors are not only accepted but explicitly regarded as a necessary step towards success (see Sect. 6.6).

Portfolio Theory
In 1952, the US economist Harry M. Markowitz presented the optimal investment behavior of investors with different risk preferences in stock markets in his portfolio theory (Markowitz 1952). The central statement of the model is that investment portfolios can be analyzed in terms of their risk-return situation through the diversification of risks and optimized with this knowledge. Applied to the portfolio analysis of a company, this theory can be used to identify inefficient risk-return situations. Figure 2.14 illustrates this relationship.


Fig. 2.14  Portfolio theory according to Markowitz. (The chart plots yield against risk for mixes of two asset classes; portfolio A is 100% bonds, E is 100% equities, B is the portfolio with the smallest fluctuation in value, and the section between C and D forms the efficient frontier.)

In Markowitz's model, an investor can decide in which ratio to allocate his capital between two different types of investment: for example, comparatively safe bonds with lower interest rates, or comparatively risky equities that offer a potentially higher return. If the investor puts all his capital into bonds (point A), this is inefficient from an economic perspective. If he adds equities to his portfolio – starting from point A – then his potential return initially increases while the risk decreases. That even bonds carry risk was demonstrated in the financial crisis of 2009/2010, when Greek government bonds were at risk of total default due to the Greek government's lack of solvency (Höhler 2016) – even though government bonds are considered one of the safest asset classes worldwide. From an investor's point of view, it therefore makes no sense to choose a point between A and B. Point E represents a portfolio that consists of 100% equities. This point is also inefficient from the perspective of portfolio theory, since between points D and E a slightly higher prospective return is accompanied by a much higher risk. The logic of Markowitz's risk model is, therefore: "Don't put all your eggs in one basket." Point B is the point with the smallest fluctuation in value. If the investor adds further shares to his portfolio – starting from point B – he buys a potentially higher return at a higher risk along the so-called efficient frontier. A rational investor thus chooses a point between C and D depending on his own risk tolerance. With the help of the combined findings of portfolio theory and the BCG matrix (see Chap. 3), companies' product portfolios can be analyzed and optimized.
From an economic perspective, it is neither advisable to put all future investments into the cash cows that have been successful to date (this would correspond to point A), nor should all previously successful products be thrown overboard and all hopes (and the associated funds) placed on young question mark products.
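The risk-return trade-off in Markowitz’s two-asset setting can be sketched numerically. All figures below (bond and equity returns, volatilities, and a negative correlation) are illustrative assumptions, not values from the text; the closed-form minimum-variance weight corresponds to point B in Fig. 2.14.

```python
import math

# Hypothetical asset parameters (illustration only, not from the text):
# a comparatively safe bond and a comparatively risky equity.
r_bond, s_bond = 0.04, 0.05   # expected return, volatility (std. dev.)
r_eq,   s_eq   = 0.10, 0.20
rho = -0.2                    # assumed correlation; negative values drive
                              # the diversification effect between A and B

def portfolio(w_eq):
    """Expected return and risk of a mix with equity weight w_eq."""
    w_b = 1.0 - w_eq
    ret = w_b * r_bond + w_eq * r_eq
    var = (w_b * s_bond) ** 2 + (w_eq * s_eq) ** 2 \
          + 2 * w_b * w_eq * rho * s_bond * s_eq
    return ret, math.sqrt(var)

# Closed-form minimum-variance weight for two assets (point B):
cov = rho * s_bond * s_eq
w_min = (s_bond ** 2 - cov) / (s_bond ** 2 + s_eq ** 2 - 2 * cov)

for w in (0.0, w_min, 0.5, 1.0):
    ret, risk = portfolio(w)
    print(f"equity weight {w:4.2f}: return {ret:.1%}, risk {risk:.1%}")
```

With these assumed numbers, the 100%-bond portfolio (point A) carries more risk than the minimum-variance mix, which is exactly why points between A and B are inefficient.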

2.6  Success Factors for the Use of Digital Platforms


6. Willingness to take risks
In line with Markowitz’s portfolio theory, companies entering digital platform business models should be aware that taking risks is part of the platform economy. The success of a digital platform depends on numerous factors, including the concentration of customers on the largest providers (“winner takes all”). Companies that want to enter the platform competition must therefore increase their entrepreneurial risk in the short term through investments; at the same time, this increases their potential returns in the future. In terms of Markowitz’s portfolio theory, they shift their own product portfolio from point C towards point D (see Fig. 2.14).

The greater risk appetite of companies is closely linked to the amount of venture capital available for deployment. Take a startup that launches a digital platform as an example: the financing of startups is mostly provided by venture capitalists, who usually exchange their investments for shares in the startup. If the young company is successful, the value of those shares increases. Nevertheless, it is a risky game: about 80% of startups whose market entry was co-financed by venture capitalists fail within the first three years (Kucklick 2013). Established companies are no different: they have to factor in the risk of failure and the total loss of their investments. Due to the market peculiarities described in Sect. 2.4 (“data leveraging”, “winner takes all”), the risk of failure in the area of digital platforms is significantly higher than with other forms of investment.

7. Change mentality
The willingness to take risks is directly connected to the willingness to change. The principle that applied to successful companies for many years, namely taking care of one’s own cash cows, no longer holds under the conditions of digitization.
Steve Jobs demonstrated this willingness to change many times in the course of his career by permanently questioning the successful products of the past, even and especially when sales were booming (Gassmann et al. 2013). With this approach, Steve Jobs succeeded in disrupting no fewer than four markets with his product and business ideas:

1. The personal computer market (with the Apple home computer)
2. The music industry (with Apple iTunes and the iPod)
3. The film industry (with Pixar, the company he led)
4. The mobile phone market (with the Apple iPhone)

In business, this willingness to change is called pivoting: entrepreneurs show a willingness to modify an original direction and adapt to the conditions of the market. Tesla is a good example of this idea: Elon Musk was willing to throw his original concept for the production of e-cars overboard and instead conquer the market with a completely new product in a second attempt.


Another example of risky but ultimately successful pivoting is Instagram, now part of the Meta group. The company first launched under the name “Burbn” with a mobile application that allowed users to post pictures and “check in” at locations. Despite heavy funding from venture capitalists, the launch was a flop: the app was bursting with features and did not differentiate itself clearly enough from competitors like Foursquare. The second, completely reprogrammed version of the app, which focused only on uploading photos, also failed. It was not until the third fresh attempt, when the app included different filters for photo editing, that success set in. Instagram became so popular that the company was sold to Facebook for one billion dollars just two years after its launch (Techcrunch 2019).

8. Agile working methods
The stronghold of the global digitization movement is still San Francisco and the neighboring Silicon Valley. A creative cluster has formed around this former military research location, triggering an economic and social network effect: in Silicon Valley, investors and founders are close to each other, which in turn attracts further founders and investors. Ironically, one of the foundations of Silicon Valley’s success is its “analog” work culture. Important meetings are held face to face, and important deals are closed in the same way; engineers and investors primarily seek face-to-face conversations. Work is usually done in small interdisciplinary teams that produce concrete software in short sprints (Sect. 6.4). Agile working methods such as design thinking characterize collaboration and are already taught locally in university education. The role model in this context is Stanford University, where an institute for design thinking was founded as early as 2005 under Professor David M. Kelley: the Stanford d.school.
Today, this institution is called the Hasso Plattner Institute of Design (HPI 2019) in honor of the German founder of SAP. It is therefore not surprising that numerous digital businesses were founded by graduates or students of Stanford University, among them Hewlett-Packard, Cisco, Google, Yahoo, Sun Microsystems, eBay, Netflix, Electronic Arts, Silicon Graphics, LinkedIn and PayPal. Professors here see themselves as enablers who encourage entrepreneurial thinking in their students and help with spin-offs. According to Stanford University’s own study, about 40,000 companies have emerged from Stanford, creating a combined 5.4 million jobs and generating $2.7 trillion in annual revenue (Keese 2016).

9. Social framework conditions
The social framework conditions contribute significantly to the success of a digital platform. These include external market factors and the influence of politics on the economic process. American and Chinese companies, for example, have the advantage of the size of their home market. Around 327 million people currently live in the USA. That


equates to about 150 million households that can be addressed without language barriers in a common market. In China, the numbers are far more impressive: by 2020, over 300 million households were expected in China’s cities alone, of which more than half have a household income equivalent to over USD 16,000 per year (Atsmon et al. 2012). In both markets, in China as well as in the USA, a digital euphoria has spread on the customer side. In China, for example, the number of users of digital services rose to over 800 million people in 2018 (Kroll 2018).

There is no doubt that government support plays a major role in the success of digital platforms. This starts with a functioning infrastructure provided by the state and extends to the question of how innovation-friendly the climate in a country is and how large the financial support is. The US and China have another thing in common: for years, the governments of both countries have clearly signaled that investments in the creation of digital services are encouraged and supported by government programs.

However, the US and China are taking different approaches. In the US, companies and start-ups are assessed differently for tax purposes than in Europe. For example, Amazon received a tax credit of $129 million in 2018, despite reporting an annual profit of $10.1 billion in the same year (boersenblatt 2019). That translates to a tax rate of −1%. Amazon is not the only company profiting from these framework conditions: around 100 companies from the Fortune 500 stock market index were able to reduce their taxes to zero in 2018 or even received tax credits (Firlus 2019).

China has taken a different development path. To accelerate the catch-up process, the Chinese government is prepared to invest large sums in support of digital initiatives. For example, the “National Informatization Strategy” (2016–2020) supported Chinese companies in conquering foreign markets with their digital products.
The government’s goal is the creation of a “digital silk road”. In addition, the two programmes “Made in China 2025” and “Internet Plus” were launched in 2015 to help strengthen domestic innovation in the field of digitization (Shi-Kupfer and Ohlberg 2019). In recent years, companies such as Alibaba, Tencent and Huawei have succeeded in gaining ever greater shares in Western digital markets as well. A current example of this development is the Chinese company Bytedance, whose social media application “TikTok” is popular in Western Europe and the USA. The company consistently relies on artificial intelligence to improve the user experience with the collected customer data. With this success, Bytedance is currently the most valuable startup in the world, with an estimated market value of USD 75 billion (Byford 2018).

With this nationally driven strategy, China transformed its economy within two decades from being the extended workbench of the Western world to an innovator in the digital sector. Alongside the USA, China is thus a global center of gravity for digital dynamics.


2.7 Conclusion

Digitization is the dominant technological and social trend of the present. This trend will have major social consequences that go far beyond the collective sharing of selfies via smartphones. The theses of this chapter can be summarized in three central statements:

1. Everything is becoming digital.
2. Digital platforms are the successful business model of today.
3. There are numerous success factors to consider when implementing digital platform business models.

The catchy thesis “Everything is becoming digital” emphasizes that ever greater shares of the entire value creation process are migrating from physical manufacturing to digital data processing. The production and capital flows of the future will be determined by those companies that know best how to satisfy customer wishes. This puts digital platforms in pole position for economic development in the coming years, because the operators of these platforms have acquired the greatest experience in storing and processing customer data and have thus realigned consumption and usage behavior.

The digital markets are attractive, and competition has become correspondingly tougher. Numerous factors determine the success or failure of the launch of new platforms; only if companies understand these factors can they survive in the market. European companies currently lag behind American and Chinese companies in terms of digitization. The USA and China have understood more quickly how digital markets develop and what their success factors are, and have therefore gained a considerable lead in the area of digital platforms. Many indicators suggest that this lead in digitization will solidify in the coming years. For example, the Chinese patent authorities registered over 420,000 patents in 2017, a growth of 3.9% compared to 2016.
In contrast, only just under 320,000 patents were granted in the USA, and the European Patent Office awarded just under 106,000 patents (Krempl 2018). However, some indications can be interpreted positively for the European challengers: the faster technology develops, the less effort is required to catch up. The example of China makes clear that with a coherent strategy, even large leads of other countries in digital subareas can be closed. China can thus serve as an example for the European economy to move boldly forward with the digital transformation. How this can be implemented in companies is examined in the following chapters.


References

Atsmon, Yuval, Max Magni, Lihua Li and Wenkan Liao (2012): Meet the 2020 Chinese Consumer, in: mckinsey.com, https://www.mckinsey.com/~/media/mckinsey/featured%20insights/asia%20pacific/meet%20the%20chinese%20consumer%20of%202020/mckinseyinsightschina%20meetthe2020chineseconsumer.ashx, accessed June 2019.
Automationspraxis (2018): Axoom nun serienmäßig in Trumpf-Maschinen, in: automationspraxis.de, https://automationspraxis.industrie.de/news/axoom-nun-serienmaessig-in-trumpf-maschinen/, accessed June 2019.
Bergmann, Thomas (2017): Digitalisierung – Hype oder kalter Kaffee?, in: linkedin.com, https://www.linkedin.com/pulse/digitalisierung-hype-oder-kalter-kaffee-thomas-bergmann/?originalSubdomain=de, accessed May 2019.
BMWi (2019): Bundesministerium für Wirtschaft und Energie – Nationale Industriestrategie 2030. Strategische Leitlinien für eine deutsche und europäische Industriepolitik. Download at https://www.bmwi.de/Redaktion/DE/Publikationen/Industrie/nationale-industriestrategie-2030.pdf?__blob=publicationFile&v=24, accessed May 2019.
Börner, Yannik (2017): Steam: So hoch ist der Umsatz von Valve und Entwicklern, in: chip.de, https://praxistipps.chip.de/steam-so-hoch-ist-der-umsatz-von-valve-und-entwicklern_98293.
Boersenblatt (2019): Amazon verdreifacht den Gewinn, in: boersenblatt.net, https://www.boersenblatt.net/2019-02-01-artikel-jahresbilanz_2018.1592168.html, accessed May 2019.
Byford, Sam (2018): How China’s Bytedance became the world’s most valuable startup, in: theverge.com, https://www.theverge.com/2018/11/30/18107732/bytedance-valuation-tiktok-china-startup, accessed May 2019.
Campillo-Lundbeck, Santiago (2019): Wie die Marke Otto zur Plattform werden will, in: horizont.de, https://www.horizont.net/marketing/nachrichten/exklusiv-interview-wie-die-marke-otto-zur-plattform-werden-will-173696, accessed May 2019.
Computerbild (2019): Steam: Nutzerzahlen veröffentlicht – alle Werte im Überblick, in: computerbild.de, https://tipps.computerbild.de/unterhaltung/gaming/steam-nutzerzahlen-636561.html, accessed May 2019.
Facebook (2019): Facebook Q1 2019 Results, in: investor.fb.com, https://s21.q4cdn.com/399680738/files/doc_financials/2019/Q1/Q1-2019-Earnings-Presentation.pdf, accessed June 2019.
Firlus, Thorsten (2019): Amazon bekommt Steuern zurück, in: wiwo.de, https://www.wiwo.de/unternehmen/dienstleister/unternehmenssteuern-in-den-usa-amazon-bekommt-steuern-zurueck-/24007498.html, accessed May 2019.
Frank, Roland (2017): Vom CEO zum CTO – Sind Techniker die besseren Chefs?, in: cloud-blog.arvato.com, https://cloud-blog.arvato.com/vom-ceo-zum-cto-sind-techniker-die-besseren-chefs/, accessed May 2019.
Fritz, Stefan (2018): Worüber wir statt der Digitalisierung sprechen sollten, in: computerwoche.de, https://www.computerwoche.de/a/worueber-wir-statt-digitalisierung-sprechen-sollten,3544566, accessed May 2019.
Gassmann, Oliver, Karolin Frankenberger and Michaela Csik (2013): Geschäftsmodelle entwickeln: 55 innovative Konzepte mit dem St. Galler Business Model Navigator, Carl Hanser Verlag, München.
Gat, Aviva (2014): Munich startup Tado keeps your house cool when no one is home, in: geektime.com, http://www.geektime.com/2014/04/21/munich-startup-tado-keeps-your-house-cool-when-no-one-is-home/, accessed June 2019.


GSMA (2019): The Mobile Economy, in: gsmaintelligence.com, https://www.gsmaintelligence.com/research/?file=b9a6e6202ee1d5f787cfebb95d3639c5&download, accessed June 2019.
Höhler, Gerd (2016): So begann die Krise in Griechenland, in: rp-online.de, https://rp-online.de/politik/ausland/der-23-april-2010-so-begann-die-krise-in-griechenland_aid-9645883, accessed May 2019.
Horizont (2019): Alle wollen Plattform sein, in: Horizont, issue 12, 21.03.2019.
Höwelkröger, Robert (2014): eBay erhöht seine Verkaufsprovision auf 10%, in: heise.de, https://www.heise.de/newsticker/meldung/eBay-erhoeht-seine-Verkaufsprovision-auf-10-Prozent-2086004.html, accessed May 2019.
HPI (2019): Geschichte des Hasso Plattner Instituts, in: hpi.de, https://hpi.de/das-hpi/organisation/geschichte.html, accessed June 2019.
Hülsbömer, Simon and Hans-Christian Dirscherl (2019): Die Geschichte von Microsoft Office, in: pcwelt.de, https://www.pcwelt.de/ratgeber/Microsoft-Office-Mit-einer-Maus-fing-alles-an-6091613.html, accessed May 2019.
Keese, Christoph (2016): Silicon Valley – Was aus dem mächtigsten Tal der Welt auf uns zukommt, Penguin Verlag, München.
Kneuer, Marianne and Thomas Demmelhuber (2012): Die Bedeutung Neuer Medien für die Demokratieentwicklung, in: Medien und Politik, Vol. 35, Innsbruck-Wien-Bozen.
Krempl, Stefan (2018): Patente – China etabliert sich als Großmacht, in: heise.de, https://www.heise.de/newsticker/meldung/Patente-China-etabliert-sich-als-Grossmacht-4239326.html, accessed May 2019.
Kroll, Sonja (2018): Mehr als 800 Millionen Internetnutzer in China, in: internetworld.de, https://www.internetworld.de/e-commerce/zahlen-studien/800-millionen-internetnutzer-in-china-1574347.html, accessed May 2019.
Kucklick, Christoph (2013): Schluss mit dem Scheitern, in: zeit.de, https://www.zeit.de/2014/01/scheitern-misserfolg, accessed June 2019.
Loesche, Dyfed (2017): Netflix verdoppelt seinen Gewinn bei den Emmys, in: de.statista.com, https://de.statista.com/infografik/11130/nominierungen-und-gewonnene-emmys-von-netflix-produktionen/, accessed June 2019.
Markowitz, Harry M. (1952): Portfolio Selection, in: Journal of Finance, No. 7, pp. 77–91.
McLuhan, Marshall (1962): The Gutenberg Galaxy, Toronto.
Pillau, Florian (2019): Updates für Teslas „Navigate on Autopilot“, in: heise.de, https://www.heise.de/autos/artikel/Update-fuer-Teslas-Navigate-on-Autopilot-4359926.html, accessed May 2019.
Podbregar, Nadja (2019): Happy Birthday World Wide Web, in: scinexx.de, https://www.scinexx.de/news/technik/happy-birthday-world-wide-web/.
Porat, Marc Uri (1977): The Information Economy, Michigan.
Reinsel, David, John Gantz and John Rydning (2018): The Digitization of the World From Edge to Core, in: seagate.com, https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf, accessed June 2019.
Reissmann, Frank (2013): Geschichte der CRM-Software von Microsoft, in: blog.rrcg.de, https://blog.rrcg.de/2013/10/15/geschichte-der-crm-software-von-microsoft/, accessed May 2019.
Ries, Eric (2011): The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, Currency, New York.
Rouse, Margaret (2015): Definition Arpanet/Darpanet, in: computerweekly.com, https://www.computerweekly.com/de/definition/ARPANET-DARPANET, accessed May 2019.


Sauer, Olaf (2019): Digitaler Zwilling, in: iosb.fraunhofer.de, https://www.iosb.fraunhofer.de/servlet/is/80212/, accessed May 2019.
Schmidt, Julian and Paul Drews (2016): Auswirkungen der Digitalisierung auf die Finanzindustrie – eine strukturierte Literaturanalyse auf der Basis der Business Model Canvas, in: researchgate.net, https://bit.ly/2EiYAvl, accessed May 2019.
Siegler, M. G. (2010): Every 2 Days We Create as much Information as We Did up to 2003, in: techcrunch.com, https://techcrunch.com/2010/08/04/schmidt-data/, accessed June 2019.
Shi-Kupfer, Kristin and Mareike Ohlberg (2019): China’s Digital Rise – Challenges for Europe, Merics Papers on China, No. 7, April 2019, https://www.merics.org/sites/default/files/2019-04/MPOC_No.7_ChinasDigitalRise_web_final.pdf, accessed May 2019.
Schwender, Clemens (2019): Chronologie von Fernsprechwesen und Telefonbuch in Berlin, in: schwender.in-berlin.de, https://www.schwender.in-berlin.de/, accessed May 2019.
Szczutkowski, Andreas (2018): Kritische Erfolgsfaktoren, in: wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/kritische-erfolgsfaktoren-38219/version-261645.
Techcrunch (2019): Pivoting to success: Agile founders who turned their companies on a dime, in: techcrunch.com, https://tcrn.ch/2WZj2ZQ, accessed May 2019.
Urbach, Nils and Frederik Ahlemann (2016): IT-Management im Zeitalter der Digitalisierung, Springer Gabler, Wiesbaden.
Wagler, Alexandra (2018): Die Auswirkungen der Konvergenz der Medien auf den öffentlich-rechtlichen Rundfunk, insbesondere auf die Regelungen im Rundfunkstaatsvertrag, Berlin.

Further literature on digitisation & society

3  The Road to a Zero Marginal Cost Economy

Abstract

The marginal costs of companies in the digital industry are approaching zero. This clearly distinguishes the business models of Facebook or Google from the business models of the old economy. While the marginal costs of industrial goods decline only slowly even at large production volumes, digital “zero marginal cost products” make it possible to reach the break-even point even with low output volumes. This advantage combines with an almost infinite scalability of digital production. It is difficult for companies in “traditional” industries to compete with such business models. A model for analyzing disruptive change in markets, presented in this chapter (Sect. 3.5), helps answer the question under which conditions investments in new digital technologies are worthwhile. In addition, the model can be used to examine which markets and industries will undergo digital disruption in the coming years.

3.1 Big Is Beautiful

For many companies, the principle “big is beautiful” still applies today. In the twentieth century, companies grew: factories and manufactories initially became nationally active companies and corporations. Gradually, these grew into internationally operating multi-billion corporations that dominate economic development today. This trend was evident in the seven major waves of corporate mergers over the last 120 years (Wirtz 2012; Hinne 2007):

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_3



1897–1904
Industrialization reached its first peak at the end of the nineteenth century. The resulting overcapacities led to price collapses in consumer goods markets. Horizontal mergers between companies often took place with the intention of stabilizing market prices. This first wave of mergers ended with the stock market crash of 1904 in the USA.

1916–1929
In the “Roaring Twenties”, the competition authorities in the USA enacted numerous anti-trust laws to prevent dominant positions of individual companies. Nevertheless, numerous companies continued to strive for a monopoly position in their respective markets. Vertical acquisitions along the value chain enabled companies to circumvent the requirements of the anti-trust legislation. This second wave of mergers also ended with a stock market crash, the so-called “Black Friday” of 1929.

1965–1969
In the 1960s, numerous conglomerates and consortiums emerged. The goal of corporate leaders during this period was portfolio expansion and risk diversification. The results of this development still weigh on the Japanese economy today, where companies with broad and diversified product portfolios continue to dominate. At the end of the 1960s, companies in Germany also grew in size and the number of conglomerates increased; one example of this trend is the expansion of Siemens in the 1960s.

1984–1989
The 1980s were the time of “merger mania”. This was triggered, among other things, by political liberalization in the USA and Great Britain (the era of Reagan and Thatcher), which gave companies more freedom in structuring corporate takeovers. This period saw a surge in the number of leveraged buyouts and hostile takeovers. The recession from the late 1980s onwards brought this phase to an end.

1993–2001
Fueled by major mergers and acquisitions, such as the Daimler-Chrysler merger, the stock market surged in the 1990s.
In addition, shareholders had high expectations for the emerging technologies of biotech and the Internet. Corporate leaders used the available capital to create large global corporations; an example of this development is the merger of AOL and Time Warner. The stock market crash on the NASDAQ in the USA and on the Neuer Markt in Germany in 2001 put an end to this development.

2003–2007
Large hedge funds in particular played a crucial role in the first decade of the new century. With their enormous purchasing power, investors were able to finance major takeovers. In addition, countries such as China and India appeared for the first time as buyers in the


Western markets for corporate mergers. The global economic crisis from 2007 onwards initially ended this development.

2008 – Today
Since 2008, the number of business combinations has been rising again (Imaa 2018). The value of global mergers and acquisitions doubled from 2009 (global volume: USD 2.19 trillion) to 2015 (global volume: USD 4.76 trillion). Since 2015, the volume of global transactions has settled below the USD 4 trillion threshold (2017: USD 3.68 trillion). The trend towards larger companies is being driven by private equity firms, whose business models are based on economies of scale. The relationship between increasing company size and rising economies of scale is examined in Sect. 3.1.1.

3.1.1 Economies of Scale and Experience Curves

Large companies have many strategic and tactical advantages over small companies. First and foremost, they use their economies of scale and market share to produce their products and services at significantly lower cost than their competitors. In classical goods and industrial production, the following rule applies: each newly produced car, each newly produced service costs approximately as much as the car or service produced directly before it. The cost of producing the last unit decreases only slowly (see Sect. 3.1.2), so only traditional companies with large production volumes benefit from the effect of sinking marginal costs.

The fact that goods become cheaper when more of them are produced was already known at the beginning of industrialization. In industrial production, the following relationship applies: the unit costs of production generally fall by 20 to 30% when the output quantity doubles (Dreiskämper 2018). Strategic management models took these findings and developed their own approaches to depict the benefit of larger market shares. Bruce Henderson, one of the founders of the Boston Consulting Group, revolutionized strategic thinking in management with his models: from the fundamental insight of the advantage of larger market shares, he developed the strategic management tools of the “experience curve” and the “Boston Consulting Matrix” (Henderson 1968, 1984; see background information on the Boston Consulting Matrix). Following his ideas, capital providers and banks supported corporate leaders in building ever-larger companies to improve the companies’ strategic starting position.

The Boston Consulting Matrix
In 1970, Bruce Henderson, founder of the Boston Consulting Group, developed a strategic management tool that remains popular among analysts (Baum et al. 2013).


The model is based on two initial considerations:

1. Bigger market shares lead to greater advantages over competitors. There are many reasons for this: economies of scale, economies of scope, and negotiating power vis-à-vis suppliers, partners, and customers raise the profit margins of companies with high market shares.
2. The market success of products and product classes changes over time. The product life cycle describes the different phases of each product and product class. In the long term, all products exit the market and are replaced by new products (Appleyard et al. 2006).

To determine the success of a product and its future viability, both findings were transferred to the portfolio model of the Boston Consulting Matrix (BCG matrix). The positioning of a product on the x-axis is based on the relative market share of the producing company, i.e. the quotient of the company’s own market share and the market share of the strongest competitor. The positioning on the y-axis is based on the current market growth of the product class.

In summary, Bruce Henderson’s portfolio model yields four quadrants that allow a forecast of success for individual products of a company. Quadrant 1 contains a company’s “question marks”: products with high market growth for which the company has not yet secured large market shares. Quadrant 2 contains the “stars”: products with a high market share and high growth potential. Quadrant 3 includes products that have achieved a high market share but whose product class is already slowing down or even declining; these products are referred to as “cash cows”. Products with a low market share in a declining submarket are called “poor dogs”. Ideally, the product life cycle runs through all four phases clockwise from quadrant 1 to quadrant 4 (see Fig. 3.1).
The classification of products into the classes of the BCG matrix results in possible standard strategies for corporate managers: question marks represent potential investment targets; managing the stars is a central task of corporate management; cash cows and poor dogs, on the other hand, are candidates for a divestment strategy. Due to its informative and predictive power, the BCG matrix is still an important model for understanding market structures and determining future success potential.

Bruce Henderson’s experience curve illustrates that companies benefit from falling unit manufacturing costs as output volumes increase. The prerequisite for this effect is that efficiency gains occur over time. These effects are known as economies of scale; they include learning effects through repetition of production activities, the introduction of efficiency-enhancing production techniques, automation, and rationalization. The increased market power in factor markets that firms gain through economies of scale also contributes to the emergence of experience curves.
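The 20-30% rule behind the experience curve can be written as a simple power law: if unit costs fall by a fixed share r each time cumulative output doubles, the cost of the n-th unit is C(n) = C(1) · n^(log₂(1−r)). A minimal sketch, where the starting cost of 100 and the 25% learning rate are assumed example values, not figures from the text:

```python
import math

# Experience-curve sketch: unit costs fall by a fixed share each time
# cumulative output doubles. The 20-30% range comes from the text
# (Dreiskaemper 2018); the concrete parameters here are assumptions.
def unit_cost(cum_output, first_unit_cost=100.0, drop_per_doubling=0.25):
    b = math.log2(1.0 - drop_per_doubling)   # elasticity, about -0.415 for 25%
    return first_unit_cost * cum_output ** b

for n in (1, 2, 4, 8, 16):
    print(f"cumulative unit {n:2d}: unit cost {unit_cost(n):6.2f}")
# -> 100.00, 75.00, 56.25, 42.19, 31.64: each doubling cuts cost by 25%
```

The exponent b is negative, so the curve falls steeply at first and flattens out, which is exactly why early market-share gains are so valuable under this model.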


Fig. 3.1  Boston Consulting Group portfolio analysis (x-axis: relative market share in relation to the largest competitor; y-axis: market growth in percent; quadrants: Question Marks, Stars, Cash Cows, Dogs)

An example of economies of scale through experience curves is the traditional business model of the textile industry. Because of the volume discounts that suppliers grant large textile manufacturers for large purchase quantities, manufacturing costs gradually decrease as output volumes increase. In addition, routines and workflows become more efficient over time, resulting in potential savings with each additional garment produced. Increasing efficiency along the value chain, for example through the introduction of just-in-time production or Kanban, also helps save further costs as the company grows.

The strategic implications of experience curves are extensive. Given these potential cost savings, it is advantageous to gain a large market share quickly in competitive markets: large corporations can produce more cheaply and reduce the prices of their products, and in this way dominant companies force their competitors out of the market in the long term. To this day, company size is an important strategic factor in achieving competitive advantages, which is why investors provide large amounts of capital to allow larger corporations to emerge. The effects of the experience curve are illustrated, among other things, by a falling marginal cost curve. To clarify this relationship, Sect. 3.1.2 first takes a closer look at the cost curves of companies.

3.1.2 Marginal Cost Analysis

The cost curve of a company shows how costs change depending on the output quantity. A total of six different cost curves can be distinguished.


3  The Road to a Zero Marginal Cost Economy

Proportional (linear): There is a linear dependence of costs on the quantity produced; costs increase in direct relation to the output quantity.

Degressive (under-proportional): The cost of production increases more slowly as the output quantity increases, so the unit cost of production decreases with larger quantities. One reason for this development is the learning curve effect.

Progressive (over-proportional): Costs increase faster than output quantities; that is, producing the final good or service becomes disproportionately more expensive. Progressive cost trajectories occur, for example, when labor becomes more expensive because overtime has to be paid.

Regressive (decreasing): If total production costs decrease as the output quantity increases, the cost curve is regressive. For example, the cost of heating a room decreases with the number of people in the room.

Fixed: Fixed costs remain constant regardless of the output quantity produced.

Step-fixed: Within delimited intervals, fixed costs do not change; costs only change abruptly when a threshold output quantity is reached. An example of such a jump in costs is the acquisition of new machines to increase capacity.

In reality, cost curves are more complicated than these six basic types. A more detailed view becomes possible if costs are divided into variable and fixed costs. Variable costs arise for a company when a good or service is actually produced. Siemens, for example, has been a supplier to Deutsche Bahn for many years. For every ICE that Siemens produces, it needs numerous individual components: carriages, wheels, engines, seats, etc. As a manufacturer, Siemens procures these components from its suppliers before the production process; each unit produced therefore makes the production of the end products more expensive. Fixed costs, on the other hand, are incurred by companies regardless of the volume of production and employment. Fixed costs usually include rents and wages. For Siemens, this means that fixed costs (the cost of paying employees and of maintaining and renting manufacturing equipment) must be paid even if no new train is produced in a period.
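The six basic cost curve types can be sketched as simple functions of the output quantity x. All coefficients below are illustrative assumptions, chosen only to produce the described shapes:

```python
# Illustrative total cost functions K(x); coefficients are assumptions.
def proportional(x):  return 5.0 * x                    # linear in output
def degressive(x):    return 50.0 * x ** 0.8            # grows slower than x
def progressive(x):   return 2.0 * x ** 1.5             # grows faster than x
def regressive(x):    return 1_000.0 / (1.0 + x)        # falls as output rises
def fixed(x):         return 100_000.0                  # independent of output
def step_fixed(x):    return 100_000.0 * (1 + x // 500) # jumps every 500 units
```

For the degressive curve the unit cost K(x)/x falls with x, while for the progressive curve it rises; the step-fixed curve stays flat until a capacity threshold (here every 500 units) forces a new block of fixed costs.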


Fig. 3.2  Physical vs. digital products – average costs and marginal costs in comparison (Clement and Schreiber 2016). (The figure plots average costs (AC) and marginal costs (MC) against the output quantity for physical and digital products: AC_physical, MC_physical, AC_digital and MC_digital, with the digital curves starting from the cost of the first copy.)

The sum of variable and fixed costs is the total cost of the enterprise. Dividing the total costs (K) by the output quantity (x) results in the average costs (AC) of a company (see Fig. 3.2). The following relationship applies:

AC = K / x  (3.1)

An increase in the output quantity reduces the average costs, because the share of fixed costs that must be financed per product sold shrinks. This relationship is referred to as fixed cost degression (Clement and Schreiber 2016).

What Are Marginal Costs?
From a mathematical perspective, marginal cost is the first derivative of total cost. The marginal cost function thus gives the slope of the cost function at each point. In other words, it states the cost the firm incurs to produce one additional unit. The following relationship holds:

K′(x) = marginal costs (MC)  (3.2)
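Equations 3.1 and 3.2 can be made concrete with a small numerical sketch. The cost function below (EUR 100,000 fixed costs plus degressive variable costs) is an illustrative assumption, not a figure from the text:

```python
def total_cost(x: float) -> float:
    """Illustrative total cost function K(x): 100,000 EUR fixed costs
    plus degressive variable costs (assumed figures)."""
    return 100_000 + 2.0 * x ** 0.9

def average_cost(x: float) -> float:
    """Average cost AC = K(x) / x (Eq. 3.1)."""
    return total_cost(x) / x

def marginal_cost(x: float, h: float = 1e-3) -> float:
    """Marginal cost MC = K'(x) (Eq. 3.2), approximated numerically
    by a central difference quotient."""
    return (total_cost(x + h) - total_cost(x - h)) / (2 * h)
```

At an output of 1,000 units the marginal cost (about EUR 0.90 here) lies far below the average cost, which is still dominated by the fixed cost share – the fixed cost degression described above.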

The example of a printing company illustrates this connection. The fixed costs of production are usually high in this industry. The production facilities are provided and maintained regardless of the order volume. At the same time, wage costs are incurred to pay the employees. These costs are incurred regardless of the number of print assignments processed.


If the print shop receives a print order, for example for a certain number of information brochures, there is a close correlation between the quantity ordered and the price per unit. This is due, among other things, to the purchasing discounts that a print shop can obtain from its suppliers: the more paper the company orders from a paper producer, the cheaper it is to process a single print assignment.

Assume that the print shop’s fixed costs amount to EUR 100,000 per month. When purchasing 10,000 printed sheets, the printing company pays the paper manufacturer EUR 2 per sheet; when 1,000,000 printed sheets are purchased, the company pays only EUR 1 per sheet. If 10,000 sheets are printed and delivered in a month, the average cost per sheet is EUR 10 (fixed costs divided by output quantity) + EUR 2 (variable costs) = EUR 12. If one million sheets are printed during the month, the average cost drops to EUR 1.10 (EUR 0.10 + EUR 1). The average costs have therefore fallen by EUR 10.90, while the marginal costs have fallen from EUR 2 to EUR 1. The company can pass on these cost advantages, gained by expanding production, to its clients. It is therefore significantly cheaper to award large print assignments to printers.

The consideration of marginal costs is suitable for analyzing the cost structure of a company. Low marginal costs allow a company to quickly reach the so-called break-even point: the output quantity at which the cost of production (i.e., the sum of variable costs and fixed costs) equals the firm’s sales of the product. If the sales per unit are greater than the marginal costs at this point, each additional unit produced increases the company’s profit.

Digitization is leading to the dominance of new business models: zero marginal cost business models. These are characterized by the fact that a company reaches the break-even point even at low sales volumes due to its favorable cost structure.
How these business models are created and which industries are affected by this development is described in Sect. 3.2.
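The print-shop arithmetic can be checked in a few lines. The quantities and unit prices follow the example; the selling price used for the break-even calculation is an added assumption:

```python
FIXED_COSTS = 100_000  # EUR per month, as in the example

def average_cost(quantity: int, variable_cost_per_sheet: float) -> float:
    """Average cost per sheet: fixed cost share plus variable cost."""
    return FIXED_COSTS / quantity + variable_cost_per_sheet

small_run = average_cost(10_000, 2.0)     # 10 EUR fixed share + 2 EUR variable
large_run = average_cost(1_000_000, 1.0)  # 0.10 EUR fixed share + 1 EUR variable
print(small_run, large_run)  # 12.0 1.1

def break_even_quantity(price_per_sheet: float, marginal_cost: float) -> float:
    """Output quantity at which revenue covers fixed and variable costs."""
    return FIXED_COSTS / (price_per_sheet - marginal_cost)

# Hypothetical selling price of 1.50 EUR per sheet at 1 EUR marginal cost:
print(break_even_quantity(1.5, 1.0))  # 200000.0
```

The lower the marginal cost relative to the price, the smaller the break-even quantity – which is exactly why low marginal costs matter strategically.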

3.2 Zero Marginal Cost Business Models

Digitization marks a transition between the traditional industrial production of the twentieth century and the digital economy of the twenty-first century. While companies in the goods and services sectors are making their internal value creation processes more efficient along the experience curves to improve their market position, a new generation of companies has discovered an independent field of activity for itself: the provision and processing of digital information. With this new positioning, digital companies are changing the rules of the game in consumer goods markets. This is because the marginal costs of companies that rely on digital business models amount to almost zero from the first product or service onwards – even with low output volumes. While many European corporations specialize in offering physical products and services, American corporations from Silicon Valley have become successful by providing


digital platforms and disseminating and analyzing bits and bytes (Chap. 2). Digitization is causing a power shift between these two forms of production. While digital business models and the creation of software have become the drivers of economic development, physical production processes are playing an increasingly subordinate role. The example of the online game Fortnite illustrates how successful business models from the digital industry currently are.

The Business Model of the Online Game Fortnite
“Fortnite: Battle Royale” is one of the most successful online games of all time. In 2018, the software company Epic announced that 125 million players worldwide used the game. This means that the game has overtaken the former industry leader League of Legends (Albrecht 2018). Fortnite is a multiplayer game in which 100 online players simultaneously compete to be the last to survive on a virtual island. As in other battle royale shooters, the object is to equip your digital avatar with items and weapons and eliminate all other opponents as quickly as possible.

What makes Fortnite special is its sophisticated business model. After purchasing the starting package, online players have a range of visual and functional enhancements for their characters at their disposal, which they can buy while playing (so-called in-app purchases). For example, players can purchase changing player outfits (skins) for USD 20, which are available only for a limited time. The time limit strengthens the incentive for players to buy these skins. Based on this business model, which rests primarily on in-app purchases during the game, Epic generated an estimated USD 3 billion in profit with Fortnite in 2018 (Rüther 2018). The costs of providing the software and servers are marginal compared to the revenues generated by online players. The risk for the manufacturer of online games therefore lies not in the operation of the digital platform, but in the costs the company has to invest in the development of the game. Once the game is successfully established in the market, the manufacturer incurs almost no further costs. A zero marginal cost business model has emerged.

It can indeed be assumed that a large part of human consumer behavior will continue to relate to physical products in the coming decades. But around these physical processes and products, all stages of the value chain will gradually be digitized – from purchasing to sales and support. An example of this development is the film industry over the past 20 years. Until the mid-1990s, color film (35 mm) was still the standard playback format in the cinema. Digital film effects (digital visual effects) were already known at that time. To insert these effects, the film was digitized and processed on a computer; the digitally altered film was then played out onto a physical 35 mm film reel. The film distributor duplicated the film reels thousands of times and then delivered the copies to the cinemas. For many decades, the distribution of motion pictures was a completely physical process.


With the advent of digital film cameras and cinema projectors, the business model of the film industry – and especially that of film distribution – changed. Today, film files are transferred from production companies to the hard drives of cinema operators via data transfer. Thus, due to digitization, the marginal costs for the distribution of a film drop to almost zero.

The number of employees needed to implement zero marginal cost business models is small. In many cases, the provision of the software already covers the core process of value creation. For example, Google’s search engine algorithm runs without human intervention. After the development of the software, the tasks of the employees are limited to other process stages, such as marketing and customer support. Digitization has thus spawned a new form of success model: small startups that manage to turn over billions of dollars with just a few employees. Instagram appeared in Apple’s app store on October 6, 2010. Just two years later, Instagram was bought by Facebook for USD 1 billion. At that time, Instagram had just 12 employees. Today, the platform generates annual advertising revenue of USD 4.1 billion and has 594 million users worldwide (McCarthy 2017).

The operation of digital business models follows the logic of digitization. Once the algorithm of the software or platform has been developed and programmed, the operator of the digital service bears only those costs that are incurred by providing the digital service on the World Wide Web. To offer information on digital platforms, three process steps are necessary: the processing, the storage and the transmission of the data. The transmission costs for online communication in B2C models are borne by both customers and providers. Customers finance a large part of the ongoing operations of mobile and network operators in Europe with their contracts for broadband connections and their mobile contracts. Providers of digital products and services typically pay for the connection of their server equipment via a so-called point of presence (Isberto 2018). These connection costs can be added to the fixed costs of the company. Beyond that, companies only bear the costs incurred for the provision and operation of the data storage and server facilities. The average costs – i.e., the total costs of operating the server facilities divided by the number of users – and with them the marginal costs approach zero. This gives digital production processes an advantage over industrial production processes: even with low production volumes, the providers of digital products generate profits.
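The convergence of average costs towards zero can be illustrated with a short sketch. The EUR 1 million in annual operating costs is an assumed figure, not one from the text:

```python
def average_cost_per_user(fixed_operating_costs: float, users: int,
                          marginal_cost_per_user: float = 0.0) -> float:
    """Average cost per user: fixed platform costs spread over the
    user base plus a (near-zero) marginal cost per additional user."""
    return fixed_operating_costs / users + marginal_cost_per_user

# Assumed annual server and operating costs of 1 million EUR:
for users in (1_000, 100_000, 10_000_000):
    # cost per user falls from 1000.0 EUR to 0.1 EUR as the base grows
    print(users, average_cost_per_user(1_000_000, users))
```

With marginal costs of (almost) zero, the only lever on unit cost is the size of the user base – which is why digital platforms push so hard for scale.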

3.2.1 Comparison of Zero Marginal Cost Business Models and Classical Business Models

The business models of Google, Facebook and the like differ from the business models of industrial companies such as General Electric or Siemens in their specific cost structure. Industrial companies use a considerable part of their turnover to finance production and to manufacture goods and services. Digital companies incur virtually no costs for production itself. The extent of the differences can be illustrated with a numerical example. For this


purpose, the marginal cost situation of Google is examined. The variable costs of the search engine provider are compared with the revenues it can generate by providing the service. Subsequently, these figures are compared with the variable costs and the sales of VW.

An example: The user of a search engine opens the search engine’s website via an internet browser. At this point, Google has not yet incurred any variable costs. By entering the search query and pressing the “Search” button, the customer triggers an operation that is performed on Google’s servers. Only now are costs incurred for the first time. Current estimates assume that the processing of a single search query costs Google 0.0003 kilowatt hours (Ostendorf 2017). At the electricity price of EUR 0.13 per kilowatt hour stated in relevant comparison portals (as of January 2019), this results in costs of EUR 0.000039 for a single search query. The fact that a company such as Google can count on volume discounts due to its high demand for electricity, or produces its own electricity, is not included in these considerations (Heuzeroth 2011).

In order to relate Google’s marginal costs to its turnover, we need a reliable statement about the revenue that Google can achieve with a search query. Google generates revenue on its search engine portals through the Google AdWords application: an advertising platform for companies that want to place their offers on Google’s search engine result pages (SERP). According to a 2018 study by Sistrix, 1.68% of all search queries are followed by a click on a Google AdWords ad (Beus 2018). At first glance, this does not seem to be enough to operate a successful business model. In the next step, however, the click probability is multiplied by the potential revenue that a click generates for Google. Current market estimates show that the majority of pay-per-click costs range between EUR 0.4 and EUR 2.0.
Using the mean value of EUR 1.2, Google’s average revenue per search query is EUR 0.02016 (i.e. about 2 cents). The comparison between costs and revenue illustrates the great potential of Google’s digital business model: since the variable costs are EUR 0.000039, the average revenue exceeds the costs by a factor of roughly 500.

An industrial company cannot compete with such figures. Since car production is a physical process, the individual components of the vehicle are purchased on the free goods market before they are used in the production process. The fixed costs of car production consist of the wages of the employees as well as the rents and maintenance costs that have to be paid for the provision of the production facilities. Even though this analysis of the marginal costs of car production does not include important factors such as energy consumption per unit produced, it shows the disadvantage of industrial production compared to digital business models. Industry insiders assume that material costs account for about 50% of a car’s sales price (Anker 2013). The detailed composition of the cost structure is usually not comprehensible to outsiders. Even if variations between high-end and mid-range models are taken into account (the high-end versions usually generate the higher margins), an industrial product cannot compete with the profit


margins of a digital product. With material costs amounting to 50% of the sales price, sales only exceed costs by a factor of 2. A detailed look at VW’s financial statements confirms this trend: In 2017, the company’s total sales amounted to EUR 76.73 billion. The cost of materials in the income statement for the same year amounted to EUR 49.8 billion. That is, in the 2017 accounting period, VW spent almost 65% of its turnover on the procurement of the necessary production materials (VW 2018).

In conclusion, the comparison between the old, industrial business models and the new, digital business models is devastating for the representatives of the old economy. While an industrial company has to put a large part of its turnover into financing material consumption, the marginal costs of digital business models tend towards zero. With this advantage, companies with digitally based business models are not only able to develop their own industry, they also possess the technology to attack other industries with their digital platforms (Chap. 2). In 2014, Google bought the company Nest Labs for USD 3.2 billion. Nest Labs specializes in developing digital home appliances, such as thermostats. With its digital platform, which can also be installed on mobile devices, the company gained a significant market share in the heating systems industry – without ever having built a heating system itself. Following this example, digital companies from Silicon Valley or China are disrupting industries around the world, with the result that an increasing number of markets are moving in the direction of zero marginal cost business models. In order to obtain a theoretical approach to the question of when a market or industry disruption exists, a separate model is developed in Sect. 3.2.2. This model differentiates between the participants and the exchange processes that take place between them.
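The Google-versus-VW comparison can be reproduced with the figures cited above (all inputs are the estimates quoted in the text; the factor of “about 500” is the rounded ratio):

```python
# Google: marginal cost vs. expected revenue per search query
KWH_PER_QUERY = 0.0003    # estimated energy use per query (Ostendorf 2017)
PRICE_PER_KWH = 0.13      # EUR per kWh (as of January 2019)
CLICK_RATE = 0.0168       # share of queries followed by an ad click (Beus 2018)
REVENUE_PER_CLICK = 1.2   # EUR, mean of the cited 0.4-2.0 EUR range

cost_per_query = KWH_PER_QUERY * PRICE_PER_KWH       # 0.000039 EUR
revenue_per_query = CLICK_RATE * REVENUE_PER_CLICK   # 0.02016 EUR
print(round(revenue_per_query / cost_per_query))     # 517, i.e. roughly 500

# VW: share of 2017 turnover spent on materials (annual report figures)
print(round(49.8 / 76.73, 2))                        # 0.65, i.e. only factor ~2
```

A revenue-to-marginal-cost ratio of several hundred versus roughly two is the whole argument of this section in two numbers.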

3.2.2 The Model for Analysing Disruptive Market Changes Towards Zero Marginal Cost Business Models

The model for analyzing the disruptive change of markets towards zero marginal cost business models serves to identify those industries that will be affected by digitization-driven market changes in the future. With the model, the entire value creation process of a company – regardless of whether it involves industrial or digital manufacturing processes – is first broken down into its process steps. In a second step, the degrees of digitization of the individual stages are determined in more detail.

The Model
In 1986, Michael E. Porter’s economic model of the value chain changed strategic thinking in management (Porter 1986). His thesis is that companies are only successful if they succeed in identifying the processes that are necessary for value creation and designing them more efficiently than their competitors.


According to his model, a company can outperform any other company if it is either the quality leader or the price leader in the respective industry (or both at the same time). Greater efficiency along the value chain lowers average costs. An important task for business leaders is therefore to focus their attention on the best possible design of value creation processes:

Ultimately, all differences between companies in cost or price derive from the hundreds of activities required to create, produce, sell, and deliver their products or services, such as calling on customers, assembling final products, and training employees. Cost is generated by performing activities, and cost advantage arises from performing particular activities more efficiently than competitors. Similarly, differentiation arises from both the choice of activities and how they are performed. Activities, then, are the basic units of competitive advantage. Overall advantage or disadvantage results from all a company’s activities, not only a few. (Porter 1996, 2002)

To this day, the value chain model has lost none of its universal significance. In order to apply the model to digital value creation processes, it is reduced to its essential process steps: every value creation process (whether physical or digital) of a product or service can be divided into two central activities, manufacturing and distribution. Transferred to the value creation tasks of digitization, the following exchange processes can be identified:

• The manufacturer is responsible for both the creation of the product and its distribution.
• The products are purchased by the customer, who satisfies their specific needs with the purchase.

There are thus three market levels between the producer and the end customer that are eligible for digitization: production, the product and distribution. It does not matter whether production and distribution are carried out by the same market participants or whether several economic participants are involved at one level of exchange. This representation applies irrespective of the type of goods considered and the market actors involved; it is the universal representation of a market transaction (see Fig. 3.3).

The emergence of zero marginal cost business models requires that the digital value chain is maintained across all three levels of exchange. If one of the exchange levels remains physical, marginal costs rise sharply at that level. The following relationship applies to all industries: only when all value creation steps can be digitized will zero marginal cost business models emerge in the respective industries. This statement also illustrates the limits of digitization: some products and processes can only be digitized to a limited extent or not at all. Food or textiles will never become digital products. In addition, many manufacturing and distribution processes are in transitional


Fig. 3.3  Basic elements of the value chain (Manufacturer → Production → Product or service → Sales → Customer)

stages on the way from physical to digital provision. This means that while some of the manufacturing or distribution processes can be digitized, intermediate stages of production or distribution remain physical. For example, with a 3D printing process, the template for a physical product can be created digitally, but the actual printing process remains physical. In such cases, the exchange levels are classified as “partially digitized”.

Example Banking Industry
The banking industry is a good example to illustrate the impact of digitization on the value chain. As early as 1990, the then board member of Deutsche Bank, Ulrich Cartellieri, predicted massive cuts in the banks’ business model. “The banks are the steel industry of the 1990s” was his credo, which was massively criticized by many bank executives at the time (Meifert 2016). However, the beginning of digitization and automation in the banking industry can be dated much earlier. In 1968, Kreissparkasse Tübingen installed the first ATM in Germany on the outside walls of its own headquarters. This service was initially only available to selected customers and only with the use of special punch cards. It was not until the introduction of the EC card, equipped with a magnetic strip and PIN encryption, that the use of ATMs increased rapidly from the mid-1980s onwards. The number of ATMs in Germany reached its zenith in 2015, when 61,100 ATMs were installed. Since then, the number has been slowly declining – among other things, because bank customers increasingly pay cashless (Pöltzl 2018).

The foundation stone for cashless payment transactions was laid by the first digital online transfer procedures, which could be carried out via BTX from bank customers’ television sets. BTX is the abbreviation for Bildschirmtext (screen text), an interactive online communication system for television sets that was introduced nationwide in Germany in 1983. On November 12, 1980, Deutsche Bundespost had launched a field trial in which, among others, the mail-order companies Otto and Quelle, as well as 2000 households from North Rhine-Westphalia, participated. The BTX network was not switched off in Germany until 2007 (Schmidt 2013).

But it was only with the widespread use of the Internet from the mid-1990s that online banking began its success story. In 1995, Security First Network Bank in the USA was the first purely virtual bank to receive a banking license. The company deliberately refrained from operating its own branch network in order to save on the costs of branch premises


and employees (Jaskulla 2015). Four years later, Netbank became the first German bank to operate exclusively online. In addition to account management, online banks expanded their range of digital services to include, for example, online share management, share trading (brokerage) and online lending. In 2001, eight major banks in the USA offered digital money transfers via web browser. This made a trip to the bank branch superfluous, as customers could make their transfers from their home computers. At that time, the system was actively used by 19 million households; ten years later, 54 million households in the USA used online banking. In the meantime, the use of digital banking services has become standard: about half of the German population regularly used online banking in 2018, according to the Association of German Banks (Bundesverband deutscher Banken 2018). 2018 was also the year in which, for the first time, the sum of purchases made with EC cards exceeded the sum of purchases paid for in cash in Germany (Panagiotis and Wegmer 2019).

In terms of the banking value chain, all three stages have long since been digitized: production (banking services), product (money) and distribution (online platforms) can be handled without a physical intermediate stage. This has enormous consequences for the banks’ business model: estimates suggest that online banks employ only a third of the staff of a branch bank for the same turnover. At the same time, the costs for rent as well as operating and office equipment decrease. The improved cost situation can be passed on to customers in the form of lower bank fees and better interest rates. It is therefore not surprising that the German Volks- und Raiffeisenbanken, for example, are planning to close around 2000 branches in Germany by 2021 (Meifert 2016). This trend will be further reinforced in the coming years by the increasing automation of IT services and transactions. This means that – with a time lag – the forecast of the former German banker Cartellieri will come true in the second decade of the twenty-first century.
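The three-level digitization check applied to banking above can be sketched as a small model. The level names and the “partially digitized” state follow the text; the data structure and class names are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Digitization(Enum):
    PHYSICAL = 0
    PARTIAL = 1
    FULL = 2

@dataclass
class ValueChain:
    production: Digitization
    product: Digitization
    sales: Digitization

    def zero_marginal_cost_potential(self) -> bool:
        """Zero marginal cost business models require all three
        exchange levels to be fully digitized."""
        return all(level is Digitization.FULL
                   for level in (self.production, self.product, self.sales))

# Banking (per the example): all three levels digitized
banking = ValueChain(Digitization.FULL, Digitization.FULL, Digitization.FULL)
# 3D printing: digital template, but a physical product and printing process
printing_3d = ValueChain(Digitization.PARTIAL, Digitization.PHYSICAL,
                         Digitization.FULL)
print(banking.zero_marginal_cost_potential())      # True
print(printing_3d.zero_marginal_cost_potential())  # False
```

The single remaining physical level in the 3D-printing chain is exactly the point where marginal costs stay high.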

3.3 When Is It Worthwhile to Start Using Digital Technologies?

Many companies struggle to answer the question of when it is worth integrating new digital technologies into their own company. The choice is difficult because the number of possible digital fields of activity has grown greatly in recent years. The following list provides an overview of current application scenarios:

• Internet of Things
• Machine learning
• Deep learning
• Big Data
• Conversational bots
• Mobile communication


• Cloud computing
• Blockchain
• Virtual reality/Augmented reality

The model for analyzing disruptive market changes towards zero marginal cost business models can answer the question. First, it is the company’s task to understand its own current business model. Only when a company knows its own business model can it assess when the use of digital technologies is worthwhile. If the value creation processes of the core business can be further digitized, new cost structures will arise within the company. These cost advantages can improve the cost-income ratio and, if necessary, be passed on to customers. Digitization measures are therefore successful if the marginal costs of manufacturing or sales can be moved in the direction of a zero marginal cost business model.

One example: Uber is a mobile platform that brokers transportation services between drivers and customers. Uber’s platform reaches hundreds of thousands of drivers and millions of customers via mobile communication channels – at almost zero cost. Uber is thus a prime example of how a digital technology (in this case mobile communication) can be profitably integrated into the value chain. This practical example demonstrates that companies that want to bring their digital services to customers via app or web browser should start thinking about the digitization technology “mobile communication”. This is because mobile communication reduces the marginal costs of addressing potential customers via mobile devices to almost zero. This can be interesting for an intermediary platform for goods or services (e.g. eBay), but also for search engines (e.g. Google), e-commerce companies (e.g. Zalando) or content providers (e.g. Sky).

Netflix provides another example of a successful digital transformation towards a zero marginal cost business model. Until 2006, the company founded by Reed Hastings and Marc Randolph in 1997 was limited to shipping DVDs.
The business model was therefore rooted in the old economy: it consisted of providing and managing the logistics for shipping more than a billion DVDs. In 2006, Reed Hastings realized that the future of private film consumption was not tied to the distribution of physical data carriers. Streaming technology brought audiovisual content and films into viewers’ living rooms via the Internet (IPTV). In the early days, low bandwidths hindered a faster spread of digital streaming offerings. But with the expansion of broadband access in Western industrialized countries from around 2005 onwards, the situation changed. Today, a large part of Netflix’s value chain is digitized. The distribution of products in particular runs via the “Netflix.com” platform, which is currently used by 135 million people (as of 2019).

When Is It Worthwhile for Your Company to Integrate Digital Technologies?
The following guide is designed to help you evaluate potential investments in digitization technologies. The analysis is divided into five steps:


1. Understanding your current business model 2. Status quo of digitization within the industry 3. Application scenarios for digitization technologies along the value chain in your own core business 4. Review of own cost structure: Development towards a zero marginal cost business model possible? 5. Impact assessment 1. Understanding your current business model For a better understanding of one’s current business model, the model of the magic triangle developed by Oliver Gassmann is suitable (Gassmann et al. 2013, see Fig. 3.4). • Who is the target group? For a streaming service like Spotify, for example, there are two different target groups: The listeners who buy a music subscription and the advertising industry. • What is the product/service and how does it differ from competing offerings? Spotify offers a platform for music lovers that allows users to conveniently access a digital music library through streaming. Spotify differentiates itself from other companies through the convenience of use, the customizability of the offering, and the size of the music library. • How does profit arise? Profit remains after costs are deducted from revenue. Spotify incurs ongoing costs by operating the digital platform, marketing expenses, and purchasing music licenses. The company generates revenue through subscription payments from customers and advertising money from companies that place commercials between songs. • Which processes are necessary to manufacture and distribute the product? For this purpose, the company’s value chain is analysed (see Sect. 3.2.2). Fig. 3.4  Business model analysis according to Gassmann

(The figure shows the four questions of the magic triangle: Who? – target customer; What? – value proposition; How? – value chain; Value? – yield mechanics.)


3  The Road to a Zero Marginal Cost Economy

2. Status quo of digitization within the industry
Now compare your company with other companies in the industry. Examine the status of digitization at your competitors along the value chain:

• How far has the digitization of the market stages progressed in your industry?
• Can the production process be digitized? If so, with the help of which digitization technologies?
• How far has the digitization of manufacturing progressed: completely, partially, or not yet at all?
• Can the product be digitized? If so, with the help of which digitization technologies?
• How far has the digitization of the product progressed: completely, partially, or not yet at all?
• Can sales be digitized? If so, with the help of which digitization technologies?
• How far has the digitization of sales progressed: completely, partially, or not yet at all?

3. Application scenarios for digitization technologies along the value chain in your own core business
Based on the value chain, application scenarios for integrating digital technologies into the processes of creation and distribution are reviewed in the third step. The following list of digitization technologies serves as a checklist:

• Internet of Things
• Machine learning
• Deep learning
• Conversational bots
• Big Data
• Mobile communication
• Cloud computing
• Blockchain
• Virtual reality/augmented reality

For a media planning agency, for example, it makes sense to use AI algorithms to identify and evaluate potential advertising placements (see Sect. 3.5). By using AI algorithms, a process that previously tied up human labor can be automated, thus reducing the cost of producing the service. For a games software company, on the other hand, it may make sense not only to offer an existing online game on traditional platforms such as PCs or game consoles, but also to expand the offering to mobile apps (Fig. 3.5).

Fig. 3.5  Launch of zero marginal cost business models. (Timeline, c. 1980–2025, of digital production and digital distribution by industry: banks – first B2B transaction of a bank, founding of the first online bank; advertising industry – online advertising banners, ad servers; newspapers – PDF from Adobe, online news, e-paper editions; music industry – CD era, MP3 invented, Napster, iTunes downloads, Spotify streaming; book industry – eBooks, Amazon Kindle; TV industry – increasing Internet use, ZDF media library, rising bandwidth, streaming, Netflix changing its distribution model; individual transport (car) – Uber, Lyft, MyTaxi, autonomous driving around 2025)


Which of the technologies presented supports the business model of the company you are investigating? Let other industries and their approaches to digitization inspire your considerations.

4. Review of your own cost structure: Is development towards a zero marginal cost business model possible?
In the next step, check how digitization changes the cost structure within your company. Investments in digitization only make sense if the cost structure moves towards a zero marginal cost structure. For example, digital financial advisors are changing the way banks sell financial products: instead of a face-to-face meeting in the branch, customers use a digital financial advisor to learn online about possible investment strategies and to select the right product for them. The sale of financial products is thus developing in the direction of a “zero marginal cost business model”. Once the digital financial advisor has been developed and programmed, the bank’s cost situation changes significantly: instead of remunerating the advisory service through fixed salaries and commissions for bank employees, the digital financial advisor incurs only server operating costs. The banks can pass the higher profit margin on to the buyers of the financial products.

5. Impact assessment
Investing in digitization projects is not always a good decision. After all, the ongoing cost reduction through digitization is offset by development and implementation costs. Therefore, the following questions should be answered at the end of the analysis:

• How much can this investment save in the future?
• What does the investment cost today?

For this purpose, carry out an internal financing calculation. Only if the potential future savings, discounted to the present, exceed the investment is it worth considering implementing digital technologies.
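The discounting logic of step 5 can be sketched in a few lines (a minimal net present value calculation; the investment figure, savings and discount rate are invented purely for illustration):

```python
def npv_of_digitization(investment: float, annual_savings: list[float],
                        discount_rate: float) -> float:
    """Net present value: future savings discounted to today,
    minus the up-front investment."""
    discounted = sum(
        saving / (1 + discount_rate) ** year
        for year, saving in enumerate(annual_savings, start=1)
    )
    return discounted - investment

# Hypothetical example: 500,000 EUR invested today, 150,000 EUR
# saved per year for five years, discounted at 8%.
npv = npv_of_digitization(500_000, [150_000] * 5, 0.08)
print(round(npv, 2))  # positive NPV: worth considering
```

Only if the result is positive do the discounted savings exceed the investment, which is exactly the decision rule stated above.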

3.4 Big Stays Beautiful

Will digitization lead to ever-smaller companies? The arguments of this chapter point in this direction: thanks to the scalability of zero marginal cost business models, large revenues can in future be achieved by small teams. In such a scenario, formerly oligopolistic or monopolistic industries can develop into polypolistic markets. A market with a few large players and their respective value chains would become a vibrant industry full of innovative startups and individual companies. In


Fig. 3.6  Digital markets – market shares and number of employees. (Scatter plot of hypothetical companies A–W by relative market share – measured against the largest competitor, on a scale from 0 to 2 – and number of employees: many small players, none of them dominant)

such a scenario, the time of large corporations is over due to the change in marginal costs towards zero marginal cost business models. Such a scenario is depicted graphically in Fig. 3.6. However, the development of recent years points in a different direction, and this is especially true for the digital industry. The companies responsible for implementing the digital value chain are becoming larger and larger. This is because markets in which zero marginal cost business models dominate tend to form monopolies. The reasons for these concentration tendencies are manifold and have already been presented in the second chapter: the exploitation of digital network effects, data leveraging and the winner-takes-all problem favour large companies (Chap. 2). In addition to these effects, economies of scale reduce marginal costs and thus improve the cost structures of large companies; this relationship was described in Sect. 3.1.1. Furthermore, economies of scope occur in digital markets when resources are shared between different business units. In the digital economy, such effects arise, for example, from the provision of large computing capacities that are accessed by different business units; the joint use of server facilities in companies such as Google, Amazon or Microsoft is an example of the exploitation of economies of scope. As a result, digital markets tend toward a monopolistic structure: a single company can meet demand across an entire market. The high scalability of zero marginal cost business models makes this possible. For example, Google has dominated almost the entire Internet search business for many years. Alphabet, Google’s parent company, held a market share of 85.78% in desktop search engine queries and 98.3% in mobile search queries in Germany in 2018 (Seo-Summary 2018). A second search engine is not necessary to serve the demand in the market.


If an up-and-coming young company does manage to stand up to the industry giants and capture relevant market share in a digital market, it is often quickly bought up. The messaging service WhatsApp is a good example: in 2014, Facebook bought the company, founded by the two former Yahoo employees Jan Koum and Brian Acton, for 19 billion USD (Graupner 2019). The emerging messaging service for smartphones thus became part of the media giant Facebook’s product portfolio, and Facebook was able to transfer all the advantages of its own zero marginal cost business model to WhatsApp. WhatsApp continued to grow while benefiting from the computing power that Facebook maintains in its global server facilities. In addition, WhatsApp contributes to the growth of the parent company’s shared data pool (economies of scale and scope). Thanks to its great scalability, WhatsApp has risen to become the undisputed market leader in Germany: 63% of German online users use WhatsApp regularly (Brandt 2016). As more and more people used the product, its use became attractive to further users (network effect). Today, it is difficult for new messaging providers such as Threema or Telegram to capture relevant market shares (winner-takes-all problem). What was special about the WhatsApp deal was the way Facebook handled the acquisition: instead of merely taking over the technology for its own product, Facebook Messenger, and discontinuing WhatsApp itself, both the product and the team retained their independence. To this day, WhatsApp operates as an autonomous business unit within the Facebook group. The course of the takeover illustrates a connection that is relevant for the entire digital industry: on the one hand, the large companies learn that small units work more efficiently and transfer this idea to their own corporate management (see Chap. 7).
On the other hand, Silicon Valley’s digital media groups benefit from the principle of monopolistic power in media markets. They have the data and the financial strength to buy up new companies early with lucrative offers. At the same time, they have understood that zero marginal cost business models are implemented more efficiently in small autonomous units than in rigid corporate hierarchies. This creates companies that are optimally adapted to the conditions of digitization: they combine the advantages of the two worlds of startups and corporations. The digital media groups have the market shares to exploit economies of scale, economies of scope and joint data processing for themselves, while their organizational subunits function more like a network than a classic, hierarchically oriented corporation. In this way, they take advantage of the agility and innovative power of small units. Figure 3.7 illustrates this relationship graphically. If this development continues unchecked in the coming years, the large digital groups will keep growing, which would be associated with concentration tendencies in individual submarkets. Only time will tell to what extent the competition authorities will deal with this development and what regulatory consequences will result.


Fig. 3.7  Actual distribution of market shares in digital markets. (Scatter plot: Facebook, together with its units Instagram, WhatsApp and Oculus, combines a dominant relative market share – measured against the largest competitor – with a large number of employees, while smaller competitors cluster near the origin)

This chapter concludes with a practical example of the development of zero marginal cost business models. Using the publishing industry as an example, it becomes clear how it is possible to successfully drive digitization while taking into account one’s own value chain.

3.5 Artificial Intelligence for Editing

One business model that has not yet been fully digitized – and which is therefore only on the threshold of zero marginal cost business models – is the book publishing business. This context will be examined below using the three-stage model for analyzing disruptive market changes towards zero marginal cost business models. At the intermediate level (the product), the product “book” is already completely digitized today; the first e-book with an associated e-book reader was published by Sony as early as 1990. With the emergence of large e-commerce platforms such as Amazon, the distribution stage has also been digitized: although books are still delivered physically, the entire ordering process from receipt of order to payment and settlement is digital. Only the first stage, the production process of books, can still be only partially digitized. The book production process can be broken down into its individual components along the value chain: an idea becomes a finished manuscript. The editor checks and corrects the manuscript and decides when the text is released and transferred to the final artwork (typesetting). The work can then be printed and delivered to the retail trade. The value chain of the book trade is shown in Fig. 3.8.


Fig. 3.8  Book value chain: generate idea → write text → edit and typeset book → print book → distribute book

Content creation – the process from the initial idea to the finished manuscript – will remain the domain of human authors for the foreseeable future. Highly specialized algorithms – so-called robot journalists – are capable of creating short texts based on new data, for example for weather forecasts, stock market reports and sports reporting. For the technical implementation, digital database systems are connected with text generation software. The first algorithmically generated text about an earthquake in California was published online in 2014, and the Associated Press news agency now publishes around 4400 automatically generated financial reports per quarter (Technikjournal 2018). However, these technologies quickly reach their limits when it comes to producing new and creative texts. The phenomena of the world to be described are too complex for an algorithm to write about them independently. In short, writing complex and creative texts requires a level of understanding of the outside world that today’s AI systems cannot provide (see the background information on artificial intelligence below). The editing of books is a second sub-sector of the “book” value creation process that has so far remained almost untouched by digitization. The effort required to edit a book is enormous: from reading to revising manuscripts, it is all manual work that has so far been considered irreplaceable by machines. If an algorithm were able to replace the work of an editor, it would have a massive impact on publishers’ cost structures. The marginal cost of editing would drop towards zero, and the publishing industry would have taken a big step towards zero marginal cost business models. The German company Arvato Systems set out in 2017 to tackle this problem. As part of a hackathon in which 20 developers participated over three weekends, software was developed that can serve in the short term as a digital assistant to human editors.
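The coupling of database systems with text generation software described above can be illustrated with a deliberately naive sketch (a fixed sentence template filled from structured data; the company name and figures are invented, and real robot journalism systems are considerably more sophisticated):

```python
def financial_report(company: str, quarter: str,
                     revenue_m: float, change_pct: float) -> str:
    """Generate a one-sentence report from structured input data,
    choosing the wording based on the numbers."""
    direction = "rose" if change_pct >= 0 else "fell"
    return (f"{company} reported revenue of {revenue_m:.1f} million EUR "
            f"in {quarter}; sales {direction} by {abs(change_pct):.1f}% "
            f"year over year.")

print(financial_report("Example AG", "Q3 2018", 412.5, -3.2))
```

Each additional data record yields another report at essentially zero marginal cost, which is what makes the technique attractive for standardized text types such as financial or weather reports.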
In the long term, comparable systems could further digitize the proofreading process and replace human editors step by step. The question the developers addressed during development was: How can artificial intelligence (AI) imitate the work of an editor? To answer it, it was first necessary to clarify which work steps an editor performs, i.e. which processing steps are relevant from the submission of a manuscript to its final version.

Artificial Intelligence
The term “artificial intelligence” describes a variety of independent digital programs (so-called narrow AIs) that are developed to solve specific problems. Narrow AIs play chess, recognize traffic signs or communicate with us. So far, AI applications are isolated


solutions; they do not (yet) reflect general human competencies and abilities. A system that has been trained for one task can only be used for that task (Iskender 2018). Most of these systems are based on the technical foundation of so-called artificial neural networks (ANNs). With their functionality, artificial neural networks imitate a fundamental ability of the brain: conditioning. This term was coined by the biologist Ivan Pavlov. In 1905, he used paired stimuli in an experiment with dogs by presenting the animals with a ringtone at the same time as food, and observed the effect on the dogs’ salivation. After several repetitions, salivation started even if only the ringtone was heard but no food was seen. ANNs take advantage of a similar effect: after successful trials, they strengthen the responsible connections between the digital neurons; after unsuccessful trials, the responsible connections are weakened (see Fig. 3.9). During training, ANNs receive input and learn over time to produce an appropriate output. The input is often a digital signal; the output is the answer to a predefined question. ANNs deliver probabilities as output. An example: the input is a photo. The question to the ANN is: Is there a face in the input photo? Output: Yes (with 85% probability). How well an artificial neural network solves its task depends on the quality of the training. The more data is fed into a neural network, the more reliably the system can solve the task set for it. The degree of complexity of the tasks that an ANN can handle has increased significantly in recent years. Presumably, however, much development effort will still be necessary to teach ANNs complex human activities and interactions.
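The strengthening and weakening of connections described above can be made concrete with a minimal example: a single artificial neuron trained by gradient descent on an invented toy task (learning the logical AND of two inputs). Real ANNs stack many such neurons in layers, but the learning principle is the same:

```python
import math

def sigmoid(x: float) -> float:
    """Squash a signal into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: the output should be 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # untrained connection weights
lr = 0.5                       # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        error = target - out
        # Connections that contributed to a correct answer are
        # strengthened, those behind a wrong answer are weakened:
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

# The trained neuron outputs a probability, as described in the text:
print(round(sigmoid(w1 + w2 + bias), 2))  # input (1, 1): close to 1
```

After training, the neuron assigns a high probability to the input (1, 1) and low probabilities to the other three inputs; feeding in more training examples sharpens these probabilities, mirroring the point above that training quality determines how well an ANN solves its task.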

Fig. 3.9  Artificial neural networks. (Schematic: each pixel of an image is fed into the network as input; neurons test features such as lines, circles, triangles, fur, eyes, noses and ears; the network answers the question “Is that a cat?” with yes/no, and during training the connection weights are adjusted depending on whether the prediction was correct)


What Does the Work Process of an Editor Look Like?
The first input to the editor is the manuscript at hand. This is examined for its suitability for the publishing portfolio. In a second step, the quality of the manuscript is checked (text quality, errors, language style, readability, narrative thread). In the third step, the editor prepares a profitability analysis, comparing the quality of the book with the requirements of the potential target audience. Each of these three steps can end with the manuscript being rejected. To mimic what an editor does, the software must learn to access and match the information and data available to an editor. First of all, this is the manuscript itself, which must be available in a digital version that can be analyzed and edited. Furthermore, an editor draws on several additional data sources: data from the publisher’s accounting, controlling and merchandise management departments. How well did other books by the author sell? What comparable books have been produced in previous years? How many editions did these books achieve? Press reports, reviews and market research results also contribute to the editor’s analysis (Stegen 2017). If editing software is to imitate the work of an editor, such an AI therefore needs access to digital data sources in addition to the actual manuscript. The problem is that some of this data is stored on different systems. The solution is unified data structures that can be read in and processed by an AI editing system. In order to imitate the editing process, a potential editing software must be trained. Arvato Systems therefore first fed in large amounts of data – numerous books and metadata – and compared the results with the real situation. Through this comparison, the editing AI was trained to solve the given tasks better and better.
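One of the simpler sub-tasks of such a system – scoring the emotional tone of a manuscript segment by segment, from which a tension curve can be drawn – can be sketched as follows. This is a naive lexicon-based scorer; the word lists and the mini “chapters” are invented for illustration and bear no relation to Arvato Systems’ actual implementation:

```python
POSITIVE = {"love", "joy", "hope", "triumph", "reunion"}
NEGATIVE = {"death", "fear", "betrayal", "storm", "loss"}

def tension_curve(chapters: list[str]) -> list[float]:
    """Score each chapter by the share of negative minus positive
    words; higher values suggest a more tense passage."""
    curve = []
    for text in chapters:
        words = text.lower().split()
        neg = sum(w in NEGATIVE for w in words)
        pos = sum(w in POSITIVE for w in words)
        curve.append((neg - pos) / max(len(words), 1))
    return curve

chapters = [
    "hope and joy at the reunion",
    "a storm brings fear and loss",
    "love and triumph in the end",
]
print(tension_curve(chapters))  # tension peaks in the middle chapter
```

Plotting the returned values over the chapter index yields a simple arc of tension; production systems replace the word lists with trained sentiment models.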
Using internal computational processes, the AI was enabled to repeatedly predict probabilities of occurrence and success on the basis of the inputs fed into it. The editing AI “Manuscript Inside” developed by Arvato Systems has various functionalities that support editors in their daily work (see Fig. 3.10). The software independently recognizes actors and locations and can graphically display the relationships between these entities. In addition, the system is able to display the suspense curve of the manuscript in graphical form through sentiment analysis. As output, the system generates a percentage value for complexity and predicts the probability of success of the manuscript. This Arvato Systems project is an intermediate stage on the way to the complete digitization of the work of publishing editors. The system’s training cannot yet replace the experience of a human editor, but it can support the editor’s work. The interaction between humans and algorithms creates so-called centaur teams. Centaur teams became known to the public through a now-legendary chess duel: in 1996, IBM’s computer system Deep Blue won a tournament game against the reigning world champion Garry Kasparov for the first time. Today, digital chess algorithms are so advanced that no human player can compete with them. After these first successes of the chess programs, centaur teams were formed, i.e. mixed teams of human chess players and computer programs. These centaur teams were able to defeat the purely algorithmically based programs for quite


Fig. 3.10  Demo showcase on proofreading AI. (Source: Arvato Systems S4M GmbH)

some time. In specially scheduled tournaments, the centaur teams regularly prevailed. At the end of 2017, this development also came to an end with the presentation of DeepMind’s chess algorithm AlphaZero. AlphaZero was no longer programmed on the basis of human tournament experience: the software taught itself the game within four hours through self-play reinforcement learning. These new algorithms are so powerful that a human player is no longer of any help to the system (Parsch 2018). In terms of the development of zero marginal cost business models, the collaboration between humans and machines provides valuable clues: since real work processes are much more complicated than chess games, algorithms will not displace humans in the foreseeable future. At the same time, many stages of the value chain are being digitized step by step. In the collaboration between humans and machines, the tasks that human employees have to perform are changing: many simple process steps are being outsourced to machines, leaving employees more time to focus on the more complex tasks in the process that require cognitive performance. This already reduces the marginal costs of production and distribution in the medium term. The application of AI in the work process – as illustrated by the example of Arvato Systems’ editing software – is one way in which an industry can move towards zero marginal cost business models. Chap. 4 shows how the use of cloud technology will help implement zero marginal cost business models in modern enterprises.

References

Albrecht, Robert (2018): Fortnite hat jetzt 125 Millionen Spieler, erschienen in: mein-mmo.de, https://mein-mmo.de/fortnite-125-millionen-spieler/, abgerufen im Januar 2019.
Anker, Stefan (2013): Das Geheimnis der Gewinnspanne beim Autokauf, erschienen in: welt.de, https://www.welt.de/motor/article116695390/Das-Geheimnis-der-Gewinnspanne-beim-Autokauf.html, abgerufen im Januar 2019.


Appleyard, Dennis, Alfred J. Field Jr. und Steven L. Cobb (2006): International Economics, McGraw-Hill, Boston.
Baum, Heinz-Georg, Adolf Coenenberg und Thomas Günther (2013): Strategisches Controlling, Schäffer-Poeschel, Stuttgart.
Beus, Johannes (2018): Nur 6,8 % aller Google-Klicks gehen auf AdWords-Anzeigen, erschienen in: sistrix.de, https://www.sistrix.de/news/nur-6-prozent-aller-google-klicks-gehen-auf-adwords-anzeigen/, abgerufen im Januar 2019.
Brandt, Mathias (2016): Diese Messenger nutzen die Deutschen regelmäßig, Infografik, erschienen in: statista.de, https://de.statista.com/infografik/6344/diese-messenger-nutzen-die-deutschen/, abgerufen im Januar 2019.
Bundesverband deutscher Banken (2018): Online Banking in Deutschland, erschienen in: bankenverband.de, https://bankenverband.de/media/files/2018_06_19_Charts_OLB-final.pdf, abgerufen im Juni 2019.
Clement, Reiner und Dirk Schreiber (2016): Internet-Ökonomie – Grundlagen und Fallbeispiele der vernetzten Wirtschaft, Springer Gabler, Berlin.
Dreiskämper, Thomas (2018): Grundfragen der Medienbetriebslehre, De Gruyter Oldenbourg, Berlin.
Gassmann, Oliver, Karolin Frankenberger und Michaela Csik (2013): Geschäftsmodelle entwickeln: 55 innovative Konzepte mit dem St. Galler Business Model Navigator, Carl Hanser Verlag, München.
Graupner, Hardy (2019): Vor fünf Jahren: Facebook kauft WhatsApp, erschienen in: dw.com, https://www.dw.com/de/vor-fünf-jahren-facebook-kauft-whatsapp/a-47569526.
Henderson, Bruce (1968, 1984): Die Erfahrungskurve in der Unternehmensstrategie, 2. überarbeitete Auflage, Campus, Frankfurt.
Heuzeroth, Thomas (2011): Wie Google gegen seinen Stromhunger ankämpft, erschienen in: welt.de, https://www.welt.de/wirtschaft/article13437833/Wie-Google-gegen-seinen-Stromhunger-ankaempft.html, abgerufen im Juni 2019.
Hinne, Carsten (2007): Mergers & Acquisitions Management – Bedeutung und Erfolgsbeitrag unternehmensinterner M&A Dienstleister, Gabler, Wiesbaden.
Imaa (2018): Number & Value of M&A worldwide, https://imaa-institute.org/mergers-and-acquisitions-statistics/, abgerufen im Januar 2019.
Isberto, Michael (2018): What is a Point of Presence, erschienen in: colocationamerica.com, https://www.colocationamerica.com/blog/point-of-presence, abgerufen im Juni 2019.
Iskender, Dirik (2018): Deep Learning einfach gemacht, erschienen in: medium.com, https://medium.com/@iskender/whitepaper-the-simple-guide-to-deeplearning-8229f87dbe66, abgerufen im Januar 2019.
Jaskulla, Ekkehard M. (2015): Direct Banking im Cyberspace, erschienen in: Zeitschrift für Bankrecht und Bankwirtschaft, Band 8, Heft 3, Seiten 214–224.
McCarthy, John (2017): Instagram Ad Revenue to Double, erschienen in: thedrum.com, https://www.thedrum.com/news/2017/12/17/instagram-ad-revenue-double-1087bn-2019-says-emarketer, abgerufen im Januar 2019.
Meifert, Matthias (2016): Wie sich die Banken noch retten können, erschienen in: manager-magazin.de, https://www.manager-magazin.de/unternehmen/banken/wie-sich-die-banken-noch-retten-koennen-a-1106596.html, abgerufen im Juni 2019.
Ostendorf, Sebastian (2017): Unersättlicher Hunger nach Strom: Warum der Datenverkehr im Internet so viel Energie verschlingt, erschienen in: stern.de, https://www.stern.de/digital/online/google–wieviel-energie-verschlingt-eine-suchanfrage-8397770.html, abgerufen im Januar 2019.
Panagiotis, Fotiadis und Michael Wegmer (2019): Deutsche zahlen erstmals mehr mit Karte, erschienen in: swrfernsehen.de, https://www.swrfernsehen.de/marktcheck/aktuell/Bargeld-vs,deutsche-zahlten-2018-erstmals-oefter-mit-karte-106.html, abgerufen im Juni 2019.


Parsch, Stefan (2018): AlphaZero spielt Schach, Go und Shogi – und schlägt alle seine Vorgänger, erschienen in: welt.de, https://www.welt.de/wissenschaft/article185109198/Vergesst-AlphaGo-der-neue-Held-heisst-AlphaZero.html, abgerufen im Januar 2019.
Pöltzl, Norbert (2018): Geld aus der Wand holen – wie alles begann, erschienen in: spiegel.de, https://www.spiegel.de/einestages/1968-in-tuebingen-deutschlands-erster-geldautomat-a-1208937.html, abgerufen im Juni 2019.
Porter, Michael Eugene (1986): Wettbewerbsvorteile (Competitive Advantage). Spitzenleistungen erreichen und behaupten, Campus, Frankfurt.
Porter, Michael Eugene (1996, 2002): What is Strategy?, erschienen in: Mazzucato (2002), S. 10–32, Sage, London.
Rüther, Robin (2018): Reich durch Fortnite, erschienen in: gamestar.de, https://www.gamestar.de/artikel/reich-durch-fortnite-epic-games-erwirtschaftet-2018-drei-milliarden-us-dollar-gewinn,3338773.html, abgerufen im Januar 2019.
Schmidt, Volker (2013): Die ersten Online-Gehversuche der Deutschen, erschienen in: zeit.de, https://www.zeit.de/digital/internet/2013-09/30-jahre-btx, abgerufen im Juni 2019.
Seo-Summary (2018): Entwicklung der Marktanteile der beliebtesten Suchmaschinen in Deutschland, erschienen in: seo-summary.de, https://seo-summary.de/suchmaschinen/, abgerufen im Januar 2019.
Stegen, Klaus-Peter (2017): Was verstehen wir unter KI?, erschienen in: boersenblatt.net, https://www.boersenblatt.net/artikel-kuenstliche_intelligenz_____der_versuch_einer_aktuellen_einsortierung_der_moeglichkeiten_fuer_verlage_und_buchhandel.1363770.html, abgerufen im Januar 2019.
Technikjournal (2018): Schreib-Maschinen: Voll im Trend, erschienen in: technikjournal.de, https://technikjournal.de/2018/08/14/schreib-maschinen-voll-im-trend/, abgerufen im Juni 2019.
VW (2018): Abschluss Volkswagen – Bilanz der Volkswagen AG zum 31. Dezember 2017, erschienen in: volkswagenag.com, https://www.volkswagenag.com/presence/investorrelation/publications/annual-media-conference/2018/jahresabschluss/Abschlu%C3%9F_VWAG_2017_de.pdf, abgerufen im Januar 2019.
Wirtz, Bernd W. (2012): Mergers & Acquisitions Management, Wiesbaden.

4  Cloud – The Automated IT Value Chain

Abstract

Globally scaling zero marginal cost business models lead companies to success in the age of digitalization. To build such business models for a company, it is important to understand the transformation of traditional information technology (IT) to cloud-based IT. In the three core processes of IT value creation – creating, operating and scaling software – new technologies result in massive savings and optimization potential. While in traditional IT the path to ready-to-use and scalable software is coordination-intensive, lengthy and associated with high investments, cloud-based digital business models can be developed much faster and with less risk. Operating and scaling IT become less expensive, and costs depend more on the actual use of an almost infinitely scalable infrastructure.

4.1 It’s the Software

Bill Clinton won the 1992 presidential campaign with the slogan “It’s the economy, stupid” (Kelly 1992). By this he meant that although George H. W. Bush still enjoyed enormous approval among the population in 1991 after the Gulf War, in the end the economic situation would decide the voters’ favour. And so it came to pass: just 12 months later, the popularity of the incumbent US president had fallen so far that Bill Clinton won the election (Hart 2017). A similar message can be applied to digital business models: a good idea, a successful advertising campaign or an application that is attractive at the moment is not enough to be successful in the digital age in the long term. Only those companies survive in the digital world that create software quickly, reliably and in a user-friendly manner, develop it further in an agile and customer-oriented manner, operate it cheaply and reliably, and scale it quickly worldwide if required. It’s the software, stupid!

Software – sometimes called an application, app, or program – is at the heart of every digital business model. Software takes the data from weather stations, calculates weather models, and uses them to create forecasts for each city. It is also software that combines these forecasts with personalized advertising based on individual smartphone app user data. Digital assistants like Siri and Alexa are software; every device on the Internet of Things communicates with software. Enterprise applications like SAP and Salesforce are software, mobile apps are software, internet sites are software, and artificial intelligence is nothing more than software fed with lots of data. If software is at the core of all digital business models, the competitive advantages for a company in the context of digitalization also lie in the way software is created and used.

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023. R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_4

4.2 The Classic IT Process

From a business perspective, IT value creation within the development and application of software is determined by three core processes:

1. Creating software: Based on an idea, an application is developed that fulfills the intended business purpose.
2. Operating software: This application is installed on hardware infrastructure and maintained in order to remain functional and free of failures.
3. Scaling software: Resources are added or removed depending on the usage of the application.

These core processes are fundamentally changing as a result of cloud technology, thereby creating the prerequisite for more successful digitization of business processes and models. In order to highlight the differences between traditional and modern IT value chains, the next three sections first describe the traditional IT process. Then we show how this process is changed by the cloud and what the most important characteristics of cloud-based IT are.

4.2.1 Creating Software

The path from the idea to a functioning, software-based business model is not easy, as many conflicting interests collide within a company: The visionary and idea generator wants to develop a new, previously unknown business model. The financial controller wants low project and operating costs, or at least the certainty that the invested capital will be recouped. The sales representative would like to reach as many customers as possible

[Fig. 4.1  The cycle of a software project. Typical cycle for large, classic software projects: 6–24 months; phases: idea, budget approval, exploration & specification, implementation, go-live, operation, customer support, changes; actors: users, idea generators, management, software developers, infrastructure managers]

at the same time with the software, whereas the client account lead is more concerned with the special wishes of a single customer. The software developer wants to use the latest development tools, while the infrastructure administrator wants to use only well-known and proven technologies. All these different interests have to be reconciled in the common process of "creating software". This process can be divided into two phases (see Fig. 4.1):

• Exploration and specification: In this phase, the parties involved clarify with each other which functionalities the software should contain.
• Implementation and testing: This is where the actual development, programming and testing of the software functions take place.

This is followed by the process steps "operation" and "customer support". As soon as the so-called "go-live" has taken place, the software is used in its current form for the intended purpose and the existing customers are supported in their business with the software. In the final phase of the process, change requests usually arise, and so the process of exploration and specification begins again: Does the company want to incorporate these new features into the software, how much will they cost, who will pay for them, and what exactly should they look like?

The best-known and probably oldest model of software development is the so-called "waterfall model" (Royce 1970). Strictly linearly, the requirements are first recorded and fixed in the form of a requirements specification. The developers and infrastructure managers respond to these requirements with a functional specification and a cost calculation, i.e. with a proposal as to how the desired functions could be implemented. If the functional specification is accepted by the customer, implementation begins, i.e. the development of the software and the construction of the infrastructure required to operate it. In the final stages of development, the software is tested and finally handed over to operations. Changes afterward are not possible or trigger the first cascade of the waterfall again. The waterfall model is intuitively easy to understand and also appears
logical because it resembles the procedure for building a house: A house is designed by an architect, the construction costs are calculated and agreed with the builder, and then the plans are submitted to the authorities. Only then are the tradesmen and materials ordered. When the shell is finished, the roof is put on, and the facade and interior finishing are done. Even if the builder's real motivation for having his own house was the large eat-in kitchen with a view of the children playing in the garden, there is no real way to start the building project with the eat-in kitchen and the garden first. The classic process of building software therefore has the following characteristics (the challenges of producing software are considered again in detail in Chap. 6):

• It takes a long time to create large applications. Due to the long project runtimes, many of the functionalities are already obsolete by the time they go live. This means that with a traditional software creation process, it is hardly possible to react flexibly to current market conditions.
• On the way from the idea to the first use by the customer, many different actors are involved within a company. The effort directly related to the business idea and its creation is relatively low compared to the planning, communication, and controlling tasks.
• Investment security is to be created through intensive and detailed planning of all matters relating to the business model.

4.2.2 Operating Software

Once the software has been created and accepted by the customer, it is installed in the internal data center or at an external provider and made available to the users. In addition to physical hardware such as network cables, storage disks and computers, other components such as operating systems and databases are required. Since the software is only reliably available if all other components are functioning at the same time, a number of challenges arise for software operation.

Uneven Utilization

One of the goals of traditional IT is to ensure that every component runs as constantly as possible. The key figure for this is the so-called Service Level Agreement (SLA): It describes how high the availability of a particular service is (Hiles 2016). For example, an availability of 99.9% means that, with 365 days per year and 24 h per day, there is an annual unplanned downtime of about nine hours. With ten components running at the same time and each being 99.9% available, the overall availability drops to about 99.0045% (calculated as availability ^ number of components), which means three days, 15 h and 10 min of downtime per year. To reduce unplanned downtime, there are so-called maintenance windows: During these periods, the
components are shut down and checked in a controlled manner. Maintenance windows are therefore scheduled outside normal business hours and do not count as downtime for the SLAs.

The challenges of classic IT operations should not be underestimated: Each component requires special resources before it can be installed, and each of these resources can fail or have its properties changed by updates. For security reasons, components are repeatedly updated or provided with so-called patches, which in turn can affect dependent components. If software has already been in use at the client for several years, outdated versions of components are sometimes required, which have to be maintained in parallel to the current components. Traditional IT operations therefore tend to generate high costs. Data centers try to reduce the number of components as well as the variants of components operated in parallel. This standardization is intended to save effort and reduce the costs of IT operations. For those responsible for an application, a smaller selection of components and their variants means that they may have to adapt their application, which means additional effort and project risk. Figure 4.2 provides an example of dependencies between components in the IT value chain.

Since each of the components is potentially managed by a separate team with potentially conflicting goals, complex social dynamics arise in addition to the complicated technical dependencies. The range of roles in the data center environment is wide: In his glossary, Walter Abel lists 60 different roles (Abel 2011) – from Process Manager and Transition Manager to Change Advisory Board Responsible, Compliance Manager and Knowledge Manager. Even if these are not required in every use

[Fig. 4.2  Network of known and unknown dependencies in the IT value chain: the business model rests on the software, which in turn depends on an IT value chain of integration components, libraries, runtimes, databases (1 and 2), operating systems (1 and 2), servers (1–4), a load balancer, network, and storage]

case and several roles are sometimes held by the same person, this is a clear indication of the size of the human-induced challenges. From the strategic point of view of the business leader, the message is simple: In traditional IT operations, the task of the many people involved is to keep the installed software running and to guarantee the overall functionality of the system. For each component, specialists try to operate the software as failure-free, cost-optimized and secure as possible. This is reflected in their day-to-day work:

• They sometimes work in shifts to ensure 24-hour operation.
• Maintenance and migration activities are scheduled in bundles outside normal business hours so that the customer can use the application all the time.
• If a component fails unexpectedly or security issues arise, there are extra shifts.

The operation of a data center is therefore not oriented to the optimal utilization of employees, but to the availability of services. For peak loads, many expensive specialist roles are kept available; in addition, personnel are needed to coordinate the individual parts, ordering and billing.
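The SLA arithmetic from the beginning of this section can be checked in a few lines. This is a sketch using the chapter's own figures (99.9% per component, ten serially dependent components); the function names are illustrative:

```python
# Composite availability of serially dependent components:
# total availability = single availability ^ number of components.

HOURS_PER_YEAR = 365 * 24  # 8760 hours

def composite_availability(single: float, components: int) -> float:
    """Availability of a chain in which every component must work."""
    return single ** components

def annual_downtime_hours(availability: float) -> float:
    """Unplanned downtime per year implied by an availability figure."""
    return (1.0 - availability) * HOURS_PER_YEAR

# One component at 99.9%: about nine hours of downtime per year.
print(round(annual_downtime_hours(0.999), 2))    # 8.76

# Ten such components: about 99.0045% overall availability,
# i.e. roughly three days and 15 hours of downtime per year.
total = composite_availability(0.999, 10)
print(round(total * 100, 4))                     # 99.0045
print(round(annual_downtime_hours(total), 1))    # 87.2
```

The exponential decay is the point: every additional component in the chain eats measurably into the availability that the SLA promises.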

Expensive Hardware Resources

The smoothest possible operation of the software in a data center is also a question of so-called "sizing", i.e. adapting on-premises systems to the requirements placed on them. "On-premises" refers to the use of a company's local servers. As a rule, these server systems are operated on the company's premises and maintained by its own IT staff. This distinguishes the term on-premises (or OnPrem for short) from the public cloud application scenarios presented in this book (Fisher 2018). An average client of a data center service may well be overwhelmed by the question of sizing: "How powerful should the computers be, which ones exactly do you need, and how many?" Employees from marketing or the business department may have had the original idea for the application; they may be familiar with insurance comparisons or energy data management, for example. But in most cases, they have neither experience with nor knowledge of application servers, database servers, front-end and back-end servers, storage technologies or networks. Apart from that, these decisions have a huge financial impact. A server can cost 50 EUR, 1500 EUR or more per month. For each component, there are wide financial ranges, many dependencies and additional options such as high availability (particularly high reliability) or geo-redundancy (distribution of the application across several locally independent data centers). With a little trial and error, sizing usually works reasonably well, since the infrastructure resources are oriented to the expected demand based on experience. If, for example, an application is operated whose main load always occurs on "Cyber Monday" (Weidemann 2018), resources are kept available that the IT managers assume will also withstand the expected user rush.

[Fig. 4.3  Irregular load curve of an application: page visits per quarter over a year, with a Cyber Monday peak at maximum utilization far above the average utilization; the gap represents over-reserved resources]

A disadvantage of this traditional sizing approach is that too many infrastructure resources are kept available; as a result, the average utilization falls significantly below the maximum value (see Fig. 4.3). This means that a large part of the system's capacity is not used throughout the year. Conversely, it is not acceptable for customers and users if resources are sized to the average value: the system would fail or slow down on the days of the year with the highest demand. The company that runs the application traditionally in the data center therefore has little choice but to pay a high monthly fixed fee for its infrastructure resources, even if these are not used by the digital business model and therefore do not provide any benefit.

The interim conclusion is therefore: The costs of traditional IT are not based on the actual use of the application. Many infrastructure resources are kept available and paid for regardless of their benefit to the customer. The large number of people who take care of updates, security and internal communication also generates fixed costs, regardless of whether the digital service provided is even in demand on the market. The two other most important, mostly fixed, cost components in the traditional data center are electricity and the licenses for the operating software. In a nutshell, the most important characteristic of traditional IT is that a large part of the costs is not incurred due to actual use of the service.
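The cost of peak-based sizing can be made concrete with a toy calculation. All numbers below (load figures, server capacity, price) are invented for illustration and are not taken from the text:

```python
# Hypothetical load profile: a flat baseline with one Cyber-Monday-style peak.
peak_load = 100_000     # requests per hour on the busiest day (assumed)
average_load = 8_000    # requests per hour over the rest of the year (assumed)

# Traditional sizing reserves capacity for the peak all year round,
# so average utilization of the reserved capacity stays very low.
utilization = average_load / peak_load
print(f"average utilization: {utilization:.0%}")    # 8%

# If one server handles 1,000 requests/hour at 500 EUR per month (assumed):
servers_for_peak = peak_load // 1_000
servers_for_average = average_load // 1_000
wasted_per_month = (servers_for_peak - servers_for_average) * 500
print(f"cost of idle capacity: {wasted_per_month} EUR/month")    # 46000 EUR/month
```

With these assumed figures, over 90% of the reserved servers sit idle outside the peak, which is exactly the fixed-cost problem the section describes.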


4.2.3 Scaling Software

Once the software has been created and is being operated successfully, the motto is often mistakenly "never change a running system" (Kaczenski 2010). In practice, this rarely works, because all components need their bug fixes and security updates, the hardware has to be replaced every three to five years, and there are regularly new software versions with new requirements. Or the digital business model is successful and needs more resources in combination with optimized components. What, then, does the process of scaling software look like in the reality of traditional data centers?

The Information Technology Infrastructure Library (ITIL) unifies the descriptions of the most important IT management processes that play a role in the creation, operation and scaling of software (Kurzlechner 2019). According to ITIL terminology, software scaling is a "change" process: The goal of this process is to manage changes to the IT infrastructure with low risk. Change requests should therefore be evaluated, approved and documented, and the changes themselves should be planned and executed in a controlled manner. For each change, the many dependencies and interrelationships in IT operations must be considered individually, because "many other processes are indirectly and directly involved" (Olbrich 2004, p. 51). These include, for example, the effects on service level agreements and security, on capacity planning, release statuses, cost structures, reports, status information for customers, and much more. While the primary goal is to minimize the risk to the operation of the application, the company's entire IT landscape must be kept in mind. After all, poorly coordinated changes can lead to expensive disruptions and failures. Therefore, again, many different players are involved: Change Manager, Change Advisory Board, Configuration Manager, Knowledge Manager, Project Manager, Release Manager, Test Manager.

This list alone gives an idea of how complex, time-consuming and expensive such a change can become. If the success of the application falls short of expectations after the adaptation has been carried out, the reverse occurs: In this case, IT infrastructure is held by the company without being used. Even if some types of change processes can generally be carried out quickly and easily, it is safe to assume that it is not worthwhile in traditional data centers to set up infrastructure for Cyber Monday (see Fig. 4.3), for example, only to dismantle it again a few days later and sell it second-hand. It becomes even more challenging when a company has triggered a global hype with its digital offering. Within a month, the number of users of the smartphone and tablet game Pokémon Go rose from a few thousand in June 2016 to nearly 30 million worldwide at the end of July 2016 (Siegman 2017). In the fall of the same year, the numbers stabilized at about 7.5 million users. How can IT departments with many employees and hardware that is costly to procure successfully manage a global onslaught within a few days without capacity constraints? Would the same employees dismantle the computers two months later and sell them second-hand on an Internet marketplace? Such demanding scaling scenarios are virtually inconceivable under the conditions of traditional IT.

[Fig. 4.4  The IT value chain (stack), top to bottom: Business Model, Application, Security & Integration, Runtimes & Libraries, Databases, Operating System, Virtualization, Computing Power, Memory, Network]

4.3 The Stack – IT and Its Value Chain

The IT stack provides an overview of the different technological levels that are required for the operation of a digital business model or a digital application. In the following, the so-called IT stack is presented (see Fig. 4.4) and evaluated from a business management perspective.

4.3.1 The Levels of the Stack

The IT stack is divided into eight levels below the software application (see Fig. 4.4). The lowest level of the stack is formed by the network: This level enables the software to exchange information and data with other computers. The next level is the storage, in which the data of the application, but also the operating system, is stored. The computers (also called compute or servers) are on the next level, similar to the usual desktop PCs: The more central processing units (CPUs) and main memory a computer has, the faster it works. In classic data centers, however, computers are not sold or rented directly to a customer. Instead, a virtualization environment is installed on a whole series of such computers, which generates an even larger number of virtual servers from several physical servers. The customer is thus sold vCPUs (virtual CPUs) on virtual machines (VMs). This decoupling of the physical computer from the virtual machine actually in use is called virtualization and offers many advantages to both the provider and the customer (Wiehr 2015). In addition to the fact that space and energy are saved, the falling costs and
increased availability are the main arguments in favor of virtualization at the computing power level.

The next level of the stack is the operating system (OS). The best-known operating systems worldwide are Linux and Windows. Both offer a large portfolio of variants, each with its own detailed requirements for data center operators (e.g. maintenance periods, compatibility, costs, system requirements). At the database level, efficient, consistent, and durable storage of large amounts of data is the goal (Lackes et al. 2018). The best-known relational database vendors are Oracle DB, Microsoft SQL, and IBM DB2. In addition, there are many other databases, each with a different profile: Some are particularly inexpensive, others are particularly fast, and some are particularly scalable or flexible in terms of data formats.

At the runtime level (runtime environments, ITWissen 2014), certain programming interfaces and basic functions are made available in a standardized manner. This ensures that the application can be executed independently of the peripheral device used. A well-known runtime is Java, or more precisely the "JRE – Java Runtime Environment" (Tyson 2018). (From time to time, end users also notice missing runtimes: until 2013, the Java Runtime had to be installed before installing the German tax return program "Elster".) So-called software libraries can also be found at the same level (Metzger 2001). These libraries consist of auxiliary programs which, although not independently executable, facilitate the programmer's work with ready-made functions. An example of this are the toolkits for graphical user interfaces (GUIs). These toolkits provide standardized graphic elements, such as selection windows, buttons, arrows, sliders, and so on. The uniformity of the offerings in the libraries is also the reason why most programs on Windows look very similar.

As the top layer below the application, the security & integration layer serves to protect the application from unauthorized access. A well-known service at this level is Microsoft's "Active Directory" identity management (Barghava 2017). Used correctly, it ensures that only authorized users can access the application. The software with the actual logic of the business model – usually called application, program or app – is ultimately based on all these levels.
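The virtualization step described earlier in this section can be illustrated with toy numbers: a pool of physical servers is carved into a larger number of virtual machines. The overcommit factor and machine sizes below are assumptions for illustration, not real-world ratios:

```python
# A pool of physical servers is virtualized into vCPUs and VMs.
physical_servers = 4
cores_per_server = 32
overcommit_factor = 2.0   # vCPUs sold per physical core (assumed)

total_vcpus = int(physical_servers * cores_per_server * overcommit_factor)
vms_with_4_vcpus = total_vcpus // 4   # VMs with four vCPUs each

print(total_vcpus)        # 256
print(vms_with_4_vcpus)   # 64
```

The point is the decoupling: the customer rents 4-vCPU machines and never sees the four physical hosts behind them.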

4.3.2 Variety of Components Creates Numerous Dependencies

At first glance, the layers of the stack seem manageable. However, there are a large number of alternatives within each level, each of which comes in different variants that are requested by the application in different version states. For example, a database can be a relational database or a hierarchical database. One of the relational databases is offered by Microsoft (MS SQL), but there is also an open-source variant (MySQL). MySQL 4.0 has been in use since 2003, and versions now range from MySQL 4.0 to MySQL 5.7. To function smoothly, each variant may bring specific requirements to the underlying layers of the stack (network, storage, compute, operating system). Each variant may also have specific requirements for the overlying layers such as runtimes, security & integration, and the application. This is an example of how complex the dependencies are that need to be controlled by service management.

How is this complex value creation organized? In most cases, the required expert knowledge is bundled in small specialized teams for each special area (e.g. Linux servers, Oracle DB, networking). These teams take care of the security, availability and costs of their respective special services. The cross-team tasks are coordinated by controlling roles such as "Service Delivery Management" or automated by workflow tools. Nevertheless, the process flows are usually defined, or at least shaped, by hand. Clarifying customer requirements and translating them into formats that are understood by the tools is usually very time-consuming. This strict division of labor in IT organizations is reminiscent of Taylorism (Gaugler 2002), in which each employee is responsible for exactly one prescribed action in the process. When changes need to be made, many small subtasks have to be completed. The number of working hours required is often not high at all. However, since the tasks are distributed among many different teams, each with its own optimized scheduling, the coordination effort increases; this can significantly increase the lead time of a change to a software application.
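The version dependencies described in this section can be pictured as a small compatibility matrix. All component names and versions below are invented for illustration; a real matrix would be far larger:

```python
# Each component declares which variants of the layer below it supports.
# This is a toy model, not a real compatibility matrix.
requirements = {
    "shop-app-1.4": {"runtime": ["java-8", "java-11"]},
    "java-11": {"os": ["linux-7", "windows-2019"]},
    "linux-7": {"server": ["vm-small", "vm-large"]},
}

def compatible(component: str, layer: str, candidate: str) -> bool:
    """Does `candidate` satisfy `component`'s requirement for `layer`?"""
    return candidate in requirements.get(component, {}).get(layer, [])

print(compatible("shop-app-1.4", "runtime", "java-11"))  # True
print(compatible("java-11", "os", "linux-6"))            # False
```

Every upgrade of one entry potentially invalidates entries above and below it, which is why each change ripples through several specialized teams.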

4.4 The Cloud Transformation in IT

The classic approaches to providing IT services in companies and operating software described in Sects. 4.2 and 4.3 have weaknesses that will not have escaped your attention: They are time-consuming, costly, and labor-intensive. As a result, these approaches are hardly compatible with the demands that digital business models place on a company's IT. To understand why "the cloud" offers a way out of this dilemma, the next sections describe what the cloud is, how it works, and what a cloud-based IT process looks like.

4.4.1 The Cloud as a Trend Term

The importance that the topic of "cloud computing" has gained within a short period can be seen with the help of Google Trends (see Fig. 4.5). In 2007, about a year after the market launch of the cloud pioneer AWS (Rojas 2017), interest grew rapidly, and this growth continues to this day. From 2010 onwards, the term "cloud" was taken up by more and more companies to offer new services:

• Deutsche Telekom has been marketing the TelekomCLOUD, later MagentaCLOUD (Deutsche Telekom 2019), as "free cloud storage" since 2011.

[Fig. 4.5  Interest in the search term "cloud computing" over time since 2004, according to Google Trends]

• Microsoft has been offering the "Azure Cloud Platform" for the target group of software developers since 2010 (Luber and Karlstätter 2017a).
• Salesforce talks about the "Marketing Cloud" as a platform for "intelligent customer journeys" (Salesforce 2019).

Numerous other companies are focusing on the "cloud" as their business of the future: There is a Creative Cloud (Adobe), a cloud for the energy industry (Powercloud), a Media Cloud (Mediacloud) and a Commerce Cloud (SAP 2019); IBM has brought its own application "Watson" into the cloud (IBM 2019); Google talks about "Cloud AI" and means artificial intelligence in the cloud (Google 2019). The list could go on with numerous further examples. So what is this cloud, and why is it so interesting for companies from a business perspective? This question is answered in the following sections.

4.4.2 What Is the Cloud?

Fehling and Leymann define the term "cloud" as follows:

Cloud computing includes technologies and business models to provide IT resources dynamically and to charge for their use according to flexible payment models. Instead of operating IT resources, for example, servers or applications, in company-owned data centers, these are available on-demand and flexibly in the form of a service-based business model via the internet or an intranet. This type of provision leads to industrialisation of IT resources, similar to the provision of electricity. Firms can reduce long-term capital expenditures (CAPEX) for information technology (IT) benefits through the use of cloud computing, as IT resources provided by a cloud often have mainly operational costs (OPEX). (Fehling and Leymann 2018)

The National Institute of Standards and Technology (NIST) has defined five criteria for cloud computing (Mell and Grance 2011):

• On-demand self-service: The services must be usable without an intermediary. End users usually access them via a Graphical User Interface (GUI), software developers via an Application Programming Interface (API).
• Broad network access: The services must be generally available via known standard mechanisms.
• Resource pooling: In the background, IT resources are shared between different projects and customers.
• Rapid elasticity: Resources can be automatically ramped up and down as needed, essentially indefinitely.
• Measured service: The actual use of resources is measured and usually serves as the basis for billing.


In light of these definitions, cloud computing can be defined from a business perspective as follows: Cloud computing is the automation of IT value creation.

The cloud offers a broad portfolio of IT services that can be used without human intervention via an Application Programming Interface (API) or Graphical User Interface (GUI). This offering is effectively unlimited and can be delivered in quantities as small or as large as desired and billed according to the actual quantity used. This is the advantage of cloud use: companies accessing these services can automate their IT processes to an ever greater extent. Automation in this context means that human activities are no longer necessary for the specific use of the service.

The cloud thus resolves an old contradiction that dates back to the industrial age. In 1909, Henry Ford primed his sales staff with the following words (Hart 2015): "Any customer can have a car painted in any color that he wants so long as it is black." He thus placed the production benefits of standardizing output at high volumes above his customers' desire for individualization. From the perspective of the time, and with the technological and organisational possibilities then available, this was a thoroughly pragmatic approach. More than 100 years later, the cloud is able to combine both requirements and thus break the paradigm that prevailed at the time of mass industrialization: Under the conditions of cloud-based IT value creation, it is possible to produce automated standard services as well as to deliver and invoice these services in arbitrarily small and large quantities and in individual combinations. The following characteristics of the cloud are particularly relevant to enterprises:

• Ready-to-use components: Companies can easily source components of their IT value proposition from the cloud.
• Microtransactions: These components are consumable in small to very small units.
• Global scaling: If required, these IT components can be made available globally.
• Usage-based billing: All IT components can be billed on a usage basis.
In this way, business models become possible that only generate costs if the IT used also provides business benefits. So how do the cloud examples given in Sect. 4.4.1 fit this economic definition?

• Telekom talks about "free cloud storage". This refers to a service via which photos, videos and music can be stored online and which can be used via a graphical user interface. Telekom has thus automated the IT value creation around the use of cloud data storage.
• With the "Azure Cloud Platform", Microsoft offers a broad portfolio of components from the entire stack (Sect. 4.3.1), which can be used by software developers via a
programming interface. The IT value creation behind each of these components is fully automated and scalable worldwide.
• Salesforce has focused on sales processes and offers ready-made IT processes and applications for this purpose. Here, too, the value creation behind the user interface is completely automated.

In summary, the cloud automates the tasks in the IT value chain that were previously performed by many people. It achieves this automation without losing the possibility of individualization for the business model and gives companies advantages in terms of ready-to-use components, microtransactions, global scaling and usage-based billing. The cloud is thus to traditional IT what the delivery of an e-mail is to that of a letter: From the order to the delivery and billing of a cloud-based service, not a single action by a human being is necessary anymore.
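The "measured service" and usage-based billing principle can be sketched as a tiny metering function. Prices and usage figures below are invented for illustration:

```python
# Usage-based billing: cost accrues only for metered consumption.
PRICE_PER_GB_MONTH = 0.02    # storage price in EUR (assumed)
PRICE_PER_VCPU_HOUR = 0.05   # compute price in EUR (assumed)

def monthly_bill(storage_gb: float, vcpu_hours: float) -> float:
    """Bill derived purely from metered usage ('measured service')."""
    return round(storage_gb * PRICE_PER_GB_MONTH
                 + vcpu_hours * PRICE_PER_VCPU_HOUR, 2)

# A quiet month and a busy month produce different bills,
# unlike the fixed monthly fee of a traditional data center.
print(monthly_bill(500, 200))    # 20.0
print(monthly_bill(500, 5000))   # 260.0
```

If nothing is consumed, nothing is billed; this is exactly the property that lets costs track business benefit.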

4.4.3 The API as a Game-Changer

The term "game changer" comes from sports. It is used when a seemingly small change has a big effect on the entire game. This could be a freshly substituted player who, thanks to his reading of the game or his attacking qualities, makes up for a deficit, or a time-out used tactically by the coach to interrupt the opponents' momentum. Rule changes can also transform games. A striking example is the introduction of the 3-point rule in basketball (Mather 2016). Until 1979, all field goals counted as two points, leading to a

Fig. 4.6 Schematic flow of the use of an application programming interface (API) using the example of OpenWeatherMap.org. Your software asks "What is the weather like in London?" by sending the request

    api.openweathermap.org/data/2.5/weather?q=London

(the requirements for API calls are documented at https://openweathermap.org/current) and receives the latest weather data for London as a structured answer, e.g.

    "weather":[{"id":804,"main":"clouds","description":"overcast clouds","icon":"04n"}],
    "main":{"temp":289.5,"humidity":89,"pressure":1013,"temp_min":287.04,"temp_max":292.04},
    "wind":{"speed":7.31,"deg":187.002}, "rain":{"3h":0}, "clouds":{"all":92}, "dt":1369824698, […]

The automated service provision behind the API remains hidden from the user (abstraction).


dominance of very tall players and a game centered near the basket. The introduction of the 3-point line changed the game: relatively small players like Stephen Curry (1.91 m) now play a much bigger role with their long-distance shots (Haberstroh 2016).

Why is cloud computing the game changer in the IT industry? The answer: because the digital services available in the cloud can be accessed via a simple application programming interface (API) (Luber and Augsten 2017). The principle: applications make requests via APIs and receive exactly the answers they need to run their software. Via such an API, application A makes its functions available to application B, without application B needing to know anything about the true complexity of the processes running in application A. Figure 4.6 shows schematically how weather information for London can be retrieved via API from OpenWeatherMap. Your own software application does not require any knowledge about how a weather measuring station on the Zugspitze has to be read out, and the application developer does not have to deal with the differences between the communication protocols of the measuring station on the Zugspitze and those of the measuring station in Salzburg.

A new user of a service usually does not even have to communicate with an employee of the API provider. With the weather data provider OpenWeatherMap, for example, it is sufficient to register as a user and, depending on the subscription model, to store payment data to access the application's functionalities. Changes behind the API are made without impacting the user. No individual billing processes are established, and no specific liability clauses are agreed upon. The developer who wants to use weather data in his application simply integrates the call to the OpenWeatherMap API into his application.
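Such an integration can be sketched in a few lines of Python (standard library only; the endpoint and query parameters follow OpenWeatherMap's documented scheme, while the API key is a placeholder):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_url(city: str, api_key: str) -> str:
    """Build the request URL following OpenWeatherMap's documented scheme."""
    return f"{BASE_URL}?{urlencode({'q': city, 'appid': api_key})}"

def current_weather(city: str, api_key: str) -> dict:
    """Call the API; everything behind it (weather stations, protocols,
    data collection) stays hidden from the caller."""
    with urlopen(build_weather_url(city, api_key)) as response:
        return json.load(response)

# Example (requires a real API key from openweathermap.org):
# data = current_weather("London", "YOUR_API_KEY")
# print(data["weather"][0]["description"])
```

The calling code never sees how the weather data is gathered; it only consumes the documented interface.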
It is sufficient if he follows the guidelines for calling the API described in the documentation. The result: his application can access worldwide weather forecast data and make it available to end users, without his own developers having to deal with the details of the quite complicated information procurement in the area of weather data.

The API Manifesto
Amazon is one of the pioneers in orchestrating software services via API. According to former Amazon engineer Steve Yegge, Jeff Bezos, CEO of Amazon, announced a so-called "API Manifesto" in 2002 (Lane 2012). In it he postulated: All teams will henceforth expose their data and functionality through service interfaces. Teams must communicate with each other through these interfaces. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network. It doesn't matter what technology they use. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.


Bezos concluded his manifesto by saying, "Anyone who doesn't comply will be fired." So he was serious about putting his concern into practice. Although this manifesto was nominally about an IT topic, Bezos used the word "teams" no less than four times and the word "communication" three times; technology, on the other hand, plays a subordinate role. Therein lies the central message behind the API: when companies make their services available in the right way, no matter what technology they are based on, a new kind of collaboration emerges. Globally distributed teams can interact with each other without having to talk to each other. Enterprise-wide communication and collaboration are automated by calling each other's APIs. The abundance of offerings in the cloud comes from the infinite number of services that relate to each other. This creates a network of globally distributed and scalable services that is available to every company today, each of which can be deployed quickly and agilely on its own. The name of this network: public cloud.

Virtualization and Abstraction
Virtualization of computing power has already been described in Sect. 4.3.1: real computers become "virtual machines". However, Jeff Bezos' idea goes beyond virtualizing a physical good. He wants all teams to make their knowledge and services easily consumable digitally. This includes, for example, information about stock levels, predictions about customer behaviour and profiles of shopping preferences. Because of this, the term "abstraction" is often used in the context of APIs. Abstraction means looking at a thing or an idea from a higher vantage point and reducing it to its essentials (Alt 2008). APIs thus provide their users with only the essentials: precisely the functionality for which the service is needed. Information about the internal structure and dependencies between individual internal components remains hidden.
This can be explained well using the example of a database. What does the developer of an application need the database for? Essentially, he wants to store the data of his application there permanently and be able to retrieve it when needed. But what is necessary so that the database can actually perform this task? Following the IT stack, it must first be installed, and for this it needs a virtual machine, which in turn needs an operating system. At the lowest levels of the stack, network and storage must be available. Each component needs software updates and has scheduled downtimes, sometimes outages. If the database is to remain available during these downtimes, the components must each be set up redundantly, which means that additional components are needed to intelligently distribute the load. In addition, each element generates costs that must be allocated internally according to their cause, and information about the state of each element must be collected and evaluated. The developer, however, is not interested in any of this. He wants to store and retrieve data in the database, or expand its storage. Figure 4.7 shows the API call for the case "extend the storage of a MySQL database to 7 GB". The complexity described in Sect. 4.2 remains invisible to the user of the API. The service is abstracted; the user gets only the essentials, in this case the option to use more storage with a single short command.
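The abstraction idea can be sketched in a few lines of Python. The class below is a toy facade, not a real cloud SDK: the caller sees only store, retrieve and scale_storage, while everything the text lists (virtual machines, operating systems, redundancy, monitoring) would remain hidden behind this interface:

```python
class ManagedDatabase:
    """Toy facade for a managed database service (illustrative, not a real SDK).

    The caller sees only the essentials. Provisioning, redundancy,
    updates, monitoring and billing would all live behind this interface.
    """

    def __init__(self, storage_gb: int = 5):
        self.storage_gb = storage_gb  # provisioned size, normally managed behind the API
        self._data = {}               # stands in for the real storage backend

    def store(self, key: str, value: str) -> None:
        self._data[key] = value

    def retrieve(self, key: str) -> str:
        return self._data[key]

    def scale_storage(self, new_size_gb: int) -> None:
        # In a real cloud service this one call would trigger re-provisioning,
        # data migration and billing changes behind the API.
        self.storage_gb = new_size_gb

db = ManagedDatabase()
db.store("city", "London")
db.scale_storage(7)  # the "extend storage to 7 GB" call from the text
print(db.retrieve("city"), db.storage_gb)  # → London 7
```

The single `scale_storage(7)` call corresponds to the one-line API call in Fig. 4.7; all the stack complexity stays on the provider's side of the interface.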


Fig. 4.7 Example of an API call of a database. The call extends the storage of a managed MySQL database service to 7 GB (7168 MB):

    az mysql server update \
      --resource-group myresourcegroup \
      --name mydemoserver \
      --storage-size 7168

Fig. 4.8 IT value creation becomes a network. A "Visitor Guide London" application combines, via APIs, the services of a TrafficInfoApp (traffic jams in London) and of OpenWeatherMap (weather in London); these in turn draw on further services (compute, storage, service X, service Y).
• Each component encapsulates its inner complexity from the others via the API (abstraction).
• Each component can be used by infinitely many other services.
• For the combined use of several components, software is written, which in turn can be reused as a component by infinitely many others.

Ready-Made Components, Microtransactions, and Global Networks
Those responsible for database services must also rethink their role under the conditions of cloud computing. In the days of traditional IT, it was possible to respond to increased demand by telling the service manager: "We're currently overloaded; we'll probably get to it next month." In the cloud age, a service is expected to be deliverable within seconds or minutes. This can be achieved by building the database service itself like software. The software code specifies how the service responds to increased demand and what additional resources it draws from other cloud services. This creates an automated, interacting network that can react flexibly to fluctuations in demand (see Fig. 4.8).
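How such a service reacts automatically to fluctuations in demand can be sketched with a toy scaling rule in Python (the target utilization and formula are invented for illustration, not a real provider's algorithm):

```python
import math

def desired_instances(current: int, load_per_instance: float,
                      target_utilization: float = 0.7) -> int:
    """Toy scaling rule: how many instances are needed to bring the
    average utilization per instance back to the target?"""
    total_load = current * load_per_instance
    return max(1, math.ceil(total_load / target_utilization))

# Four instances running at 90% load are scaled out to six:
print(desired_instances(4, 0.9))  # → 6
# An almost idle service shrinks back to a single instance:
print(desired_instances(4, 0.1))  # → 1
```

In a real cloud service, a rule of this kind runs continuously in code and requests or releases resources from other cloud services via their APIs, with no human in the loop.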


Another advantage of this approach is that, thanks to their software properties, these services can be broken down and marketed on a fine-granular basis. The use of a database can be charged according to the number of database transactions, storage according to the number of documents stored, a computer according to the CPU capacity actually used. This gives rise to new business models in which each search query is billed in micro-cents but which, due to their global scalability, ultimately yield profits in the billions (Novet 2019).

Four years after the publication of the API Manifesto, Amazon made its internally developed IT services available to the rest of the world under the name Amazon Web Services (AWS). This meant that customers could not only order books and CDs from Amazon but also use APIs to access the components of the IT value chain underlying Amazon's businesses. The same server services that Amazon uses for its video business are now used by Netflix for its streaming data (Macaulay 2018). The database services that Airbnb uses on Amazon (Hu and Guo 2018) are also open to competitor Booking.com. In this way, AWS has become a "DIY store" for ready-to-use IT components in the cloud (Sect. 4.4.4).
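Metering each call individually, as described above, makes usage-based billing itself a one-line computation. A small sketch with invented prices:

```python
def monthly_bill(units_used: int, price_per_unit_cents: float) -> float:
    """Usage-based billing: every metered unit (transaction, document,
    CPU-second) is charged individually; the bill is returned in euros."""
    return round(units_used * price_per_unit_cents / 100.0, 2)

# Illustrative micro-price of 0.004 cents per search query:
print(monthly_bill(1_000, 0.004))           # → 0.04 (a hobby project)
print(monthly_bill(10_000_000_000, 0.004))  # → 400000.0 (a global platform)
```

The same price list serves a hobby project for a few cents and, with global scaling, a platform generating six-figure monthly revenue; that is the zero-marginal-cost logic in miniature.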

4.4.4 Not All Clouds Are Created Equal

From a business perspective, the term "cloud" initially stands only for the idea of "automated IT value creation". The multitude of components in the IT value chain also means that there are many variants of cloud application scenarios covering very different performance spectrums. The degree of abstraction of the respective service is decisive for the meaningful use of the cloud in companies. Depending on the level of the IT stack at which automation begins, specific cloud approaches are differentiated. Figure 4.9 provides an overview of the most important ones.

Fig. 4.9 Different levels of abstraction of cloud-based IT value creation. From dedicated and virtualized IT ("classic IT") via IaaS and container ("cloud IT") to PaaS, serverless and SaaS ("cloud native IT"), the API moves up the stack: ever more levels (network, memory, computing power, virtualization, operating system, databases, runtimes & libraries, security & integration, application) shift from manual activities to automated IT value creation.


Traditional IT
In the traditional approach of "dedicated IT", no resources are virtualized. Instead, dedicated resources are set up for the individual customer and operated individually. With "virtualized IT", at least the computing power is virtualized and shared. This model is sometimes already referred to as a cloud.

Infrastructure-as-a-Service and Container
The virtualization of IT computing power, storage and network is called "Infrastructure-as-a-Service" (IaaS). All three infrastructure components are completely automated and accessible via an API. Lifting an application from a traditional data center to an "IaaS cloud" is comparatively quick and straightforward; only in very few cases are more complex adaptations of the application or larger migration efforts necessary. Such a migration is called "rehost" or "lift & shift".

Platform-as-a-Service and Serverless
Platform services (PaaS, Platform-as-a-Service) take automation a very big step forward. In this form of cloud, extensive functionalities are provided, such as databases, machine learning services, Internet-of-Things features, bots, video services and much more. Software developers make use of platform services because they relieve them of many tasks and help them create very extensive applications very quickly. These pre-built building blocks have significantly increased the speed at which software can be developed. The first functional prototypes of digital business models can often be developed by small teams within a few days.

Serverless computing, also called Function-as-a-Service, is the newest abstraction layer of the cloud. Traditionally, software developers have to take care of capacity planning, scalability, flexibility and fault tolerance in addition to the actual business logic of their application (Büst 2016). In serverless computing, these tasks are also taken away from the developer.
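What remains for the developer in serverless computing can be illustrated with a minimal function handler. The signature below follows the AWS Lambda Python convention; the event shape and greeting logic are invented for illustration:

```python
# A function-as-a-service handler: only the business logic remains.
# The platform handles servers, scaling and fault tolerance.
def handler(event, context=None):
    """Lambda-style handler; 'event' carries the request payload."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the function can still be called like any other:
print(handler({"name": "London"}))  # → {'statusCode': 200, 'body': 'Hello, London!'}
```

Everything outside this function (provisioning, load balancing, retries, metering) is the provider's job, which is exactly why serverless shifts developer effort towards primary value creation.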
The share of the software developer's effort that goes into primary value creation (with a direct value contribution to the business model) can thus increase even further (see Fig. 4.10). Serverless is usually billed on a fine-granular basis in the micro-euro range, based on the CPU capacity actually used.

The Range of PaaS Providers
The catalog of platform services (PaaS) offered by cloud providers is huge. In addition to the offerings of the hyperscalers Microsoft, Google, Amazon and Alibaba, there are industry-specific providers such as Zalando or Spotify, which offer their own services for their respective niches.

General IT Services
In order to meet the demand of companies to map their IT landscape in the cloud, cloud providers offer a variety of basic IT services. These include, in particular, container

Fig. 4.10 Cloud enables simple focus on the core business. Your software-based business model concentrates on primary value creation; your secondary value creation
• can be sourced as if from a giant DIY store of ready-to-use components (storage, compute, further services),
• only generates costs if it is actually used,
• can be used in microtransactions and still scales worldwide.

services, database services and services for web applications. The benefits of the many other IT services can only be explained to the technical layman with more extensive explanations, so the following core message is important: all common requirements for IT landscapes can be covered with the service portfolio available in the cloud.

Analytics
Behind the term "big data" lies a global trend: the ever-increasing number of data sources and the ever-cheaper options for data storage are creating huge data pools. With the analytics tools that hyperscalers provide, these data pools can be combined, migrated and analyzed in milliseconds. For example, data on the online behavior of consumers can be compared in such a way that each customer receives the relevant advertising via real-time advertising.

Artificial Intelligence (AI) and Machine Learning (ML)
One of the most exciting application areas of platform services is artificial intelligence (AI) and machine learning (ML). Established services include facial recognition in images and videos, content analysis of texts, and the recognition of spoken language or the conversion of spoken language into written text ("speech-to-text"). Such machine learning algorithms are used, for example, in the operation of the voice assistants Siri (Apple), Alexa (Amazon), Google Now (Alphabet) and Cortana (Microsoft).


Blockchain
Blockchain technology is a decentralized, chronologically updated database that enables the digital securitization of property rights (Mitschele 2018; Pilkington 2016). Blockchain offerings from the cloud allow the corresponding networks to be established and operated relatively quickly. Cloud-based blockchains can then be used, for example, to document and represent supply chains in the food industry in a tamper-proof manner (Zores and Koppik 2019).

Internet of Things
The collective term "Internet of Things" (IoT) combines services and offers that relate to the combination and joint control of different end devices. In the home, for example, the functions of end devices such as alarm clocks, stereo systems, coffee machines and heating systems can be linked together. With the IoT products from PaaS providers, billions of IoT resources or end devices can be networked, administered and secured.

Media Services
The providers also offer a special service portfolio geared towards the processing of media data. This includes a content delivery network (CDN), which simplifies the distribution of large quantities of video data, as well as encoding services for converting image formats and resolutions, and other media-optimized streaming and analytics services.

Security
Another focus is on security-related services. We discuss this topic separately in Sect. 4.7.

This list offers a first overview of the current possibilities. Of course, it does not claim completeness, because the offers change quickly and are far more diverse than shown here.

Software as a Service
Software-as-a-Service (SaaS) is ready-made software with a graphical interface that can be used by end customers. A good example is Slack, a software for collaboration and communication in groups. It can be used by anyone, even without programming skills, in a browser and as an app. Signing up is easy via the website, and the basic service is free.
There are also paid versions with advanced features starting at EUR 6.25 per month. The pricing model makes it clear: the value chain behind it is fully automated. Frequently, SaaS providers also offer partial functionalities of their services as PaaS. Features such as messaging services, workflow tools and tools for creating bots (chat robots) are then available via an API.


4.5 The Cloud-Based IT Process

Building on the possibilities of using API-based software solutions and automating IT with the help of cloud technologies, the three-part software process ("creating", "operating" and "scaling") can be rethought across all stages. How such a reorientation plays out in practice is illustrated in the following sections.

4.5.1 Creating Software

The most important change in cloud-based software creation concerns the software developers. In a cloud environment, they can control all IT elements required for the creation of software themselves. In doing so, they use the programming interfaces (APIs) described above and can draw on the rich catalog of ready-to-use IT components of the cloud providers. All components are available immediately and in virtually any quantity. Tasks that occur behind the API remain invisible to the creator of the software. The cloud thus gives the developer a new creative power that was previously distributed among many roles and actors in the corporate organization. The developer can now, together with the customer or the person responsible for the software, quickly develop, demonstrate and publish new features with little coordination effort (see Fig. 4.11).4

Fig. 4.11 Creating software using cloud services. The business model benefits from the proximity of the software developers to the business owner; the developers can manage all IT components themselves and draw on a large catalog of easy-to-use IT components (database, storage, compute, further services), each reachable via an API.

4 Section 6.6 describes in detail the process of software development from the idea to the operation of the application.


4.5.2 Operating Software

The effort required to operate an application in the cloud depends heavily on the degree of abstraction described in Sect. 4.4.4. The following simple relationship applies: the more IT value creation is transferred to the cloud provider, the lower the own operating costs. The large cloud providers bring the following basic advantages compared to small data centers:

• The more customers use the same IT services, the greater the economies of scale. The team that operates a container service only becomes marginally larger if it supports ten customers instead of five. The same applies to all other services at all levels of the stack.
• Applications follow individual load patterns. The more applications are operated on the same infrastructure, the better their utilization can be kept at an economically attractive level. The risks of underutilization decrease, as does the probability that individual applications will push the entire infrastructure to its performance limits (see Fig. 4.12).

With high abstraction levels such as serverless or SaaS, cloud providers can share resources at all levels of the stack and optimize utilization. Accordingly, they have to maintain fewer fixed infrastructure resources, and their internal costs fall. They pass on part of these cost savings to their customers through attractive, usage-based prices. Provided that the customer converts his application in such a way that it actually uses the high abstraction levels (see Chap. 5), the costs for him fall significantly. Not only does the company save the investment in its own data center with its real estate and the necessary hardware,

Fig. 4.12 Operating software in the cloud: no utilization risks, cost reduction through shared resources and economies of scale at all levels of the stack, low commitment of capital and employees in the secondary value chain.


it also requires significantly fewer qualified employees to provide the necessary IT services. In addition, the previously fixed costs for IT operations become variable costs that largely develop in parallel with the actual use of the software. The risks of underutilization and overloading of the company's own infrastructure decrease as well.

4.5.3 Scaling Software

A major issue at the start of every new software introduction is the question of how heavily it will be used. In classic IT, managers have to decide at an early stage which IT resources should be reserved for the expected rush of users and in what amount. This results in an investment with a corresponding acquisition and installation project as well as fixed monthly cost blocks, consisting of the depreciation on the hardware and the monthly flat rates for the operation of each IT component used at all affected levels of the stack.

This is precisely where the cloud comes into its own. A cloud application can be designed in such a way that it starts with no or only very low fixed costs, because thanks to the utilization management possible in the cloud it does not occupy any fixed resources at the provider. If the use of the application then increases gradually, the cloud provider can provide exactly as much computing power as is needed, thanks to the complete automation of the cloud services (see Fig. 4.13). The worldwide infrastructure of many cloud providers also enables customers to simply roll out their applications globally.

In order to take advantage of the cloud for creating, operating and scaling software, the application must make use of the higher cloud abstraction levels. This eliminates many manual steps in IT value creation, creates economies of scale across customers, and reduces investment and utilization risks.
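The cost difference between the two approaches can be made concrete with a small, hedged calculation (all prices are invented for illustration):

```python
def classic_monthly_cost(fixed_eur: float, requests: int) -> float:
    """Classic IT: reserved hardware costs the same whether used or not."""
    return fixed_eur

def cloud_monthly_cost(requests: int, price_per_request_eur: float) -> float:
    """Cloud: cost grows in parallel with actual usage, starting at (almost) zero."""
    return requests * price_per_request_eur

# Illustrative numbers: 5,000 EUR/month of reserved capacity
# versus 0.001 EUR per request in the cloud:
for requests in (0, 1_000_000, 10_000_000):
    print(requests,
          classic_monthly_cost(5000.0, requests),
          cloud_monthly_cost(requests, 0.001))
```

With these invented numbers, the cloud variant is cheaper until usage approaches five million requests per month, and before launch it costs nothing at all; exactly the risk profile the text describes.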

Fig. 4.13 Scaling software in the cloud: globally scalable, costs only when used, very fine-granular changes.


4.6 Public Cloud vs. Private Cloud

The Fraunhofer Institute defines the public cloud as "an offering from a freely accessible provider", while the private cloud is only made "accessible to the company's own employees" (Fraunhofer 2019). This definition assumes that the private cloud is operated by the company itself. Usually, the extended definition is that the "private cloud is not accessible to the general public via the Internet" (Luber and Karlstetter 2017b), i.e. it is available "exclusively to individual organizations". This means that companies do not have to operate the private cloud themselves; it is sufficient if a service provider does this for them.

Which variant can companies use to establish digital business models more successfully: the public or the private cloud? To answer this question, it makes sense to recall the major general advantages of the cloud as automated IT:

• Companies can use many ready-to-use IT components.
• This use can take place in microtransactions.
• Companies can scale globally if successful.
• If the application is no longer used, for example in the event of a failure, companies can easily reduce the cost to zero.

In terms of the usability of these four benefits, there are major differences between the private and the public cloud. Figure 4.14 shows the different virtualization levels of IT value creation. Private clouds usually include the four levels of network, storage, computing power and virtualization, i.e. the Infrastructure-as-a-Service (IaaS) level. Based on this, specialized services can be operated in both the private cloud and the public cloud; example categories are container services, platform services, serverless and software services. In large IT landscapes with many applications, a very broad selection of these services is required in many different variants.

For each of these services, successful operation in the private cloud requires two things: a trained and available operations team to install and operate the service, and sufficient infrastructure to allow the service to scale well as needed. The size of the operations team

Fig. 4.14 Basic scope of services of the private cloud. Private clouds typically cover the lower levels of the stack (network, memory, computing power, virtualization), i.e. IaaS. The wide range of specialized services with high business impact (container, PaaS, serverless, SaaS) lies above this basic scope.

Fig. 4.15 Private cloud and public cloud in comparison. The most important opportunities for companies arise from:
• the many ready-to-use IT components (private cloud: high investments or little choice; public cloud: no own investments, yet a large service portfolio),
• the usability in microtransactions (private cloud: the company bears the hardware utilization risk itself; public cloud: the utilization risk is borne by the cloud provider),
• the possibility of global scaling (private cloud: only possible with high investments; public cloud: available on demand from the public cloud provider),
• the direct synchronization of costs and benefits (private cloud: hardly possible with low use; public cloud: easily possible).
There are also risks to the public cloud: no influence on the technology and processes behind the API, and cloud services tend to be more expensive at large scale.

remains largely constant as the number of customers increases, but the infrastructure grows with the largest total workload across all users. This puts the private cloud in a dilemma: on the one hand, as many customers as possible are needed to refinance the fixed costs of the operating team; on the other hand, the underlying infrastructure is usually too small for peak utilization across multiple customers. If more infrastructure is provided in the private cloud, the risk of underutilization increases. Figure 4.15 compares the advantages and disadvantages of private cloud use with those of public cloud use.

The private cloud is thus at a disadvantage in many areas compared to the public cloud. There are two main reasons for this:

• Utilization management: Cloud providers make a one-time investment in their global data centers. All services offered to all their customers use the same infrastructure. Providers therefore have a completely different opportunity to optimize their utilization, sell more virtual services with less actual hardware, lower prices and improve their margins at the same time.
• Portfolio breadth: A fixed operating expense is required for each service in the private cloud. To map the breadth of the service portfolio of a public cloud provider in the private cloud, high one-off and monthly expenses are therefore incurred.

The multitude of services, coupled with the fluctuating load behavior of cloud applications, makes running a private cloud with many platform services a big risk. Customers who want to build and operate globally scaling digital business models with zero marginal costs will therefore generally turn to the public cloud: it offers more choice of services, the provider bears the utilization risks, and they can scale globally


without infrastructure investment. Only when customers have reached a significant size can their own hardware infrastructures and the in-house operation of the required platform and software services bring advantages to bear (Sect. 4.5).
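The utilization-management advantage can be illustrated with a tiny calculation using invented load profiles: infrastructure sized per customer must cover every individual peak, while shared infrastructure only needs to cover the peak of the combined load.

```python
# Illustrative load profiles (requests per hour at three times of day)
# of three customers whose peaks do not coincide:
loads = [
    [90, 20, 10],   # customer A peaks in the morning
    [10, 80, 30],   # customer B peaks at noon
    [20, 10, 85],   # customer C peaks in the evening
]

# Sizing per customer: each must cover its own peak.
isolated_capacity = sum(max(profile) for profile in loads)

# Shared (public cloud) infrastructure: only the peak of the sum matters.
pooled_capacity = max(sum(hour) for hour in zip(*loads))

print(isolated_capacity, pooled_capacity)  # → 255 125
```

With these invented profiles, pooling cuts the required capacity roughly in half, and the effect grows with the number of customers whose peaks are statistically independent.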

4.7 Security in the Public Cloud

One of the central questions cloud sceptics in companies ask is: is the cloud secure at all? This usually refers to the public cloud, i.e. the globally distributed data centers of the large American companies Amazon, Google and Microsoft. It is a legitimate question: should European companies really store their data precisely with those companies that are often referred to as "data octopuses" (Strathmann 2015)? Does a company want to place its applications, with the credit card information they are supposed to protect, in precisely that cloud that already bears the word "public" in its name? And can't the hackers of Chinese competitors compromise its applications particularly easily if the company's IT shares resources with them in the cloud?

The following section provides a brief look at the security precautions and mechanisms used by public cloud providers. Its goal is to enable you to make your own basic assessment of the underlying risk factors. In a first step, a distinction must be made between the terms "compliance" and "IT security" (see Fig. 4.16). The former focuses on the behavior of the company itself: it wants to comply with laws, rules and standards of both an internal and an external nature. IT security, on the other hand, is about protection against the fraudulent behaviour of others. On the one hand, it involves protecting physical assets such as data centers and server rooms from unauthorized access; on the other hand, it is about protection against manipulation from the outside with the help of technical means. Information security includes IT security and supplements it with analogue topics such as the storage of paper files.

Fig. 4.16 Differentiation of compliance and security. Compliance (ensuring one's own correct behavior) means adherence to laws, rules and standards. IT security (protection against criminals) means protecting computer systems from fraudulent activities; it comprises physical security and technical security. Information security also includes the protection of non-technical systems (such as paper filing systems). When hackers capture credit card data, this can at the same time lead to a compliance violation.


4.7.1 Fraud Groups and Examples of Threats

The three typical groups of fraudsters that businesses need to protect themselves from are:

• Vandals without major criminal or long-term ambitions. A good example is the 20-year-old student from Hesse who allegedly spied out and published personal data (Kling 2019). The student had been annoyed by certain expressions of opinion by his victims.
• Common criminals who hope to gain financial benefits by penetrating other people's computer systems. A typical example is ransomware: the aim here is to block companies' access to their own data and then release it again, for a ransom (Rouse 2016).
• State actors who penetrate data centers to conduct, for example, industrial espionage (Dams 2017).

Regardless of the motivation of the fraudulent actors, there are different levels through which they attempt to penetrate IT systems, in the cloud as well as in traditional data centers (see Fig. 4.17).

One classic route is via the physical network. An example is the so-called "Distributed Denial of Service" (DDoS) attack. The attackers send simultaneous requests to the same Internet address from many distributed locations on the Internet. As a result, the attacked system is overloaded and thus no longer usable. At the same time, this can lead to errors that the attacker can use to establish himself permanently in the victim's network (Luber and Schmitz 2017).

At the next level, the physical hosts (computers), fraudsters attempt to install manipulated hardware on the victim's computers. One example is an action by the American National Security Agency (NSA) in 2010, in which packages from the company Cisco to its customers were intercepted and opened, and the hardware contained therein was equipped with spy hardware ("Trojan horses"). In this way, the intelligence agency was able to eavesdrop on any communication via the devices installed at the customer's site (Gallagher 2014).
In 2019, the US is now worried that something similar could happen to them if they use 5G hardware from the Chinese company Huawei (Kühl 2018).
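The overload mechanism behind a DDoS attack can be illustrated with a standard countermeasure: rate limiting. Below is a minimal sketch of a token-bucket limiter in Python; it shows the principle only and is not the mechanism any particular cloud provider uses.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: sustained rate of `rate` requests/s,
    short bursts of up to `capacity` requests are tolerated."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected: caller exceeds the allowed rate

bucket = TokenBucket(rate=10, capacity=5)
# A burst of 20 near-simultaneous requests: only the first few get through.
results = [bucket.allow() for _ in range(20)]
```

A flood of requests from many sources exhausts the bucket immediately, while a legitimate client issuing occasional requests never notices the limiter.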

Fig. 4.17  Levels of technical security. The figure arranges the levels from physical to technical security, each with an example of a security problem:

• End user access points – a cell phone is hacked and used for access
• Identity management – someone hacks another user's account password
• Application – the application itself generates dangerous errors
• Operating systems, databases – someone randomly tries passwords until they get in
• Physical computers – malicious hardware is smuggled in
• Physical network – access to the data center is disrupted by many simultaneous calls


4  Cloud – The Automated IT Value Chain

Middleware can also be a gateway for fraudsters. Like any software, operating systems and databases can contain errors ("bugs") that are then exploited by criminals. A well-known example is the malware program "WannaCry", which exploited a hole in the Windows operating system to install ransomware there. Microsoft reacted and closed the gap with a corresponding "patch" (spiegel.de 2017), i.e. a security update to the operating system. However, users must also install this patch to protect themselves.

Fraudsters can also strike at the application level. For example, a company's own application may be programmed in such a way that it directly releases security-relevant information via its normal interface. This happens, for example, when several customers work with the same application but are not systematically separated in the software architecture ("client separation"). Customers may then gain access to data from competitors who are customers of the same service provider. Applications can also contain errors in their program code that enable attackers to compromise the application and thus gain access to relevant data.

A major vulnerability of many applications is identity management, i.e. the identity of the user who logs into an application. Is it really Heinz Mustermann who logs on from the Internet café in Cairo, or is it a case of so-called "identity theft"? Was the password stored in the wrong browser and later read by a fraudster? Did the user enter his data on a fake website of his bank, which a criminal is now exploiting? Hacked passwords may then be offered for payment in a special part of the internet, the darknet (Schirrmacher 2019).

At the next level up, the threat concerns the end users' access points, for example their mobile phones or PCs. If these have been hacked, the devices can be used to access the networks and applications of the target company.
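The "client separation" problem described above can be made concrete in a few lines: in a multi-tenant application, every data access must be scoped to the authenticated customer. A minimal sketch (table and column names are invented for this example):

```python
import sqlite3

# Illustrative multi-tenant "client separation": every query is scoped to
# the tenant of the authenticated user, so one customer can never read
# another customer's rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 100.0), ("acme", 250.0), ("globex", 999.0)])

def invoices_for(tenant_id: str) -> list:
    # The tenant filter is applied centrally in the data layer,
    # never left to the individual caller.
    rows = db.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,))
    return [amount for (amount,) in rows]

print(invoices_for("acme"))   # acme sees only its own invoices
```

If the tenant filter is forgotten in even one query path, exactly the data leak described in the text occurs, which is why robust architectures enforce the filter in one central place.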

4.7.2 Countermeasures by Cloud Providers

Cloud providers are aware of the threats described in Sect. 4.7.1 and take numerous measures to increase the security of their systems and those of their customers.

The first major area of concern is safeguarding against internal sabotage and misconduct. To prevent in-house employees from exploiting their access to the systems, there are strictly separated areas of responsibility. Some employees have access to the hardware in the data centers. Another group, the operations staff, always works outside the physical data center via remote access and cannot tamper with hardware. These employees do not work alone and are only allowed to access customer systems and data via specific approvals from the customers concerned. The software that controls and documents this access is in turn developed by yet another group of employees. This separation of responsibility makes it more difficult for criminals to gain access to data and systems by bribing or blackmailing individual employees.

The issue of DDoS attacks is addressed in two ways. First, cloud providers detect these attacks and redirect the resulting traffic over very large data lines. Legitimate requests to the actual system are still allowed through, so the application remains available and no


bugs are created that would allow the attacker to infiltrate. In addition, providers maintain their own "cyber-defense" units, which, among other things, track down so-called "botnets" and take them over. These are computer networks that have been hijacked by online criminals and are used for such attacks. The current attacks of the day can be monitored at www.digitalattackmap.com.

Cloud providers counter the manipulation of physical computers by taking over the value chain themselves, from the development of the hardware (Ostler 2018) to the production and delivery of the components, or by closely supervising it with their own employees. In addition, hardware components are built into the computers to ensure that no unplanned hardware ends up on the board.

At the middleware level, cloud providers use strict rights management between the services used. This prevents components from accessing the Internet and only allows access from certain other components via predefined paths. This ensures that databases, for example, cannot be accessed from outside.

In the past, most of the focus was on preventing security incidents at the network level described earlier. But since employees and executives insist on using as many enterprise applications as possible on many different endpoints, the focus of IT security has shifted. It is now more about ensuring that the user who logs into an application is who they claim to be. The term for this is identity management.

A common and very secure method of ensuring a user's identity is multi-factor authentication (MFA). This requires a user to authenticate using two different factors (ACSC 2019). The first factor in most cases is the password; if this is correct, the second factor is applied. This can be an SMS with a generated number code, an application that generates a time-limited code, or a call to the user's smartphone.

The cyber-defense units of the cloud providers go even further: they are also active on the darknet and monitor the identities that are illegally traded there. If a username-password combination is detected there as hacked, the provider can ask the affected users within its domain to change their password.

The examples given are only a sample of the known activities of the providers; the actual activities probably go beyond that. It is an arms race between IT providers and their fraudulent adversaries. Microsoft, for example, invests over a billion dollars annually in cyber-security research (Cohen 2017). Google, in turn, is investing heavily in its cyber-security subsidiary Chronicle (Brewster 2019), aiming to revolutionize the enterprise IT security market.
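The second factor in MFA is often a time-based one-time password (TOTP), the kind of time-limited code an authenticator app generates. The following is a minimal sketch of the mechanism in the style of RFC 6238; real systems use vetted libraries, and the secret below is purely illustrative (it would normally be provisioned at enrollment):

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and the current
    30-second time window (RFC 6238-style dynamic truncation)."""
    counter = int(at // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret: bytes, submitted: str, now: float) -> bool:
    # Accept the current window and the adjacent ones to tolerate clock drift.
    return any(hmac.compare_digest(submitted, totp(secret, now + drift * 30))
               for drift in (-1, 0, 1))

secret = b"shared-secret-from-enrollment"   # hypothetical enrollment secret
code = totp(secret, time.time())            # what the user's authenticator shows
assert verify(secret, code, time.time())    # login succeeds with a fresh code
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not have, a stolen password alone is no longer enough to take over the account.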

4.7.3 Shared Responsibility Between Customer and Cloud Provider

As soon as a company uses the public cloud in its IT value chain, it shares responsibility for IT security with the cloud provider. Behind the API or GUI, the provider is responsible for all aspects of security. For example, if a company uses Adobe Marketing Cloud as a software service (SaaS), Adobe is responsible for ensuring that the application is developed securely, the operating systems are always up to date, the databases are not accessible


Fig. 4.18  Areas of responsibility for security when outsourcing to the cloud. The figure arranges the security levels (end user access points, identity management, application, operating systems and databases, physical computers, physical network) against the sourcing models (classic/dedicated, IaaS, PaaS, SaaS). The upper levels are protected with the tools and mechanisms of the company itself; the lower levels with tools of the provider (or of the traditional service provider) that protect its own area of responsibility. For the levels behind the provider boundary, the customer relies on trust in the provider and its certifications.

from the outside, the computer hardware is not compromised, and the physical network is not paralyzed by attacks. Customers of such software services usually have no way of controlling the value creation behind them, so they need a minimum of trust in their provider. In return, the providers offer a wide range of security and compliance certifications awarded by independent testing institutes. Microsoft offers about 90 certifications on its website (Microsoft 2019); Google and AWS are on a similar level.

Figure 4.18 shows how the boundary of responsibility shifts with the cloud virtualization layer. In all cases, security issues remain that are the responsibility of the customer, in particular identity management and end-user access points. Cloud providers offer tools for their customers to secure this area of responsibility. These include firewalls, easy-to-use encryption services, various methods for storing keys, tools for dealing with DDoS attacks, options for fine-grained rights management and services for identity management. Also of interest are analysis and monitoring tools that use machine learning algorithms to scan the IT landscape for suspicious patterns and thus detect attackers.

Whether the public cloud is more secure than a private cloud or a traditional data center cannot be said in general terms. This depends very much on the tasks performed by the data center in question and how well developed its security measures are. Overall, the previously mentioned advantages of the public cloud also apply to security: customers can simply use the existing security services and precautions without having to invest in their own cybersecurity teams, encryption services and network security. If they use these consistently, they can reach a high level of security more quickly and with less effort than they would be able to on their own.
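The shifting boundary of responsibility in Fig. 4.18 can be expressed as a simple lookup. The mapping below is a simplified illustration of the shared-responsibility pattern described in the text, not any provider's official matrix:

```python
# Security layers as named in the text, ordered from physical to technical.
LAYERS = ["physical network", "physical computers",
          "operating systems, databases", "application",
          "identity management", "end user access points"]

# How many layers (from the bottom) sit behind the provider's API/GUI
# boundary in each service model. Simplified for illustration.
PROVIDER_MANAGED = {
    "on-premises": 0,   # the company runs everything itself
    "IaaS": 2,          # provider: network + hardware
    "PaaS": 3,          # ... plus operating systems and databases
    "SaaS": 4,          # ... plus the application itself
}

def customer_responsibilities(model: str) -> list:
    """Layers the customer must still secure under the given model."""
    return LAYERS[PROVIDER_MANAGED[model]:]

print(customer_responsibilities("SaaS"))
# identity management and end-user devices always remain with the customer
```

Whatever the model, the top of the list never empties, which is exactly the point of the section: outsourcing to the cloud shrinks, but never eliminates, the customer's own security duties.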

4.8 Case Study: A Misunderstanding in the IT Purchasing Department of a Major Corporation

The challenges that companies face in the context of cloud transformation are illustrated below using a fictitious scenario: a major European corporation with annual sales of EUR 10 billion and IT expenditure of EUR 500 million. This group


owns four data centers across Europe and has for several years outsourced data center services for several business areas to two traditional suppliers. The company's IT buyer has learned that the cloud can significantly cut costs. He identifies the largest single item in the IT budget: of the EUR 500 million, the company spends around EUR 130 million on infrastructure services (computers, storage, network), distributed across all data centers. There, the largest items are again the virtual machines (computers). He decides to compare the costs of this largest single item between the cloud providers and also with the company's own, traditional data centers. In-house IT knows such comparisons from the past and immediately sends a list of the major VM types with monthly prices.

All three cloud providers object to this approach. They bring technical experts to the purchasing negotiations. They talk about topics like auto-elasticity, AWS Lambda and containers. They lecture about a new culture and new thinking. Cost, they say, is not a good reason to go to the cloud; it is more about agility and time-to-market, innovation and artificial intelligence. The IT buyer considers this to be the normal behavior of companies that do not want to be compared, and insists on his approach: all participating cloud providers must offer exactly the same virtual machines under exactly the same conditions as the internal data center.

What does the result look like? The cloud providers hardly differ, but the IT buyer is surprised to discover that many computer types are even offered more cheaply by the classic data center. The buyer can hardly believe it and goes back to the cloud providers: how can such large data centers with such economies of scale seriously be more expensive than his own small data centers?

The cloud providers ramp up their efforts once again: they invite the entire senior management of the group to the American West Coast for an "executive briefing". They offer free architecture training, governance seminars and free initial projects, and they even offer their own experts to take on these tasks. Only they do not want to lower their prices.

At what point did the IT buyer and the cloud providers talk past each other? Figure 4.19 provides a clue: the IT buyer wanted to compare an important component of traditional value creation with its counterpart in cloud IT. This component does exist there, but his procedure would have the consequence that the opportunities of automated IT value

Fig. 4.19  Infrastructure components are no longer recognizable for platform services. The figure contrasts traditional software (application operation, operation of the operating system, database operation, system operation, network, storage, backup, and the virtual machines VM1/VM2 as compute) with cloud-based software (account management, web app PaaS, MySQL PaaS, database cluster). The virtual machines are the comparison the IT buyer wanted to make; the platform services are what the cloud providers talked about.


creation would hardly be used, because all other services of traditional IT would remain necessary. The cloud providers thus tried in vain to convince the IT buyer of the advantages of the new cloud world: an IT world in which not only numerous infrastructure resources can be used automatically, but in which the entire IT value creation is automated. Ultimately, the optimization potential in many companies remains unused in this way. They remain stuck on their outdated software architectures and at the same time miss the opportunities to reduce costs and increase their innovative power. In the worst case, change is delayed until new competitors disrupt the market and it is too late for an effective turnaround.
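The buyer's mistake can be made concrete with a small calculation: a flat-rate VM and a pay-per-use service can only be compared meaningfully once utilization is taken into account. The prices below are invented purely for illustration:

```python
# Hypothetical prices, for illustration only: an always-on virtual machine
# at a flat monthly rate vs. a pay-per-use service billed per request
# ("microtransactions").
FLAT_VM_PER_MONTH = 300.0      # EUR per month, dedicated VM
PRICE_PER_REQUEST = 0.00002    # EUR per request, pay-per-use

def monthly_cost_pay_per_use(requests_per_month: int) -> float:
    return requests_per_month * PRICE_PER_REQUEST

def cheaper_option(requests_per_month: int) -> str:
    return ("pay-per-use"
            if monthly_cost_pay_per_use(requests_per_month) < FLAT_VM_PER_MONTH
            else "flat-rate VM")

print(cheaper_option(1_000_000))    # low utilization: 20 EUR vs. 300 EUR
print(cheaper_option(50_000_000))   # sustained high load: 1000 EUR vs. 300 EUR
```

Comparing only the VM list price, as the buyer did, implicitly assumes full utilization around the clock; the cloud's advantage lies precisely in the loads where that assumption fails.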

4.9 From Traditional IT to the Cloud – Explained on One Page and in One Picture

Software forms the core of every digital business model. However, the road from the business idea to the software required for it is rocky. Regardless of whether the software is developed in a traditional IT environment or a cloud environment, three stages are passed through: creating software, operating software and scaling software (see Fig. 4.20).

The cloud automates IT value creation. This is made possible by virtualizing the infrastructure resources that are still necessary (computers, storage, network) via software. Three physical servers in the basement of a company can become 10 or 20 virtual computers that continue to perform all calculations. All other components of IT value creation (such as databases, machine learning functions, etc.) can be abstracted, automated and offered as a service in the cloud. All communication between these value creation components runs via defined programming interfaces (APIs) that encapsulate the complexity of a service and make only the required functionality available to external users.

This division of IT value creation into small, automated services, together with the ability to create new services from combinations of known services and to scale them globally on the existing infrastructure of the cloud providers, creates a completely new world of digitized IT value creation. In this way, every company has a rich "DIY store" of ready-to-use IT components at its disposal, which can be used without upfront investment to build digital business models. The services are accessed in microtransactions; costs are generally only incurred in the public cloud when a service is actually used.

The new world of cloud-based IT is radically changing software processes. Software creation becomes significantly faster (because of the many prefabricated parts) and at the same time less risky (payment only on use). When cloud resources are used, the developers involved can focus fully on the functions relevant to the business model, while the extensive support tasks are outsourced to the cloud.
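How an API encapsulates a service can be sketched in a few lines. The transport is stubbed here so the example is self-contained and the returned weather data is invented; a real client would call a provider over HTTPS:

```python
import json

def fake_transport(request: str) -> str:
    """Stand-in for the network and the provider's entire value chain
    (compute, storage, databases, scaling, operations)."""
    payload = {"city": json.loads(request)["city"],
               "conditions": "cloudy", "temp_c": 23, "humidity": 0.8}
    return json.dumps(payload)

class WeatherAPI:
    """The caller sees one small interface; everything behind it is hidden."""
    def __init__(self, transport):
        self._transport = transport

    def current(self, city: str) -> dict:
        # One call, one micro-billed transaction; the complexity of the
        # service stays behind the API.
        return json.loads(self._transport(json.dumps({"city": city})))

api = WeatherAPI(fake_transport)
report = api.current("London")
print(report["conditions"], report["temp_c"])
```

The consuming software never needs to know how the weather data is gathered, stored or scaled; swapping the stub for a real HTTPS transport changes nothing for the caller, which is exactly the encapsulation the text describes.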


Fig. 4.20  From traditional IT to cloud-based IT value creation. The one-page figure answers two questions:

How is the cloud changing the world of digital business models? It contrasts traditional and cloud-based IT value creation along three stages:
• Creating software: traditionally takes a long time, is expensive and involves a high investment risk. In the cloud, the many ready-to-use components accelerate development, the remaining activities focus on the primary value creation, and early customer feedback gives investment security.
• Running software: traditionally means unused infrastructure and many employees in support processes. In the cloud, no unused hardware is kept in stock and no employees are needed for IT support functions; existing employees can focus on the primary value creation.
• Scaling software: traditionally is complex in coordination and implementation and is hardly worthwhile for small changes. In the cloud, resources are increased or reduced as needed in a fully automated manner, in microtransactions; the marginal costs for additional use and new customers fall to virtually zero.
The most important opportunities for companies arise from the many prefabricated IT parts, the usability in microtransactions, the possibility of global scaling, and the direct synchronization of costs and benefits.

What technical approaches have enabled the cloud world? Each element of IT value creation can be a cloud service in a network of cloud services (compute, storage, network, database, backup, traffic). The API abstracts the underlying complexity and makes the service efficiently usable; the service is itself software and can scale without limit, since cloud services use existing hardware infrastructures and can therefore scale globally at any time. Ordering, configuration and scaling are likewise carried out by software-based services, which in turn can be used by other services. The figure illustrates this with a weather service in the cloud: your software asks via the API "How is the weather in London?" and receives the answer "Cloudy, 23 °C and 80% humidity".

Once the software is in operation, the company itself no longer maintains hardware resources for peak loads – the scaling of resources is automatic. In addition, employees are no longer required for the costly operation of IT components. The business model can operate with zero marginal costs right from the start and – in the particularly successful case – can also scale worldwide without major problems.


Companies that embrace the new world of automated IT value creation can build, try out, and successfully market digital business models around the world much faster, at lower cost, with less risk, and more innovatively.

References

Abel, Walter (2011): ITSM Rollen nach ITIL 2011, published at itsmprocesses.com, https://www.itsmprocesses.com/Dokumente/ITIL%202011%20Glossar%20Rollen.pdf, retrieved June 2019.
ACSC (2019): Implementing Multi-Factor Authentication, published at cyber.gov.au, https://www.cyber.gov.au/publications/multi-factor-authentication, retrieved June 2019.
Alt, Oliver (2008): Car Multimedia Systeme modellbasiert testen mit SysML, Vieweg und Teubner, Wiesbaden.
Bhargava, Rajat (2017): Why Google IAM is Afraid of AD, published at jumpcloud.com, https://jumpcloud.com/blog/google-identity-management-active-directory/, retrieved June 2019.
Brewster, Thomas (2019): Alphabet's Chronicle Startup Finally Launches—It's Like Google Photos For Cybersecurity, published at forbes.com, https://www.forbes.com/sites/thomasbrewster/2019/03/04/alphabets-chronicle-startup-finally-launchesits-like-google-photos-for-cybersecurity/#5353d23d79c5, retrieved June 2019.
Büst, René (2016): Serverless Infrastructure: Der schmale Grat zwischen Einfachheit und Kontrollverlust, published at crisp-research.com, https://www.crisp-research.com/serverless-infrastructure-der-schmale-grat-zwischen-einfachheit-und-kontrollverlust/, retrieved June 2019.
Cohen, Dova (2017): Microsoft to continue to invest over $1 billion a year on cyber security, published at reuters.com, https://www.reuters.com/article/us-tech-cyber-microsoft-idUSKBN15A1GA, retrieved June 2019.
Dams, Jan (2017): So spionieren Geheimdienste deutsche Firmen aus, published at welt.de, https://www.welt.de/wirtschaft/article162217929/So-spionieren-Geheimdienste-deutsche-Firmen-aus.html, retrieved June 2019.
Deutsche Telekom (2019): Magenta Cloud, published at cloud.telekom-dienste.de, https://cloud.telekom-dienste.de/, retrieved June 2019.
Fehling, Christoph and Frank Leymann (2018): Cloud Computing, published at wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/cloud-computing-53360/version-276453, retrieved June 2019.
Fisher, Cameron (2018): Cloud vs. On-Premise Computing, in: American Journal of Industrial and Business Management, No. 8/2018, pp. 1991–2006.
Fraunhofer (2019): Was bedeutet Public, Private und Hybrid Cloud?, published at cloud.fraunhofer.de, https://www.cloud.fraunhofer.de/de/faq/publicprivatehybrid.html, retrieved June 2019.
Gallagher, Sean (2014): Photos of an NSA "upgrade" factory show Cisco router getting implant, published at arstechnica.com, https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-upgrade-factory-show-cisco-router-getting-implant/, retrieved June 2019.
Gaugler, Eduard (2002): Taylorismus und Technologischer Determinismus, in: Albach et al. (2002), pp. 165–181.
Google (2019): AI and machine learning products, published at cloud.google.com, https://cloud.google.com/products/ai/, retrieved June 2019.


Haberstroh, Tom (2016): The Year of Steph Curry, published at espn.com, http://www.espn.com/espn/feature/story/_/id/15492948/the-numbers-steph-curry-incredible-mvp-season, retrieved June 2019.
Hart, John (2017): It's Still The Economy, Stupid, published at forbes.com, https://www.forbes.com/sites/johnhart/2017/12/27/its-still-the-economy-stupid/#722176512c9a, retrieved June 2019.
Hart, Mark A. (2015): The truth about "any color so long as it is black", published at oplaunch.com, http://oplaunch.com/blog/2015/04/30/the-truth-about-any-color-so-long-as-it-is-black/, retrieved June 2019.
Hiles, Andrew (2016): The Complete Guide to I.T. Service Level Agreements: Aligning It Services to Business Needs (Service Level Management), Rothstein Publishing, Brookfield.
Hu, Xinyao and Liang Guo (2018): A Chronicle of Airbnb Architecture Evolution, talk at the AWS re:Invent conference, published at de.slideshare.net, https://de.slideshare.net/AmazonWebServices/a-chronicle-of-airbnb-architecture-evolution-arc407-aws-reinvent-2018, retrieved June 2019.
IBM (2019): Nutzen Sie das Potenzial von KI mit IBM Watson, published at ibm.com, https://www.ibm.com/de-de/cloud/ai, retrieved June 2019.
ITWissen (2014): RTE (runtime environment), published at itwissen.info, https://www.itwissen.info/RTE-runtime-environment-Laufzeitumgebung.html, retrieved June 2019.
Kaczenski, Nils (2010): Never change a running system? Bullshit!, published at faq-o-matic.net, https://www.faq-o-matic.net/2008/02/20/never-change-a-running-system-bullshit/, retrieved June 2019.
Kelly, Michael (1992): THE 1992 CAMPAIGN: The Democrats – Clinton and Bush Compete to Be Champion of Change; Democrat Fights Perceptions of Bush Gain, published at nytimes.com, https://www.nytimes.com/1992/10/31/us/1992-campaign-democrats-clinton-bush-compete-be-champion-change-democrat-fights.html, retrieved June 2019.
Kling, Bernd (2019): Daten-Leak: 20-jähriger Hacker geständig, published at zdnet.de, https://www.zdnet.de/88351153/daten-leak-20-jaehriger-hacker-gestaendig/, retrieved June 2019.
Kühl, Eike (2018): Wer hat Angst vor Huawei?, published at zeit.de, https://www.zeit.de/digital/mobil/2018-02/smartphones-china-huawei-zte-mate-10-spionage-risiken/komplettansicht, retrieved June 2019.
Kurzlechner, Werner (2019): Was ist was bei ITIL und ITSM?, published at cio.de, https://www.cio.de/a/was-ist-was-bei-itil-und-itsm,3258078, retrieved June 2019.
Lackes, Richard, Astrid Meckel and Markus Siepermann (2018): Datenbank, published at wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/datenbank-30025/version-253619, retrieved June 2019.
Lane, Kin (2012): The Secret to Amazon's Success–Internal APIs, published at apievangelist.com, https://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/, retrieved June 2019.
Luber, Stefan and Stephan Augsten (2017): Was ist eine API?, published at dev-insider.de, https://www.dev-insider.de/was-ist-eine-api-a-583923/, retrieved June 2019.
Luber, Stefan and Florian Karlstetter (2017a): Was ist Microsoft Azure?, published at cloudcomputing-insider.de, https://www.cloudcomputing-insider.de/was-ist-microsoft-azure-a-667912/, retrieved June 2019.
Luber, Stefan and Florian Karlstetter (2017b): Was ist eine Private Cloud?, published at cloudcomputing-insider.de, https://www.cloudcomputing-insider.de/was-ist-eine-private-cloud-a-631415/, retrieved June 2019.
Luber, Stefan and Peter Schmitz (2017): Was ist ein DDoS-Angriff?, published at security-insider.de, https://www.security-insider.de/was-ist-ein-ddos-angriff-a-672826/, retrieved June 2019.


Macaulay, Tom (2018): Ten years on: How Netflix completed a historic cloud migration with AWS, published at computerworlduk.com, https://www.computerworlduk.com/cloud-computing/how-netflix-moved-cloud-become-global-internet-tv-network-3683479/, retrieved June 2019.
Mather, Victor (2016): How the N.B.A. 3-Point Shot Went From Gimmick to Game Changer, published at nytimes.com, https://www.nytimes.com/2016/01/21/sports/basketball/how-the-nba-3-point-shot-went-from-gimmick-to-game-changer.html, retrieved June 2019.
Mell, Peter and Tim Grance (2011): The NIST Definition of Cloud Computing, published at csrc.nist.gov, https://csrc.nist.gov/publications/detail/sp/800-145/final, retrieved June 2019.
Metzger, Axel (2001): Software-Bibliotheken, published at linux-magazin.de, https://www.linux-magazin.de/ausgaben/2001/03/freiheit-den-bibliotheken/, retrieved June 2019.
Microsoft (2019): Compliance Offerings, published at microsoft.com, https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings, retrieved June 2019.
Mitschele, Andreas (2018): Blockchain, published at wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/blockchain-54161/version-277215, retrieved June 2019.
Novet, Jordan (2019): Amazon Web Services reports 45 percent jump in revenue in the fourth quarter, published at cnbc.com, https://www.cnbc.com/2019/01/31/aws-earnings-q4-2018.html, retrieved June 2019.
Olbrich, Alfred (2004): ITIL kompakt und verständlich: Effizientes IT Service Management – Den Standard für IT-Prozesse kennenlernen, verstehen und erfolgreich in der Praxis umsetzen, Vieweg und Teubner, Wiesbaden.
Ostler, Ulrike (2018): Die Plusserver GmbH bietet eine Bare-Metal-Cloud und Cloud-Optimierung, published at DataCenter Insider, https://www.datacenter-insider.de/die-plusserver-gmbh-bietet-eine-bare-metal-cloud-und-cloud-optimierung-a-724682/.
Pilkington, Marc (2016): Blockchain Technology: Principles and Applications, in: Olleros, Zhegu (2016), pp. 225–253.
Rojas, Alec (2017): A Brief History of AWS, published at mediatemple.net, https://mediatemple.net/blog/news/brief-history-aws/, retrieved June 2019.
Rouse, Margaret (2016): Ransomware, published at computerweekly.com, https://www.computerweekly.com/de/definition/Ransomware, retrieved June 2019.
Royce, Winston W. (1970): Managing the development of large software systems, in: IEEE WESCON, pp. 328–338.
Salesforce (2019): Entdecken Sie die weltweit führende Marketingplattform für intelligente Customer Journeys, published at salesforce.com, https://www.salesforce.com/de/products/marketing-cloud/overview/#, retrieved June 2019.
SAP (2019): SAP Hybris – Commerce Cloud, published at sap.com, https://www.sap.com/germany/documents/2017/08/d0f22d3d-cd7c-0010-82c7-eda71af511fa.html, retrieved June 2019.
Schirrmacher, Dennis (2019): Gehackte Websites: 620 Millionen Accounts zum Verkauf im Darknet, published at heise.de, https://www.heise.de/security/meldung/Gehackte-Websites-620-Millionen-Accounts-zum-Verkauf-im-Darknet-4305517.html, retrieved June 2019.
Siegmann, Nate (2017): Pokemon GO Paved Way for Mobile Gaming, published at mynorsecode.com, http://mynorsecode.com/2017/10/17/pokemon-go-paved-way-for-mobile-gaming/, retrieved June 2019.
Spiegel.de (2017): "WannaCry"-Attacke – Fakten zum globalen Cyberangriff, published at spiegel.de, https://www.spiegel.de/netzwelt/web/wannacry-attacke-fakten-zum-globalen-cyber-angriff-a-1147523.html, retrieved June 2019.


Strathmann, Marvin (2015): Google weiß, was Sie letzten Sommer getan haben, published at focus.de, https://www.focus.de/digital/internet/datenkrake-ueberlisten-google-weiss-was-sie-letzten-sommer-getan-haben_id_4037493.html, retrieved June 2019.
Tyson, Matthew (2018): What is the JRE? Introduction to the Java Runtime Environment, published at javaworld.com, https://www.javaworld.com/article/3304858/what-is-the-jre-introduction-to-the-java-runtime-environment.html, retrieved June 2019.
Weidemann, Tobias (2018): Cyber Monday und Co.: Wie sich Kunden optimal auf die Verkaufsschlacht vorbereiten, published at t3n.de, https://t3n.de/news/cyber-monday-kunden-shopping-1125845/, retrieved June 2019.
Wiehr, Hartmut (2015): Die 10 größten Vorteile von Server-Virtualisierung, published at computerwoche.de, https://www.computerwoche.de/a/die-10-groessten-vorteile-von-server-virtualisierung,2296941, retrieved June 2019.
Zores, Robert and Marco Koppik (2019): Multi-Cloud Blockchain in the Food Sector, talk at a joint conference of Rewe and Arvato, published at logistik.tu-berlin.de, https://www.logistik.tu-berlin.de/fileadmin/fg175/Downloads/190412_09_Koppik_Multi-Cloud_Blockchain_in_the_Food_Sector-TU_Berlin-short.pdf, retrieved June 2019.

5  Cloud IT vs. Classic IT – Calculation for Controllers

Abstract

Software applications have a major influence on the cost structure of companies' business models. The costs incurred during the development of an application essentially determine the investment risk. The fixed costs for operating and maintaining IT resources primarily influence the early phases of business development. Marginal costs become particularly relevant when the business model scales, i.e. grows, for example globally: if marginal costs are lower than those of competitors, a higher profit is generated at the same market prices and price wars can be endured for longer. All of the cost factors mentioned can be significantly improved by transforming an on-premises application into a cloud application. The effort incurred by the migration is usually overcompensated by the advantages after a certain time, especially if the competitive advantages resulting from the transformation are particularly relevant for the core business.

5.1 A Practical Example: Outsourcing Invoice Management

The changed revenues and costs ultimately decide whether a business model should be built in the cloud or not. In case of doubt, a colleague from controlling will stand at your door and ask for a calculation of the economic parameters that change when a company application runs in the cloud. Such a calculation is illustrated in the following section using a real example: we first describe the business model of a service provider that runs a specific application on a classic, logical architecture. In a second step, this application is compared to a cloud variant. For this purpose, the relevant

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_5


Fig. 5.1  Business model of a service provider for "process management for purchase invoices" (the international client's invoices are scanned, assigned, checked, edited and transferred)

measures of the cloud migration are briefly explained, and the new cost structures¹ are presented and compared with those of the classic application.

¹ The quoted costs for the old application are based on actual calculations from a traditional data center in 2015. The costs for the new application are based on Microsoft Azure pricing in April 2019.

Figure 5.1 shows the value creation process of a service provider that takes over the management of purchase invoices for its customers: A customer hands over the invoices of its suppliers, which arrive in paper and digital form, to the service provider. The service provider scans the documents – if necessary – and puts them into a standardized form. Afterward, the basic information on the invoices is recorded so that they can be assigned to the correct clerks according to specialist area, country and language. The clerks check the invoices and decide how to proceed: Either transfers are initiated, the documents are assigned internally to other employees, or they are sent back to the customer.

In this business model, the service provider has to overcome several challenges (see Fig. 5.2), in which its IT application supports it:

• Many different customers: The service provider wants to serve more than one customer at the same time. On the one hand, the information on the individual customers must remain strictly separated during processing; on the other hand, the application should not be operated separately for each customer, for cost reasons. In addition, each customer has different requirements for the process and, above all, these processes must be compatible with the customers' IT systems.
• Fluctuating task load: The amount of transactions to be processed fluctuates greatly. The load is highest during the quarterly and annual financial statements; in the months in between, the workload tends to be low. The task load must be able to be processed by the staff at any time.
• Specific processes: The processes are individual for each customer and for each supplier of the customer and depend on many other factors.
• Different skills: There are very diverse requirements for the skills of the employees. Some tasks are very simple and require only a few qualifications, while others require specific training and can therefore only be performed by a few employees.
• Globally distributed delivery: The customers themselves are global companies and need a service provider that is just as globalized. Purchasing invoices need to be processed in different languages and according to specific legal frameworks.

Fig. 5.2  Challenges of global outsourcing of purchasing invoice management

The software has a significant influence on the way the service provider can respond to these challenges. If the customer's IT interfaces are well served, the effort required at the start of the project is reduced, both for the customer and the service provider. If the customers are securely separated at the software level (so-called "client separation"), there is no need to maintain different hardware environments and the overall costs for ongoing operations are significantly reduced. If the individual processes of the customer can be well taken into account in the software, more automation is possible and personnel costs fall. If the employees and their skills can be well mapped in the system, the fluctuating workload can be better distributed across the many delivery locations and the overall costs are reduced. The quality of this process management application therefore has a major influence on the competitiveness of the service provider.


Fig. 5.3  Logical architecture of the application (presentation layers for customer and employee, integration into customer systems, reporting, process logic, and data storage with storage and databases)

5.2 Features of the Classic Application

The service provider's application consists of six logical units (see Fig. 5.3):

• Via the presentation layer, the customers' employees can interact with the application
• A second presentation layer supports the service provider's employees in their work
• One area takes care of the integration of the application into the customer's IT landscape
• Comprehensive reporting functions ensure transparency for all stakeholders
• In the actual process logic, the process steps of invoice processing are mapped and partially automated
• On the level of data storage, the documents are stored in the storage and the transaction data is stored in databases

5.2.1 Architecture and Fixed Operating Costs

In the initial scenario, the application is operated in a classic data center; according to Fig. 4.9 in Sect. 4.4.4, the automation level is "virtualized". This means that only the computing power is virtualized using so-called virtual machines (VMs) and shared with other customers. All other components such as the network, storage, databases and operating system are provided traditionally – along with the disadvantages described in Sect. 5.4. For this application, this specifically means that 20 servers with a total of 120 processors, 686 GB of RAM and 22 TB of data storage are kept available. These capacities are


Table 5.1  Calculation of the old application (up to three million transactions per month)

Component                          Number   CPU   RAM (GB)   Storage (GB)   Cost per month
Presentation layer                                                            2,569.60 €
  Web server                       4        4     14                          2,569.60 €
Process logic                                                                29,900.80 €
  BizTalk server                   4        64    512                        29,900.80 €
Data management                                                               7,774.60 €
  SQL server                       6        16    80                          4,905.60 €
  Storage                                                    22,000           2,869.00 €
Other system components                                                       7,189.05 €
  MSMQ (message queue)             2        4     16                            350.40 €
  Active Directory                 2        16    32                            934.40 €
  Active Directory B2C             2        16    32                            934.40 €
  Load balancer                    1                                            990.09 €
  Firewall                         2                                          2,586.20 €
  DDoS protection                  1                                          1,026.15 €
  Private network DMZ              1                                            367.41 €
Operations                                                                    5,726.84 €
  Operation of the application                                                4,470.44 €
  User support                                                                1,256.40 €
Total                                       120   686        22,000          53,160.89 €

provided independently of the actual use of the application and are largely based on an estimate of the expected maximum demand. Based on the experience of the people involved, this demand was estimated at around three million transactions per month.

Table 5.1 shows an example calculation for the operation of this application. The costs applied correspond to empirical values and are based on a current calculation (as of 2019). By far the largest single cost item, at almost EUR 30,000, is the BizTalk server. This is a tool provided by Microsoft that is specifically designed to exchange messages between different applications within large companies (Huffman 2009). The BizTalk server is thus the foundation for automating cross-application processes. The next largest cost items are the databases (SQL Server) at about EUR 5000 and the operation of the overall application at about EUR 4500. Other system components such as firewalls and tools for user management (Active Directory) add up to over EUR 7000.

The total of over EUR 50,000 is largely a fixed cost pool. The service provider's business model is therefore dependent on high and predictable revenues, or it runs the risk of operating at a loss and inefficiently if fewer invoices come in. Due to its fixed-cost-oriented cost structure, the service provider will try to agree on the highest possible fixed prices with its customers. These, in turn, are a risk for the customer, who cannot be sure whether outsourcing invoice management will actually bring the hoped-for internal savings. These conflicting interests cause some discussion in classic contract negotiations and accordingly take up a lot of time.
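The fixed-cost character of this pool can be made tangible with a quick sketch (my own illustration, using the monthly total from Table 5.1): because almost nothing varies with usage, the cost per transaction depends entirely on how busy the month is.

```python
# Illustrative sketch: how a largely fixed monthly cost pool of
# ~EUR 53,161 (Table 5.1) translates into unit costs at different
# utilization levels of the classic application.
FIXED_MONTHLY_COSTS = 53_160.89  # EUR per month, from Table 5.1

def unit_cost(transactions_per_month: int) -> float:
    """Average cost per transaction when (almost) all costs are fixed."""
    return FIXED_MONTHLY_COSTS / transactions_per_month

for volume in (500_000, 1_000_000, 3_000_000):
    print(f"{volume:>9,} transactions -> {unit_cost(volume):.4f} EUR each")
```

At the design load of three million transactions the fixed pool still implies about 1.8 cents per transaction; in a quiet month with only half a million transactions, the same pool implies more than ten cents each – which is exactly the risk the service provider tries to price into fixed contracts.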

Fig. 5.4  Number of transactions and monthly costs for IT (monthly costs for operations and projects plotted against transactions in millions, Jul 2018 – Jul 2021)

5.2.2 Structure and Expansion of the Application

In the best case, about six months pass until the application is adapted to the needs of a new customer. Most of the time, customer and service provider need about six weeks to find the right people for the team, free them from their current tasks and start the project. Add to that another six weeks for detailed planning and ordering processes, and another two months for project execution. An additional month is scheduled for testing and making changes. The costly infrastructure resources must be in place early in the project to set up the processes and test the execution of transactions. Only when the application goes live does the number of transactions increase to the level for which the infrastructure is designed (see Fig. 5.4).

A typical transaction history looks something like this: Starting in January of the first year, the customer begins to perform a relevant number of transactions on the system. Towards the end of the year, there are the most transactions due to the year-end closing, whereas in the summer months there are the fewest. During the second year, the customer involves more country subsidiaries in outsourcing the purchasing process. As a result, the number of transactions peaks in December of the second year at over five million for the time being.

The total IT costs consist of the monthly IT operating costs (Sect. 5.2.1) and the project costs incurred. In addition to the set-up project before the start of the first year (costs: EUR 210,000), there is a second major task for the system operation staff: In the second year, the number of transactions rises above the three million mark due to the onboarding of the customer's additional country subsidiaries. In the second half of the year, the service provider therefore begins to expand the infrastructure: In a project lasting about three months, the application is adapted to the needs of the additional national companies – this costs around EUR 105,000. This expense is divided between employees who take care of the procurement and setup of the additional infrastructure (buyers, administrators, infrastructure architects and project managers) and employees who take care of the setup of the new purchasing management processes (process experts, developers, project managers).

Figure 5.4 shows that the costs move largely independently of the number of transactions over the entire term of three and a half years. In the beginning, there is a large block of costs with comparatively little business benefit. These are fixed costs that are incurred when the infrastructure is expanded and/or a project is added to the outsourcing processes.
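The cost timeline described above can be sketched as a simple model (my own simplification: the set-up project of EUR 210,000, a fixed monthly pool of about EUR 53,161, and the EUR 105,000 expansion project booked in the second year; the exact month of the expansion is an assumption).

```python
# Rough sketch of the classic application's cumulative cost timeline.
# Figures from the text; spreading the expansion at month 18 is my own
# simplifying assumption.
SETUP = 210_000.0       # EUR, set-up project before year 1
EXPANSION = 105_000.0   # EUR, expansion project in year 2
FIXED = 53_160.89       # EUR per month, fixed operating costs (Table 5.1)

def total_cost(months: int) -> float:
    """Cumulative IT costs after `months` of operation."""
    cost = SETUP + FIXED * months
    if months >= 18:    # expansion project in the second half of year 2
        cost += EXPANSION
    return cost

print(round(total_cost(12)))  # cumulative costs after the first year
```

Even before a single large transaction volume has been processed, the set-up project alone accounts for a substantial share of the first year's cumulative costs – the "large block of costs with comparatively little business benefit" from the text.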

5.2.3 Average and Marginal Costs

A relevant variable when scaling classic applications is the capacity limit of the existing infrastructure. In this example, it was initially estimated at three million transactions per month and must be expanded to six million towards the end of the second year. In reality, such estimates are much more difficult to make because transactions are not evenly distributed over the days of a month and the hours of a day. In addition, bottlenecks can also occur in individual components such as the user interface or the database, crippling all other parts of the application. This problem is most prevalent in monolithic applications, which are discussed in Chapter 6. Both in theory and in practice, problems are likely to occur as a result of the increased workload: The application becomes slower, and the system may even fail completely again and again – the expansion can therefore no longer be avoided.

The average costs fall sharply with increasing useful life and fluctuate in the opposite direction to the development of the transactions (see Fig. 5.5). The set-up costs at the beginning of the contract term are very significant, while the expenditure for the expansion of the infrastructure hardly changes the average costs.

Fig. 5.5  Average cost per month and capacity limits of the old application

The marginal costs of the company in the baseline scenario cannot be analysed in any meaningful detail. The cost structure is largely fixed. Only at the time of the expansion of the infrastructure is there a very high deflection of an additional EUR 20,000 per month. After that, up to six million transactions per month can be mapped and the marginal costs are zero again for a certain period (see Fig. 5.6).

Fig. 5.6  Marginal costs in the old application

In summary, the old application can operate in a defined range at zero marginal cost, but this advantage is bought with high one-off costs in the preparation phase and high fixed costs in the operating phase.
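The step-shaped cost curve behind Figs. 5.5 and 5.6 can be written down directly (a sketch under the chapter's assumptions: a fixed pool below the 3M limit and a EUR 20,000/month jump at the expansion):

```python
# Sketch of average vs. marginal cost for the classic application:
# a fixed monthly cost pool, plus a EUR 20,000/month step once the
# infrastructure is expanded (numbers from the text).
FIXED_BEFORE = 53_160.89   # EUR/month up to 3M transactions
EXPANSION_STEP = 20_000.0  # additional EUR/month after the expansion

def monthly_cost(transactions: int) -> float:
    """Step-fixed cost curve: capacity is doubled once the 3M limit is hit."""
    if transactions <= 3_000_000:
        return FIXED_BEFORE
    return FIXED_BEFORE + EXPANSION_STEP

def average_cost(transactions: int) -> float:
    return monthly_cost(transactions) / transactions

# Marginal cost of one extra transaction is zero below the capacity limit,
# then jumps by the full expansion amount at the limit itself:
print(monthly_cost(3_000_001) - monthly_cost(3_000_000))
```

This is exactly the pattern in Fig. 5.6: marginal costs of zero over wide ranges, interrupted by a single large deflection when capacity must be bought in bulk.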

5.3 The Cloud Transformation of the Application

In practice, the cloud transformation process begins with an application analysis by the cloud solution architect. This examines the customer's requirements, the business model, the current price and cost structure, the operational requirements as well as security and compliance issues. Depending on the current architecture of the application and the opportunities arising from the existing services of the cloud providers, various migration options are then run through. For the case described, there is a clear recommendation (see Fig. 5.7): Five of the six parts of the application can be transferred from classic infrastructure components (e.g., web server, SQL server, firewall) to corresponding platform services of the cloud providers with few changes. These platform services offer several advantages: On the one hand, they can be programmed in such a way that they automatically adapt their performance to the actual demand ("autoscaling"), and on the other hand, the transition to the cloud eliminates large parts of the operating costs.


Fig. 5.7  Migration concept for the outsourcing application (presentation layers, integration into customer systems, reporting and data management are lifted onto corresponding platform services (PaaS); the process logic must be newly created)

A 1:1 transfer ("rehost") of the BizTalk server, which maps the process logic, to infrastructure services (IaaS) would be technically possible, but the in-depth analysis of the situation opens up another option. Cloud provider Microsoft offers so-called "Logic Apps" (Microsoft 2019) for mapping process logic. These are used to map complex business processes across different applications and – as far as possible – to automate them. To create and improve processes, expensive programmers were still necessary for the old application – with Logic Apps, on the other hand, process experts can make changes themselves with little or no programming knowledge ("low code"). A migration of the BizTalk-based application therefore offers two advantages:

• The fixed cost block is dissolved in favor of platform services, which are billed on a transaction basis. The price per action is EUR 0.000022 (as of May 2019).
• Since trained clerks can create and improve their own processes, a business bottleneck in day-to-day operations is resolved. This eliminates the need for cross-departmental communication with programmers. When changes are necessary, they can be made faster and with less effort.

Overall, the cost structures of the application change significantly. Table 5.2 shows that most of the cost components of the new, cloud-based application are billed on a usage basis.

Table 5.2  Changed cost structures in the new application

Component (old)          Cost structure (old)    Component (new)              Cost structure (new)
Presentation layer
  Web server             Fixed costs per month   Web server                   Basic service as fixed costs; additional capacity is billed by the second
Process logic
  BizTalk server         Fixed costs per month   Logic Apps                   Depends on the number of transactions and the number of connectors
  Message queue (MSMQ)   Fixed costs per month   Service Bus                  Depends on the number of messages
Data management
  SQL server             Fixed costs per month   Azure SQL Database           Fixed costs per month (transaction-based variants are also possible)
  Storage                Fixed costs per month   Storage                      By usage
Other system components
  Active Directory       Fixed costs per month   Azure Active Directory       By number of users
  Active Directory B2C   Fixed costs per month   Azure Active Directory B2C   By number of authentications
  Load balancer          Fixed costs per month   Application Gateway          By hours and amount of data transfer
  Firewall               Fixed costs per month   Azure Firewall               By hours and amount of data transfer
  DDoS protection        Fixed costs per month   Azure DDoS Protection        Fixed costs per month plus number of resources to be protected (if more than 100)
  Private network DMZ    Fixed costs per month   Outbound traffic             Depends on traffic
Operations
  Operation of the       Fixed costs per month   Operation of the             Fixed costs per month
  application                                    application
  User support           Fixed costs per month   User support                 Fixed costs per month

As shown in Fig. 5.8, this results in simple scenarios for the migration to platform services (PaaS) for all technical components. The effort for this is estimated at 20 person-days (at EUR 1000 per person-day this corresponds to EUR 20,000), including testing, migration and particularly intensive support in the start-up phase after migration.

A direct transfer of the processing logic to the new platform service is not technically possible. The outsourcing service provider has to set up a total of 180 processes anew in the new application. The estimated effort per process is one person-day. In total, this means 180 person-days for the transfer of the technical processes of invoice processing to the cloud. The financial outlay is EUR 120,000, as a lower daily rate is charged for process experts than for programmers (Sect. 5.2.2). In addition, training and expenses for project management and for testing the new processes amount to EUR 90,000 and 90 person-days (see Table 5.3).

The technical expenses of the cloud migration are conservatively estimated, but in any case make up only a small part of the total costs. The migration of the processing logic is the most costly part. Compared to a project implemented traditionally with a cross-silo team of developers, project managers and subject matter experts, the 180 person-days still turn out to be relatively "cheap". The effort for project management and training ultimately depends on how well trained the process experts are and how independent they are – accordingly, the effort can vary.
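How far the dissolved fixed-cost block carries can be estimated with a deliberately simplified break-even sketch (my own calculation; note that real Logic Apps billing also charges connector executions separately, so the effective per-action price is higher than the list price used here):

```python
# Illustrative break-even sketch (my own simplification): compare the old
# fixed BizTalk block (EUR 29,900.80/month, Table 5.1) with purely
# per-action billing at the quoted list price of EUR 0.000022 per action.
BIZTALK_FIXED = 29_900.80      # EUR per month
PRICE_PER_ACTION = 0.000022    # EUR, list price as of May 2019

def process_logic_cost(actions_per_month: int) -> float:
    """Usage-based cost of the process logic under this simplified model."""
    return actions_per_month * PRICE_PER_ACTION

break_even = BIZTALK_FIXED / PRICE_PER_ACTION
print(f"{break_even:,.0f} actions/month")  # on the order of a billion
```

Under these (optimistic) assumptions the fixed BizTalk block would only pay off at volumes far beyond the scenario's three to six million transactions per month – which is why dissolving it in favor of per-use billing is attractive here.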

Fig. 5.8  Monthly costs of the new application

Table 5.3  Overview of migration efforts

Technical migration                   20,000 €    20 person-days
Migration of the process logic       120,000 €   180 person-days
Project management, training etc.     90,000 €    90 person-days
Total                                230,000 €   290 person-days

5.4 Features of the Cloud-Based Application

The logical software architecture of the application does not change as a result of the cloud transformation. The application parts presentation layer, integration into customer systems, reporting, data management and process logic remain. The underlying physical architecture, on the other hand, changes fundamentally: The service provider for purchase invoice outsourcing no longer has to operate or maintain a single infrastructure component itself. It no longer needs specialists for servers, storage, networks, Active Directory, load balancers and network-oriented security issues. It can also do without its own data center – at least it no longer needs one for the most important application in its core business. The remaining operating services focus on the application level. They are thus much easier to automate and are much more closely aligned with the needs of the core business.


5.4.1 Fixed Operating Costs and Total Costs

The calculation of the new application is shown in Table 5.4. While all 13 elements of the cost calculation were fixed costs in the old application, there are only five in the new, cloud-based application: web server, SQL database, firewall, and application operation and support. In total, the fixed costs add up to EUR 17,365 per month, and they remain constant for this application even with a higher number of transactions. All other costs depend on the degree of use of the platform. However, the linear link between costs and the number of transactions shown in Figs. 5.8 and 5.9 is a simplification.

Table 5.4  Calculation of the new application for three million transactions

Component                     Configuration                                                  Cost per month
Presentation layer                                                                             2,073.20 €
  Web server                  Premium V2 tier; 4 P3V2 (4 cores, 14 GB RAM,                     2,073.20 €
                              250 GB storage) × 730 h; Linux OS
Process logic                                                                                  2,518.04 €
  Logic Apps                  100,000 action executions × 30 days                              1,548.60 €
  Service Bus                 Premium tier: 2 daily message units, 730 h                         969.44 €
Data management                                                                               13,897.94 €
  Azure SQL Database          Elastic pool, vCore purchase model, Business Critical tier,     13,443.68 €
                              provisioned, Gen 5, 2 × 32 vCore instances, 3-year reserved,
                              1000 GB storage, 2500 GB backup storage
  Storage                     Block blob storage, general purpose v2, ZRS redundancy,            454.26 €
                              hot access tier, 20 TB capacity
Other system components                                                                        4,918.34 €
  Azure Active Directory      Premium P1 tier, per-user MFA billing model, 50 MFA users,         512.28 €
                              25,001–100,000 directory objects, 730 h
  Azure Active Directory B2C  50,000 authentication(s), 1000 authentication(s)                    12.65 €
  Application Gateway         Web application firewall tier, large instance size:                680.88 €
                              2 gateway instances × 730 h, 0 GB data processed, 1 TB zone units
  Azure Firewall              Logical firewall units × 730 h, 2 TB inbound data transfer,      1,361.10 €
                              2 TB outbound data transfer
  Azure DDoS Protection       Protection for 100 resources, 4 TB data processed               2,206.02 €
  Outbound traffic            Virtual network with outbound traffic                              145.41 €
Operations                                                                                     2,021.46 €
  Operation of the application                                                                 1,021.46 €
  User support                                                                                 1,000.00 €
Total                                                                                         25,428.98 €
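As a quick plausibility check of Table 5.4 (my own recalculation), the five cost groups should add up to the stated monthly total:

```python
# Cross-check of Table 5.4: the five cost groups of the new application
# should sum to the stated monthly total of EUR 25,428.98.
groups = {
    "Presentation layer": 2_073.20,
    "Process logic": 2_518.04,
    "Data management": 13_897.94,
    "Other system components": 4_918.34,
    "Operations": 2_021.46,
}
total = round(sum(groups.values()), 2)
print(total)  # 25428.98, matching the table's total
```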

Fig. 5.9  Monthly total operating costs including projects

Table 5.2 shows the actual scaling factors per service. In practice, this simplification of the cost calculation can be quite problematic: If unrealistic assumptions are made at the beginning about how much the respective services will be used, unexpected changes in the cost structure can occur – in both a positive and negative sense. Figure 5.8 shows the development of the cost components of the new application depending on the transactions. In the beginning, the fixed costs still play a major role; with an increasing number of transactions, the share of variable costs increases.

5.4.2 Structure and Expansion of the Application

The effort required to migrate from the old architecture to a cloud-based architecture has already been described in Sect. 5.3. If the processes were set up again for a second, similar customer in the same application, significantly less effort would be required. Since all areas of the application are software-defined services, their setup can be described as software code and executed again and again. The technical setup for the new customer would therefore take only a few minutes to hours – and hardly any people would be needed. In addition, manual ordering processes and waiting times for physical delivery processes are eliminated.

The actual scaling of the IT components can be automated in 11 of the 13 cases, according to previously defined methods and limits. The document storage is simply used and paid for afterward based on actual consumption, and the Logic Apps are also billed according to the transactions actually executed. The servers for the presentation of the user interface as well as for the databases can be scaled automatically according to individually definable rules and react quickly to additional demand ("autoscale").

The processes for the additional national companies must still be set up manually in separate projects. Subject matter experts create these processes at EUR 1000 per day each for new customers. For new customers, the scope of services, the process flow and the systems to be integrated must be specified. In this case, a total project cost of EUR 60,000 is assumed by the end of the second year. This improves the competitive situation of the new application compared to the old one. The difference becomes particularly apparent at the beginning of scaling (see Fig. 5.9).
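An autoscale rule of the kind described above can be sketched in a few lines (the thresholds, instance sizes and limits here are my own illustrative values, not taken from the book or from any specific cloud provider's API):

```python
# Minimal sketch of an autoscale rule: the instance count follows actual
# demand between a floor and a ceiling. All parameters are illustrative.
def target_instances(req_per_min: int, per_instance: int = 10_000,
                     min_inst: int = 2, max_inst: int = 20) -> int:
    """Scale out so each instance handles at most `per_instance` requests."""
    needed = -(-req_per_min // per_instance)  # ceiling division
    return max(min_inst, min(max_inst, needed))

print(target_instances(5_000))    # quiet month: floor of 2 instances
print(target_instances(95_000))   # year-end peak: 10 instances
```

In practice, such rules are configured in the cloud provider's platform rather than hand-coded, but the logic is the same: capacity tracks demand instead of being provisioned for the estimated maximum.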

5.4.3 Average and Total Costs

A significant advantage of the cloud application over the classic application is that the cloud application can already operate at significantly lower average costs in the initial phase. The classic application, on the other hand, only reaches the average costs of the cloud application at a very high number of transactions. Figure 5.10 shows the cumulative average costs per transaction after the go-live of the application.

Fig. 5.10  Average costs as a function of transactions

The unit costs of the new application are constant from the beginning at EUR 0.0027 per transaction. They do not decrease even if there are more transactions, because the cloud provider charges fixed amounts per use for each of the services mentioned. From a very large number of transactions onwards, cloud providers are willing to negotiate volume discounts (Linthicum 2019). Before this happens, cloud architects first try to reduce costs by making improvements in the software architecture. For example, cheaper storage components are used or the same calculations are executed with less workload. Only when the demand for IT services is very high can insourcing become worthwhile again – sometimes for cost reasons, but often also to be able to better implement specific technical requirements. Dropbox, for example, decided in 2017 to invest in its own data centers and to bring back a large part of its applications and data from AWS (Miller 2017).

A look at the summed costs over the entire course of the application life cycle shows the differences most clearly (see Fig. 5.11). The new application can be introduced with less effort than the old one. It starts directly with lower fixed costs and does not give up its lead, due to its lower average marginal costs.
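The crossover point at which the classic application would catch up can be estimated with a back-of-the-envelope calculation (my own sketch, combining the fixed pool from Table 5.1 with the cloud unit cost quoted above):

```python
# Back-of-the-envelope: at what monthly volume would the classic
# application's average cost (fixed pool of ~EUR 53,161) fall to the
# cloud application's constant unit cost of EUR 0.0027 per transaction?
FIXED_OLD = 53_160.89   # EUR/month, Table 5.1
UNIT_NEW = 0.0027       # EUR per transaction, cloud application

crossover = FIXED_OLD / UNIT_NEW
print(f"{crossover:,.0f} transactions/month")  # roughly 19.7 million
```

That volume lies far above the six million transactions the expanded classic setup supports in this scenario – which is why, in practice, the classic application only catches up at very high and very evenly distributed volumes.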

5.5 An Overview of the Advantages and Disadvantages of Transformation

Fig. 5.11  Summed costs over the entire application life cycle

In a comparison of the most important competitive factors, the cloud application performs significantly better in most aspects. However, it also brings with it some specific disadvantages.


5.5.1 Comparison of Financial Factors

The effort required to set up the cloud application is significantly lower; Table 5.5 shows a cost reduction of 62%. This is mainly because cloud resources can be set up faster and cheaper and because new purchasing processes can be set up by the subject matter experts themselves. The total costs for the entire three-and-a-half-year term are also significantly lower. This is mainly because the old application incurs high fixed costs at the beginning without the provided resources being used. The average marginal cost of the cloud application is also significantly lower. It is particularly relevant for the service provider, however, that the average costs fall to the target level very early in the process. In Fig. 5.12 this point is indicated by the first arrow.

The disadvantages of the cloud become visible when the unit costs are considered: In the cloud, the service provider pays for basically every transaction, whereas the traditional application operates under certain conditions with unit costs that are effectively zero. If the number of transactions is below the maximum load (see Fig. 5.6), only the additional electricity costs for the execution of the computing operations are actually incurred in the traditional data center. If the monthly utilization of the application rises close to the capacity limit, the average costs drop significantly and almost reach the level of the cloud application (see arrow 2 in Fig. 5.12). If the number of transactions is high and evenly distributed and the infrastructure is optimized accordingly, it can be significantly cheaper to operate the business model in the traditional way rather than in the cloud.

Table 5.5  Comparison of the most important cost components

                                      Traditional application   Cloud application   Cost reduction
Number of transactions over the term  80,750,001                80,750,001
Project costs (in total)              315,000 €                 120,000 €           61.9%
Total costs over the term             2,734,596 €               1,048,732 €         61.6%
Average cost per transaction          0.0339 €                  0.0130 €
Average marginal cost                 0.0069 €                  0.0042 €            39.2%
Cheapest unit cost                    0.0005 €                  0.0027 €            −439.0%

Comments: The costs of the transformation from the old to the new application are not included. The negative reduction in the cheapest unit cost arises because the cloud provider earns money on every transaction, whereas in the traditional application marginal costs come mainly from the electricity used.
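The derived figures in the comparison can be recomputed directly from the raw values (my own recalculation; with the rounded marginal costs the reduction comes out at 39.1%, close to the table's 39.2%, which was presumably computed from unrounded values):

```python
# Recomputing the derived figures of Table 5.5 from its raw values.
transactions = 80_750_001
old = {"project": 315_000, "total": 2_734_596, "marginal": 0.0069}
new = {"project": 120_000, "total": 1_048_732, "marginal": 0.0042}

def reduction(a: float, b: float) -> float:
    """Cost reduction of b versus a, in percent (one decimal)."""
    return round((1 - b / a) * 100, 1)

print(reduction(old["project"], new["project"]))    # 61.9
print(reduction(old["total"], new["total"]))        # 61.6
print(reduction(old["marginal"], new["marginal"]))  # 39.1
print(round(old["total"] / transactions, 4))        # 0.0339 EUR/transaction
print(round(new["total"] / transactions, 4))        # 0.013 EUR/transaction
```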

Fig. 5.12  Relevant price and quality differences in one image

5.5.2 Comparison of Functional Factors

Running an application at the edge of the infrastructure's capacity limits and thus optimizing the economic factors is a tempting game. However, this approach carries the risk that the application will no longer function efficiently precisely when a particularly large number of users require its functions. In the case of arrow 3 in Fig. 5.12, such a situation exists: the number of transactions exceeds the capacity limit of the infrastructure. For the provider of the invoice management service, this would mean that the clerks would no longer be able to perform their tasks, or only much more slowly. A small cost saving in IT would therefore cause productivity to collapse at another point in the process. In practice, the fact that the cloud helps a company to quickly and finely adjust the resource situation to the current demand means that cloud applications are less likely to reach capacity limits and thus perform better in the particularly usage-intensive phases.

It is particularly advantageous in the practice of this business model that the process owners can define their own processes. With the appropriate tools, they can assume full responsibility for the process, both vis-à-vis the customer and internally for implementation. Whereas previously they were always dependent on the availability of IT experts, they can now map the customer's requirements in the system themselves and without time-consuming internal coordination.


5  Cloud IT vs. Classic IT – Calculation for Controllers

5.6 Conclusion

Classic applications and cloud applications differ significantly in their economic characteristics, even if they may have the same range of functions. Neither of the two variants can be described as better across the board. Traditional IT brings benefits especially when the bulk of the investment in infrastructure and operational organization has already been made, demand for computing power is high and evenly distributed throughout the day and year, and no major changes are expected in the future. Cloud IT, on the other hand, plays out its advantages in the early phases of software as well as with large load fluctuations. The average cost per transaction drops to low values very early on, and changes can be made faster and more cheaply. The amount of damage in the event of bad investments is lower because companies can quickly scale back the business without any obligations for operating staff or data center real estate.²


² It should be noted that the presentation chosen in this chapter does not compare digital with non-digital business models, as was the case in Chapters 2 and 3. Rather, an already digitized business model that was operated on an on-prem solution was compared here with an outsourcing of this business model to the cloud. If a business model is completely digitized with the help of cloud technologies as described in Chap. 3, the cost structures shown in this chapter also arise, with virtually zero marginal costs.

6 Mastering Software as a Core Competence

Abstract

The software value chain is at the heart of every digital business model: software is created, operated and scaled. The better a company masters this value creation, the better its chances of survival in digital competition. The most important levers for optimizing the software value chain are the virtualization levels used, the sourcing options selected, the software architecture chosen, the process flows in operation and development, and the way the people involved are managed within the organization.

6.1 Everything Becomes Software

How can traditional companies survive in a market that is being shaken up by digital disruptors with new business models, products, and processes? These disruptors have a decisive advantage: they are planned on a greenfield site. From the selection of employees to work processes and equipment, all organizational areas start from scratch. They have the freedom to think about everything in a new and completely different way than was previously customary in their industry. Depending on their willingness to take risks, entrepreneurs can change the pace of growth at will, scaling slowly or quickly as needed. New business models are tested on the market as Minimum Viable Products (MVPs). In this way, the company quickly finds out what works with the market and customers – and what does not. The sunk costs invested thus remain manageable.

These disruptors have another advantage: they build on the technical and organizational possibilities of the present. This is the best starting point for putting pressure on traditional companies with their grown – and sometimes overgrown – structures.

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023
R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_6


Much more difficult than building something entirely new is managing an existing company and transforming it under the conditions of digitalization. The leadership has to work with the given means: employees, technology and workflows have been aligned for years, sometimes decades, to a certain way of doing business – making any change difficult. Traditional companies face the Herculean task of having to go through a digital transformation process (Frank 2019).

To be able to manage this transformation process in a target-oriented manner, managers need to understand the interrelationships in two sub-areas:

• They should know the application possibilities and business potential of modern software architecture.
• They should know how agile working methods work and how they can be used most effectively to bring products to market quickly.

Many managers need to catch up in both areas, as this content was neither part of their training nor did it play a major role in their previous areas of responsibility. This gap is to be closed in this chapter. To this end, it offers an insight into the most important connections between the technology used, the processes of the organization, and the competitiveness of a company.

The Software Value Chain and the Overall IT View
The "stack" – i.e. the elements of IT value creation from the infrastructure through the operating systems to the application – was described in Sect. 4.3. The software value chain, with the steps "creating software", "operating software" and "scaling software", is at the level of the application and was described in the variants "without cloud" in Sect. 4.2 and "with cloud" in Sect. 4.5. At each point in the software process, the underlying IT components are accessed (see Fig. 6.1): in software creation, the IT components are selected and built, and the software code is run on them. In software operation, the components are maintained and serviced, and in scaling, resources are added or removed again.
Fig. 6.1  The software process accesses IT value creation (the software value chain – creating, operating and scaling software – uses infrastructure and middleware components from the stack: security & integration, runtimes & libraries, databases, operating system, virtualization, computing power, memory, network)


In the age of the cloud, the IT resource layers that lie beneath the software process are becoming less and less important. From a business perspective, mastery of the software process therefore becomes the decisive variable. The cloud fundamentally changes the software process, because in the cloud all relevant properties of the required IT components can be defined and controlled automatically. With the help of so-called "Infrastructure as Code", it is described which computer types are started in which quantity and which databases are used in which way. It is also possible to define how to react if IT components fail or if more resources are needed due to a customer rush. In the age of software-defined IT, the software application becomes the control center for all other IT components. All the explanations that now follow relate to how companies can influence software-related issues in order to optimize their business models.
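The Infrastructure-as-Code idea can be illustrated with a toy sketch: the desired state of the infrastructure is plain data in the code base, and a reconciler compares it with the actual state and derives the necessary actions. All names and the reconciler itself are invented for illustration; real tools such as Terraform or AWS CloudFormation work on the same declare-compare-converge principle:

```python
# Toy Infrastructure-as-Code reconciler (illustrative only): the desired state
# is declared as data, and the difference to the actual state yields actions.
# Only scale-up is handled here; real tools also destroy surplus resources.

desired = {"web-vm": {"count": 4, "size": "small"},
           "db":     {"count": 1, "size": "large"}}

actual = {"web-vm": {"count": 2, "size": "small"}}

def plan(desired: dict, actual: dict) -> list[str]:
    """Derive start actions from the difference between the two states."""
    actions = []
    for name, spec in desired.items():
        have = actual.get(name, {"count": 0})["count"]
        if have < spec["count"]:
            actions.append(f"start {spec['count'] - have} x {name} ({spec['size']})")
    return actions

print(plan(desired, actual))  # → ['start 2 x web-vm (small)', 'start 1 x db (large)']
```

The same mechanism covers the failure case mentioned above: if a component crashes, the actual state diverges from the declared state and the next reconciliation run restarts it.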

6.2 Why Software Is Such a Challenge for Managers

On the surface, software behavior seems like a predictable thing: you say "Alexa, turn on the light," and the light comes on. You approach the garage entrance, and the gate opens. With your PC, things can get trickier: when you plug the old printer into the new computer, printing can either work right away – or not. Sometimes the printer is only compatible with an old operating system, and that requires an additional driver.

For software to function reliably and predictably from the customer's point of view, all relevant components must work. In the case of the printer, this means the hardware of the PC, the operating system, the printer, and the printer driver. These areas of responsibility can be delineated relatively well: Lenovo, for example, supplies the PC, Microsoft the operating system, and the printer solution may come from Canon. Microsoft's operating system provides well-defined and commonly known APIs that Lenovo and Canon developers can use to design and build their hardware and drivers. The processes between these companies have been in place for years, so the likelihood that the installation will work smoothly is relatively high.

Within a single company, these relationships are much more difficult. The reason for this is usually to be found in the development process of the company's applications. In the above example, Microsoft had printer manufacturers in mind from the beginning. Consideration was given to: how can independent third-party products work with our operating system simply and reliably, without having to do individual projects with each manufacturer? The typical enterprise application, on the other hand, was usually developed to address a specific business issue. Most often, such applications are created as part of a single project that defines the local and near-term requirements for the software. The software is created without considering the long-term organizational implications for internal and external collaboration. This internal compartmentalization of the project results in a monolithic enterprise application.

In software development, a "monolith" is an application whose functional elements cannot be separated from each other (ITWissen 2019). If a small function is added to such a monolithic application, this has, in the worst case, an impact on the entire software. Every time a change is made, the software has to be fully tested and rolled out again. During this maintenance time, it is not available, neither for customers nor for employees. If an error occurs in one part of the software or a subcomponent of the infrastructure is overloaded, the entire application usually does not work (Fig. 6.2).

In many cases, monolithic applications are several decades old and were designed by people who may no longer even work in the company. Over the years, this ponderous software kludge has been run and developed by different teams and/or different vendors. The code is barely or poorly documented. Figuratively speaking, monolithic software architectures resemble a church that has been built over centuries by different master builders – each of whom has realized himself in it. The application only works in conjunction with very specific IT components, which must be available in specific versions. The operations team is so busy with maintenance that hardly any new functions can be implemented. The users have therefore created an elaborate ecosystem of Excel lists and process logic to solve at least some of their problems interacting with the application. For certain key functions in the software, the only hope is that the long-retired freelancer will still be available to the company, as he is the only one capable of making changes.

No manager dares to touch such applications, let alone have them programmed from scratch. The effort involved would not fit into any annual budget, and on a personal level it would be a project with which a manager can only lose: a successful reprogramming would merely restore the old functionality.
An unsuccessful project, on the other hand, would really mess up the company's processes. Badly programmed software is therefore like the nine-headed Hydra of Greek mythology (see Fig. 6.3): if one head is cut off, two new ones grow back. Half-hearted attempts to get the monster under control only make the snake more complicated and dangerous. The wise corporate leader is therefore best advised either to leave it alone altogether or to smash it in a single concerted action – just as Heracles and Iolaus did.

Fig. 6.2  Monolithic software contains many dependencies (feature-level teams can be decoupled only with difficulty; many, partly unknown dependencies at all levels of the stack lead to increased organizational complexity and lengthy development and test processes)

Fig. 6.3  Monolithic software – the Hydra in the enterprise

Monolith Applications Are Bottlenecks of the Organization
The more successful and important a software application is for a company, the more frequent and relevant the change requests from customers and internal stakeholders usually are. There are new compliance requirements, the customer demands faster response times, a competitor has developed a new feature that the company's own software should also offer. In addition, costs should be reduced and there should be fewer crashes. If the development team meets some of these requirements, the new features have to be tested extensively, because any single change can have an impact on all other elements of the application, and any non-functional feature can cripple the entire application. Changes are therefore not welcomed by software managers. They are bundled, tested together, and released in one large, infrequent update (see Fig. 6.4).

Fig. 6.4  Dependencies within the software slow down the organization (updates every 6-12 months, focus on maintenance, few new features; managing the dependencies in development, test and operation becomes the bottleneck of complexity)
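Why adding developers rarely resolves this bottleneck can be made plausible with a rough back-of-the-envelope model (my own simplification, not a formula from the book): in a tightly coupled monolith, every component can in the worst case interact with every other, so the test surface grows roughly quadratically with size, while decoupled services expose only a few explicit interfaces.

```python
# Rough model of the complexity bottleneck (illustrative assumption):
# worst-case interaction paths that a change may require retesting.

def monolith_test_paths(components: int) -> int:
    """Worst case: every component pair can interact -> n*(n-1)/2 paths."""
    return components * (components - 1) // 2

def decoupled_test_paths(changed_services: int, interfaces_per_service: int) -> int:
    """Only the changed services and their declared interfaces are affected."""
    return changed_services * interfaces_per_service

print(monolith_test_paths(50))     # 1225 potential interactions to consider
print(decoupled_test_paths(1, 3))  # 3 interfaces to retest
```

The absolute numbers are arbitrary; the point is the growth rate, which is why bundled, infrequent releases are the rational response to a monolith.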

Fig. 6.5  Factors influencing the performance of software (virtualization layers, sourcing options, software architecture, process flows, people & organization)

This approach creates an organizational bottleneck. The various stakeholders in the company want more changes than they receive from the application owners. Increasing the number of employees working on the software rarely helps, because the necessary knowledge about the dependencies is usually concentrated in a few, already overburdened employees. Even with optimally documented monolithic applications, there is only limited scope for resolving complexity bottlenecks by adding more employees, because human organizations simply become less efficient with increasing size. Decision trees must then be drawn, governance policies drafted, and committees formed to keep track of and decide upon everything. It is uncertain whether the guaranteed additional costs will be offset to the same extent by more speed and new functions.

The performance of the software significantly determines the performance of the business model based on it. But how can companies improve the performance of software? The following five sections describe the essential corporate and technical factors that influence the performance of software (see Fig. 6.5). First, the virtualization levels and sourcing options of a company are presented. This is followed by the areas of software architecture, process flows and company organization. In all areas, it is shown what the development step away from monoliths towards agile software processes can look like.

6.3 Virtualization Layers

At the infrastructure level, the hardware is virtualized with the help of APIs and the corresponding cloud software. At the levels above – i.e., the operating systems and databases – it is, strictly speaking, more a matter of abstraction, because we are dealing with software that does not physically appear anyway. In this book, the terms virtualization and abstraction are used synonymously.

From an economic point of view, the decoupling (see Fig. 6.6) of the respective higher level of IT value creation from the levels below is decisive. Below the interface, value creation is automated, which enables billing in microtransactions. The use of the services by many customers enables uniformly high utilization of the existing infrastructure. This results in a low average cost per transaction. When choosing the right virtualization level, it is therefore necessary to weigh the required degree of individualization (above the API) against the economies of scale gained through the infrastructure shared with other customers.

Fig. 6.6  Decoupling enables scaling effects (automation and distribution over many customers enable usage-based billing and low costs per transaction; above the API, individualization remains possible – from IaaS through containers, PaaS and serverless to SaaS)

Figure 6.7 concretizes this consideration using the example of a container service: containers (Sect. 4.4.4) virtualize the operating system level. With this service, the infrastructure can be replaced, shut down, or started up again without affecting the higher levels of the stack. Depending on the need, cheaper and smaller server instances (a river ship) or large and expensive virtual machines (an ocean container ship) can be used. The exchange takes place within a few seconds; the application is encapsulated above it in a "container". It does not matter which operating system (ship) has loaded the container. A company that uses container technology for its application can adapt the IT resources provided as precisely as possible to the actual requirements. To stay with the metaphor: the smallest possible ship is always used to transport the container efficiently in the current sea area under the given weather conditions.

Fig. 6.7  Virtualization of IT value creation using the example of a container service (the operations team of the container service is responsible for the value creation below the API, including technologies, tools, providers, security, etc.; the complexity of the application above the API remains the same)

The rule of thumb is: the higher the level of abstraction, the better the cloud provider can manage the utilization of the lower levels and the lower the fixed costs. The calculation in Fig. 6.8 illustrates the relationships. In this figure, the operation of a website with a fluctuating usage pattern is considered. During the week, the site receives few visits, but on Saturday afternoons the number of user accesses regularly skyrockets.
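The "smallest possible ship" rule can be sketched as a simple selection over a price list. The instance names, capacities and hourly prices are assumed values, and the greedy single-instance choice is a deliberate simplification of what real schedulers do:

```python
# Sketch of the "smallest possible ship" idea: from an assumed price list,
# pick the cheapest instance type whose capacity covers the current load,
# and re-evaluate whenever the load changes.

INSTANCE_TYPES = [  # (name, capacity in requests/s, cost per hour) - assumed
    ("small",  100, 0.05),
    ("medium", 400, 0.18),
    ("large", 1600, 0.60),
]

def cheapest_for(load: int) -> tuple[str, int, float]:
    """Cheapest single instance covering the load (assumes the largest type
    always suffices; real schedulers mix types and spread risk)."""
    candidates = [t for t in INSTANCE_TYPES if t[1] >= load]
    return min(candidates, key=lambda t: t[2])

print(cheapest_for(80)[0])   # → small
print(cheapest_for(900)[0])  # → large
```

Because the exchange takes only seconds, this decision can be revisited continuously rather than once per procurement cycle.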

Fig. 6.8  Example calculation of a website with fluctuating usage pattern (monthly costs by virtualization level – virtualized/IaaS with individual IT in the data center: fixed amount of more than 1,000 € per month, independent of actual usage; serverless, e.g. Azure Functions: no fixed amount, 0.000014 €/GB-s execution time and 0.169 € per million executions; SaaS, e.g. Blogger.de or Jimdo: fixed amount of 0-10 € per month)


In the variant with a low degree of virtualization, all system components are aligned to this peak load and kept permanently available (virtualized, IaaS). The costs are correspondingly high, and the average utilization of the components is low. In the variants with high degrees of virtualization (container, PaaS, serverless, SaaS), the cloud provider takes care of the intelligent distribution of the load. Hardly any resources are held in reserve, yet sufficient computing power is available at peak times. Since the cloud provider passes on part of its optimization gains to the customer, the price for an application operated at high virtualization levels drops significantly.

The low costs at higher virtualization levels are also accompanied by disadvantages. Ready-made software services (SaaS) such as the website provider Blogger.de can easily map the usage pattern shown, but offer only a few setting and configuration options, usually only in the area of graphical templates. The lower the virtualization level the application operator selects, the more settings he can make himself. Extensive individual programming is still possible at the PaaS and container level, while storage, network and database components can still be individualized at the virtualized level.
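The trade-off behind Fig. 6.8 can be recomputed in a short sketch. The serverless price per million executions is taken from the figure; the server count, hourly price and monthly request volume are assumptions for illustration:

```python
# Monthly cost of a website with a quiet week and a Saturday-afternoon peak.
# IaaS must be provisioned for the peak around the clock; serverless bills
# only actual executions. Assumed values except where noted.

HOURS_PER_MONTH = 730
IAAS_COST_PER_SERVER_HOUR = 0.20     # assumed EUR price per server hour
SERVERS_FOR_PEAK = 10                # assumed sizing for the Saturday peak
COST_PER_MILLION_EXECUTIONS = 0.169  # EUR, taken from Fig. 6.8
EXECUTIONS_PER_MONTH = 5_000_000     # assumed request volume

# IaaS: peak capacity runs (and is paid for) all month long.
iaas_monthly = HOURS_PER_MONTH * IAAS_COST_PER_SERVER_HOUR * SERVERS_FOR_PEAK

# Serverless: pay per execution (GB-s execution-time charges omitted here).
serverless_monthly = EXECUTIONS_PER_MONTH / 1_000_000 * COST_PER_MILLION_EXECUTIONS

print(f"IaaS sized for peak:     {iaas_monthly:.2f} EUR/month")
print(f"Serverless, pay-per-use: {serverless_monthly:.3f} EUR/month")
```

Even with generous assumptions for the serverless execution-time charges, the gap of roughly three orders of magnitude illustrates why a spiky usage pattern favors high virtualization levels.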

6.4 Sourcing Options

The influencing factor "sourcing options" concerns the question of where and how the IT components are procured. In essence, the question to be answered is whether the IT components are produced in-house or rented from a provider. If the components are insourced, the company enters into a high medium- to long-term commitment. This relates to the capital employed, the substantive commitment of managers to IT infrastructure issues, and the commitment of employees to build up and develop the IT services. Figure 6.9 visualizes the make-or-buy decision from an operational point of view; in Chapter 7 it is explained again from a corporate strategy point of view.

Fig. 6.9  Make-or-buy question in overview (insourcing/classic IT: establishment of own data centers, acquisition of own hardware, training of employees, development and expansion of services, control of capacity utilization; outsourcing/private and public cloud: use of other providers' buildings, consumption of IT components as cloud services, no own operating teams, no utilization risk)

The decision for insourcing or outsourcing within corporate IT is not a 0/1 decision but shows numerous gradations. Separate decisions can be made for almost every IT component. For example, a company can use a so-called "housing provider" who provides the building infrastructure such as cooling, power and physical security facilities. However, this off-site data center houses the company's own server and storage hardware. Conversely, companies can use their own premises as a data center and purchase their own servers, but have the container service operated and further developed by a service provider.

A private cloud does not have to be set up and operated by the company itself. It is possible to commission a service provider to operate it, or to share it with a limited number of the service provider's other customers. The public cloud, on the other hand, is open for use by all paying companies. There is no such thing as a hybrid cloud in the strict sense; the term "hybrid cloud" is used when applications use IT components from both the private cloud and the public cloud (Rouse 2019). This is often useful when internal compliance guidelines dictate the use of the private cloud, but less critical parts of the application – such as test and development systems – are to take advantage of the public cloud (Raza 2018).

IT staff are increasingly talking about the "multi-cloud", mostly in the context of large IT landscapes. This term describes the fact that a company uses many clouds in parallel. This is rarely the result of actively implementing a multi-cloud strategy but corresponds to the natural development of IT landscapes in the context of the cloud megatrend. A possible multi-cloud scenario looks like this: Microsoft wins a company as a customer for its Office 365 cloud office software. In addition, Oracle offers hardly beatable conditions for the use of its cloud offerings for Oracle-intensive applications. The sales department has chosen a CRM solution from Salesforce. The IT department has built a private cloud, and the e-commerce application developers have been using AWS for a long time. The junior staff love Slack – a SaaS tool for team communication – and the marketing department can't function without the cloud tool Google Analytics.
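The hybrid-cloud rule described above – compliance-critical workloads stay private, less critical parts such as test and development systems go public – can be written down as a simple placement policy. The tags and the policy itself are invented for illustration:

```python
# Illustrative hybrid-cloud placement rule: compliance-critical workloads are
# pinned to the private cloud; test/dev systems default to the public cloud.
# Tag names and the policy are assumptions, not a standard.

def placement(workload: dict) -> str:
    if workload.get("compliance_critical"):
        return "private-cloud"
    if workload.get("stage") in ("test", "dev"):
        return "public-cloud"
    return "public-cloud"  # assumed default for non-critical production

workloads = [
    {"name": "billing-prod", "compliance_critical": True},
    {"name": "billing-test", "stage": "test"},
    {"name": "website-prod"},
]

for w in workloads:
    print(w["name"], "->", placement(w))
```

Encoding the rule as data-driven policy rather than tribal knowledge is what keeps such a landscape governable once it has drifted into a multi-cloud scenario.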
Thus, without consciously working toward it, a multi-cloud scenario emerges from the various functional needs. Figure 6.10 provides an overview of the most important sourcing options for companies.

Fig. 6.10  The most important sourcing options (private cloud: the IT components are accessible only to certain companies; public cloud: the IT provider offers its components to all companies equally; hybrid cloud: an application uses IT components from both cloud types)

The crucial question in the context of sourcing options is who assumes which investments and which risks (see Fig. 6.11). The most important factors in weighing up the sourcing options are:

• Investment in equipment and hardware infrastructure
• Setup of the cloud services
• Operation of the cloud services and assumption of the associated utilization risks
• Assumption of performance and security challenges
• Benefits of individualization for the business model

Fig. 6.11  Factors in the individual weighing of sourcing options

The investments in hardware are high, and the development of the many necessary services is costly. Ensuring sufficient utilization of hardware and services during operation in the face of fluctuating demand entails financial risks. In addition, there is the risk that the self-provided services fail and cause consequential damage to the applications. There are also security risks: whoever provides the services is also responsible for ensuring that no one gains unauthorized access. Accordingly, there are several arguments in favor of outsourcing services to external cloud providers. The market research institute Gartner thus assumes that around 80% of companies will have eliminated their data centers by 2025 (Cappuccio 2018).

Conversely, there are some arguments for insourcing IT components. Especially for large companies with their own IT expertise and a clear focus on specific IT services, the investments can be worthwhile both financially and functionally. Apple, for example, has announced plans to invest USD 10 billion in its own data centers in the US (Donath 2018), but also continues to distribute applications to the cloud providers AWS and Google (Nickel 2018). Osram is developing its own IoT platform but uses Microsoft's cloud infrastructure for this purpose (Osram 2019). Netflix relies on the cloud of its streaming competitor AWS, but operates its own services for specific video tasks (Hahn 2016).

6.5 Software Architecture

One factor that most strongly influences the competitiveness of enterprise applications is software architecture. The term software architecture was not established as an independent field of software development until the 1990s. Helmut Balzert's definition describes the term as "a structured or hierarchical arrangement of system components as well as a description of their relationships" (Balzert 2011, p. 580). Since then, many other areas of software development have been subordinated to software architecture. In the following sections, the term software architecture is explained within this narrow understanding, the chronologically successive development stages are presented, and the most important relationships are explained.

6.5.1 Monolithic Architectures

In the 1990s, the so-called client-server architectures dominated. Numerous software applications developed on the basis of these architectures are still actively used in companies today (Calcott 2018). In the client-server architecture pattern (see Fig. 6.12), the server usually provides a service that can be used by a client. The client and the server are usually executed on different computers. A simple example is the file server, which makes files available via an interface. The client component interacts with this interface to process the provided files.

At the end of the 1990s, software architectures changed to so-called multi-tier architectures (see Fig. 6.13). Here, the individual components are distributed over different logical layers, which are then executed on one or more computers. The layer model can vary from two to n layers. A commonly used variant is the 3-tier architecture (ITWissen 2018). In this variant, the presentation layer is located on the first layer, the second layer provides the functional logic ("How does the system work?"), and the data storage (database) is located on the third layer. Because the functional areas of the software are not decoupled across the layers, all parts of the architecture always have to be adapted simultaneously when the application is changed. During this maintenance period, the application is not available to third parties.

In the early 2000s, a protocol was developed that served as the basis for a new software architecture pattern (Drilling and Augsten 2017a): the so-called SOAP protocol (Simple Object Access Protocol). SOAP allowed different services to communicate with each other using a specific message type (XML). The protocol is still used today and pioneered the architecture pattern of distributed systems known as Service Oriented Architecture (SOA) (see Fig. 6.14). In this architecture pattern, individual domain-oriented functions are encapsulated and made available as a service, whereby the individual services can be executed on different computers. This architecture pattern has proven itself and has been continuously developed further to this day.
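The 3-tier separation described above can be sketched in a few lines. This is an illustrative in-process version (in practice the tiers run on separate machines), and the order data and the 19% VAT calculation are invented for the example:

```python
# Minimal 3-tier sketch: presentation, business logic and data storage are
# separate layers, each talking only to the layer directly below it.

DATA_TIER = {"order-1": {"net": 100.0}}      # tier 3: data management

def business_logic(order_id: str) -> float:  # tier 2: functional logic
    order = DATA_TIER[order_id]              # accesses only the data tier
    return order["net"] * 1.19               # assumed: add 19% VAT

def presentation(order_id: str) -> str:      # tier 1: presentation layer
    return f"Order {order_id}: {business_logic(order_id):.2f} EUR gross"

print(presentation("order-1"))  # → Order order-1: 119.00 EUR gross
```

The layering clarifies responsibilities, but note the coupling the text points out: a change to the data shape in tier 3 still forces simultaneous changes in tiers 1 and 2.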

Fig. 6.12  Client-server architecture – Downloading a photo from the FTP server

Fig. 6.13  Multi-layer architecture (presentation layer, business logic, data management)

Fig. 6.14  Service Oriented Architecture (SOA)

6.5.2 Distributed Systems A current trend in software architecture pattern is the so-called micro-services or “microservices“(Wolff 2018a). Microservices are an information technology architecture pattern in which complex application software is composed of independent services that communicate with each other via language-independent programming interfaces (Wolff 2018b; Newman 2015). The distinction between microservices and SOA is not always clear-cut. One key difference is that microservices – in contrast to SOA – generally do not require a heavyweight application server and control their own infrastructure autonomously (Biske 2015). The services are largely decoupled and each perform small tasks (see Fig. 6.15). In this way, they enable a modular structure of application software. This modular structure has significant advantages over the variants presented so far: • The function of the overall system is independent of the function of individual partial services. These can fail or be replaced, but the application as a whole is still available. • A single service can be changed without affecting other services.


6  Mastering Software as a Core Competence


Fig. 6.15  Microservices architecture


Fig. 6.16  Challenges with distributed systems

• Applications can access each other's services. In this way they work together without increasing the overall complexity.

Applications developed on the basis of microservices are called distributed systems. When developing them, certain peculiarities must be taken into account, which are described in the so-called CAP theorem (Nazrul 2018). This theorem states that in distributed systems only two of the following three properties can be fulfilled simultaneously:

1. Availability: The system is available in the sense of fast response times.
2. Failure tolerance: The system continues to operate even if communication between its parts is disrupted or delayed.
3. Data consistency: All data records are consistent across all distributed systems.

Figure 6.16 illustrates these challenges using the example of a mobile weather application. If, for example, the data connection to the measuring point in New York fails, the current temperature value for New York cannot be transmitted to the weather service. The microservice solution: the user receives the outdated value for New York in his mobile app, but the application remains available despite the outage. All other services of the app are unaffected; for example, the app still delivers the current weather data for Berlin and plays out the banner ads correctly. A monolithic system, on the other hand, would have failed until the network connection to New York was working again.

In the early days of distributed software architecture patterns, communication between subsystems was mostly based on the SOAP protocol via XML-based messages. Creating these messages required considerable development work from the responsible programmers, and the messages contained a lot of additional information (meta-information) relative to little actual content (Drilling and Augsten 2017b). To address this problem, Roy Fielding developed the architectural approach ReST (Representational State Transfer) in the course of his doctoral thesis (Srocke and Karlstetter 2017). This approach uses existing Internet formats (especially HTTP) and simplifies communication between distributed resources and services. Virtually all large IT companies now rely on this concept, enabling distributed systems to collaborate across the board. In particular, the cloud services described in Chapter 4 can be accessed in this way in a simple and high-performance manner (Drilling and Augsten 2017b).
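The availability-over-consistency trade-off from the weather example can be sketched in a few lines of Python. The temperature values and the simulated network failure are invented for illustration:

```python
class WeatherService:
    """Serves the last known (possibly stale) temperature when the
    upstream measuring point cannot be reached: availability wins
    over data consistency."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable: city -> current temperature
        self._cache = {}      # city -> last successfully fetched value

    def temperature(self, city):
        try:
            value = self._fetch(city)
            self._cache[city] = value
            return value, "fresh"
        except ConnectionError:
            # Measuring point unreachable: stay available, serve old value.
            return self._cache[city], "stale"

# Simulated measuring points: New York's link fails after the first call.
state = {"ny_calls": 0}

def fetch(city):
    if city == "New York":
        state["ny_calls"] += 1
        if state["ny_calls"] > 1:
            raise ConnectionError("network failure")
        return 49
    return 52  # Berlin keeps working

svc = WeatherService(fetch)
print(svc.temperature("New York"))  # (49, 'fresh')
print(svc.temperature("New York"))  # (49, 'stale') - old value, app stays up
print(svc.temperature("Berlin"))    # (52, 'fresh') - unaffected by the outage
```

A monolithic design would instead propagate the connection error and take the whole application down with it.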

6.5.3 Cloud-Native Architectures

Chapter 4 described in detail the advantages of developing and operating a software architecture from the cloud. The idea behind cloud-native architectures is to use the advantages inherent in the cloud ("native") for the company's own applications.

• Ready-to-use IT components: Cloud providers offer a variety of ready-made services at all levels of IT value creation. These can be quickly installed and used in applications without own investments and operating expenses.
• Billing in microtransactions: The higher the virtualization level of the cloud service, the smaller the billing units usually become. This reduces the marginal costs for new customers in digital business models to virtually zero.
• Global scalability: Cloud applications can adapt their performance to actual demand. If demand increases globally, the applications can also scale globally in an automated manner while maintaining low marginal costs (Gannon et al. 2017).
• Costs only on usage: Cloud services are usually billed according to actual usage. Cost structures can thus be aligned with the success of the digital business model, significantly reducing investment risks.

The following architectural ideas are at the forefront of cloud-native software development (Kratzke 2018):

Distributed Services
The application should be distributed across many different services or microservices, each with its own data storage. This is the basis for reducing complexity in development and operation.


Loosely Coupled
The distributed services should only be loosely coupled with each other. This means that the services communicate with each other only via defined APIs, and no direct access to components within a service is possible. In this way, the service owner can be sure that he can make changes to his service behind the API, free of any dependencies on other parts of the application. Of course, this only applies on the condition that he continues to provide the functions defined in the interface documentation.

Stateless Applications
An application is stateful if the server can only process a second request after it has processed the first one. This situation is comparable to the memory function of a pocket calculator: if a subtotal is calculated and saved there, all further calculations must be continued on exactly this calculator. An application is stateless if there are no dependencies between successive requests. The requests then contain all data relevant to that interaction (Fielding 2017), and there is no longer a binding to a particular executing server: any server can handle any request. Stateless applications can therefore scale flexibly across different servers, reducing internal dependencies in the application.

Automation of the Processes
The processes in the lifecycle of an application should be automated. This starts with the provisioning of the infrastructure and extends to the updating of the deployed applications. Automated processes make it possible to adapt the infrastructure to the applications in real time and without human effort, and to minimize the number of unused resources. Scaling at virtually zero marginal cost thus becomes possible.

Elastic Scaling of Each Service
Elasticity means that each service can adapt its resource requirements to the respective demand independently of the others. This enables cost-saving scaling of the infrastructure.
Resilience in Case of Failures
The application should be robust in dealing with failures in software and hardware. Resiliently developed applications remain functional overall, even if some of their services are down or do not respond. Thanks to this principle, cloud-native applications do not require maintenance windows but remain available even while updates are applied or maintenance is performed. In addition, resilient applications have less stringent requirements for the availability of infrastructure components, which has a positive impact on their operating costs.

If all these architectural guidelines are implemented during development, the resulting applications are designed to take advantage of the cloud. Each microservice uses a self-sufficient set of middleware and infrastructure and makes use of easily usable IT


prefabricated parts. Relevant costs are incurred only in those microservices that are actually used. Resources are adjusted automatically in microtransaction-sized steps. In case of success, any number of additional IT resources are added elastically, up to global scale. If resources fail, new ones are started automatically.
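The statelessness principle described above can be illustrated with a minimal sketch (the request format is invented): because each request carries its running total itself, any server instance can process it.

```python
# Stateless request handling: every request carries all the data it needs,
# so there is no binding to a particular server instance.

def handle(request: dict) -> dict:
    # No server-side session: the running subtotal travels with the request.
    subtotal = request.get("subtotal", 0) + request["amount"]
    return {"subtotal": subtotal}

# Two interchangeable "instances" of the same service
instance_a, instance_b = handle, handle

r1 = instance_a({"amount": 5})
r2 = instance_b({"subtotal": r1["subtotal"], "amount": 7})  # any instance works
print(r2)  # {'subtotal': 12}
```

A stateful design, by contrast, would pin the second request to whichever server stored the subtotal of the first, which is exactly what prevents elastic scaling.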

6.5.4 Comparison of Monoliths and Cloud-Native Architectures


The evolution of software architecture from monolithic applications to microservices-based cloud applications has had an enormous impact on the economic metrics of digital business models. Large software monoliths with many, often hard-to-understand dependencies have given way to applications that consist of many small components whose dependencies are transparent and manageable (see Fig. 6.17). Dependencies exist in monoliths in the following three dimensions:

1. IT stack – from infrastructure to middleware to application
2. Process – from the setup of the application and its components to operation and further development
3. Software functionality – from data management and business logic to the presentation level

Modern cloud architectures reduce these dependencies in that:

• as much of the IT value creation as possible is outsourced to the public cloud via ready-to-use IT components;
• the overall responsibility for a small part of the business logic and data management, and in part the presentation level, is merged in a microservice;
• this microservice is then built, operated and further developed by a small team;
• the microservices communicate with each other in a decoupled manner via ReST APIs.

In terms of the consumption of IT infrastructure resources, the decentralized model has clear advantages (see Fig. 6.18): much less computing capacity needs to be maintained, and costs are only incurred for those microservices that are used.

Fig. 6.17  Reduction of dependencies and complexity – from large software with complex dependencies to small components with manageable dependencies


In classical approaches, the handling of failures is characterized by the idea of error prevention. Components are kept as available as possible at high fixed costs. Cloud-native architectures, on the other hand, pursue the idea of resilience: The entire application should remain available, even if individual components are not. In this way, fixed costs are reduced and, in individual cases, overall availability is even increased.

6.6 Process Flows

However, a functioning software architecture alone is not enough to adapt a company to the conditions of digital competition. To carry out a successful cloud transformation, the processes within the company must also be adapted. In the area of software, two types of processes can be distinguished (see Fig. 6.19):

1. First, there are the processes between the people involved, from the initial idea to the fully clarified technical and functional requirements.
2. In addition, companies that produce software have development and operations processes – from the planning of the software code to its creation and on to the secure and agile operation of the application.

Both areas are closely related and directly interlocked in the DevOps approach.

6.6.1 Agile from the Idea to the Development of the Code

Agility refers to the ability of an institution or company to respond flexibly to changes in the market and to adapt to these changes in rapid succession, while maintaining cohesion and structures internally (MacCormack et al. 2017). This is made possible in part by minimizing internal processes and bureaucratic tasks while giving individuals greater responsibility for the end product. In many companies, software development is leading the way in the adoption of agile working methods (Jacobs 2017). As described in Sect. 4.2.1, software used to be developed in a traditional way using the waterfall model or similar planning-oriented approaches. All these planning-oriented approaches share some basic problems:

Fig. 6.18  From monolith to microservices – economic impact: reduced organizational complexity, resilience instead of high component availability, scaling per subcomponent in fine steps, reduction of fixed and marginal costs


Fig. 6.19  Process flows from the idea to operation – from the processes between the people involved to the development and operation processes (Plan, Code, Build, Test, Release, Deploy, Operate)

• Groups of people find it difficult to develop a common picture of abstract and complex software and to describe it so precisely that it can be implemented by another group without risk.
• Knowing this, ever more comprehensive performance specifications are created. In addition, the parties involved try to formulate contracts in such a way that risks must be borne by the respective other party. This dynamic generates high bureaucratic expense and slows down implementation enormously. By the time the software is implemented according to the original description, the requirements of the market and the wishes of the customers have often already changed again, resulting in further cost-intensive changes.

The agile model of software development is the answer to these problems (Fig. 6.20). The guidelines of the agile model are (Agile Manifesto 2001):

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

Project risks are not reduced by contract negotiations and abstract performance specifications, but by early feedback from the people involved on usable prototypes of the software. The problem that stakeholder groups often have difficulty agreeing on a clear and specific picture of the software is solved by implementing new ideas and wishes in short cycles, which makes it easier to discuss them. An agile approach is ideal for the development of applications because it makes particularly good use of the peculiarities of the non-physical product "software": software allows functions to be developed in any order. For example, the graphical user interface of an application can be developed first, while the functionality behind it is initially simulated by simple means. In this way, the users of an application can better


Fig. 6.20  Agile as incremental and collaborative software development – people look at concrete software and improve it step by step and together

understand digital work steps and describe their requirements more accurately before elaborate back-end functions are developed for them. Another possibility is to create fully functional applications with a very small scope of services (e.g. "send message") – so-called "Minimum Viable Products" (MVPs). Before complicated and expensive components (e.g. encryption) are added, market feedback is awaited first. In this way, agile working contributes significantly to shortening the time between the idea and the go-live of a piece of software (see Fig. 6.21).

6.6.2 Agile Software Development with Scrum

An agile software development method that has been used in project development and implementation since the 1990s is Scrum (Schwaber and Sutherland 2013). This fundamentally team-oriented approach distinguishes three roles, five events, and three artifacts. Scrum as a procedure and set of rules defines the interaction between these roles, events, and artifacts. The roles are product owner, development team, and scrum master; together, these three roles form the scrum team.


Fig. 6.21  Agile approach is particularly well suited for non-material goods. Material goods: only when the foundation is in place can the first floor be built; creating functional hardware is expensive, and changing it in retrospect is often not possible; hardware can be touched and interdependencies are easier to understand. Software: functions can be developed in almost any order; initial functional software is cheap to create and easily changeable; software artifacts are hardly imaginable for non-experts, but tangible prototypes and systematic communication between the participants reduce misunderstandings and generate constant progress.

• The product owner is responsible for the product, its commercial development, and its further development.
• The development team is responsible for the creation and further development of the product. In this function, it works in a self-determined manner and without an externally imposed hierarchy. The size of the development team can vary depending on the scope of the project.
• The scrum master has the function of a coach and supports the team on the way to self-determined and efficient interdisciplinary work. Above all, he ensures that the team has ideal framework conditions and that obstacles are removed.

The five events are:

• Development sprint: The actual product development within a scrum team takes place in so-called development sprints. These sprints are usually between two weeks and one month long and serve to complete or improve a previously defined and clearly outlined product or product increment.
• Sprint planning: In sprint planning, the team members work out the tasks for the upcoming development sprint and commit to them – i.e. they make a binding commitment as to which functional scope they want to implement in the coming sprint. In many companies, a workday is set aside for sprint planning, during which the team meets to jointly determine the sprint's objectives.
• Daily scrum: During this daily event, the scrum team comes together to coordinate the work and make the interim status transparent for all members. To limit the scope of these daily scrums, a fixed time window is usually set; depending on the team size, this event should not last longer than 15 to 20 minutes.
• Sprint review: At the end of the sprint comes the so-called sprint review. Here, the team members come together to evaluate the results of the sprint and to record the learnings for the coming sprints. For this purpose, the results of the sprint are presented to the stakeholders, especially the users, and feedback is gathered. This summary forms the starting point for the formulation of new goals in sprint planning.


• Sprint retrospective: The sprint retrospective takes place between the sprint review and the subsequent sprint planning. It allows all team members to reflect on the experience gained and to formulate wishes and expectations for the upcoming development sprints, which are transformed into concrete suggestions for improvement. As a rule, a period of up to three hours is planned for the sprint retrospective.

The three artifacts are the product backlog, the sprint backlog and the product increment:

• The product backlog is a list of desired product features that guides the development team. The contents of the product backlog are ordered according to their importance, i.e. the central elements and requirements for the product are at the top of the list. The product backlog is managed by the product owner of the scrum team, who is also responsible for adapting the backlog if changes occur during the development and improvement process.
• Sprint backlog: The sprint backlog defines which items of the product backlog are to be implemented or improved in an upcoming development sprint. In addition, the sprint backlog contains information about how the specified goal is to be achieved in the intended time and with the given resources. The entire scrum team is responsible for maintaining and updating the sprint backlog. Ideally, the sprint backlog paints a real-time picture of a team's progress within a development sprint.
• Product increment: The increment is also referred to as "done". It is the result delivered by the team members at the end of the development sprint. The goal of every sprint should be to produce a publishable increment under all circumstances; this releasability is independent of whether the increment is actually delivered.

The implementation of a scrum process rests on three ideal-typical pillars: transparency, review and adaptation.
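The relationship between the product backlog and the sprint backlog can be sketched as a small data model (the backlog items and team capacity are invented for illustration):

```python
# A product backlog ordered by importance; the team pulls the top items
# into the sprint backlog according to its capacity for the sprint.

product_backlog = [
    {"feature": "send message", "priority": 1},
    {"feature": "user profiles", "priority": 2},
    {"feature": "encryption", "priority": 3},
]

def plan_sprint(backlog, capacity):
    ordered = sorted(backlog, key=lambda item: item["priority"])
    sprint_backlog = ordered[:capacity]   # binding commitment for the sprint
    remaining = ordered[capacity:]        # stays in the product backlog
    return sprint_backlog, remaining

sprint_backlog, product_backlog = plan_sprint(product_backlog, capacity=2)
print([item["feature"] for item in sprint_backlog])  # ['send message', 'user profiles']
```

The ordering by priority mirrors the product owner's responsibility for keeping the most important requirements at the top of the list.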
This means that every single step a team member completes within a scrum process should be aligned with these three conditions and be measurable. Ideally, each team member should be able at any time to view the work steps of the other team members and subject them to critical review. The introduction of scrum is not about implementing an ideal process; the core of scrum is continuous improvement: results, procedures and also behaviors in the team are questioned again and again in order to adapt to the conditions in the company, but above all to the conditions of the market. The scrum process is therefore always "work in progress" – in other words, a permanent improvement task. Especially in the field of software development, the scrum method plays a major role today (Luber and Augsten 2017). Figure 6.22 illustrates the interaction of the three scrum roles in the process of software value creation: the scrum master is responsible for protecting the development team and the product owner from the influence of stakeholders outside the team in the "Plan", "Code" and "Build" phases. This is the only way to ensure that the scrum team can act as a self-determined unit within the company.



Fig. 6.22  Scrum as a successful model at the beginning of software value creation


Fig. 6.23  Classical process of software deployment with high communication and administration efforts

6.6.3 Automated Software Testing and Deployment with CI/CD

The abbreviation "CI/CD" stands for "Continuous Integration" (Augsten 2018) and "Continuous Deployment". It refers to the automation of parts of the software creation process, in which the following steps are passed through (see Fig. 6.23):

• Plan: The software to be created and its features are planned.
• Code: The actual software code is written.
• Build: The code is compiled, i.e. converted into machine language.
• Test: The newly written code is tested.
• Release: The tested software is published for further use.
• Deploy: The software is installed on the live infrastructure and is thus usable by the users.
• Operate: The ongoing operation of the software is ensured.


Continuous Integration (CI) covers the process from planning to testing the software. The special focus here is on automating the integration tests: newly created code from a developer is continuously integrated into the overall code base and tested for functionality in the context of the overall application. Continuous Deployment (CD) additionally includes the steps "Release" and "Deploy". In a CD environment, new software functions are deployed to the live environment automatically.

To understand the relevance of these two approaches, it helps to look at classic software development processes. As already described in Sect. 4.3, it takes a very long time for a planned feature to become an actually usable function in the live system. The reason for these long release cycles is that in traditional IT, many planning, ordering, and delivery processes are performed manually and separately across different organizational units. In practice, such a release process can look as follows: the planning and the associated calculation of the new functionalities take place in the business department, which hands over a requirements document to the developers. These are located at an offshore location because of the lower daily rates – yet controlled by a project management department at the company's home location. The code is created and handed over to a test team, which works on it for two weeks. The errors found are reported back to the development team, which now – instead of creating new functions – takes care of the faulty old ones. Once the code is released for deployment, the development team creates a deployment document and an operations manual and passes these documents to another team, located in a completely different area of the company: IT infrastructure operations.

If problems occur at any point in this process between the business department and the end user, the entire process chain is searched for errors and culprits across the various departments and areas. For this purpose, independent third parties such as controllers and lawyers read requirements documents, test protocols, and operating manuals and moderate the usually conflicting viewpoints of the parties involved.

Thanks to the possibility created by cloud technologies of addressing all IT components via software code using an API, all process steps can be automated, from the first stage of the software process (creation of the code) to the deployment of the code on the live infrastructure. This leads to significant improvements across the entire process:

• New functions are continuously integrated into the overall codebase.
• Faulty code is detected early and can be improved quickly.
• Developers no longer have to wait for other teams to finish needed pieces of software.
• The number and extent of tests performed by humans can be greatly reduced, in some cases to zero.
• Developers themselves can deploy the software to the infrastructure of their choice.
• The time from the request for a new function to its release is reduced enormously – sometimes to a single day.
• The communication and management overhead, as well as the handoff risks at departmental boundaries from code to deployment, are eliminated.
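The automated chain from code to deployment can be sketched as a simple pipeline. All stage functions here are stand-ins rather than a real CI system; the point is the gate: a change is only deployed if the automated tests pass.

```python
# Minimal CI/CD gate: build -> test -> deploy, fully automated.

def build(change: str) -> str:
    return f"artifact:{change}"            # stand-in for compilation

def run_tests(artifact: str) -> bool:
    return "broken" not in artifact        # stand-in for the test suite

def deploy(artifact: str, live_env: list) -> None:
    live_env.append(artifact)              # stand-in for the rollout

def pipeline(change: str, live_env: list) -> str:
    artifact = build(change)
    if not run_tests(artifact):
        return "rejected"                  # faulty code never reaches users
    deploy(artifact, live_env)
    return "deployed"

live_env = []
print(pipeline("feature-login", live_env))   # deployed
print(pipeline("broken-feature", live_env))  # rejected
print(live_env)                              # ['artifact:feature-login']
```

Because no human sits between the stages, the elapsed time from commit to live system shrinks from weeks to minutes.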


6.6.4 Covering the Entire Process with DevOps and Feature Teams

The term DevOps is a portmanteau of development (software development) and operations (IT operations). Rob England has compiled a synopsis of definitions of the term on his blog and derived his own definition from it (England 2014): "DevOps is agile IT operations delivery, a holistic system across all the value flow from business need to live software." England states that DevOps is not a mere toolbox or automation tool, but a philosophy. Implementing DevOps in companies means merging software development and IT operations at the levels of culture, systems, practices, and tools (see Fig. 6.24). The goal of this change is to orient IT to the needs of the customer and to improve quality and speed.

The ideal size for DevOps teams is five to ten employees (Choi 2018). In American parlance, these teams are therefore also called two-pizza teams, since all members of the team should get their fill from two American pizzas. Ideally, the members of DevOps teams are merged organizationally: pure developer and operations departments ("silos"), which perform small subtasks for many customers and services, become small, multidisciplinary teams that focus on one or a few services with full responsibility. The DevOps philosophy can be characterized by the following four terms: overall responsibility, flow, learning, and culture (see Fig. 6.25).

Overall Responsibility – Clear Focus on the Customer
The key to DevOps is the new assignment of tasks: the team is given end-to-end responsibility for one or a few services with a clear focus on the customer. The area of responsibility can be delineated via the software interfaces (APIs), both towards suppliers and towards customers. This overall responsibility requires an interdisciplinary composition: developers (Dev) and operations staff (Ops) are needed, and depending on the service, security experts or designers are added. Often, colleagues who bring the domain knowledge are involved as well – for translation software they could be linguists, for book evaluation they could be editors. Teams with such subject matter experts are also called "BizDevOps" or "feature teams".

Fig. 6.24  DevOps as an agile form of IT delivery – the DevOps team shares systems, tools, processes and a common culture


Fig. 6.25  DevOps – the four characterizing terms

A frequently used motto to illustrate this overall responsibility is "You build it, you run it". This means that the team that developed a functionality is also responsible if it does not work. In the past, colleagues from the "operations silo" were called in such cases; now a colleague from the team itself has to get up at night and fix the code.

Flow – Simplicity, Speed and Automation
The process optimization ideas in DevOps are strongly oriented towards models developed in the manufacturing industry. As in "lean" (Produktion 2017a), the aim is to reduce waste. Code is not produced "in stock" in large software packages; instead, finished small features are tested and released immediately. Waiting times decrease because teams are small and collaboration is based on trust. Automation and the use of as many ready-made IT components from the cloud as possible simplify work steps. Following the pull principle of Kanban production control (Lean Production 2019), developers take functions from the task inventory and accompany them until release in the sense of one-piece flow (Production 2018) – analogous to the optimization of assembly lines. This lowers throughput time, reduces the stock of half-finished functions and promotes the identification of the employee with his work – after all, it is "his" feature.

Learning – Fast and Data-Based Feedback Loops
Another element of DevOps is continuous learning. Every function is immediately tested in an automated manner. If it still generates problems after release, these are quickly fixed and the test procedures are expanded. In A/B testing, two variants of a function are created and played out to different user groups; objective decisions are then made from the resulting data. Functioning DevOps teams live and breathe continuous improvement – again a parallel to the "kaizen" idea in industrial production (Produktion 2017b).
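A/B testing as mentioned above relies on a stable assignment of users to variants. A minimal sketch (the user IDs are invented for illustration):

```python
import hashlib

def variant(user_id: str) -> str:
    """Deterministic A/B split: the same user always sees the same variant."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Stable per user, roughly 50/50 across many users
assignments = {user: variant(user) for user in ("alice", "bob", "carol")}
assert variant("alice") == assignments["alice"]  # assignment never changes
```

Hashing instead of random assignment ensures that the collected usage data per variant stays consistent over the entire duration of the experiment.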
Overcoming Silo Thinking in Companies
Overcoming silo thinking plays a very large role in the implementation of DevOps. To understand the relevance of the topic, a trip into the world of traditional IT is necessary.


The walls visualized in Fig. 6.23 are often walls where trust ends. The customer does not trust that the developers will really deliver the desired functions – and if they do, then not at the desired price. So lawyers and controllers get involved, trying to shift risk rather than reduce it. The developers then program the code, but the operations colleagues do not trust that it will run stably – often rightly so, because the developers have to deliver the functions under time pressure, and whether these can then be operated reliably is no longer their responsibility. The interests of the three most important stakeholder groups – customer, developers and operations staff – are not synchronized in traditional IT. This separation also manifests itself organizationally, and sometimes these silos work toward opposing goals. Employees who have experienced this cooperation characterized by mistrust for years take the culture issue very seriously. They know that a culture of trust, empowerment and professional, constructive interaction is not only more pleasant for everyone involved, but also leads to better results.

DevOps in its pure form is difficult to implement in many cases. On the one hand, there are technical requirements that must be met so that the steps from testing to deployment can actually run fully automated. For some applications, companies depend on external standard software whose developers cannot be integrated into such a team, either legally or logistically. Customers, too, cannot be integrated organizationally. DevOps should therefore not be seen as some sort of ISO standard that a company either meets or does not; it is more of an ideal to strive for.

6.7 People and Organization
The factors of virtualization, sourcing, software architecture, process flows and company organization play an outstanding role in the success of the digital (cloud) transformation of a company (Sects. 6.2 to 6.4). However, the human factor has been missing from the description so far. The current digitization monitor of the consulting firm BearingPoint points out that the cultural change necessary for digitization is not yet – or only hesitantly – being implemented in numerous companies in Germany (Broj and Schulz 2017). The integration of employees into the transformation process is generally not easy and in many cases proceeds "bumpily". The reason for this bumpiness lies in mistakes made by management in dealing with (potential) employees in three sub-areas: employee management, adaptation of the corporate culture and the employment situation.

6  Mastering Software as a Core Competence

6.7.1 Employee Management
A successful digital transformation starts at the top of the organization chart: with management, and preferably with top management, i.e. with the executive board or the board of directors. Members of top management should not only be convinced that an agile transformation is the right decision for the company; they should also understand why this transformation is important for it. Only if management has the argumentative tools and an intuitive understanding of the necessary transformation can those responsible credibly represent and implement the changes to the workforce.

For the digital transformation process, this means: the corporate strategists' plans for adapting the company internally to digitalization may be as well thought-out and rational as can be; if it is not possible to prepare the company's employees for the change through communication, the change or transformation process will necessarily fail. As part of this mediation process, employees must feel that management takes their fears seriously, and they must understand the direction in which the company is heading. This is where the communication skills of top management, the communications department and the HR department are particularly in demand. Information events and meetings between the various hierarchical levels have proven effective: in an informal setting, management can better communicate why the change is necessary (Nadella 2017).

In addition to informing the entire workforce, some stakeholder groups move to the center of the communicative efforts, since their internal influence puts them in a position to slow down or stop the transformation process. In communicating with these groups, a good manager acts like a family doctor who recognizes the problems the digital transformation will bring to the company in good time and prepares the stakeholders for them. It works like a vaccination: the patient is protected from contracting the disease in the first place, or at least its consequences are mitigated (Moore 2016).
In this vaccination process, the stakeholder groups are informed and, in the best case, convinced. For example, the controlling department should understand how important the transformation process is for the company’s continued existence. This also means that new perspectives should be integrated when recording a company’s performance targets. This may be accompanied by new KPIs that measure the company’s success beyond the usual financial metrics. For example, controlling can be expanded to include “soft” metrics that map transformation success in numbers: For example, it can be measured how well the internal exchange of information works or how far the standardization and automation of tools and processes between the company departments has progressed. In addition, top management should inform controlling that a transformation process may initially harm the development of traditional financial ratios. If controlling is prepared for this development and can thereby adjust its expectations, “rejection reactions” towards new ideas, technologies and processes can be alleviated, at least for a while.
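Such "soft" transformation metrics can be sketched as a toy calculation alongside the classic financial figures; the departments, step counts and the 70% target below are invented purely for illustration.

```python
# Hypothetical process inventory: (automated steps, total steps) per department
# (all names and figures are invented for this example).
process_steps = {
    "order handling": (18, 20),
    "billing":        (12, 30),
    "reporting":      (5, 25),
}

TARGET = 0.70  # illustrative transformation target: 70% of steps automated

def automation_degree(automated: int, total: int) -> float:
    """Share of process steps that already run without manual intervention."""
    return automated / total

for dept, (automated, total) in process_steps.items():
    degree = automation_degree(automated, total)
    status = "on track" if degree >= TARGET else "below target"
    print(f"{dept}: {degree:.0%} automated ({status})")
```

A metric like this makes transformation progress visible in numbers even while traditional financial ratios temporarily worsen.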


6.7.2 Corporate Culture
Peter Drucker, the pioneer of modern management theory, coined the phrase "Culture eats strategy for breakfast" (quoted from Cave 2017). What did he mean by this? His thesis is that for a company's internal or external strategy to succeed, the company's culture must be in line with the formulated goals. If this is not the case, implementation becomes difficult or even impossible. The adaptability of the actual corporate culture to the working conditions of the DevOps culture thus becomes the key factor for its successful adoption.

Agile DevOps teams that live and breathe new work processes create a new work culture throughout the entire company over the long term. In the process, the individual employee's view of his or her area of activity also changes: the more the DevOps idea takes hold, the larger the individual's area of responsibility becomes. Employees can initiate more and more processes themselves without having to consult the administration. Subordination and command execution give way to an independent planning, execution and monitoring process. In short, the employee becomes an entrepreneur within the company. On the one hand, this brings greater freedom; on the other, greater responsibility for the product he or she is in charge of. In shaping this process of change, employees are challenged: they must understand this new role in the company, internalize it and fill it with life – if they want to.

However, the change in mindset can only succeed if it is accompanied by a new error culture in the company. Classic hierarchies serve, among other things, to control the performance of others and to expose their mistakes. This is why a culture of fear prevails in many companies: no mistakes are allowed to happen, and it is certainly not wise to admit one. This creates more problems than advantages, as creative and, above all, truly innovative ideas are nipped in the bud.
The development of an open error culture is therefore another building block in the DevOps approach: not only are mistakes allowed to be made, they are also the engine for the further development of the company. Only those who are not afraid of making mistakes dare to try out new things and address problems. Innovation needs the support of colleagues and the courage of each individual to create something new. Not for nothing is one of the principles of a successful DevOps implementation "fail fast, learn fast" (Booth 2014). For this to work, mistakes must be allowed to be communicated within and between teams without sanctions, so that everyone involved can learn from them.

If this change succeeds, it releases an enormous potential for creativity in the company. In this new environment, individual employees no longer see themselves as cogs in a machine that assigns them their tasks and their place. Instead, they become inventors, tinkerers, developers and managers who start, implement and take responsibility for projects independently within their DevOps team. Adopting these new working principles in the daily corporate culture is the most important step in integrating employees into the transformation process. At the same time, it is the most difficult step, since for many employees it goes hand in hand with the painful

departure from behaviors that were practiced in the traditional corporate hierarchy system. Agile coaches can help to accompany the integration of the new working ideas into everyday work.

6.7.3 Employment Situation
The increasing automation of process flows within value chains is changing the employment situation. One consequence is a decreasing demand by companies for employees whose activities consist of repeatable and automatable work steps. For IT departments in companies, this results in an interesting starting situation. Intuitively, there is a lot to be said for a reduction in jobs – after all, traditional tasks such as the operation and maintenance of the stack can be handled by algorithms and software. However, the exact opposite is the case: IT employees are currently in greater demand than ever before. The trade association Bitkom reported around 82,000 vacant IT positions in Germany in 2018 (Bitkom 2018). The development of entry-level salaries in the IT sector also speaks volumes: the salaries of IT graduates have long since passed those of business graduates (Frank 2017).

One reason for this development is the increasing demand for well-trained employees who can take over the creative – and thus, in the future, value-adding – activities in the area of software development, production and operation. This demand currently far exceeds the potential savings in manpower that can be achieved by automating IT operations in the cloud. Rather than shedding IT staff, the current challenge is to retain the existing IT workforce and prepare it for the changing tasks. Skills in the maintenance of IT hardware and traditional IT components are being replaced by skills along the cloud-based software process – from development to operation. To enable this change, companies need to start developing the skills of existing employees in good time and provide them with appropriate training options and periods. What is the alternative? Of course, companies can also buy in already trained professionals for cloud-based software value creation.
To some degree, companies will rely on knowledge and expertise brought in from outside in the future. Potential employees who have built up expertise in software development and software architecture over the past few years can use this starting situation to their advantage: they can massively expand their earning potential in the coming years and leverage their new negotiating power with potential employers. To have enough talent in reserve, companies will need both: successful training of their current IT staff and successful recruiting that can beat the competition in the battle for workers. HR departments therefore have a central role to play: they must adapt their recruiting strategies and measures to the new starting situation and, at the same time, buy in the


appropriate training and development measures for employees or even develop them themselves. This requires that HR understands digitization and the cloud transformation as critical variables for the upcoming decisions.

Leadership, culture and employment – these are the three areas that will shape the working world of the future. The focus must be on all three if the transformation is not to fail. One example of a successful transformation of corporate processes and culture is provided by the Dutch direct bank ING DiBa, which is presented in the following section.

6.8 Practical Example ING DiBa
Banks have not had a reputation for being agile or fast; they were known rather for long, complicated processes and strongly pronounced hierarchies. With its agile transformation process, ING Bank shows that this no longer has to be the case today (Thienel 2018).

The major changes in the financial market were one of the starting points for the Dutch bank's consideration of a transformation process. The younger generation of bank customers also changed the perception of what a bank should offer and provide. And finally, the traditional banking sector was not spared from startups – in this case "fintechs": young companies like N26 dared to enter the market and thus called the cumbersome value creation processes into question.

In response, ING started to adapt its internal organization to the agile guidelines of DevOps structures in 2015. In the course of this, large parts of the headquarters in Amsterdam were restructured. The process began with the announcement that all employees would be assigned a "mobile status" when the transformation started (Jacobs 2017). This meant that almost the entire workforce was "jobless" and had to reapply within the company, which allowed for a completely new matching between the interests and skills of the employees and their future tasks. This process was definitely painful for the bank and its employees: about 40% of the employees were working in new positions and areas after the changeover. The prerequisite for this successful conversion of the employees' areas of activity was that, during the changeover, the bank weighted the right "mindset" and the employees' willingness to transform more highly than knowledge and experience in the sub-areas. Within eight to nine months, 350 nine-person teams (so-called "squads") were formed and structured into 13 sub-areas (so-called "tribes") (Jacobs 2017). Since then, ING DiBa's motto has been: "One agile way of working".
This motto is described by the following eight key points:

1. We work in strong and competent teams.
2. We promote autonomy and self-determination in teams.
3. We promote talent and professional expertise.
4. We learn from our customers and use this knowledge to improve ourselves.
5. We set clear priorities in order to achieve our overall goals.
6. We have a uniform organisational design and working methods.
7. We work in a simple, uncomplicated and understandable way.
8. We improve what already exists instead of reinventing everything.

These eight points contain some important statements that radically change the way companies work. Two of these points – autonomy and self-determination in teams – are addressed here specifically, as they trigger the greatest changes. For the implementation, ING started by introducing a completely new model of working (see Fig. 6.26).

The biggest change for the employees was the introduction of so-called squads. ING Bank describes a squad as follows: "In squads, employees from different areas and with different competences work together to successfully solve a project or task" (ING 2019). Squads are entrusted with different customer-centric tasks. To accomplish these tasks, employees from a wide range of disciplines work together in one team: employees from marketing with employees from IT, bankers with colleagues from sales. Squads at ING are characterized by end-to-end accountability of the team members, which requires a high level of trust in the employees. For example, a single squad could be assigned to the "search engines" area and given responsibility for all search engine applications in the entire end consumer area. The task of this squad would then be to provide search engine services that are as functional and customer-friendly as possible across the company's various applications.

Within the squads, one team member is assigned the so-called "product ownership". This product owner decides which products the team will produce and manages the backlog and the to-do lists of the team. However, this does not mean that he is authorized to give instructions to the other team members.

Communication across squad boundaries takes place in so-called "chapters". In the chapters, employees from the various departments exchange information at regular intervals across squad boundaries. For example, there may be a "Data Security" chapter or a "Marketing" chapter within the company.

Fig. 6.26  Agile working with squads, tribes and chapters. (The diagram shows squads – e.g. for consumer online banking or the digital financial check – grouped under a tribe lead, chapters cutting across the squads – e.g. for web designers or cloud architects – and the roles of chapter lead, product owner and agile coach.)

The coordination between squads is made possible by so-called tribes. Tribes consist of squads that can all be assigned to a common goal. In ING DiBa's structure, these tribes are oriented towards the bank's product classes: loans, current accounts, mortgages. As a rule, a tribe should not comprise more than 150 employees. The tribe lead is responsible for the functioning of the tribe, for setting the right priorities, and for allocating budget and knowledge within the tribe.

To accompany the implementation within the organization, agile coaches were embedded in each tribe. They have the task of coaching and supporting the teams and the individual team members in the transformation to the new model. The success of the transformation efforts at the headquarters in Amsterdam was so great that the new working methods are now being gradually transferred to other areas and departments. For example, the German subsidiary of ING DiBa plans to have all organizational units working in an agile manner as early as 2019 (ING Deutschland 2019).
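The squad, tribe and chapter structure described in this section can be modeled as simple data structures. The 150-person tribe limit and the role names follow the text; the class and field names, and the example data, are illustrative assumptions.

```python
from dataclasses import dataclass, field

TRIBE_LIMIT = 150  # per the text: a tribe should not exceed 150 employees

@dataclass
class Squad:
    mission: str                # customer-centric task, e.g. "search engines"
    product_owner: str          # manages the backlog, no authority to command
    members: list[str] = field(default_factory=list)

    def size(self) -> int:
        return len(self.members) + 1  # members plus product owner

@dataclass
class Tribe:
    goal: str                   # common goal, e.g. a product class like "mortgages"
    lead: str                   # sets priorities, allocates budget and knowledge
    squads: list[Squad] = field(default_factory=list)

    def headcount(self) -> int:
        return sum(s.size() for s in self.squads)

    def within_limit(self) -> bool:
        return self.headcount() <= TRIBE_LIMIT

@dataclass
class Chapter:
    """Cuts across squads: people with the same discipline exchange knowledge."""
    discipline: str             # e.g. "data security" or "web design"
    members: list[str] = field(default_factory=list)

# Invented example: one tribe with a single cross-functional squad.
mortgages = Tribe(goal="mortgages", lead="tribe lead", squads=[
    Squad(mission="online mortgage application", product_owner="owner",
          members=["developer", "marketer", "banker"]),
])
print(mortgages.headcount(), mortgages.within_limit())
```

The model makes the key design decision explicit: squads own a mission end to end, while chapters are orthogonal groupings that carry discipline knowledge across squad boundaries.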

6.9 Conclusion
As the example of ING DiBa illustrates, the implementation of agile working methods is a task that should be adapted to the specific needs of the company and of the respective customers in the industry. There is therefore no one-size-fits-all solution for implementing digital transformation processes in companies. The importance that companies from different industries attach to the process shows that such a change is necessary in the digital business world. ING Bank is not the only one to focus on new models of work – companies such as General Electric, BMW and Microsoft have also declared these changes a living part of their corporate philosophy. All of the companies mentioned have fundamentally changed the way they work in many areas.

But the large (digital) corporations will not remain the only ones to change. In the meantime, the transformation of the working world towards agile models has reached a momentum that is spilling over into companies of all sectors and sizes. Managers are assigned new roles in this transformation process. In order to fully perform the classic leadership and decision-making tasks, managers must begin to expand their knowledge of the software process across the areas of creation, operation and scaling. They should also be familiar with the fundamentals of software architecture so that they can work with technical staff to define and manage the software's potential and deployment scenarios. Finally, managers should be familiar with the new agile working methods and their implementation in companies. If these competencies are left out of the training of future managers, then technical managers and engineers will increasingly (have to) lead the fortunes of companies in the future (Frank 2017).


References

Agile Manifesto (2001): Manifesto for Agile Software Development, published in: agilemanifesto.org, https://agilemanifesto.org/, retrieved June 2019.
Augsten, Stephan (2018): Was ist Continuous Integration?, published in: dev-insider.de, https://www.dev-insider.de/was-ist-continuous-integration-a-690914/, retrieved June 2019.
Balzert, Helmut (2011): Lehrbuch der Softwaretechnik. Bd. 2: Entwurf, Implementierung, Installation und Betrieb, Spektrum Akademischer Verlag, Heidelberg.
Biske, Todd (2015): Was ist der Unterschied zwischen SOA und einer Microservice Architektur?, published in: computerweekly.com, https://www.computerweekly.com/de/meinung/Was-ist-der-Unterschied-zwischen-SOA-und-einer-Microservice-Architektur, retrieved June 2019.
Bitkom (2018): 82.000 freie Jobs: IT-Fachkräftemangel spitzt sich zu, published in: bitkom.org, https://www.bitkom.org/Presse/Presseinformation/82000-freie-Jobs-IT-Fachkraeftemangel-spitzt-sich-zu, retrieved June 2019.
Booth, Janice Holly (2014): Failure is the New Success, published in: aarp.org, https://www.aarp.org/work/working-at-50-plus/info-2017/fail-fast-learn-fast.html#slide1, retrieved June 2019.
Broj, Alexander and Carsten Schulz (2017): Roboter, Rebellen, Relikte. Überkommene Strukturen behindern die Digitale Transformation, published in: bearingpoint.com, https://www.bearingpoint.com/files/Digitalisierungsmonitor_2017.pdf, retrieved June 2019.
Calcott, Gary (2018): Microservices vs. Monolithische Architekturen: ein Leitfaden, published in: silicon.de, https://www.silicon.de/41666855/microservices-vs-monolithische-architekturen-ein-leitfaden, retrieved June 2019.
Cappuccio, Dave (2018): The Data Center is Dead, published in: gartner.com, https://blogs.gartner.com/david_cappuccio/2018/07/26/the-data-center-is-dead/, retrieved June 2019.
Cave, Andrew (2017): Culture Eats Strategy For Breakfast. So What's For Lunch?, published in: forbes.com, https://www.forbes.com/sites/andrewcave/2017/11/09/culture-eats-strategy-for-breakfast-so-whats-for-lunch/#6705be097e0f, retrieved June 2019.
Choi, Janet (2018): Why Jeff Bezos' Two-Pizza Team Rule Still Holds True in 2018, published in: blog.idonethis.com, http://blog.idonethis.com/two-pizza-team/, retrieved June 2019.
Donath, Andreas (2018): Apple investiert 1 Milliarde US-Dollar für texanischen Campus, published in: golem.de, https://www.golem.de/news/expansion-apple-investiert-1-milliarde-us-dollar-fuer-texanischen-campus-1812-138247.html, retrieved June 2019.
Drilling, Thomas and Stephan Augsten (2017a): Entstehung, Aufbau und Funktionsweise von SOAP, published in: dev-insider.de, https://www.dev-insider.de/entstehung-aufbau-und-funktionsweise-von-soap-a-602380/, retrieved June 2019.
Drilling, Thomas and Stephan Augsten (2017b): Konzept, Aufbau und Funktionsweise von REST, published in: dev-insider.de, https://www.dev-insider.de/konzept-aufbau-und-funktionsweise-von-rest-a-603152/, retrieved June 2019.
England, Rob (2014): Define DevOps, published in: itskeptic.org, http://www.itskeptic.org/content/define-devops, retrieved June 2019.
Fielding, Roy T. (2017): Statelessness, published in: restfulapi.net, https://restfulapi.net/statelessness/, retrieved June 2019.
Frank, Roland (2017): Vom CEO zum CTO – Sind Techniker die besseren Chefs?, published in: cloud-blog.arvato.com, https://cloud-blog.arvato.com/vom-ceo-zum-cto-sind-techniker-die-besseren-chefs/, retrieved June 2019.
Frank, Roland (2019): Unternehmensgründung vs. Unternehmenstransformation, published in: cloud-blog.arvato.com, https://cloud-blog.arvato.com/unternehmenstransformation/, retrieved June 2019.


Gannon, Dennis, Roger Barga and Neel Sundaresan (2017): Cloud Native Applications, published in: IEEE Cloud Computing, no. 5/2017, pp. 16–21.
Hahn, Dave (2016): How Netflix Thinks of DevOps, talk at DevOps Days Rockies 2016, published in: youtube.com, https://www.youtube.com/watch?reload=9&v=UTKIT6STSVM, retrieved June 2019.
ING (2019): One agile Way of Working, published in: ing.jobs, https://www.ing.jobs/Oesterreich/Warum-ING/So-arbeiten-wir/One-agile-Way-of-Working.htm, retrieved June 2019.
ING Deutschland (2019): Die erste agile Bank Deutschlands, published in: ing.de, https://www.ing.de/ueber-uns/menschen/agile-bank/, retrieved June 2019.
ITWissen (2018): Three-Tier-Architektur, published in: itwissen.info, https://www.itwissen.info/Three-Tier-Architektur-three-tier-architecture.html, retrieved June 2019.
ITWissen (2019): Monolithische Software-Architektur, published in: itwissen.info, https://www.itwissen.info/Monolithische-Software-Architektur.html, retrieved June 2019.
Jacobs, Peter (2017): ING's agile transformation, published in: mckinsey.com, https://www.mckinsey.com/industries/financial-services/our-insights/ings-agile-transformation, retrieved June 2019.
Kratzke, Nane (2018): Cloud Native Applikationen, published in: slideshare.net, https://de.slideshare.net/QAware/cloudnative-applikationen, retrieved June 2019.
Lean Production (2019): Kanban, published in: lean-production-expert.de, http://www.lean-production-expert.de/lean-production/kanban-beschreibung.html, retrieved June 2019.
Luber, Stefan and Stephan Augsten (2017): Was ist Scrum?, published in: dev-insider.de, https://www.dev-insider.de/was-ist-scrum-a-575361/, retrieved June 2019.
MacCormack, Alan, Robert Lagerstrom, Martin Mocker and Carliss Y. Baldwin (2017): Digital Agility: The Impact of Software Portfolio Architecture on IT System Evolution, published in: Harvard Business School Technology & Operations Mgt. Unit Working Paper, no. 17-105, https://ssrn.com/abstract=2974405, retrieved June 2019.
Moore, Geoffrey (2016): Zone to Win, talk at the GoTo conference, published in: youtube.com, https://www.youtube.com/watch?v=fG4Lndk-PTI&t=2885s, retrieved June 2019.
Nadella, Satya (2017): Hit Refresh – The Quest to Rediscover Microsoft's Soul and Imagine a better Future for Everyone, William Collins, London.
Nazrul, Syed Sadat (2018): CAP Theorem and Distributed Database Management Systems, published in: towardsdatascience.com, https://towardsdatascience.com/cap-theorem-and-distributed-database-management-systems-5c2be977950e, retrieved June 2019.
Newman, Sam (2015): Microservices: Konzeption und Design, mitp, Frechen.
Nickel, Oliver (2018): AWS und Google hosten die iCloud, bestätigt Apple, published in: golem.de, https://www.golem.de/news/cloud-computing-aws-und-google-hosten-die-icloud-bestaetigt-apple-1802-133003.html, retrieved June 2019.
Osram (2019): Redefining IoT for Lighting Business and Beyond, published in: osram.com, https://www.osram.com/cb/applications/lightelligence/index.jsp, retrieved June 2019.
Produktion (2017a): Die 7 Verschwendungsarten und was Sie dagegen tun können, published in: produktion.de, https://www.produktion.de/technik/die-7-verschwendungsarten-und-was-sie-dagegen-tun-koennen-335.html, retrieved June 2019.
Produktion (2017b): Kaizen: Lean-Philosophie in der Produktion anwenden, published in: produktion.de, https://www.produktion.de/technik/kaizen-lean-philosophie-in-der-produktion-anwenden-106.html, retrieved June 2019.
Produktion (2018): One-Piece-Flow: Beispiel aus der Praxis, published in: produktion.de, https://www.produktion.de/technik/one-piece-flow-beispiel-aus-der-praxis-106.html, retrieved June 2019.


Raza, Muhammad (2018): Public Cloud vs Private Cloud vs Hybrid Cloud: What's The Difference?, published in: bmc.com, https://www.bmc.com/blogs/public-private-hybrid-cloud/, retrieved June 2019.
Rouse, Margaret (2019): Hybrid Cloud, published in: searchcloudcomputing.techtarget.com, https://searchcloudcomputing.techtarget.com/definition/hybrid-cloud, retrieved June 2019.
Schwaber, Ken and Jeff Sutherland (2013): Der Scrum Guide – Der gültige Leitfaden für Scrum, published in: scrumguides.org, https://www.scrumguides.org/docs/scrumguide/v1/Scrum-Guide-DE.pdf, retrieved June 2019.
Srocke, Dirk and Florian Karlstetter (2017): Was ist eine REST API?, published in: cloudcomputing-insider.de, https://www.cloudcomputing-insider.de/was-ist-eine-rest-api-a-611116/, retrieved June 2019.
Thienel, Albert (2018): Banken auf dem Weg zur digitalen Transformation, published in: qualiero.com, https://www.qualiero.com/community/digitalisierung/allgemeines/auf-dem-weg-zum-digitalen-unternehmen-z-b-die-ing-diba-direktbank.html, retrieved June 2019.
Wolff, Eberhard (2018a): Microservices? Oder lieber Monolithen?, published in: heise.de, https://www.heise.de/developer/artikel/Microservices-Oder-lieber-Monolithen-3944829.html, retrieved June 2019.
Wolff, Eberhard (2018b): Microservices: Grundlagen flexibler Softwarearchitekturen, dpunkt Verlag, Heidelberg.

7  Falling Transaction Costs and the New Network Economy

Abstract

From a business perspective, it has long made sense to internalize a large part of the processes along the value chain. With digitization, transaction costs are falling and simultaneously the number of opportunities to obtain standardizable and automatable processes from external providers is increasing. Companies can seize this opportunity and increasingly outsource processes that are not part of their core business. In addition, the cloud offers them the opportunity to appear on the market as a provider of digital services and to integrate into the emerging digital and global network structure of digital providers and customers. Former disruptors or competitors (enemies) can thus become potential cooperation partners (frenemies).

7.1 Transaction Costs Hold Traditional Value Chains Together
In 1991, the British economist Ronald Coase received the Nobel Prize in Economics for answering a seemingly simple question: "Why do firms exist?" (Littmann 2013). Early in his academic career, Coase criticized one of the central assumptions of classical economic theory, namely the assumption that an exchange process between two participants takes place without incurring additional costs, i.e. "free of charge". If this assumption were correct, then neither the buyer nor the seller would incur any additional costs beyond the purchase price in a transaction between two parties on the market. The consequences would be striking: there would be no reason for an entrepreneur to internalize a single process step – no matter how small. Why, for example, should a contract of employment be concluded with an employee when every single step can be purchased "free of charge" on the market?

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023
R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_7



For example, a company that manufactures furniture could buy the cutting of the boards from supplier A, the screws and nails from suppliers B and C, and the manufacturing of the table from supplier D. Since all transactions are free of charge, the furniture manufacturer can choose the supplier with the best price-performance ratio in each case and thus optimize its own value chain across all stages. As a result, the optimal table would be created at the optimal price. In this scenario, the company's only task would be to orchestrate the individual process steps.

In reality, this is not (yet) happening. On the contrary, large furniture companies like Segmüller from Bavaria try to internalize as many process steps as possible. From the production and procurement of materials to designs and the manufacturing of preliminary products, everything is controlled centrally at Segmüller. The reason why internalisation is worthwhile for a company like Segmüller is the existence of transaction costs. This type of cost was first mentioned by Ronald Coase in his 1937 essay "The Nature of the Firm", for which he would receive the Nobel Prize more than 50 years later (Coase 1937). In his essay, Coase stated that classical economics' assumption of free market transactions was invalid. Rather, a series of additional costs arise around the process of purchasing, which must be added to the purchase price.

With this observation, Ronald Coase succeeded in providing an answer to the question "Why do companies exist?" The answer is: because the costs that arise in addition to the purchase price – precisely the transaction costs – are too high, it is worthwhile for entrepreneurs to create hierarchies in order to control internal work processes. It is true that the eliminated market transaction costs are now replaced by internal transaction costs (see Sect.
7.2 on internal transaction costs), which result from set-up and operating costs – but in the end, these internalised processes are cheaper than purchasing them on the market.

This connection can be illustrated with a simple example: the effort a company would have to expend if it bought accounting services on the open market every year would be enormous. The selection and control of the accountant's work alone would generate high costs. And that is not all: once the balance sheet was completed, all the knowledge would be lost again. That is why it is worthwhile for companies to have an administrative task such as bookkeeping, which has nothing to do with the core business, done by their own employees. Jobs for accountants are created, employment contracts are signed and the activity is integrated into the company's workflow – a separate department within the company has been created. And this even though classical theory predicts that this activity could just as well be bought in on the market.

A total of five forms of transaction costs can be distinguished (see Fig. 7.1). Here is an example: the management of a machinery manufacturing group decides to use a new engine in its plant. However, before the new engine can be installed, the company incurs many transaction costs in addition to the costs of acquisition:

7.1 Transaction Costs Hold Traditional Value Chains Together

[Figure 7.1 shows the five types of transaction costs with examples: initiation and information costs (visiting trade fairs, preparatory discussions with suppliers), agreement costs (price negotiations, costs for legal advice), control costs (incoming goods inspection, quality management in production and afterwards), enforcement costs (legal costs, court costs) and adjustment costs (adjustments to the specified services, supplier dialog for continuous improvement).]

Fig. 7.1  The types of transaction costs – example engine production

1. Initiation costs: Before deciding on a new engine, the Executive Board of the machinery group compares its own requirements with the options available on the market – just like any conscientious purchaser. To this end, company employees visit trade fairs and invite potential suppliers in order to get to know various products.

2. Agreement costs: The offers of the individual manufacturers are checked and compared before the contract is concluded. Subsequently, negotiations take place on the contract and delivery conditions. Several further costs are thus incurred up to the actual conclusion of the contract.

3. Control costs: When the engine is delivered, it is the company’s responsibility to check that the unit is in proper condition. The labor costs incurred in the process are part of the transaction costs.

4. Enforcement costs: If one of the two parties to the contract is dissatisfied with the performance of the contract, legal disputes may arise. The costs for lawyers and negotiations are also added to the transaction costs.

5. Adjustment costs: While the engine is being installed in the machines, there may be a need for adjustments. The costs of these adjustments, too, are added to the transaction costs after the contract is signed.

The purchase shows that not only the price of the engine itself but also the transaction costs incurred in the process make the entire production process more expensive. This makes purchasing on the market less attractive. If a company succeeds in reducing the costs for producing the manufacturing part “engine” below the costs of the


7  Falling Transaction Costs and the New Network Economy

[Figure 7.2 shows a Porter-style value chain – corporate infrastructure, HR, research & development, procurement, internal logistics, production, external logistics, marketing and sales, services – in which transaction costs (initiation and information, agreement, control, enforcement and adaptation costs) arise wherever activities such as IT, warehousing or a call center are sourced externally.]

Fig. 7.2  High transaction costs hold the value chain together

purchase including the transaction costs by internalizing process steps, then integrating engine production into the company is worthwhile. This also clarifies theoretically why companies with internal value chains emerge. In the industrial manufacturing economy, it often does not make economic sense to incur the high transaction costs of procuring goods and services externally; the monitoring and control costs for purchasing process steps on the market are simply too high. Transaction costs thus form the cement, or glue, that holds traditional value chains together: companies save money if they do not have to buy every service on the market but can integrate it into their own workflow (Fig. 7.2).
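The engine example above can be condensed into a few lines of code. All cost figures below are invented for illustration; only the structure – purchase price plus the five transaction cost types versus the internal production cost – follows the text.

```python
# Hypothetical make-or-buy comparison for the engine example.
# All figures are invented; the five cost types follow Fig. 7.1.

def buy_cost(purchase_price, transaction_costs):
    """Total cost of buying on the market: price plus all transaction costs."""
    return purchase_price + sum(transaction_costs.values())

transaction_costs = {
    "initiation_and_information": 12_000,  # trade fairs, supplier talks
    "agreement": 8_000,                    # negotiations, legal advice
    "control": 5_000,                      # incoming goods inspection
    "enforcement": 3_000,                  # expected legal disputes
    "adjustment": 7_000,                   # adaptations after installation
}

make_cost = 120_000    # assumed internal production cost of the engine
market_price = 95_000  # assumed supplier quote

total_buy = buy_cost(market_price, transaction_costs)
print(f"buy: {total_buy}, make: {make_cost}")
print("internalize" if make_cost < total_buy else "buy on the market")
```

Although the supplier’s quote is well below the internal production cost, the transaction costs tip the comparison in favour of internalization – exactly Coase’s argument.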

7.2 Excursus: Internal Transaction Costs Slow Down Economies of Scale in Production

When a company decides to internalize a process, this decision changes the internal cost structure. In addition to the costs of actually producing a product or service, there are numerous other costs associated with setting up a company – this is the economic downside of internalizing work processes. The division of labor within companies has expensive consequences: information is exchanged between the employees involved, the individual stages of production are monitored and controlled, decisions have to be made about production forms and capacities, and finally the process steps have to be administered and managed. Thus “internal transaction costs” arise, of which there are four types (the so-called “IKEA costs”):

1. Information costs
2. Control costs
3. Decision costs
4. Administration costs


[Figure 7.3 contrasts the external transaction costs of buying on the market (initiation and information, agreement, control, enforcement and adjustment costs on top of the purchasing costs) with the internal transaction costs of producing in-house (information, decision, control and administration costs plus costs for further development on top of the production costs).]

Fig. 7.3  Comparison of additional costs incurred in the context of the product

[Figure 7.4 plots average cost and internal transaction costs against quantity: as output rises, the variable costs per unit fall while the internal transaction costs per unit rise, yielding an economically optimal output level.]

Fig. 7.4  Internal transaction costs increase with rising output volume

Michael Porter refers to these activities resulting from the internalization of work processes as secondary activities. For Porter, these are activities that cannot be assigned to actual production or sales, but arise from internal information procurement, decision-making, control and administration (see Fig. 7.3). As the output of a company increases, so do its internal transaction costs. Therefore, in addition to the economies of scale and scope (see Chap. 3), a company also incurs disadvantages when it increases its production volume. Figure 7.4 illustrates this relationship: while the average cost of production falls as output increases, the cost of operating the secondary activities rises. The reason for this in classical industrial production is the growing coordination effort and the ever finer granularity of the internal sub-processes that have to be coordinated with each other. Despite the economies of scale and experience curves described in Chap. 3, there is thus a natural limit to the growth of firms in the classical production of goods: as soon as


the sum of the marginal internal transaction costs per unit produced (or the fixed-cost share) plus the marginal production costs exceeds the market price of the product, it is no longer worthwhile to build up further capacities. The development of total costs must therefore also take into account the fact that internal transaction costs rise as output increases. This factor of rising internal transaction costs per unit clarifies which production optimum arises for companies in the age of digitization. If the cost structure of a modern company is composed of fixed costs (K_fix), a decreasing curve of marginal production costs (K_prod) and an increasing curve of variable IKEA costs (K_var), the new formula for total costs K(x) is:

K(x) = K_fix + K_var(x)  (7.1)

K_var(x) consists of the variable production costs per unit plus the variable internal transaction costs per unit:

K_var(x) = K_prod(x) + K_intTAK(x)  (7.2)

This changes the classic formula for determining a company’s profit optimum in the digital age. If the classic condition, price = marginal cost, were to apply, then companies would expand production to infinity as marginal costs fell. With the adjusted cost formula, the profit maximum is instead reached when marginal production costs plus marginal internal transaction costs equal the market price (p*):

p* = K′_prod(x) + K′_intTAK(x)

In a nutshell: the rising internal transaction costs of secondary activities prevent the unlimited growth of traditional industrial and manufacturing companies. Subadditivity – the situation in which a single company can serve the entire market demand more cheaply than several companies could – is therefore the exception in industrial goods production. As shown in Chap. 3, digital goods production differs from industrial goods production in precisely this respect: in the case of digital goods, subadditivity – and with it the dominance of a single market supplier – is extremely real.
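To make the adjusted optimum condition concrete, the sketch below scans output levels numerically. The functional forms and every number are pure assumptions for illustration; only the structure – falling marginal production costs, rising marginal internal transaction costs, optimum where their sum reaches the market price – follows the text.

```python
# Numerical sketch of the adjusted profit optimum
#     p* = K'_prod(x) + K'_intTAK(x)
# All functional forms and numbers are illustrative assumptions.

K_FIX = 5_000.0

def marginal_prod(x):
    return 100.0 / (1 + 0.01 * x)   # falling marginal production cost (scale effects)

def marginal_int_tak(x):
    return 0.05 * x                  # rising marginal internal transaction cost

def best_output(price, x_max=5_000):
    """Scan output levels and return the profit-maximizing quantity."""
    profit = -K_FIX
    best_x, best_profit = 0, profit
    for x in range(1, x_max + 1):
        profit += price - (marginal_prod(x) + marginal_int_tak(x))
        if profit > best_profit:
            best_x, best_profit = x, profit
    return best_x

x_star = best_output(price=60.0)
print(x_star)
```

At the resulting quantity, the sum of the two marginal cost components just reaches the market price of 60; one unit more and the combined marginal costs exceed it – the natural growth limit described above.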

7.3 Integrators, Orchestrators, and Layer Players – How Transaction Costs Influence Economic Structures

From the Middle Ages to the present, there have always been phases in which companies have either integrated (insourcing) or outsourced (outsourcing) various steps along the value chain. In order to classify these historical and current shifts, it is helpful to distinguish between three types of companies. Each organization can be assigned to one of these three types or represents a hybrid form:


1. Integrators
2. Orchestrators
3. Layer players

Integrators are companies that have an economic interest in internalizing large parts of their value creation processes. A still-current example of integrators are the globally active mineral oil companies (Gassmann et al. 2013). Their value chain extends from the selection of oil production areas, to the extraction of the oil with rigs and drilling platforms, to the operation of refineries and filling stations. Even the logistics used to transport the oil to customers are usually controlled by the oil companies. An oil company thus controls the value chain from the first drop of crude oil that comes out of the ground to the moment the fuel leaves the tap at the gas station. Full control over the entire value chain is the economic advantage of these companies: there are no external transaction costs, and internal control costs remain low thanks to well-rehearsed processes. At the same time, the companies can react quickly to changes in demand and prices. The second type of company are the so-called orchestrators. In contrast to integrators, orchestrators deliberately outsource parts of their value chain and concentrate on the actual core tasks of the company. There are numerous orchestrators in the Western consumer goods industry today. Textile companies such as Adidas or Nike, for example, outsource their production to Asia because the manufacture of textiles is cheaper there. Companies like Walmart, Apple or Sony likewise do not manufacture the products they sell but outsource production to external companies abroad. The third type of company is the layer player. These are companies that specialize in one stage within a value chain and offer this activity to different companies. An example of a layer player is the online payment service PayPal.
Instead of going through all stages of the value chain and offering the customer a finished end product, the company focuses on the “payment process” segment. It thus offers a service that represents a single stage in the value chains of other companies and processes it for numerous (online) merchants worldwide. For a long time, integrators were on the rise. Digitization has changed that: the pendulum has been swinging in the direction of the orchestrators for decades – and ever-cheaper transaction costs are accelerating this development. Overall, six phases of this integration shift can be distinguished in economic history:

1. Until the sixteenth century: Time of the individual entrepreneurs and trade
2. Seventeenth to nineteenth centuries: Colonization
3. From 1890: Rise of the integrators – the emergence of large companies
4. From 1950: First wave of outsourcing
5. From 1980: Second outsourcing wave
6. From 2010: Digital outsourcing


The Period of Individual Entrepreneurs and Trade (Until the Sixteenth Century)
From the Middle Ages to modern times, individual enterprises dominated economic life in the Western world. A blacksmith, for example, bought the iron ore, processed it independently and finally shod the horses’ hooves. Since for many goods the producers were separated from the resources, merchants appeared on the scene who, as “layer players”, took over the logistics of transporting goods from remote regions. Ginger and spices, for example, were goods in high demand in the Middle Ages, but sourcing them from the countries of the Near and Far East was fraught with great danger. Accordingly, the margins that merchants demanded from their customers for providing these commodities were high (Gildhorn 2019). Transaction costs did not play a role at this time due to the simplicity of the products offered. The medieval economy was therefore characterized by integrators (individual companies) and layer players (traders).

Colonization (Seventeenth to Nineteenth Centuries)
After the discovery of the American continent, a new player appeared on the market: the nation-states. Instead of leaving the profits to the traders, they founded their own enterprises and began to systematically exploit the raw materials of the countries overseas. As the example of Spain and the exploitation of the South American continent from the sixteenth century illustrates, the interest of nation-states was initially limited to particularly valuable goods such as gold, silver and precious stones (Glüsing 2009). Subsequently, nationally driven trade expanded to include more and more types of goods. Tea was imported from Indian tea plantations and cotton from North America. At that time, the nation-states acted as independent layer players, claiming part of the value creation – the logistics – and thus also part of the profit for themselves.
As transport costs fell and uncertainty eased, lower-margin products such as food also increasingly became the focus of the nation-states’ trade. At the same time, manufactories – i.e. integrators – spread across Europe, replacing individual enterprises in the production of goods. The manufacture of the new products required an ever-greater division of labor: fine garments, weapons or carriages could no longer be made by individuals. An eighteenth-century wigmaker alone employed numerous workers for the elaborate production of hairpieces (Pröve 1995). Even then, the internalization of processes – and thus the establishment of companies – was worthwhile in order to avoid transaction costs on the market.

Rise of the Integrators – Emergence of the Large Companies (1890 – Today)
Around the same period in which the nation-states were slowly withdrawing from their colonial endeavors, entrepreneurs were discovering the benefits of mass production. The electrical revolution and the resulting increasingly complex products (automobiles, radios, refrigerators) further increased the transaction costs for the production of goods.


Thus the costs of rework, but also the costs of recruiting new employees, made external purchasing more expensive. If employers had relied on purchasing each step of the work process on the open market at this point, production would no longer have been profitable for them. Companies therefore responded with a wave of internalization and changed the organization of labor: the production process was broken down into smaller and smaller sub-steps, and the degree of specialization of labor increased by leaps and bounds. One example of this development is the assembly line, which Henry Ford introduced to produce the Model T from 1913 onwards (Dombrowski and Mielke 2015). With this, the development of mass production reached a temporary peak. Within a very short time, manufactories worldwide became large corporations. Companies such as the Standard Oil Company, founded in 1870 by John D. Rockefeller, exploited their economies of scale in the market to the full during this period: competitors were first forced out of the market by price dumping, and the resulting monopoly then set prices. This led to numerous interventions by government market supervision in the USA, whose aim was to prevent dominant positions of individual companies in the goods markets. In 1911, Standard Oil was broken up into 34 individual companies (Desjardins 2017). The fact that regulators actually resorted to break-ups led companies to expand their vertical dominance along the value chain instead (Wirtz 2012). What united them in this phase was the will to control the entire value chain.

The First Outsourcing Wave (From 1950)
After the heyday of the integrators, the importance of orchestrators grew from the middle of the twentieth century onwards – and this development continues to this day.
Driven by more favorable wage conditions abroad, good production conditions, falling transport prices and new transport possibilities, the degree of international division of labor reached a new level. Goods could be brought ever more cheaply and quickly from even remote areas to the processing factories and end customers. It thus became increasingly profitable for companies to relocate food production to more distant regions of the world. From the 1950s onwards, more and more tropical fruits became part of Western consumer and mass markets (Putschögl 1993); what had still been a luxury 100 years earlier was now normal. In industrial manufacturing, too, the proportion of goods manufactured abroad grew. Among the victims of the first wave of outsourcing were the textile manufacturing companies of Europe. Initially, factories were relocated from Germany to Italy, but textile manufacturers soon moved on from there to areas with even more favorable working conditions. Southeast Asia developed most rapidly into the extended workbench of producers from Europe and North America: more and more manufacturers of textiles and electronic products relocated their production facilities to Asia. This also marked the beginning of the rise of the so-called tiger economies (Kirchberg 2007).


The Second Outsourcing Wave (From 1980)
From the 1980s onwards, the increase in air traffic brought the world closer together – not only physically, through new transport methods, but also in terms of communications, thanks to the triumph of computer technology. Sinking communication costs expanded the range of outsourcing options for companies: not only food and industrial goods could be produced abroad. Improved training conditions in some developing countries also made it possible to relocate information processing, service and communication tasks abroad. It now became worthwhile for Western corporations to operate office activities and call centers in India or Pakistan, for example (Fründt 2009).

Digital Outsourcing Phase (From 2010)
The third wave of outsourcing, which coincides with a significant increase in the degree of IT automation from 2010 onwards, differs from the two waves that preceded it. One example of this development is the transition from automated installations of individual software components to the automated provisioning of complete IT infrastructures in the cloud. In these scenarios, work is no longer outsourced to people at other production sites; instead, software and algorithms take over more and more activities on-site or from the cloud. This gives outsourcing a new quality. In the future, companies will face the question of which work processes they still want to keep internal, which they want to digitize and which they want to outsource.

7.4 Fast Communication and Simple Automation – The Transaction Cost Levers of Digitization

The fact that human activities can be replaced by software and algorithms is not an invention of the twenty-first century. Already with the development of the first mainframe computers, machines took over tasks along the value chains of companies. From the very beginning, mainframe computers beat their human counterparts in data storage and in the speed of data processing. Accordingly, software became specialized in these areas: spreadsheets, data storage, image processing, and so on. In the last ten years, the possibilities of outsourcing human activities to software have taken on new dimensions. The two decisive factors for this development are decreasing communication costs and the automatability of business processes.

7.4.1 Decreasing Communication Costs

For a long time, it was simply too expensive for companies to purchase computing power from the cloud. The idea only became economically relevant from the mid-2000s. The problem up to this point was the connection costs, i.e. the costs of data transmission – the price per transmitted bit. The phenomenon of sinking data


transmission costs will be illustrated below using the technical expansion of the German fixed network. Broadband network expansion in Germany started with the introduction of DSL by Telekom in 1999 (Reinhardt 2019). At that time, a DSL connection with a download rate of 1 Mbit/s cost 917 Deutsche Marks per month. As recently as 2005, only 62% of businesses in Germany had a broadband connection; ten years later, it was already 95% (Eurostat 2017). The available bandwidths increased significantly when VDSL was introduced in 2006. VDSL made the Internet significantly faster for both residential and business customers. Since then, the available speed of connections in the private sector has increased to currently 150 Mbit/s and more. Companies with higher data turnover have increasingly relied on so-called leased lines (SDSL) with fixed bandwidths. With SDSL technology, no connection is dialed up for each session; instead, the connection to the Internet access point is permanently maintained. The speeds offered by Telekom for leased lines currently range between 2.2 Mbit/s and 1 Gbit/s (Telekom 2019). With the widespread introduction of fiber-optic networks, entirely new transmission speeds become possible: fiber-optic networks enable upload and download rates of 100 Gbit/s and more. However, in 2018, just 2.8% of all Internet connections in Germany were connected to Telekom’s fiber-optic network – far behind countries such as South Korea or Sweden (Brandt 2019). This rapid development of data transfer rates enabled many companies to enter the cloud. Since the mid-1990s, data transfer rates have in fact grown faster than the computing power of new processor generations (Evans 2013, see Fig. 7.5). Other forms of transmission have undergone similar developments. For example, mobile data transmission rates have developed significantly in recent

[Figure 7.5 is a log-log chart plotting the number of transistors in microprocessors against the communication speed provided (bps), with data points from 1965 to 2011 showing uniform growth of both quantities.]

Fig. 7.5  Falling communication costs according to Philip Evans (Evans 2013)


years. With 5G, a transmission standard has now become reality that enables mobile data communication at speeds of up to 10 Gbit/s (Schanze 2018). As a result, obtaining computing power from the cloud has become possible in recent years and will become increasingly attractive in the future.
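The practical effect of these bandwidth jumps can be shown with simple arithmetic. The 500 GB dataset size and the VDSL figure are assumptions for illustration; the other speeds are the ones named in the text.

```python
# Back-of-the-envelope transfer times for a 500 GB dataset (example size)
# at connection speeds mentioned in the text. Protocol overhead is ignored.

DATASET_BITS = 500 * 8 * 10**9  # 500 GB in bits (decimal units)

speeds_bps = {
    "DSL 1999 (1 Mbit/s)": 1e6,
    "VDSL (50 Mbit/s, assumed typical)": 50e6,
    "Leased line (1 Gbit/s)": 1e9,
    "Fiber optic (100 Gbit/s)": 100e9,
}

for name, bps in speeds_bps.items():
    seconds = DATASET_BITS / bps
    print(f"{name}: {seconds / 3600:.2f} h")
```

Over the 1 Mbit/s DSL line of 1999, the transfer would take well over a month; over a 100 Gbit/s fiber link, well under a minute – which is why cloud workloads only became economically sensible once bandwidth caught up.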

7.4.2 Automation of Business Processes

The second important factor that has contributed to the reduction in transaction costs is the increasing automation of business processes. Particularly in the area of market transactions, digitization has made it possible to automate numerous standardizable processes with ever new software solutions. One example of this development is the processing of share purchases and sales on the international capital markets through so-called “algorithmic trading”. Here, the buy or sell orders for individual shares are no longer placed by humans but by algorithms. These algorithms can make decisions about transactions on the financial markets within milliseconds and place the corresponding orders on the digital marketplaces. This automated form of share trading has significantly changed the transaction processes between exchange operators, investors and supervisory authorities in recent years. Current estimates suggest that around 50% of transactions conducted on the market are now carried out by algorithms (Bundesbank 2016). The offsetting of claims and liabilities arising from the transactions of market participants – known as clearing – has also long taken place without human intervention. For this purpose, a central clearing house is interposed as a trustee between the buyer and the seller of securities. The exchange of securities (clearing) and the final settlement usually take place without a human controlling the sub-processes. In Europe, two companies are currently entrusted with clearing: Clearstream and Euroclear (Segna 2018). Just as the purchase process and the clearing of a share transaction have been automated, more and more individual steps of online transactions can be automated – whether an algorithm adjusts offer prices to the current demand situation or independently decides which online user is shown which advertisement (see Fig. 7.6).
[Figure 7.6 lists how digitization reduces each type of transaction cost in online share trading: initiation and information costs (comparison of purchase and sale offers by algorithms), agreement costs (automated on the basis of standardized contracts), control costs (electronic monitoring), enforcement costs (central, automated clearing house) and adjustment costs (none).]

Fig. 7.6  Reduced transaction costs through digitization – example of share trading
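The clearing step described above, i.e. offsetting claims and liabilities before settlement, can be sketched as multilateral netting. The trades below are invented, and real clearing houses such as Clearstream apply far more elaborate rules; this is only a minimal sketch of the principle.

```python
from collections import defaultdict

# Multilateral netting: each participant's claims and liabilities from all
# trades are offset into a single net cash position (invented example trades).

trades = [
    # (buyer, seller, cash_amount)
    ("A", "B", 100.0),
    ("B", "C", 80.0),
    ("C", "A", 30.0),
]

def net_positions(trades):
    positions = defaultdict(float)
    for buyer, seller, amount in trades:
        positions[buyer] -= amount   # buyer owes cash
        positions[seller] += amount  # seller receives cash
    return dict(positions)

print(net_positions(trades))
```

Instead of three bilateral payments, each participant makes or receives exactly one net payment, and the positions sum to zero – the coordination work that once required human back offices reduces to a loop.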

7.5 New Make-Or-Buy Decisions Through Digitalisation


The interplay of sinking communication costs and the advancing automation of business processes is pushing transaction costs towards zero – and this will change the way we work in the future.

In 2013, the two Oxford University authors Carl Frey and Michael Osborne caused a worldwide sensation with their study on the future of work. Its result: in the next 20 years, about half of current jobs could be automated or taken over by robots and machines (Frey and Osborne 2013). The two authors assume that two developments that are difficult to separate will change the world of work in the coming years: increasing digitization and the automatability of work steps. A step in the value chain can be automated if it is carried out in the same way over and over again. The automotive industry is a pioneer in this development: Tesla, for example, states that it has automated 95% of the body production of the Model 3 (Donath 2018). Germany also occupies one of the top positions worldwide: for every 10,000 employees, there are 309 industrial robots in Germany (as of 2018) – third place worldwide behind South Korea (631) and Singapore (488). On average, 74 industrial robots are used per 10,000 employees worldwide (Eisenkrämer 2018). In the modern value chain, a work step can be digitized if the human activity can be mapped in a process that can be executed by an algorithm. Another example of the automation of individual stages of the value chain is so-called “Robotic Process Automation” (RPA). In the classic automation of work processes in production plants, human work steps are usually copied by robots: human actions are analyzed and then mapped into activity lists in order to imitate them by programming the robots via API interfaces. RPA systems, in contrast, observe how users solve the tasks given to them in the graphical user interface (GUI). The neural networks of RPA systems are able to create activity lists independently and repeat them autonomously in the GUI.
For example, many tasks traditionally attributed to the back office can be automated by RPA – such as transferring data from customer e-mails to a company’s CRM and ERP systems (Willcocks et al. 2015).

The Digital Jackpot: Processes That Can Be Automated and Digitized
If a process step can be both automated and digitized at the same time, this is the digital jackpot. Such process steps will be completely outsourced in the future – at zero marginal cost. The insurance industry is a good example: to date, humans have dominated this area, but an individual consultation on efficient insurance coverage of possible damages follows repetitive work steps that can be mapped digitally in algorithms. The necessary calculations and analyses in the run-up to the consultation can be carried out by software, and the actual consulting service can take place directly on the monitor via input interfaces. In the

182

7  Falling Transaction Costs and the New Network Economy

future, digital insurance advisors will be able to determine the appropriate product portfolio for insurance customers without human intervention in the process. Processes that can be neither digitized nor automated are usually assigned to the core business of a company. In a manufacturing company, for example, the departments of research and development, design, and management fall into this category. These processes are usually not standardized because they require human creativity, so they cannot be taken over automatically by machines. Activities that can be automated but not digitized are process steps that can be performed by machines and robots. Activities that can be digitized but not automated, on the other hand, can usually be assigned to the departments of sales or customer consulting, where interpersonal skills are required. To this day, many contract conclusions between companies are based on human interaction and trust between sales and purchasing departments – but here, too, digitization is causing disruption (Frank 2018). Figure 7.7 provides an overview of how work steps can be automated and digitized. The grey and dark grey areas indicate where it is worthwhile for companies to internalize individual process steps: here, the advantages of specialized knowledge outweigh the disadvantages of internal transaction costs. The typical positioning of these activities can be located in the lower left quadrant, where people are in the foreground in the execution of the activities. Creativity, an abstract understanding of the situation, and strategic thinking are required in this area. These “non-outsourceable” activities include the work of the feature teams and DevOps teams, among others. The areas of business control and research will also remain human domains for the time being. Nevertheless, even classic areas of the core business are becoming increasingly digitizable and automatable with every technological innovation.
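Returning to the back-office example mentioned above (transferring data from customer e-mails into a CRM system): the extraction step can be sketched as a rule-based script. Real RPA products work on the graphical user interface and learn activity lists; the e-mail format, field names and regular expressions below are pure assumptions for illustration.

```python
import re

# Toy rule-based extraction of CRM fields from a customer e-mail.
# Field labels and patterns are invented for illustration.

EMAIL = """\
From: jane.doe@example.com
Subject: Address change
Customer number: 48315
New address: 12 Main Street, Springfield
"""

def extract_crm_record(email_text):
    customer = re.search(r"Customer number:\s*(\d+)", email_text)
    address = re.search(r"New address:\s*(.+)", email_text)
    sender = re.search(r"From:\s*(\S+@\S+)", email_text)
    return {
        "customer_id": customer.group(1) if customer else None,
        "email": sender.group(1) if sender else None,
        "address": address.group(1).strip() if address else None,
    }

print(extract_crm_record(EMAIL))
```

The point is not the few lines of regex but the economics: once such a mapping exists, each additional e-mail is processed at effectively zero marginal cost.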
A spectacular example of this development is Google’s AutoML system.

[Figure 7.7 arranges activities in a matrix spanning “automatable – not automatable” and “can be digitized – physical activity required”: software, platform, IT and infrastructure services as well as robotics and assembly lines sit in the automatable region; IT operation, software development/DevOps, sales, contract review, customer service, marketing, feature teams, corporate management and research & development sit in the mainly people-based region, which marks the typical positioning of the core business if the company does not want to become a digital “pure player” itself.]

Fig. 7.7  Core business between digitization and automation

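The quadrant logic of Fig. 7.7 can be condensed into a small decision helper. The category labels paraphrase the text; reducing the decision to a clean two-by-two matrix is of course a simplification.

```python
# Classify a process step by the two dimensions discussed in the text:
# can it be digitized, and can it be automated?

def classify(digitizable: bool, automatable: bool) -> str:
    if digitizable and automatable:
        # the "digital jackpot": fully outsourceable at zero marginal cost
        return "digital jackpot – candidate for software/platform services"
    if automatable:
        return "physical automation – machines and robots"
    if digitizable:
        return "people-based digital work – e.g. sales, customer consulting"
    return "core business – creativity and strategy, keep internal"

# Example: an insurance consultation that follows repetitive, mappable steps
print(classify(digitizable=True, automatable=True))
```

Each quadrant maps to a different make-or-buy tendency: only the lower left (neither digitizable nor automatable) clearly belongs inside the company.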


Just a few years ago, it was unthinkable that algorithms could take over the tasks of a programmer; the task of programming and training machine learning algorithms for different purposes, for example, seemed too complex. With AutoML, Google has developed a system that enables algorithms to develop and test their own machine learning code. Google’s system is based on what is known as “reinforcement learning”: with this model, algorithms can independently check the quality of task performance using predefined digital reward systems and adapt the next generation of machine learning algorithms accordingly. As early as 2017, AutoML succeeded in using these methods to develop machine learning code that surpassed the quality of systems programmed by human developers. In an image recognition task, the machine learning code developed by the algorithms achieved a prediction accuracy of 82% – a better value than that achieved by the systems designed and trained by human developers (Greene 2017). The possibility of automated programming will change the work of programmers: they will spend less time on always structurally repetitive “assembly line tasks” and will instead be able to focus on the complex development tasks involved in creating software and algorithms. Transferred to the diagram in Fig. 7.7, the tasks of developing standardized software parts will move out of the core business of software developers into the upper right quadrant. Such a departure of activities from the core business will become more and more common. Among other things, it is conceivable that the review of contracts, management decisions, but also the creation of advertising clips will be taken over by machines in the future. The company Deep Knowledge Ventures, for example, already added an algorithm called Vital to its board in 2014 (Wile 2014).

When Is It Worth Outsourcing Process Steps – And When Is It Not?
The fact that a process step can be automated or digitized does not necessarily mean that it should be outsourced, regardless of whether it goes to external providers or to an algorithm. On the contrary: if a company succeeds in automating or digitizing process steps better than the competition thanks to its experience in this area, and if this internalization leads to a significant and perceptible additional benefit for the customer, then there are good arguments for keeping the respective activities within the company. For example, it would be negligent for automotive companies to throw away their knowledge of automating production steps and outsource these activities to external companies. In 2019, for example, BMW and Microsoft announced the creation of a joint, open platform to manage digital production solutions, demonstrating BMW's commitment to continue investing capacity and resources in building knowledge about manufacturing steps (Blackman 2019). However, the further technological development progresses, the smaller the specific advantages resulting from internally acquired knowledge become. Once automation reaches a certain level of maturity, the additional revenue that can be generated by manufacturing with specialized knowledge decreases. For example, the quality differences in the production of

184

7  Falling Transaction Costs and the New Network Economy

cotton T-shirts have been so small for years that outsourcing the manufacturing processes is worthwhile. Even a company like Apple can afford to outsource the manufacturing of its complex devices to its Chinese supplier Foxconn. There is thus a perception threshold on the customer side that is extremely relevant for make-or-buy decisions: does the customer notice the difference before and after outsourcing, or not? For textile manufacturers, product differences fell below this perception threshold decades ago. The necessary production steps are much simpler and more interchangeable, and therefore the benefits of internalizing production have long since ceased to exceed the cost savings that result from outsourcing production to cheaper manufacturing regions and producers. This relationship enabled the rise of the tiger economies from the 1960s onwards (Sect. 7.3). This adds an important factor to the decision between insourcing and outsourcing: the additional revenue that the company can generate on the market with products created through specialist knowledge. Two equations can thus be compared:

Profit_1 = revenue of the goods produced with specialised knowledge − production costs (HK) − internal transaction costs (TAK_intern)   (7.3)

Profit_2 = revenue of the goods purchased on the market − purchase price − external transaction costs (TAK_extern)   (7.4)
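As a minimal sketch, the two equations can be compared directly in code; all figures are hypothetical and chosen only to illustrate the comparison:

```python
def profit_make(revenue_specialised, production_costs, internal_transaction_costs):
    """Profit when producing with specialised in-house knowledge (Eq. 7.3)."""
    return revenue_specialised - production_costs - internal_transaction_costs

def profit_buy(revenue_market_goods, purchase_price, external_transaction_costs):
    """Profit when purchasing the goods on the market (Eq. 7.4)."""
    return revenue_market_goods - purchase_price - external_transaction_costs

# Hypothetical figures: the specialised product earns a revenue premium,
# but internal coordination is expensive.
p1 = profit_make(revenue_specialised=120, production_costs=70, internal_transaction_costs=25)
p2 = profit_buy(revenue_market_goods=100, purchase_price=60, external_transaction_costs=10)

print("make" if p1 > p2 else "buy")  # here: buy, the premium no longer covers internal costs
```

As soon as falling external transaction costs push Profit_2 above Profit_1, outsourcing becomes the rational choice.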

The following relationship applies: The more a process can be digitized and automated, the more it can be standardized and the smaller are the differences between manufacturers.

To put it in the language of the fashion industry: the product is "prêt-à-porter", not unique. This increases the possibilities for outsourcing. Conversely, it can be worthwhile to transfer activities back into the company, namely when internal digitization knowledge grows and surpasses that of the suppliers. AI-based software solutions can help to take this path. For example, the Hiro application helps companies to independently optimize their business and IT processes. The company Arago offers this software as a SaaS solution that can be integrated into companies' IT processes via APIs. With Hiro, business and IT processes in companies can be controlled and automated using artificial intelligence. If, for example, a German bank succeeds in independently developing and advancing digital financial advisors, this area can remain within the company as a core function (Arago 2019). Figure 7.8 provides an overview of the digitization and automation services that can be obtained from the public cloud. The cloud providers offer an extensive portfolio of prefabricated IT components, with a focus on solutions from the areas of Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS), which are at the same time globally scalable and are only paid for when they are actually used (Chap. 4). The possible areas of use for these cloud products are shown in the grey and dark grey areas in the matrix in Fig. 7.8. This means that in the future the core tasks of the company (dark grey areas in Fig. 7.7) and the outsourcing options to the public cloud will complement each other: the core tasks remain in the company, while the areas that can be digitized and automated can be outsourced. The following section provides an example of how outsourcing to the public cloud can reduce transaction costs and thus leverage major cost-saving opportunities.

[Figure: matrix spanning "can be digitized – physical activity required" and "automatable – not automatable". The cloud's scope of services (software, platform, IT and infrastructure services) fills the digitizable, automatable quadrant as a large catalog of prefabricated IT components, usable in micro-transactions (µ€), globally scalable, with costs aligned with benefits. Other entries include robotics, physical automation, assembly lines, software development with DevOps and feature teams, sales, contract review, customer consulting, marketing, corporate management, and research & development.]

Fig. 7.8  Exploiting the benefits of the cloud with low transaction costs

7.6 The Impact of the Cloud Revolution on the Transaction Costs of Software Use

The cloud transformation in companies not only reduces the marginal costs for the development and operation of software (Chaps. 4, 5 and 6), but also the transaction costs for the use of software, as it no longer has to be installed, operated and maintained on the company's own servers. Thanks to fast data transfer and the lower transaction costs resulting from automation, entire software suites can be acquired and operated in the cloud via SaaS. These solutions cover many standard areas of enterprise software, from server capacity provisioning to simple spreadsheets. The decision as to which parts of the IT value creation are provided "in-house" and where external services are used is not a simple one. It is not a singular decision with a limited impact; rather, it has a significant influence on the continued existence of the company. Before the management of a company can make this decision, it should clarify fundamental questions about its own core business. Where (in relation to Fig. 7.8) is the dark grey area of the company located?


The starting point for positioning this dark grey rectangle is to answer the following two questions: What are the core tasks and the most important assets of the company? Core assets are all resources available to the company that are essential for the production of the product. Four forms of core assets can be distinguished:

1. Physical resources such as machines, servers, production equipment
2. Financial resources (credit, capital)
3. Human resources (the knowledge of the employees)
4. Intangible resources (licenses, access to commodity markets, patents, algorithms, software)

To determine whether an asset you have identified is truly a core asset in the enterprise value chain, ask the following question: Is the asset valuable to the creation of the product/service? If you answer "yes", you have usually identified a core asset of your business. Core competencies are defined as a company's ability to combine core assets in such a way that a functioning value chain and, ideally, a viable business model are created. The strategic potential of a core competence depends on the extent to which its use can create relevant competitive advantages over competitors in the market. The following characteristics distinguish core competencies (Prahalad and Hamel 2006):

1. Customer relevance: The service/product offered is relevant and perceptible to the customer, and the customer is willing to pay for it. Example: Apple's product design.
2. Durability: It is difficult for other companies to imitate the core competence. Example: the Coca-Cola brand and the "idea" of a secret recipe.
3. Transferability: The competitive advantage is also transferable to other markets. Example: the precision mechanics used by Canon in printers, copiers and digital cameras.
Therefore, when determining internal core competencies, the answer to each of the following questions should be "no":

• Can the company produce the service/product without the core competencies?
• Would customers be just as satisfied with the product or service without the core competency?
• Is the core competence also available to other companies?
• Is the core competence easy to imitate?
• Is the core competence easily transferable?

The combination of core assets and core competencies results in the independence of a company's value chain. Figure 7.9 illustrates this relationship.


[Figure: core assets (physical resources such as machines and servers; financial resources such as assets and creditworthiness; human resources, i.e. the knowledge of employees; intangible resources such as licenses, access to markets and patents) are combined by core competencies (relevant for the customer, durable and difficult to imitate, transferable to other markets) into a functioning value creation and a sustainable business model.]

Fig. 7.9  Core competencies and core assets

[Figure: matrix with relevance to the core business on the x-axis and effort on the y-axis. Buy: office software, web servers for the internet presence, sales processes for managing sales staff, operating one's own computer center. Make (individual case): the graphical user interface for an important company app, generic and specialist applications, operating and further developing a Kubernetes service; learn software skills and leverage platform services of the public cloud to generate competitive advantage. Core tasks with very high effort require new ways: create smart partnerships, change business models, try winner-takes-all strategies, e.g. autonomous driving for car manufacturers.]

Fig. 7.10  The digital make-or-buy decision: Exemplary positioning for a company with physical products and increasing relevance of digital business models

Once the question of core competencies and core assets is settled, the company can begin to address the question of insourcing or outsourcing IT value creation. Again, two simple questions help:

• Does this part of the IT value creation belong to the core competencies of the company?
• How high is the time and financial effort if the IT value creation is performed internally?

Once you have found an answer to both questions, Fig. 7.10 helps you approach the make-or-buy decision. The relevance of the IT service for the core business (core asset or core competence) is plotted on the x-axis of the matrix. The y-axis describes the time and financial expenditure that producing the IT service independently would cause.


The individual entrepreneurial value creation activities that can be carried out using software can now be entered in the matrix. The further an activity moves towards the lower right, the less it belongs to the core business of the company and the less effort is required to create it independently. In this area, companies should opt to purchase the software solution from the cloud, because this is not where the focus of the company's activity lies; nor can the company differentiate itself from its rivals in the market here, because running the software does not provide a competitive advantage. These areas include office software, but also, in an increasing number of cases, the operation of a company's own data center. The upper right quadrant contains those activities whose development requires a great deal of effort but at the same time belongs to the core area of the company. This is where the activities are located that a company must master in order to distinguish itself from its competitors in the future and to be perceived as an independent provider with a diversified product range. This could be, for example, risk calculation software for an insurer, recommendation AI for individualized media consumption on digital music platforms, or independent software development for self-driving cars. If the effort for the development is low and it still promises a significant competitive advantage, there is scope for individual case decisions. For example, it may make sense for a company to create the graphical user interface for an important corporate application itself and thus strengthen its digital competencies in the area of user experience (UX). If the value-adding activity is one of the company's core tasks and, at the same time, development involves a prohibitively high level of effort, then the company may be forced to consider new ways.
Conceivable, for example, are partnerships with companies that are intensively engaged in solving the problem. Examples of such new forms of cooperation are presented in Sect. 7.9. Filling out this matrix in a target-oriented manner is not a trivial task. For a company with its own data center and appropriately qualified employees, it may seem temptingly simple and inexpensive to run the CRM application on its own systems. But how are opportunity costs factored into the calculation? After all, the employees tied up by this administrative process could also be deployed in primary processes. And is the permanent commitment of capital and know-how in the data center taken into account in the cost calculation in a meaningful way? A company has a lot of leeway when it comes to calculating expenses. Looking ahead, it should assume in its calculation that the public cloud providers will continue to lower their prices in the coming years. The three large providers AWS, Microsoft and Google are fighting intensively for market share and still have room for further cost reductions thanks to their economies of scale (Joos and Karlstetter 2018).
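The quadrant logic of Fig. 7.10 can be sketched as a simple classifier. The thresholds and the example scores below are illustrative assumptions, not values from the book:

```python
def make_or_buy(core_relevance: float, effort: float) -> str:
    """Classify an IT activity on the make-or-buy matrix of Fig. 7.10.

    core_relevance: 0 (generic) .. 1 (core business)  -> x-axis
    effort:         0 (low)     .. 1 (prohibitive)    -> y-axis
    The 0.5 and 0.8 thresholds are illustrative assumptions.
    """
    if core_relevance < 0.5:
        return "buy"            # no differentiation possible: source from the cloud
    if effort < 0.8:
        return "make"           # core task with manageable effort: build in-house
    return "find new ways"      # core task, prohibitive effort: partnerships etc.

examples = {
    "office software": (0.1, 0.2),
    "own data center": (0.2, 0.8),
    "UX of a key corporate app": (0.7, 0.3),
    "autonomous driving (car maker)": (0.95, 0.95),
}
for activity, (relevance, effort) in examples.items():
    print(f"{activity}: {make_or_buy(relevance, effort)}")
```

In practice, the scores themselves are the hard part; the opportunity-cost questions above determine where an activity really sits on the two axes.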


7.7 Practical Example: How Software Purchasing Via the Cloud Reduces Transaction Costs

The changes that result from digitization in software purchasing are described below based on the implementation of customer relationship management (CRM) software. To this end, the effort involved in the traditional procurement and implementation of a CRM system such as SAP CRM or Oracle Siebel is presented first. This effort is then compared with the process of implementing a CRM system as a software service (SaaS), examples being Salesforce or Microsoft Dynamics 365.

7.7.1 The Purchasing Process of a Classic CRM System in Your Own Data Center

Customer relationship management systems (CRM systems) form the core of (digital) communication with customers in many companies. Generally, all customer data, communication histories and contact data along the customer lifecycle are stored in these systems. The stored data serves as the basis for the efficient design of customer relationships (Möhring et al. 2018). In the following example, it is assumed that a department of a large company wants to purchase a new CRM system because the existing system does not meet the requirements of the new business. Other departments of the same company are in principle also interested in new CRM functions and reserve the right to join at a later point in time. They are therefore not aiming for an immediate change, but will be involved in the new acquisition process in order to be able to take into account as many requirements as possible for the new system. Other parties involved in the selection process are:

• Marketing and sales experts: They initiated the actual selection process to support their new digital business model with automated sales processes.
• Employees from IT operations: These employees should ensure that the new system can be operated and works cost-effectively in the company's data center.
• Software development staff: The current system required some special development to take into account the particular needs of the current business in the CRM processes. The software developers should bring this know-how into the selection process.
• Enterprise architect: This IT architect has an eye on the entire IT landscape of the company and should ensure that the new CRM system can serve all current interfaces well.
• Purchasing department: This department contributes its experience in the neutral and objective control of purchasing processes.
• Manager: As this is likely to be a major investment decision, control boards will be set up with the relevant decision-makers.


[Figure: classic CRM procurement with high internal transaction costs. Large decision scope: the entire stack from infrastructure to business processes is affected, IT value creation is aligned with maximum demand, and IT employees and resources are tied up for several years. Many stakeholders: sales and marketing, controlling, purchasing and legal, IT architecture, IT project and operation, management, and several competing providers. Higher risks: the decision is based on theoretical analysis and demo data from vendors, many parties with different interests are involved, and manual implementation and operation processes create human-related sources of error. The result is high management and coordination effort across the entire process flow.]

Fig. 7.11  High internal transaction costs in traditional IT

• Project manager: The number of employees and dependencies involved is large, so a project manager is needed.

Following Fig. 7.11, we describe the internal process of the transaction "acquire a new CRM system" as a model.

Information Costs

The transaction begins with the formation of a project team. This team collects all the requirements of the stakeholders and uses them to draw up a set of specifications. Now an initial selection of possible providers is made on the basis of publicly available information, and their suitability is compared with the requirements of the specifications. A second pre-selection of usually three to six providers is made. These are invited to present their respective systems in person. Many of the stakeholders affected internally in the company are present at these presentations. In a further phase, the providers make a compartmentalized system available as a demonstration environment. Users can now try out their processes on a small scale and on the basis of demo data from the provider. Based on the information gathered, the project team prepares a decision document for the management. This lists all the anticipated costs of implementing the CRM system for the next few years. This includes the costs for building and operating all levels of the IT stack, including the infrastructure, middleware, application, and setting up the business processes. The infrastructure is designed for the maximum expected utilization and is usually depreciated over three to five years.

Decision Costs

In sum, this is a large investment that ties up capital and employees for at least the duration of the amortization period. Since the CRM application could only be tried out in a demonstration environment so far and not with the company's actual data and processes, there is


also a higher risk of a wrong decision. There are further risks in the implementation of the system, because none of the people involved in the company has any real experience with the new CRM system. Another decision problem, albeit an internal one, is the involvement of those departments that do not want to introduce the new system immediately. Should their business requirements be taken into account in the decision or not? How many licenses should be purchased and how much IT infrastructure should be maintained? The decision-makers are therefore confronted with high risks: the scope of the decision is large. In order to still be able to make the right choice, further information is obtained. In addition, attempts are made to transfer the risks to the other party: the company tries to commit the CRM provider to a fixed price in the implementation project. The CRM vendor, in turn, tries to agree on a minimum purchase of software licenses in order to refinance its own high sales expenses. In this climate of conflicting interests, lawyers and financial controllers are accordingly often involved in the decision-making process.

Control and Administration Costs

Due to the many stakeholders involved, high control and administration costs arise. The first control authority is usually the project manager: he plans, manages and controls the stakeholders during production. He in turn reports to the project's control board. Depending on the size and constellation of the project, there are several levels (middle hierarchy and top management) and variants (with and without external suppliers). The project manager is usually supported administratively by project management offices (PMO). The employees involved in the production are accompanied by further control and administration processes. In addition to the usual HR processes (such as evaluation, feedback, accounting), this also includes ensuring that employees are utilized with billable tasks.
Once the system has been successfully implemented, it starts operating (Sect. 4.2). Control and administration tasks are taken over by the "Service Manager" and "Service Delivery Manager".

Costs for Further Development

Costs for changes to the existing system arise in particular from

• failures and errors that occur during operation
• necessary system updates on all levels of the IT stack
• additional infrastructure resources
• requests for new content functions

All of these efforts must be borne by the company itself. Exceptions are those content-related functionalities that are added by the CRM provider as part of its regular updates.


[Figure: CRM as a software service with low external transaction costs. Few stakeholders are involved: a feature team consisting of a developer, a sales expert, operations employees and a cloud architect. Testing takes place with real data, management processes are automated, the IT complexity is hidden behind the services (Service A, B, C), and the investment risk is low. The decision scope is small, fewer people are involved, and the risks are smaller.]

Fig. 7.12  Low external transaction costs in the use of software services

In summary, the acquisition of a CRM system can become a company-internal "never-ending story" on which numerous departments work for years without finding a satisfactory solution for the company. The internal transaction costs of implementing large software applications in a company's own traditional data center are high. This is especially true when internal stakeholder groups are not yet clear about their future requirements.

7.7.2 Use of Software Services (SaaS) for CRM

If a company decides to use CRM as a software service, the transaction costs change relevantly (see Fig. 7.12). In this case, too, the initiative to renew the CRM software comes from the same department, but the process differs significantly. Often, it starts with a free trial of the Software-as-a-Service (SaaS) offering on the web. Many providers of such software services offer the first month free of charge. Once the software has proven itself with the first real data, it is temptingly easy to switch to a paid variant: the entry prices are in a range that many team and department heads can still approve without involving top management, purchasing and the legal department.1 This is how "shadow IT" often arises (Herrmann 2017) – that is, the autonomous procurement and development of IT by individual employees or departments without the involvement of the IT department (Rentrop and Zimmermann 2015). This section describes the recommended, official way of the transaction "use of a CRM SaaS".

1 Market leader Salesforce offers (as of June 2019) a basic system for EUR 300 per user per year with an annual cancellation period.


Initiation and Information Costs

If the department goes the official route of software procurement for cloud services, it is important to first find out which internal regulations apply. This applies in particular to compliance guidelines on the permitted locations for data storage; as a rule, the use of European cloud services is possible without any problems. The upper decision limits for the procurement of software indicate whether and to what extent the purchasing, controlling and legal departments need to be involved. The actual software selection usually begins with test use of the CRM systems by the main users themselves. It is then a matter of understanding the pricing models: Are costs dependent on the number of users? Or rather on transactions? If so, exactly which transactions, and how often are they likely to occur? Are there subdivisions by specialist area, i.e. are additional costs due for marketing functions? At this point, the business department should calculate different scenarios. However, a simple rule of thumb of the cloud will continue to hold true in most cases: the costs correlate strongly with the actual use of the system. In the case of software services, the synergies with other departments are limited, especially if they themselves do not yet know exactly whether and how they want to use the service. High volume discounts can rarely be negotiated in the cloud, and the required infrastructure is only ramped up with actual use anyway. If companies already use feature teams or DevOps, these teams are usually free to select their specific software tools anyway. Another relevant question must not be forgotten in the information phase: How does the new service fit into the existing IT architecture? The new CRM service will generate important data; how exactly can the company access it and merge it with other relevant information?
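The scenario calculation mentioned above can be sketched as follows. The per-user price corresponds to the entry price cited in the footnote; the per-transaction model and all usage figures are hypothetical assumptions:

```python
def annual_cost_per_user(users, price_per_user_year):
    """Seat-based pricing: cost scales with the number of licensed users."""
    return users * price_per_user_year

def annual_cost_per_transaction(transactions_per_month, price_per_transaction):
    """Usage-based pricing: cost scales with the transactions actually run."""
    return transactions_per_month * 12 * price_per_transaction

# Scenario table for a CRM SaaS evaluation. EUR 300 per user per year is the
# entry price cited in the footnote; EUR 0.01 per transaction is hypothetical.
for users in (10, 50, 200):
    seats = annual_cost_per_user(users, 300)
    usage = annual_cost_per_transaction(users * 400, 0.01)
    print(f"{users:>4} users: seat model EUR {seats}, transaction model EUR {usage:.0f}")
```

Running a few such scenarios makes the rule of thumb visible: under both models, costs grow with actual use rather than with an up-front sizing decision.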
What dependencies will this service create, and how can these be reduced through smart architectural decisions? Cloud service providers differ substantially in this regard. Cloud solution architects should create appropriate decision templates here.

Decision Costs

The decision is comparatively simple for software services. It essentially concerns three points:

• Professional scope of functions: the actual features by which the CRM providers differentiate themselves
• Pricing model: the structure of the prices, which determines how the total costs develop with usage
• IT architecture: how well the service fits into the IT landscape and what technical flexibility (interfaces, dependencies) will result in the long term

The overall scope of decision-making is smaller, as there are neither hardware investments to be depreciated over many years nor larger implementation projects at the infrastructure, middleware and application levels. The risks are also lower overall because the services


can be tried out with real data and on a small scale before larger quantities have to be accepted. The smaller number of stakeholder groups involved reduces misunderstandings and dependencies, and the automation of implementation at the IT level lowers project risks.

Purchasing and Control Costs

The purchase costs – i.e. the prices of the software services – are not transaction costs, but their control is. Since the service provision of the SaaS is completely automated and software-defined, the control can also be completely automated in the form of a monitoring tool. This can monitor the entire service, various parts of the service or specific process flows as required and, if necessary, send alerts or trigger other actions.

Enforcement Costs

Disputes between companies and software providers are relatively rare when using SaaS. On the one hand, this is because most software services have been developed in a resilient manner (Chap. 6) and thus remain functional even if the underlying infrastructure is unavailable for a certain period of time. On the other hand, providers reimburse – if at all – only that portion of the costs during which the service was not available. In this case, it is hardly worthwhile to incur high expenses for the enforcement of one's own claims.

Adjustment Costs

The individual company has effectively no possibility to make adjustments to the delivered service. The software service it receives was generated automatically and was already in use by the company before payment. It has no insight into or individual power of disposal over the IT encapsulated behind the APIs or the GUI. This means that no post-contractual adaptation costs can arise. Although companies can easily add their own functions to most software services, these do not fall under the scope of services owed by the provider, and therefore no external transaction costs are incurred there.
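The automated control of purchasing described above can be sketched as a minimal monitoring loop. The health-check URL is hypothetical, and real deployments would typically rely on the cloud provider's own monitoring and alerting services rather than hand-rolled code:

```python
import urllib.request

def check_service(url, timeout=5):
    """Return True if the service endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # DNS failure, timeout, connection refused, HTTP error
        return False

def monitor(probe, checks=3, fail_threshold=2, alert=print):
    """Run `probe` `checks` times; call `alert` after consecutive failures."""
    failures = 0
    for _ in range(checks):
        if probe():
            failures = 0
        else:
            failures += 1
            if failures >= fail_threshold:
                alert(f"ALERT: service failed {failures} consecutive checks")

# Example with a hypothetical endpoint; in production this would run on a
# schedule and `alert` would page the responsible team:
# monitor(lambda: check_service("https://crm.example.com/health"))
```

The `probe` indirection keeps the loop testable and lets the same control logic watch the entire service or individual process flows.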
In conclusion, it can be said that the external transaction costs of using a CRM software service are significantly lower than the internal transaction costs of purchasing and operating a classic CRM system. The main reason for this is the large scope of the decision in the classic approach as well as the greater uncertainty in the decision itself. In sum, this leads to the involvement of more people with partially diverging goals, which then generate more effort and costs over the entire period of the transaction.
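This conclusion can be illustrated with a back-of-the-envelope comparison; every figure below is hypothetical and serves only to show how the transaction cost categories from the two subsections add up:

```python
# Hypothetical transaction cost positions (EUR thousands) for the two
# procurement paths described above; the figures are illustrative only.
classic_crm = {
    "information (specifications, vendor presentations)": 120,
    "decision (boards, legal, controlling)": 80,
    "control & administration (project manager, PMO)": 150,
    "further development (updates, fixes)": 100,
}
saas_crm = {
    "information (trial use, pricing scenarios)": 15,
    "decision (features, pricing, architecture)": 10,
    "control (automated monitoring)": 5,
    "enforcement & adjustment": 0,
}

def total(costs):
    return sum(costs.values())

print(f"classic: {total(classic_crm)} kEUR, SaaS: {total(saas_crm)} kEUR")
```

Whatever the actual numbers in a given company, the structural point holds: the classic path accumulates costs in every category, while the SaaS path automates most of them away.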


7.8 Towards the Network Economy

Paying attention to transaction costs in the context of IT value creation is of fundamental importance for companies. But what applies to each individual company also has the power to change the relationships between companies fundamentally. The possibilities described at the beginning of the chapter for reducing transaction costs through digitization lead, in their entirety, to a new way of doing business (see Fig. 7.13). The high transaction costs of the non-digital economy were the glue that held value chains together. They enabled companies to find a stable position within the value chain and, within this framework, to manufacture products and services themselves instead of buying them. With digitization, external transaction costs are falling faster than internal transaction costs. As a result, outsourcing activities is becoming worthwhile in more and more areas, dissolving the glue that has held traditional value chains together. Integrators can increasingly become orchestrators, which in turn employ layer players for the individual stages of value creation. As a result, network structures emerge that are fundamentally different from hierarchically organized internal value chains. Ever new combination possibilities will determine the way companies work in the future. Product A can be created with supplier B and the software offering from the Google Cloud, for example. Product B, on the other hand, uses logistics company C and is thus independent of the logistics provider with whom the company has always worked in the past. A company that wants to develop its own digital upload filter for the comment functions on its corporate website, for example, can choose to use the basic service of a cloud provider and only program the user interface and user management itself. It can also create an appropriate indexer itself and market it to other companies that have the same need.
[Figure: a digital enterprise outsources extensively in a network structure – professional services for the core business, business services for support processes (payment processing, identity management, travel expense management, office and collaboration software, CRM system), and general IT services (computing power, storage, containers, video indexer). The company focuses on orchestrating external services and developing a few core functions (software, architecture, data, integration, interface, analytics), while layer players deliver specific parts of the value creation for almost all market players.]

Fig. 7.13  On the way to the network economy: exemplary representation of the dissolution of classic value chains through falling transaction costs in the digital economy

7  Falling Transaction Costs and the New Network Economy

In turn, all participants in the value network can make these decisions for themselves. Thanks to the digital sales structures of zero marginal cost business models, such an offer can be profitable for the offering company even at small purchase quantities. A new form of economic cooperation is emerging: a network of cooperations is being spun across the companies, which can be reknitted and broken up as needed. In this scenario, the companies act as orchestrators that focus on the implementation of their own core competencies. For specific tasks, the different offerings of the layer players (agencies, hyperscalers) can be combined. For each new product, a company therefore searches for the right combination of its own activities (insourcing) and tasks that can be solved externally (outsourcing). In the long term, digital networks form around the data pools of the large companies and the infrastructures of the cloud hyperscalers and layer players (Toth 2015).

Enemies Become Frenemies

The new form of the network economy has relevant effects on how companies cooperate. While companies from the same industry still see each other as competitors from whom market share must be "wrested", the network economy is gradually changing the rules. The fragmentation of activities in the digital space results in different areas of application and roles for companies. For example, a company that is a major competitor in industry A may offer services in industry B that can be integrated into one's own processes. A second company, previously considered a competitor, may become a customer. Former competitors become frenemies – companies that are both at the same time: friend and foe.

This trend has long since arrived in the real economy. More and more companies are recognizing the value of increased external cooperation. Here are a few examples:

• As an e-commerce provider, Otto uses the cloud provider AWS to host its online shops. AWS is a subsidiary of Amazon – its fiercest competitor in online shopping (Aberle 2018).
• Netflix uses the AWS cloud for computing power and storage, while video streaming is delivered through its own content delivery network (Hahn 2016). The parent company of AWS is one of Netflix's fiercest competitors in video streaming with Amazon Prime Video.
• Microsoft, SAP, and Adobe formed the Open Data Initiative in 2018 (Nickel 2018). It is intended to simplify the management of data between a customer's different applications. Microsoft and SAP compete in some areas of their value proposition, including CRM systems, analytics applications and databases.
• Three competitors in the cloud business – Google, Microsoft and IBM – have joined forces to form the so-called "Bandwidth Alliance". This enables customers to transfer data between the respective clouds more cost-effectively (FAZ 2018).
• BMW and Daimler are cooperating on the development of self-driving cars, joining forces against Google, among others (manager magazin 2019). At the same time, both allow the use of Google Assistant in their cars.


[Figure: on the left, traditional value creation – an integrator with high transaction costs (TAC) keeps support processes (A–D) and core processes (1–5) in-house; on the right, network-type value creation – a digital enterprise acts as orchestrator with low transaction costs and sources support and core activities from a network of external providers.]
Fig. 7.14  From traditional to network value creation

Many examples of the trend towards optimizing value networks with the help of frenemies come from the American West Coast. In Europe, the optimization of processes along the classic, linear value chain dominated for a long time (and still does). Now that internal processes are becoming more and more interchangeable, it is up to managers to align their companies with new forms of external collaboration. Classic insourcing scenarios and the optimization of one’s own value chain are turning into outsourcing scenarios in which the aim is to maintain one’s own network (see Fig. 7.14).

7.9 Conclusion

The recombination possibilities in a network structure are enormous. In a process with three sub-steps and 20 independent suppliers per process step, 8000 network combinations are already possible. Companies now have not only the possibility but the obligation to restructure internal processes when developing new products and services. For each individual process step, they can choose whether to perform the activity themselves or outsource it to another provider. Digitization points in a clear direction: more and more processes can be automated and digitized using software solutions. This chain of reasoning is summarized in Fig. 7.15:

• Due to falling transaction costs, IT processes that are not part of the company's core competencies can increasingly be outsourced.
• This enables companies to move away from the complete vertical integration of their own value creation (integrators) and to devote themselves fully to their actual core competencies.
• As a result, a new form of global and digital collaboration between companies is emerging in network structures, which are gradually replacing the classic vertically integrated companies of the industrial age.
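The combinatorics behind the figure of 8000 can be checked in a few lines of Python – a minimal sketch using the chapter's own example values (three sub-steps, 20 suppliers each):

```python
# Number of possible network configurations when each process step can be
# assigned independently to any one of its candidate suppliers.

def network_combinations(steps: int, suppliers_per_step: int) -> int:
    """Each of `steps` process steps is sourced independently from one of
    `suppliers_per_step` providers, so the options multiply."""
    return suppliers_per_step ** steps

# The chapter's example: three sub-steps, 20 independent suppliers each.
print(network_combinations(3, 20))   # 8000

# The space grows exponentially with the number of process steps:
for steps in range(1, 6):
    print(steps, network_combinations(steps, 20))
```

The exponential growth is the point: every additional process step that can be sourced externally multiplies, rather than adds to, the number of possible network configurations.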

[Figure: IT value creation is increasingly automated with the help of cloud technologies – from dedicated hardware via IaaS and PaaS to SaaS, a large catalog of ready-to-use IT components behind APIs. Cloud-based IT that can be ordered via APIs significantly reduces transaction costs: initiation and information costs, agreement costs, control costs, enforcement costs, purchase costs, and adjustment costs. The lower the transaction costs, the more attractive outsourcing becomes: MAKE becomes BUY, as companies can buy cheaper and better than they can produce themselves. Integrators (companies that cover large parts of the value chain) become orchestrators (companies that orchestrate their network). The economy is moving toward a network economy in which companies focus strongly on their core and orchestrate their network-based value chain with complementary but also competing providers of cloud services (frenemies).]
Fig. 7.15  Transaction costs falling due to cloud technologies are changing the corporate world
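The make-or-buy logic summarized in Fig. 7.15 can be reduced to a single comparison: an activity is bought externally when the external price plus the external transaction costs undercuts the internal production cost plus the internal transaction costs. The sketch below illustrates this threshold; all numbers are invented for illustration.

```python
# Hedged sketch of the make-or-buy comparison: outsourcing wins as soon as
# falling external transaction costs tip the total below in-house production.
# All figures are illustrative.

def make_or_buy(internal_production: float, internal_tac: float,
                external_price: float, external_tac: float) -> str:
    """Return 'buy' if sourcing externally is cheaper overall, else 'make'."""
    make_total = internal_production + internal_tac
    buy_total = external_price + external_tac
    return "buy" if buy_total < make_total else "make"

# Before the cloud: high external transaction costs keep the activity in-house.
print(make_or_buy(100, 20, 90, 50))   # make

# With cloud APIs, external transaction costs collapse and buying wins,
# even though the external price itself is unchanged.
print(make_or_buy(100, 20, 90, 5))    # buy
```

Note that in the second call only the external transaction costs changed – exactly the effect the chapter attributes to API-based cloud services.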

This also changes the rules of the game in the economy, because the make-or-buy decision has to be rethought. Processes that are not part of the core business can be outsourced to a much greater extent than before. Thus, in the future, all processes that can be digitized and/or automated will be put to the test.

At the same time, the ability to create one's own software applications is becoming the crucial question for companies. In order to be able to develop their own applications, it is necessary to restructure the company and its processes. This, in turn, requires a cloud transformation in the enterprise. What such a transformation looks like and how it is implemented in practice is presented in Chap. 8.

References

Aberle, Sebastian (2018): Otto goes AWS – Teil 2, published in: dev.otto.de, https://dev.otto.de/2018/12/03/otto-goes-aws-teil-2/.
Arago (2019): Künstliche Intelligenz – KI mit arago HIRO, published in: sysback.de, https://www.sysback.de/portfolio/it-automatisierung/kuenstliche-intelligenz-mit-arago-hiro.html, retrieved June 2019.
Blackman, James (2019): Microsoft and BMW corral industry around open platform for digital factory solutions, published in: enterpriseiotinsights.com, https://enterpriseiotinsights.com/20190405/channels/news/microsoft-and-bmw-corral-industry, retrieved June 2019.
Brandt, Matthias (2019): Deutschland bleibt Glasfaser-Entwicklungsland, published in: statista.de, https://de.statista.com/infografik/3553/anteil-von-glasfaseranschluessen-in-ausgewaehlten-laendern/.
Bundesbank (2016): Bedeutung und Wirkung des Hochfrequenzhandels am deutschen Kapitalmarkt, published in: Deutsche Bundesbank Monatsbericht Oktober 2016, pp. 37–61, https://www.bundesbank.de/resource/blob/665078/544876d8a09dd548ed15bd74ce14281f/mL/2016-10-hochfrequenzhandel-data.pdf.
Coase, Ronald (1937): The Nature of the Firm, published in: Economica, vol. 4, pp. 386–405, London.
Desjardins, Jeff (2017): The Evolution of Standard Oil, published in: visualcapitalist.com, https://www.visualcapitalist.com/chart-evolutionstandard-oil/.
Dombrowski, Uwe and Tim Mielke (eds.) (2015): Ganzheitliche Produktionssysteme – Aktueller Stand und zukünftige Entwicklungen, Springer Vieweg, Berlin, Heidelberg.
Donath, Andreas (2018): Tesla hat Karosseriefertigung zu 95 Prozent automatisiert, published in: golem.de, https://www.golem.de/news/autofabrik-tesla-hat-karosseriefertigung-zu-95-prozent-automatisiert-1806-134870.html.
Eisenkrämer, Sven (2018): Deutsche Industrie liegt bei Automatisierung weit vorne, published in: springerprofessional.de, https://www.springerprofessional.de/automatisierung/industrieroboter/deutsche-industrie-liegt-bei-automatisierung-weit-vorne/15446374.
Eurostat (2017): Unternehmen, die einen Breitbandzugang haben, published in: ec.europa.eu, https://ec.europa.eu/eurostat/tgm/table.do?tab=table&tableSelection=1&labeling=labels&footnotes=yes&language=de&pcode=tin00090&plugin=0, retrieved June 2019.
Evans, Philip (2013): Wie Daten die Wirtschaft verändern, TED Talk, published in: ted.com, https://www.ted.com/talks/philip_evans_how_data_will_transform_business?source=twitter&language=de, retrieved June 2019.
FAZ (2018): Microsoft, Google und IBM verbünden sich gegen Amazon, published in: faz.net, https://www.faz.net/aktuell/wirtschaft/diginomics/microsoft-google-und-ibm-verbuenden-sich-gegen-amazon-15808145.html.
Frank, Roland (2018): Beratung ist der neue Vertrieb – Wie die Digitalisierung den Sales-Prozess verändert, published in: cloud-blog.arvato.com, https://cloud-blog.arvato.com/beratung-ist-der-neue-vertrieb-wie-die-digitalisierung-den-sales-prozess-veraendert/.
Frey, Carl Benedikt and Michael A. Osborne (2013): The Future of Employment: How Susceptible Are Jobs to Computerisation?, published in: Technological Forecasting and Social Change, vol. 114, Amsterdam.
Fründt, Steffen (2009): Wie deutsche Unternehmen Jobs ins Ausland verlagern, published in: welt.de, https://www.welt.de/wirtschaft/article3505965/Wie-deutsche-Konzerne-Jobs-ins-Ausland-verlagern.html.
Gassmann, Oliver, Karoline Frankenberger and Michaela Csik (2013): Geschäftsmodelle entwickeln – 55 innovative Konzepte mit dem St. Galler Business Model Navigator, Carl Hanser Verlag, München.
Gildhorn, Kai (2019): Pfefferhandel im Mittelalter, published in: schwarzerpfeffer.de, https://www.schwarzerpfeffer.de/pfefferhandel-im-mittelalter/.
Glüsing, Jens (2009): Der Fluch des Silbers, published in: spiegel.de, https://www.spiegel.de/spiegelgeschichte/a-638682.html.
Greene, Tristan (2017): Google's AI can create better machine-learning code than the researchers who made it, published in: thenextweb.com, https://thenextweb.com/artificial-intelligence/2017/10/16/googles-ai-can-create-better-machine-learning-code-than-the-researchers-who-made-it/, retrieved June 2019.
Hahn, Dave (2016): How Netflix Thinks of DevOps, published in: youtube.com, https://www.youtube.com/watch?v=UTKIT6STSVM.
Joos, Thomas and Florian Karlstetter (2018): Microsoft Azure versus Amazon Web Services, published in: cloudcomputing-insider.de, https://www.cloudcomputing-insider.de/microsoft-azure-versus-amazon-web-services-a-724581/.
Herrmann, Wolfgang (2017): Die unsichtbaren Risiken der Schatten-IT, published in: computerwoche.de, https://www.computerwoche.de/a/die-unsichtbaren-risiken-der-schatten-it,3331969, retrieved June 2019.
Kirchberg, Dennis (2007): Der Aufstieg der Tigerstaaten im 20. Jahrhundert: Eine historische Analyse, Akademiker Verlag, Saarbrücken.
Littmann, Saskia (2013): Wirtschaftsnobelpreisträger Ronald Coase ist gestorben, published in: wiwo.de, https://www.wiwo.de/politik/konjunktur/oekonom-wirtschaftsnobelpreistraeger-ronald-coase-ist-gestorben/8731904.html.
Möhring, Michael, Rainer Schmidt and Barbara Keller (2018): CRM in der Public Cloud: Praxisorientierte Grundlagen und Entscheidungsunterstützung, Springer Gabler, Wiesbaden.
manager magazin (2019): Daimler und BMW bilden Allianz bei Roboterautos, published in: manager-magazin.de, https://www.manager-magazin.de/unternehmen/autoindustrie/daimler-und-bmw-bilden-allianz-bei-roboterautos-und-autonomen-fahren-a-1255576.html.
Nickel, Oliver (2018): Adobe, Microsoft und SAP vereinheitlichen Kundendaten, published in: golem.de, https://www.golem.de/news/azure-adobe-microsoft-und-sap-teilen-ihre-kundendaten-1809-136758.html.
Prahalad, C.K. and Gary Hamel (2006): The Core Competence of the Corporation, published in: Hahn, D. and B. Taylor (eds.): Strategische Unternehmungsplanung – Strategische Unternehmungsführung, pp. 275–292, Springer, Berlin, Heidelberg.
Pröve, Ralf (1995): Stehendes Heer und städtische Gesellschaft im 18. Jahrhundert: Göttingen und seine Militärbevölkerung 1713–1756, Beiträge zur Militärgeschichte Band 47, Oldenbourg, München.
Putschögl, Monika (1993): Karriere eines grünen Früchtchens, published in: zeit.de, https://www.zeit.de/1993/50/karriere-eines-gruenen-fruechtchens.
Reinhardt, André (2019): Breitbandausbau-Historie: Festnetz von 2009 bis heute, published in: teltarif.de, https://www.teltarif.de/bandbreite-festnetz-geschwindigkeit-historie/news/75604.html.
Rentrop, Christopher and Stephan Zimmermann (2015): Schatten-IT, published in: gi.de, https://gi.de/informatiklexikon/schatten-it/, retrieved June 2019.
Segna, Ulrich (2018): Bucheffekten: Ein rechtsvergleichender Beitrag zur Reform des deutschen Depotrechts, Mohr Siebeck, Tübingen.
Schanze, Robert (2018): 5G: Wann kommt der LTE-Nachfolger? Und wie schnell ist er?, published in: giga.de, https://www.giga.de/extra/5g/.
Telekom (2019): DeutschlandLAN Connect IP – Das Tor zur Digitalisierung, published in: geschaeftskunden.telekom.de, https://geschaeftskunden.telekom.de/startseite/festnetz-internet/tarife/internet-telefonie/312348/deutschlandlan-connect-ip.html.
Toth, Stefan (2015): Die umgekehrte Architekturbewertung, published in: embarc.de, https://www.embarc.de/netflix-architektur-blogserie-teil-1-die-umgekehrte-architekturbewertung/.
Willcocks, Leslie, Mary Lacity and Andrew Craig (2015): Robot Process Automation at Xchanging, published in: The Outsourcing Unit Working Research Paper Series, Paper 15/03, http://www.xchanging.com/system/files/dedicated-downloads/robotic-process-automation.pdf, retrieved June 2019.
Wile, Rob (2014): A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors – Here's What It Actually Does, published in: businessinsider.com, https://www.businessinsider.com/vital-named-to-board-2014-5?IR=T.
Wirtz, Bernd M. (2012): Mergers & Acquisitions Management: Strategie und Organisation von Unternehmenszusammenschlüssen, Wiesbaden.

8  The Cloud Transformation

Abstract

The first step in reducing a company's marginal and transaction costs to the level of those competitors who are already working with the new technologies is to migrate the company's classic IT infrastructure to the cloud (optimize infrastructure). Competitiveness is strengthened further if the company migrates its applications to cloud-native software architectures and modernizes its software development processes. The next step is to improve the substance of the business model (modernize operating model) through data-based analyses of actual customer needs. In short cycles, the responsible teams deliver better solutions. For this purpose, the teams can draw on the large portfolio of ready-made components in the cloud, including services for machine learning or the Internet of Things. The most challenging step is to change the company's entire business portfolio (transform portfolio). Usually this happens involuntarily, when external disruptors threaten the core business. A transformation is voluntary if the company sees a special opportunity in the new business model and is willing to take on the major transformation risks.

8.1 Scientific Models for Digital Transformation

Three economic models play a major role in the practice of digital transformation. In the already discussed book "The Innovator's Dilemma", Clayton Christensen deals with the question of why successful companies repeatedly fail to adopt new technologies and thus to develop their own business models further (Thrasyvoulou 2014). The second significant approach is the economic model of the Three Horizons of Growth by McKinsey (Coley 2009). It describes how different horizons of thought and action coexist in companies, simultaneously competing with each other for resources and yet depending on each other. Finally, Geoffrey Moore takes up both approaches and demonstrates how the digital transformation can succeed with the "Zone to Win" model he developed (Moore 2015).

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023; R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_8

[Figure: business value over time – Horizon 1: maintain the profitable core business; Horizon 2: grow relevantly with new businesses; Horizon 3: develop new business ideas.]

Fig. 8.1  Three Horizons of Growth according to McKinsey and Baghai, Coley & White

8.1.1 McKinsey's Three Horizons of Growth

In their book "The Alchemy of Growth", former McKinsey consultants Mehrdad Baghai, Stephen Coley and David White pose the question of how companies can grow sustainably and profitably (Baghai et al. 2000). They analyze which conditions must be met for growth and which model strategies can lead a company to success. The core of the model is the division of a company's planning periods into three so-called "horizons" (see Fig. 8.1).

Horizon 1: Maintain the Profitable Core Business (12–18 Months)
Employees, departments and processes attributed to the first horizon are concerned with maintaining and developing the current business. They focus on achieving the current month's or quarter's revenue and cost targets, as well as on efficiency, cost reduction, and incremental improvement of the proven business model. Typically, the growth prospects and profitability of this area diminish over time, and the old business models need to be replaced with new ones.

Horizon 2: Relevant Growth with a New Business (18–36 Months)
The business from Horizon 2 is responsible for the economic future; the focus is on growth, and processes and organization are agile. The employees act as intrapreneurs who want to capture a large share of the growing market as quickly as possible. Businesses in this area still need to invest heavily in the beginning. The task of these divisions is to grow strongly in terms of sales and become profitable after three years at the latest. The funds are not spent on research and development, but on marketing, sales and partner management – in other words, on scaling the new business model (Hill 2017).

Horizon 3: Develop New Business Ideas (> 36 Months)
The third horizon is about creating long-term options for the company with new ideas. Many of these ideas are discarded along the way, but some of the ideas and concepts can become the new foundation of the company in 5 to 12 years. Employees in this field are creative, curious, agile and not very rule-oriented. This distinguishes them from the "intrapreneurs" in Horizon 2 and the "doers" in Horizon 1. Investment funds for this area are limited: with a defined budget, technologies and concepts are tested, discarded or developed further. If the use of the new technologies and concepts yields a business model that solves a customer problem, is technically feasible and is financially suitable as a growth driver, it is transferred to Horizon 2.

The three horizons model helps to make conscious decisions about resource and task allocation (Ryan 2018). Geoffrey Moore explained this memorably in his presentation at the 2016 GOTO conference in London (Moore 2016). He describes how different people operate in each horizon and how important it is for the functioning of a company that these silos are carefully demarcated from each other in day-to-day business. The McKinsey authors also emphasize the reflex in organizations to cut funding for Horizons 2 and 3 in favor of short-term increases in profitability. This instinct, they say, can only be overcome through intense attention from senior management – especially the CEO.

8.1.2 Zone to Win by Geoffrey Moore

In his book "Zone to Win", Geoffrey Moore deals with the question of how companies should best position themselves in competition in the age of disruptive innovations (Moore 2015). He combines McKinsey's horizon model with Christensen's dilemma analysis and divides the affected company into a matrix with four zones. On the x-axis, he distinguishes between disruptive and sustaining innovations (Christensen 1997), and on the y-axis between value-creating and supporting divisions (see Fig. 8.2).

[Figure: a 2 × 2 matrix – value creating vs. supportive on one axis, disruptive vs. sustaining innovation on the other. The transformation zone (Horizon 2) and the performance zone (Horizon 1) are value creating; the incubation zone (Horizon 3) and the productivity zone (Horizon 1) are supportive. The innovator's dilemma sits between the transformation and performance zones.]

Fig. 8.2  Zone to win according to Geoffrey Moore

The performance zone corresponds to Horizon 1 of the McKinsey model. This is where most sales are generated and by far the largest share of profits. It is supported by the productivity zone, which uses non-disruptive innovations to incrementally improve products and optimize manufacturing costs. As a current example, consider Apple's iPhone: it generates a large part of Apple's revenue and profit in the performance zone (Weddeling 2018) and is incrementally optimized with the help of the productivity zone (product development, purchasing, IT department) – i.e. it gets bigger screens, more cameras and more memory.

In the incubation zone, corresponding to Horizon 3, the innovations of the long-term future are being developed. At Microsoft and Google, this includes the quantum computer (Krauter 2018), at Apple the self-driving car (Hawkins 2019). These research and development departments do not generate direct economic benefits in the sense of increased sales or profits. They observe technologies, test them, develop them further, test them again and also discard them again. A famous and very successful example of such a Horizon 3 team is Xerox PARC (DeCarlo 2018). Founded in 1970, it was involved in the development of technologies that are now an integral part of everyday life: Ethernet, graphical user interfaces, laser printers, and touch-screen mobile devices. Interestingly, companies other than Xerox benefited from the research institute's successes; its ideas and patents became an important source of inspiration for companies such as Apple and Microsoft. Xerox PARC was spun off as a subsidiary of Xerox in 2002 and is still a cooperation partner of major corporations and companies today (Dernbach 2019).

The transformation zone is oriented towards Horizon 2 of the McKinsey model. There, the focus is on products that should generate relevant sales growth in the near future – in a period of one to three years – and form the basis for the company's future S-curve (see Innovator's Dilemma in Chap. 1). Geoffrey Moore names a target value of 10% of the company's total sales that the new business units should achieve. Only if this value is reached do all stakeholder groups recognize the seriousness of the new business and support it accordingly (Moore 2015). According to the model, the transformation zone should only exist temporarily. Moore suggests that companies only actually need such a zone in three out of ten fiscal years (Moore 2016). In the remaining seven years, a company can operate entirely without a transformation zone, merely with the other three zones. A well-known transformation example is provided by Microsoft – which was advised in this phase by Geoffrey Moore himself – with its cloud products Azure and Office 365. While the share of cloud services in the overall business was barely measurable in 2011 (Dediu 2011), it already accounted for almost a third of revenues in the second quarter of 2018 (Protalinski 2019).


Moore recommends targeted management of the four zones. In “good years”, the transformation zone should not exist at all and the other three zones should be managed separately. This separation is the most efficient way to profitably manage the regular business of a company. The price for this is increased effort in times of transformation, when these silos have to be overcome and reassembled into a profitable company. This requires the special attention of the CEO and a joint effort of the entire organization.

8.2 The Three Levels of Cloud Transformation

Not every company needs to completely transform its business model in order to take advantage of the next wave of disruptive technologies with its own products and services. It may also be sufficient, for example, to "only" convert one's own infrastructure and operating models to the new technologies. To explain this statement, Geoffrey Moore distinguishes three different scenarios (see Fig. 8.3): in scenario 1, only the company's infrastructure is affected; in scenario 2, the company's operating model is transformed; in scenario 3, the entire business model is affected by the change.

If production, product and sales in a business model can be fully digitized, then scenario 3 comes into effect. Then the entire market is disrupted, and the company must find and enter completely new markets and businesses. This is by far the most costly and risky type of transformation. Here, cloud transformation is only a relatively small part of the challenge: comparable to Microsoft, the company has to reinvent itself (Nadella et al. 2017).

If the business model remains largely stable in its interaction with customers, suppliers and the organization, only the infrastructure model needs to be changed (scenario 1). This affects the IT department in almost all areas and is accordingly associated with major changes there, while these are absent in the other subareas and in the organization of the company. An example of an infrastructure change is the data center migration announced by TUI at the AWS Summit 2019 in Berlin. This goes hand in hand with the general trend of large companies reducing the number and scale of their data centers (Hushon 2018; Braddy 2018).

If cloud-based technologies help to qualitatively improve a business model without destroying it, then this is a change in the operating model (scenario 2). Data analytics can lead to better business decisions, machine learning can create individual services from standardized mass services, and the use of mobile devices can make services more accessible or easier to use.

[Figure: three levels of disruption, each enabled by cloud technologies (IaaS, PaaS, serverless, SaaS, machine learning, Internet of Things, data analytics, social media, mobile) – business model: new markets and competitive advantages, transform the portfolio of business models (example: from operating system manufacturer to cloud provider); operating model: productivity and effectiveness, modernize value creation (example: ordering a cab more easily with a mobile app); infrastructure model: efficiency and price/performance, optimize back-end systems (example: migrating the SAP system to the cloud).]

Fig. 8.3  Levels of disruption according to Geoffrey Moore

The Cloud Strategy Team as Transformation Enabler

In preparation for the cloud transformation, some considerations must be made in order to set up the implementation project in a promising way (see Fig. 8.4). The goal of the digital transformation on all three levels is to convert manual or partially automated activities to fully automated IT. At the same time, this transformation is a "means to an end", because it is intended to make new business models possible. Tasks that were previously performed by individuals or teams are now accomplished by software. This change can lead to individual roles becoming superfluous – but usually entire teams are affected, with their processes, procedure models and organization. Only if these changes are also implemented does the cloud actually generate benefits. If, instead, the cloud is introduced without replacing the old processes and functions, the complexity of the overall system will increase, and with it its costs (Moore 2016).

If roles, teams, processes, parts of the organization and legacy systems become redundant and cannot be continued in parallel with the new approach, this is a comprehensive corporate change. The decisions required for this can usually not be brought about democratically or reached by consensus with all stakeholders. The leadership of such a project must lie with those executives who have the change mandate for all affected areas. If it is a transformation of the infrastructure model, the CIO may be sufficient.
If the changes affect larger parts of the organization, an appropriately higher-ranking manager should be entrusted with the task.

[Figure: preparation phase steps – determine scope (business model / operating model / infrastructure model), identify the relevant management level (CEO / CxO / CIO), establish a central, interdisciplinary team (transformation lead, enterprise architects, subject matter experts, cloud solution architects, communication experts, company skills), create tangible prototypes and concretize the mission (experimentation, making the transformation tangible), and engage stakeholders to generate commitment.]
Fig. 8.4  Preparation phase of the cloud transformation


This is precisely where the problem lies: the responsible managers usually have neither the necessary technological and methodological expertise themselves, nor can they devote enough time to acquiring it. They therefore need the support of an interdisciplinary team that reports directly to them, prepares the necessary decisions, trains the managers involved, communicates the results within the company, and drives forward the implementation of the decisions made. This cloud strategy team should combine all the skills relevant to the transformation.

An experienced enterprise architect is essential for success. This person understands both the business world and the world of information technology; he or she can describe the interdependencies, develop workable overall solutions together with the diverse stakeholders, and drive implementation. If this position is not filled in the company, the team must try to cover this role through a division of labor. Business professionals – experts from the day-to-day business who are ideally familiar with both the existing and the new business model – are also necessary. The team's cloud solution architects design the concrete solutions for the new cloud IT. The relevance of communication experts should not be underestimated: internally, there is a great need for change communication and internal recruiting; externally, the main issue is the credible repositioning of services in the eyes of the customers. Last but not least, the so-called "company skills" should not be forgotten, i.e. knowledge of the specific culture of the company, the pitfalls in its processes, the personal idiosyncrasies of particularly relevant managers, and the informal decision-making processes. Satya Nadella was able to achieve such rapid success in the transformation of Microsoft not least because he had known the company's strengths and weaknesses as an insider for more than 22 years in various functions and areas (Kurtuy 2019).

Once the outline of the cloud strategy team is in place, it can start to develop working prototypes. For new business models, for example, the first step is to prove that the customer relationships around the new products and services work. If it is about migrating infrastructure, the first applications should be successfully migrated to the cloud. These prototypes demonstrate the advantages of the new world: concrete projects and successes are usually better at convincing skeptical colleagues than theoretical explanations and PowerPoint presentations.

Figure 8.5 illustrates how relevant the organizational positioning of the cloud strategy team is:

• Transform the infrastructure model: The transformation of the IT infrastructure to cloud technologies and methodologies will affect all teams, processes and skills in the existing IT organization and will require reorganization. The team should report to the manager responsible for all IT areas.
• Transform the operating model: If the operating model is optimized through cloud technologies, it is usually about the effective interaction of IT and business skills. The IT specialists responsible for the business to be modernized should take the lead. This should be clear to the corresponding CxO, and the CEO and CIO must support an adequate change.
Satya Nadella was able to achieve such rapid success in the transformation of Microsoft not least because he had known the company as an insider for more than 22 years, in various functions and areas, with its strengths and weaknesses (Kurtuy 2019).

Once the outline of the cloud strategy team is in place, it can start to develop working prototypes. For new business models, for example, the first step is to prove that the customer relationships around the new products and services work. If the goal is migrating infrastructure, the first applications should be successfully migrated to the cloud. These prototypes demonstrate the advantages of the new world: concrete projects and successes are usually better at convincing skeptical colleagues than theoretical explanations and PowerPoint presentations. Figure 8.5 illustrates how relevant the organizational positioning of the cloud strategy team is:

• Transforming the infrastructure model: The transformation of the IT infrastructure to cloud technologies and methodologies will affect all teams, processes and skills in the existing IT organization and will require reorganization. The team should report to the manager responsible for all IT areas.
• Transforming the operating model: If the operating model is optimized with cloud technologies, the issue is usually the effective interaction of IT and business skills. The IT specialists responsible for the business to be modernized should take the lead. This should be clear to the corresponding CxO, and the CEO and CIO must support an adequate change.


8  The Cloud Transformation

Fig. 8.5 Cloud strategy team: Organizational structure and expected scope of change per level (figure: depending on the level, the strategy team reports to the CEO in case of a business model change, to the CxO for a change of the operating model, or to the CIO when changing the infrastructure model; the expected scope of change across applications, infrastructure and projects grows accordingly)

Fig. 8.6 The four steps of cloud transformation in the infrastructure model: Plan – analyse the existing application landscape and obtain the required commitments; Build – prepare the new cloud-based landscape and address governance and compliance issues; Migrate – test and execute the actual migration per application; Further development – further develop the application landscape and keep compliance and governance up to date

• Transforming the business model: If the company as a whole has to face disruptive technologies, the strategy team communicates directly with the CEO.

Once the cloud strategy team has been established with the necessary resources and has been given a corresponding mandate, it begins its work. The recommended process models, depending on the initial situation, are described in the following three sections.

8.3 Transforming the Infrastructure Model

The application is the linchpin of cloud transformation. It generates the business value for its users, and this should be preserved wherever possible. The underlying infrastructure and possibly the associated middleware, such as databases and operating systems, are replaced. First, the basic migration options per application are described and classified according to their most important differences. Then the four steps of a migration of the infrastructure model (see Fig. 8.6) are explained in detail.


8.3.1 The Typical Migration Scenarios for Applications

In detail, every migration of an application to the cloud looks different. Nevertheless, certain types of migration can be identified. Gartner provided the basic concept for this in 2010 with its "5R model" (SA Technologies 2018). These migration scenarios can be explained in terms of Sect. 4.4.4 as follows (see Fig. 8.7):

• Rehost is the migration of an application from traditional IT (dedicated or virtualized) to a cloud infrastructure (IaaS). The application remains largely untouched; only network, storage and computing capacity are replaced by cloud IT. The effort is low, and the direct benefit is measurable, but not disruptive. Many SAP applications, for example, can only be migrated in this way because the provider does not allow any intervention in the architecture of the application.
• Replace is the complete replacement of an application by a cloud-based software service (SaaS). The benefits are often great, as the outsourcing company can dispense with all of its own IT resources. This model is often used in application areas that are similar across industries. Examples include the replacement of in-house e-mail servers or office software with Office365.
• In Refactor and Revise, the architecture of the application is changed. Depending on the individual requirements of the business model, the platform services of the cloud are used to meet the cloud-native ideal (Sect. 6.5.3) of a scaling, elastic and distributed application. The procedure was described as an example in Chap. 5. Revise leads to more changes than Refactor. Special applications that the company has already adapted to its processes are particularly suitable for these migration types: here, the company has the freedom to design the architecture of the application, and software that has been specially adapted to the processes is relevant enough to justify a costly modification.
• In a Rebuild, the application is rebuilt with identical business value based on cloud-native technologies. This can be useful if the old application is not well documented or is based on technologies so old that rebuilding is easier. The catalog of prefabricated IT parts described in Sect. 4.4.4 significantly reduces the cost of new development.

Fig. 8.7 The 5Rs according to Gartner: Application migration scenarios (figure: Rehost, Refactor, Revise, Rebuild and Replace mapped against the stack – application, security and integration, runtimes and libraries, databases, operating system, virtualization, computing power, memory, network – and the degree of automated IT value creation from virtualized via IaaS and PaaS to SaaS)

Following Gartner's 5Rs, Amazon Web Services (AWS) has developed its own model, called the "6Rs" (Orban 2016). The six migration strategies are:

• Rehosting corresponds to Rehost.
• Re-platforming corresponds to Refactor.
• Repurchasing corresponds to Replace.
• Refactoring corresponds to Revise.
• Retire: The application is abolished.
• Retain: The application remains on the original infrastructure.
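The choice between these scenarios follows from a handful of application properties described above. The following Python sketch encodes such a rule of thumb; the attribute names and the decision order are illustrative assumptions, not Gartner's or AWS's official methodology:

```python
# Illustrative rule-of-thumb mapping of application traits to a 5R migration
# scenario. The criteria are simplified assumptions for demonstration only.

def recommend_migration(app: dict) -> str:
    """Return a 5R migration scenario for a (hypothetical) application profile."""
    if not app.get("still_needed", True):
        return "Retire"                    # function no longer used at all
    if app.get("generic_saas_available"):
        return "Replace"                   # e.g. mail server -> Office365
    if not app.get("architecture_modifiable"):
        return "Rehost"                    # e.g. vendor forbids changes (SAP)
    if app.get("poorly_documented") or app.get("legacy_technology"):
        return "Rebuild"                   # cheaper to rebuild cloud-native
    # Architecture may be changed: small changes -> Refactor, large -> Revise
    return "Revise" if app.get("change_effort", "low") == "high" else "Refactor"

erp = {"still_needed": True, "architecture_modifiable": False}
mail = {"still_needed": True, "generic_saas_available": True}
print(recommend_migration(erp))   # Rehost
print(recommend_migration(mail))  # Replace
```

In practice such a decision involves many more factors (see the evaluation criteria later in this section); the sketch only shows that the scenarios form a decision tree, not a free choice.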

8.3.2 Plan – Analyze Applications and Obtain Commitments

In their eBook "Enterprise Cloud Strategy", Barry Briggs and Eduardo Kassner give an insight into the cloud transformation of Microsoft itself (Briggs and Kassner 2017). In the first major step, the cloud strategy team analyzed the company's many applications and assigned them to the individual migration scenarios. Figure 8.8 illustrates this process. By implementing the first step, the company was able to do without 30% of its previous applications, as their functions were taken over by other systems or not used at all. Software services such as Office365 or Sharepoint Online were able to completely replace 15% of the applications. Only 5% of the applications could not be migrated to the cloud in a meaningful way, although Microsoft does not give any more precise reasons for this.

Fig. 8.8 Analysis of the application landscape before the cloud migration at Microsoft (figure: 30% Retire; 15% Replace via Office 365, Sharepoint Online, CRM Online, Visual Studio Online, Azure Data Lake and third-party SaaS; 50% Rehost, Refactor, Revise or Rebuild in several steps – first web applications, portals and applications with modern architectures, then applications with very high business relevance and heavy input/output traffic, later HVA and PKI systems and applications with legacy source control; 5% Remain)

The remaining 50% of applications were either transferred 1:1 to the cloud (Rehost), rebuilt (Rebuild), or optimized more (Revise) or less (Refactor) extensively for the use of platform services (PaaS). The latter is simple and effective for web applications and applications with other modern architectures; at Microsoft, this affected about 35% of all applications. The remaining 20% were distributed among business-critical applications such as ERP systems (15%) and high-availability systems as well as applications with public key infrastructure (5%).

Another important insight: companies should not migrate all applications at the same time. Such a big bang is usually not possible because too few employees with the necessary key competencies are available and the overall organization would be overwhelmed by such a project. Instead, a sensible sequence for the migration steps should be determined (Briggs and Kassner 2017). Each application that is being considered for migration is examined for the following factors:

• Performance requirements of the workload in terms of elasticity, scaling, resource intensity, latency, and data throughput
• Financial framework, actual costs and business benefits
• Architecture of the application in terms of the user interface, the application itself, the data and the infrastructure
• Risk assessment related to the organization as a whole, to its business models, and to technology, resources and contracts
• Operational issues
• Security and compliance issues such as data protection, encryption and regulation

This is where the interdisciplinary cloud strategy team comes into play. For the performance requirements, the financial framework conditions and the evaluation of the business benefits, colleagues are needed who are familiar with the business model on which the application is based. To evaluate the architecture and assess the impact on operational issues, on the other hand, the expertise of technical colleagues is needed.
To evaluate risks and assess security and compliance challenges, technical and commercial resources as well as a sound knowledge of the specifics of the company ("company skills") are required. Once several applications have been subjected to this evaluation process, a simple matrix can be created according to the criteria "benefit of migration" (X-axis) and "difficulty of migration" (Y-axis) (see Fig. 8.9). Although the procedure is presented here in a linear fashion, in practice the analysis and implementation phases can run in parallel. Once the fundamental governance and compliance issues have been clarified and the benefits of transferring the application to the cloud have been determined, the migration can begin.

In addition to the analysis and prioritization of the application landscape, the management level of the company is also consulted during the planning phase. Each application to be migrated must be tested by its users before the go-live. These users must be able to devote sufficient time to the tests at the right time, so seasonal peculiarities must be taken into account when planning the migration (for example, the end of the year or quarter, or the Christmas season). What else should be considered?

Fig. 8.9 Evaluation and prioritization of the application landscape according to Briggs/Kassner (figure: each application in the portfolio is analyzed for performance, finances, architecture, risks, operation, security and compliance; the result is a recommended migration type – Retire, Replace, Rehost, Refactor, Revise, Rebuild or Remain – and a position in a matrix of migration benefit versus migration effort with the quadrants quick wins, start here, long-term bets and approach last)
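The benefit/effort matrix can be sketched as a small classification function. The quadrant labels follow the common reading of such matrices (high benefit plus low effort equals quick wins); the exact quadrant assignment and the 0.5 thresholds are illustrative assumptions, not Briggs/Kassner's definitions:

```python
# Sketch of the benefit/effort prioritization matrix. Quadrant assignment
# and thresholds are assumptions for illustration only.

def prioritize(benefit: float, effort: float, threshold: float = 0.5) -> str:
    """Place an application in one of the four matrix quadrants (scores 0..1)."""
    if benefit >= threshold and effort < threshold:
        return "Quick wins"      # high benefit, easy to migrate
    if benefit >= threshold:
        return "Long-term bets"  # high benefit, but costly migration
    if effort < threshold:
        return "Start here"      # easy practice cases with modest benefit
    return "Approach last"       # little benefit, high effort

# Hypothetical portfolio with (benefit, effort) scores per application
portfolio = {"web portal": (0.9, 0.2), "legacy ERP": (0.8, 0.9),
             "PKI system": (0.2, 0.8)}
for name, (benefit, effort) in sorted(portfolio.items()):
    print(f"{name}: {prioritize(benefit, effort)}")
```

The point of the matrix is not the exact numbers but the ordering: migrations with high benefit and low effort come first, those with little benefit and high effort last.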

Fig. 8.10 Required skills per migration type (figure: for Remain, Replace, Rehost, Refactor, Revise and Rebuild, the required roles shift from classic operations, project managers and subject matter experts toward cloud solution architects, software developers, cloud operations, Scrum Masters and Product Owners; the benefit/effort evaluation of the migration is assumed to have already been performed)

• Selection of the service provider: Most IT departments cannot carry out such a transformation without external help. The service provider should already be involved in the planning phase.
• Economic planning: A cloud transformation changes cost structures (Chap. 5). In the short term, additional expenses arise (project expenses, training, OPEX, primary processes), while savings are possible elsewhere (data center, CAPEX, control, secondary processes). The company's financial planning should be adapted to the changed framework conditions.
• Recruiting and training: After the cloud transformation, companies need different skills and knowledge than before. The corresponding training programs and recruiting initiatives should be initiated internally at an early stage. Figure 8.10 schematically illustrates how the required employee profiles differ depending on the migration type.
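The economic-planning point above, turning up-front CAPEX into usage-based OPEX, can be made concrete with a back-of-the-envelope comparison. All figures here are invented assumptions, not benchmarks:

```python
# Purely illustrative comparison of annual IT costs before and after a cloud
# migration. Every number is an invented assumption.

def on_prem_annual_cost(capex: float, depreciation_years: int, ops: float) -> float:
    """Hardware is bought up front (CAPEX) and written off linearly."""
    return capex / depreciation_years + ops

def cloud_annual_cost(hourly_rate: float, hours_used: float,
                      project: float = 0.0) -> float:
    """Cloud resources are pure OPEX, billed only for actual usage."""
    return hourly_rate * hours_used + project

before = on_prem_annual_cost(capex=300_000, depreciation_years=5, ops=80_000)
# Assumption: dev/test systems run only ~2000 working hours instead of 24/7.
after = cloud_annual_cost(hourly_rate=9.0, hours_used=2_000, project=25_000)
print(f"on-prem: {before:.0f} EUR/year, cloud: {after:.0f} EUR/year")
```

The short-term project expenses appear explicitly in the cloud figure; the structural shift is that capacity bought for peak load is replaced by capacity paid for actual use.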


In the case of Remain, employees for classic operations and project managers for the implementation of changes are still required. Retiring an application is usually not possible at the push of a button: business professionals must be consulted and some processes must be changed so that other applications can take over the remaining functions of the software being switched off. Project managers are often needed for these projects. Replace means the complete outsourcing of an application. Here again, the business managers have to lead the transfer project in terms of content, so that the business requirements are actually covered by the external software services. A Rehost requires new resource types, such as a cloud solution architect for the infrastructure layer (IaaS), but traditional operations experts are still needed for the databases and operating systems above it. From Refactor to Revise to Rebuild, the importance of traditional roles diminishes. In the case of Rebuild, additional developers are needed, as well as Scrum Masters and Product Owners.

8.3.3 Building – Preparing the New Landscape

The cloud offers a huge catalog of prefabricated IT parts that can be ordered quickly and easily with little know-how. This simplicity has the disadvantage that organizations can also quickly lose track of them. What happens when a lot of resources are ramped up for a short, intensive bulk data processing session but never ramped down again? Which department pays which costs for which application? Global scaling is good, but should a video be downloadable millions of times in the US when the company does not even sell any products there?

Making Cloud Resources Administrable
To maintain an overview, cloud providers offer comprehensive options for mapping the organizational structure of a company in the cloud landscape. For this purpose, company accounts are opened, within which even complicated service relationships can be mapped with subscriptions and tags. Consumption can be allocated, tracked and provided with rules on a fine-granular basis. Cloud providers and partners help beginners to make the right settings right from the start. Companies should always plan for this additional work to avoid risks during implementation and operation.

Prepare the Landscape
Relevant provider core services can be used per subscription. These include:

• Network (software-defined network)
• Management of user identities (identity management)
• Encryption of data (encryption)
• Reporting (log & report)
• Software tools used (toolchain)
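How subscriptions and tags make consumption allocatable can be sketched in a few lines. The billing-record format below is a simplification for illustration, not any specific provider's billing API:

```python
# Sketch of tag-based cost allocation: each consumption record carries tags
# (e.g. department, application), and costs are summed per tag value.
# The record format is an invented assumption.
from collections import defaultdict

def allocate_costs(records: list, tag: str) -> dict:
    """Sum costs per value of the given tag; untagged spend is flagged."""
    totals = defaultdict(float)
    for record in records:
        owner = record["tags"].get(tag, "untagged")  # make missing tags visible
        totals[owner] += record["cost"]
    return dict(totals)

billing = [
    {"cost": 120.0, "tags": {"department": "sales", "app": "crm"}},
    {"cost": 300.0, "tags": {"department": "logistics", "app": "scm"}},
    {"cost": 45.5, "tags": {}},  # resource ramped up without tagging rules
]
print(allocate_costs(billing, "department"))
# {'sales': 120.0, 'logistics': 300.0, 'untagged': 45.5}
```

The "untagged" bucket is the practical point: tagging rules only keep the landscape administrable if untagged spend is surfaced and assigned.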


This preparation of the IT landscape for later use has many implications for the actual security of the systems. It has to be determined which networks are isolated from each other and to what degree, who has access to which areas of the company with which identities, and how encryption mechanisms are used.

Establish Governance and Compliance
There is no uniform or legal definition of the terms governance and compliance, which makes it all the more difficult to distinguish between them with a view to cloud transformation (Siriu 2018). At its core, compliance is about ensuring that the company follows internal regulations as well as external laws and regulations. Governance takes the perspective of the owners and ensures that the company evolves safely and successfully on their behalf. Cloud providers have developed complex frameworks to guide transformation leaders in this regard. Fig. 8.11 shows a simplified version of AWS's GRC (governance, risk, compliance) program (South 2018).

Any cloud project can fail if GRC issues have not been properly considered in advance. It is advisable to actively use the company's existing compliance processes to obtain the necessary approvals for the cloud project. If colleagues from the compliance departments act so skeptically that they hinder the project, it is recommended to obtain external support. The cloud providers have plenty of information material available for such cases and are happy to arrange exchanges with other companies; a discussion between GRC officers from different companies can be very helpful in such a situation. In the essay "Governance and Compliance in Cloud Computing", Khaled Bagban and Ricardo Nebot of Otto GmbH provide the following concrete recommendations (Bagban and Nebot 2016):

Fig. 8.11 Framework for governance, risk and compliance according to Michael South (AWS), simplified and translated (figure: governance covers laws and regulations, standards, internal policies and individual contracts; risk covers assessment, risk-conscious decisions, responsible employees and a risk management framework; compliance covers automatic monitoring, self-evaluations, secure systems, external audits, reporting and control elements – together enabling continuous improvement, a resilient organisation and continuous compliance)

• Certified providers: Use only ISO 27001-certified cloud providers.¹
• Encryption: All business-critical data should be transferred to the cloud and stored there in encrypted form.
• Data economy: As much data as necessary and as little as possible should be transferred to the cloud. The transfer should always serve a specific purpose.
• Data retrieval: Prior to migration, there should be a planned procedure for data retrieval.
• Emergency concept: Each cloud solution should have an emergency concept for infrastructure failures or lost data.
• Information: The affected stakeholder groups should be informed before the cloud migration.
• Access: Secure browsers and devices should be used to access the cloud services.
• Analysis and planning: For large or complex projects, proper analysis and planning should be done.

These proposals date back to 2014, so they are comparatively old in the digital age. Mark Ryland of AWS argues that data is in fact safer in the public cloud because the customer and cloud provider share the security tasks (Ryland 2018). The provider can secure its infrastructure with much more effort than most customers can secure their own data centers, and customers can also protect their applications in the public cloud much more easily and professionally. Given this perfectly understandable argument, the question arises as to whether data in one's own data center is really better secured. In summary, GRC requirements can generally be mapped with the cloud (see also Sect. 4.7). However, these questions should be clarified in each individual case during planning and before the actual migration.

8.3.4 Performing Migrations

The effort required for technical migration has decreased significantly in recent years. AWS has been offering its cloud services since 2006, Microsoft Azure and Google Cloud since 2010/2011. Cloud providers and migration service providers have outgrown the experimental stage, and the Gartner quadrant for "Public Cloud Professional and Managed Services" lists many companies worldwide with corresponding competencies (Rackspace 2019). For Germany, there are analogous evaluations by the consulting firm CRISP Research for managed public cloud services with smaller and local providers (Nordcloud 2018). From today's perspective, the technical challenges of migration are therefore easy to plan and test. In its Cloud Adoption Framework, Microsoft proposes a management circle of "Create Architecture", "Implement Changes", "Run Tests" and "Go Live" (Microsoft Azure 2019).

¹ As of March 2019, all three major providers – Google, AWS and Azure – are ISO 27001 certified.


Fig. 8.12 Procedure for application migration at Microsoft (figure: applications are taken from a migration backlog and move through the states to do, in progress, in test and migrated; each application passes through the steps assess, architect, remediate, replicate, stage, test, adopt, finalize and promote)

Figure 8.12 provides more details on this. An application that still needs to be migrated is selected from the backlog and then goes through the following steps:

• Assess: The application is analyzed in detail for the resources used (virtual machines, data, network, permissions).
• Architect: The cloud solution architect analyzes dependencies and designs the new cloud architecture of the application.
• Remediate: Depending on the migration type, more or fewer changes to the application are necessary, such as changes to the network design, the operating system or the services used.
• Replicate: The application is replicated in the cloud; this can be done manually or with existing tools.
• Stage: The application is made available for testing in an environment that is as realistic as possible.
• Test: Users test the application for performance and completeness and prepare for the new cloud environment.
• Adopt: If necessary, end users are prepared for changes and a possible transition period.
• Finalize: The last changes, such as the results of the user tests, are incorporated into the application.
• Promote: The application goes live and the unused legacy resources are removed.
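The nine steps can be sketched as a simple per-application pipeline in which a failed user test sends the application back to the remediate step. The step names follow the text; the loop-back control flow is an illustrative assumption:

```python
# Minimal sketch of the per-application migration pipeline: each application
# walks through the nine steps in order; a failed user test sends it back
# to "remediate" for rework and a new test cycle.

STEPS = ["assess", "architect", "remediate", "replicate",
         "stage", "test", "adopt", "finalize", "promote"]

def run_migration(app: str, test_passes) -> list:
    """Return the sequence of steps actually executed for one application."""
    history, i = [], 0
    while i < len(STEPS):
        step = STEPS[i]
        history.append(step)
        if step == "test" and not test_passes(app):
            i = STEPS.index("remediate")  # rework, replicate and stage again
            continue
        i += 1
    return history

attempts = {"count": 0}
def flaky_user_test(app: str) -> bool:
    attempts["count"] += 1
    return attempts["count"] > 1          # first user test fails, second passes

print(run_migration("crm-portal", flaky_user_test))
```

Running this shows "remediate", "replicate", "stage" and "test" appearing twice before "promote": exactly the rework loop that makes user testing before go-live worthwhile.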


8.3.5 Further Development – Keeping the Landscape Up to Date and Safeguarding It

In its Cloud Adoption Framework, AWS identifies four levels of cloud adoption maturity (Orban 2016): project, foundation, migration, and reinvention. There is some self-promotion in this; after all, the cloud provider has no interest in cloud projects ending with migration. But in essence, it reflects the reality of modern cloud applications: they are always evolving. The most important reason is discussed in more detail in the next section: competition is forcing companies to incorporate more and more new functions into their business models ever faster – and this requires corresponding changes to the application. But even without changes in content, there is a constant need for adaptation in the IT landscape. Cloud providers are constantly offering new services that make previously optimal architectures obsolete after a very short time. For example, AWS launched a whopping 516 innovations in 2014 alone, while Microsoft also released over 500 innovations in 2016 (Lorusso 2016). In addition, there may be changes in internal compliance policies and external requirements such as laws and regulations. The following areas should be regularly monitored and developed:

• Cost management: Monitor the costs of the applications and adjust the infrastructures to actual needs.
• Security management: Maintain and evolve security measures to protect the network, assets and data, and handle and resolve security incidents.
• Resource management: Keep an eye on the status of assets such as servers and databases and adapt them to current requirements.
• Identity management: Manage user accounts, and manage and monitor user identities across departments and companies.
• Configuration management: Deploy, update and optimize assets.

Figure 8.13 provides an overview of the challenges and tasks involved in a data center migration.

Fig. 8.13 Implementation of the cloud transformation in the infrastructure model (figure: Plan – evaluate and prioritize applications, determine economic framework conditions, involve management, address recruiting and training; Build – set up subscriptions and tagging, prepare the landscape (network, identity, encryption, log & report, toolchain), establish governance and compliance; Migrate – create the architecture, implement cloud-related changes, perform user tests, go live; Develop further – develop the application technically and in terms of content, continue governance and compliance, IT resource and configuration management, cost management, security and identity management)
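Cost and resource management in the further-development phase often starts with rightsizing: comparing observed utilization against thresholds and adjusting oversized or undersized systems. A minimal sketch, with invented thresholds and server data:

```python
# Illustrative rightsizing check for cost and resource management.
# Thresholds and the fleet data are invented assumptions.

def rightsize(avg_utilization: float, low: float = 0.2, high: float = 0.8) -> str:
    """Recommend an action based on average utilization (0.0 .. 1.0)."""
    if avg_utilization < low:
        return "downsize"   # oversized: paying for unused capacity
    if avg_utilization > high:
        return "upsize"     # undersized: performance and availability risk
    return "keep"

fleet = {"web-1": 0.07, "db-1": 0.55, "batch-1": 0.93}
for server, load in sorted(fleet.items()):
    print(f"{server}: {rightsize(load)}")
```

Real tooling would look at time series rather than a single average, but the principle is the same: the landscape is continuously measured and adjusted instead of being sized once and left alone.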

Fig. 8.14 Transforming the infrastructure model – the basic steps of cloud transformation (figure: from the traditional data centre into the cloud – IaaS, PaaS, container, cloud native, serverless, mobile – with modern software architectures and approaches such as microservices, REST APIs, DevOps, automation, distribution, elasticity, isolated state and loose coupling)

8.3.6 Summary – Cloud and Modern Software Approaches

Modernizing a company's infrastructure model with the help of disruptive technologies initially means using the cloud as automated IT in as many applications as possible, at the highest possible virtualization level (see Fig. 8.14). The challenge of such a changeover is to make an efficient economic trade-off between costs and benefits, both for the entire IT landscape and for each individual application. Eliminating the traditional data center is usually only worthwhile if major investments in it are pending anyway. Converting applications toward cloud-native architectures (Refactor, Revise, Rebuild) is more expensive than a 1:1 migration of the existing application to cloud infrastructures (Rehost) but, depending on the application scenario, can also provide more benefits. Regardless of the individual decisions per application, the company should be aware that with the replacement of traditional IT by the cloud, the IT infrastructure becomes software. Thus, the relevance of software in the company as a whole also increases. Employees must learn how software is designed and developed and how processes and procedure models differ from those in the world of manual IT (see Chap. 6).

8.4 Changing the Operating Model

Moore refers to the transformation of the operating model as a modernization of value creation without questioning the business model itself. He cites the example of a Boston taxi company that – in contrast to Uber – still wants to own taxis and hire drivers, but feels compelled by growing competition to modernize its value creation processes and systems ("systems of engagement") (Moore 2016). The company could, for example, simplify the ordering process for customers through a mobile app and provide them with better service


through on-screen taxi tracking. In addition, the company could reduce the number of call center employees this way and possibly even transport more customers with fewer taxis. There are many examples of how customer-centric apps can streamline business models and improve the customer experience: viewing inventory, paying via app, receiving recommendations, tracking order processes, ordering groceries online, photographing wines in-store and receiving reviews, or filing a tax return easily and understandably and receiving feedback directly – many such apps already exist. Beyond that, however, much more potential can be leveraged. Section 4.4 discussed in detail how cloud technologies and modern software approaches decouple the IT infrastructure from the actual hardware. This section describes how a company can best use the resulting potential for its existing business models (see Fig. 8.15).

8.4.1 Focus on Business-Relevant Applications

The potential for rapid development of new functions, scalability and flexibility depends on the type of migration used. As a rule of thumb: the higher the share of automated IT, the greater the potential benefits. Figure 8.16 schematically shows the relationship between the degree of IT automation and the opportunities for the business model.

With Retire, the application is abolished; it therefore no longer generates any benefit for the business model. With Remain, the application remains in its original form and continues to support the ongoing business; the disadvantages of classic IT (see Chap. 4) remain in full force. With Replace, a software service (SaaS) from the cloud is used. This usually reduces costs significantly and allows the company to benefit immediately from the features offered. The high degree of standardization of these services usually limits their use to generic application areas such as Office365 for office software, Slack for team collaboration or Concur for travel expense reporting. Companies can rarely achieve a competitive advantage this way, not least because all competitors can access the same offerings. With Rehost, the application is migrated unchanged to an automated infrastructure (IaaS). Companies can thus benefit from the cloud at the lower levels of the IT value chain. Mostly, this is used for applications whose architecture cannot be changed by a single

Fig. 8.15 Using the cloud to improve business models (figure: changing the infrastructure model – from the own data centre to the cloud and cloud native via the 5Rs, with modern software – lays the foundations for more functions, scalability and flexibility; the question for the operating model is how best to exploit this potential in the business model)

Fig. 8.16 Migration type and opportunities for the business model (figure: Retire – the application no longer exists; Replace – it is replaced by a software service to which competitors have the same access with the same benefits; Rehost – it remains unchanged with automated infrastructure, but changes to the software are still very expensive; Refactor, Revise and Rebuild – it uses the advantages of automated IT (cloud native): new features developed more quickly, less risk in case of failure, global scaling in case of success, virtually zero marginal cost per new customer; Remain – it continues to be fully based on traditional IT)

company (example: SAP ERP). Costs can be saved mainly by shutting down the application at night and providing test and development systems only on demand. However, changes to functionality, and thus to the business model, cost practically the same with Rehost as with classic systems.

The three migration types that change the application so that it can take advantage of the cloud natively are Refactor, Revise and Rebuild. The application can benefit from the large catalog of prefabricated IT parts, global scaling is easy, and billing is done in small units and only for actual use. These advantages matter especially when (a) the business model is subject to strong fluctuations over time, (b) global scalability is relevant, (c) new features are to be tested and developed quickly, or (d) parts of the application are rarely used.

Figure 8.17 shows how migration types, migration effort and the benefits of a migration are interrelated. Note that the illustration simplifies these relationships; in practice, the migration types cannot always be clearly distinguished from each other. The main advantages of a data center migration to the cloud are that the capital commitment is reduced to the operating costs and that employees are freed from support processes and can focus on directly value-adding activities in the primary process. To realize this opportunity, the company's own traditional data center must be abandoned in both cases. If the traditional IT has already been outsourced to a supplier, this potential can also be leveraged there. Making entire applications flexibly switchable on and off, and thus saving costs and improving system performance, is a benefit that the cloud always generates. So-called "rightsizing" works in a similar way: the actual load behavior of an application is analyzed and oversized systems are reduced to the right level. Global scalability of an application is also usually possible with little effort, even with a Rehost. The three additional benefit factors can usually only be generated with a significantly higher development effort.
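The savings from switching a rehosted application off outside business hours can be estimated with simple arithmetic; the hourly rate and schedules below are invented assumptions:

```python
# Back-of-the-envelope estimate of the savings from switching a rehosted
# application off outside business hours. All figures are assumptions.

HOURS_PER_WEEK = 24 * 7   # 168 hours of classic 24/7 operation

def weekly_cost(hourly_rate: float, hours_on: float) -> float:
    """Cloud billing: pay only for the hours the system is running."""
    return hourly_rate * hours_on

always_on = weekly_cost(2.40, HOURS_PER_WEEK)
business_hours = weekly_cost(2.40, 5 * 12)   # assumed Mon-Fri, 07:00-19:00
saving = 1 - business_hours / always_on
print(f"24/7: {always_on:.2f} EUR, business hours: {business_hours:.2f} EUR, "
      f"saving: {saving:.0%}")
```

With these assumptions, running only 60 of 168 hours per week saves roughly two thirds of the infrastructure cost, without touching the application itself.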

8.4 Changing the Operating Model


Fig. 8.17  Migration effort and benefit factors: turning CAPEX into OPEX and focusing employees on primary processes occur if an own data centre was operated before; switching entire applications on/off or "rightsizing" them and scaling globally accrue per application, independent of a data centre migration; resilient handling of failures, developing new features quickly and in a customer-oriented manner, and individually cushioning load variations in specific areas of the application require high migration costs that are only worthwhile for certain applications and business models (migration types: Rehost, Refactor, Revise, Rebuild)

always generates. So-called "rightsizing" works in a similar way: the actual load behavior of an application is analyzed, and oversized systems are reduced to the right level. Global scalability of an application is also usually possible with little effort, even with a rehost. The three remaining benefit factors can usually only be realized with significantly higher development effort.

Load Fluctuations in Application Parts
In order to cushion load fluctuations in individual application parts individually, the application is divided into areas that are decoupled from each other (microservices) (Chap. 6). For example, the user interface can be separated from the control of the supply chain. If there is then increased demand for the user interface, only the computing capacity for exactly this service is increased. The supply chain calculation in the back-end is not negatively affected by the user rush in the front-end. Additional costs are thus only incurred at one point in the system.
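The independent scaling of decoupled services can be sketched in a few lines. This is purely an illustration; the capacity figure, load numbers and service names are invented assumptions, not taken from the book.

```python
# Illustrative sketch: each decoupled microservice scales on its own load,
# so a rush on the front-end does not change the back-end's replica count.
import math

CAPACITY_PER_INSTANCE = 100  # assumed requests/sec one instance can handle

def instances_needed(load_rps: float, capacity: float = CAPACITY_PER_INSTANCE) -> int:
    """Replicas required for one microservice at the given load."""
    return max(1, math.ceil(load_rps / capacity))

# A load spike hits only the user interface, not the supply-chain back-end:
load = {"user-interface": 950, "supply-chain": 120}
replicas = {svc: instances_needed(rps) for svc, rps in load.items()}
print(replicas)  # {'user-interface': 10, 'supply-chain': 2}
```

Because each service is sized separately, the extra cost of the spike is incurred only where the demand actually occurs, exactly as described above.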

8.4.2 Resilient Handling of Errors

Applications that do not consist of several microservices but represent a closed structure with many dependencies are called monoliths (Chap. 6). If a component fails in a monolithic software, the entire software is no longer usable. If a component is overloaded, this bottleneck often slows down the entire application. The classic way to deal with this problem is to improve the availability and resource situation of each component of the system. According to the descriptions in Chap. 6, this leads to ever higher fixed costs and to an increasingly restrictive approach to changes.


8  The Cloud Transformation

This can be compared to the management of potential forest fires in the vicinity of a city: classic IT tries to reduce the probability of a forest fire by allowing fewer visitors (fewer changes) and keeping more firefighting aircraft on standby in case of a forest fire (more infrastructure resources). Cloud-native IT, on the other hand, tries to build systems independently of each other (so that not all the houses burn at the same time) and to rebuild them as quickly as possible (so that no one notices the results of the forest fire). These approaches are called "designed for failure": instead of avoiding failure, failure management is improved. Companies like Netflix even go so far as to introduce a "Chaos Monkey". This involves deliberately causing component crashes to test whether the systems based on them respond with sufficient resilience (Grüner 2016).

Develop New Features Quickly
If an application is modified (Revise) or rebuilt (Rebuild) according to the principles of cloud native (Chap. 6), the development processes become significantly faster and more cost-effective. In a study for Arvato Systems, CRISP Research assumes that features can be developed 70% faster and 80% cheaper at high abstraction levels in the cloud (PaaS and serverless) than in traditional IT (Schumacher 2018). These improvements cannot be generated by using the cloud alone; the key to creating these efficiencies is the adoption of the software methods and practices described in Chap. 6.

Making the Cloud-Native Transformation Pay Off
Cloud technologies offer the greatest benefit when the application is converted to cloud-native principles (Chap. 4). However, this is also when the greatest costs arise for the migration of the application. An expensive Rebuild or Revise therefore only makes sense if it is actually useful for the business model to develop new features quickly, to be error-resilient, and to be able to manage load fluctuations individually within the application.
No company's survival is threatened because Microsoft didn't implement new features in its office software quickly enough. Lower infrastructure costs from the ability to flexibly shut down test systems will also hardly help a company strategically. The advantages of cloud native become relevant when the competitiveness of one's own company depends on that very application (see Fig. 8.18). A gaming company always wants to be online, a delivery service has highly fluctuating traffic depending on the time of day, and a music streaming company constantly wants to deliver better functions in its app than its competitors. The disadvantages of traditional IT, with its long development cycles and high marginal costs of scaling, would sooner or later put these companies in a significantly worse competitive position.
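The "designed for failure" idea from Sect. 8.4.2 can be sketched loosely in code. This is a toy illustration, not Netflix's actual Chaos Monkey; all function names and the failure rate are invented: a wrapper injects random crashes into a component, while a resilient caller retries and finally degrades to a fallback instead of letting the whole system fail.

```python
# Toy sketch of "designed for failure": inject random crashes (chaos
# testing) and handle them gracefully instead of preventing them.
import random

def recommend(user_id: int) -> list[str]:
    """The real component; may be killed by the chaos wrapper below."""
    return [f"title-{user_id}-{i}" for i in range(3)]

def chaotic(func, failure_rate: float):
    """Wrap a component so it fails randomly, like a chaos-testing tool."""
    def wrapper(*args):
        if random.random() < failure_rate:
            raise RuntimeError("component crashed (injected failure)")
        return func(*args)
    return wrapper

def resilient_call(func, *args, retries: int = 3, fallback=None):
    """Retry on failure; degrade to a fallback instead of failing entirely."""
    for _ in range(retries):
        try:
            return func(*args)
        except RuntimeError:
            continue
    return fallback

random.seed(7)
flaky = chaotic(recommend, failure_rate=0.5)
result = resilient_call(flaky, 42, fallback=["top-10-default"])
print(result)  # either personalised titles or the graceful fallback
```

The caller never crashes: it either gets the personalised result or a degraded but usable default, which is the behaviour the chaos test is meant to verify.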

8.4.3 Customer Focus and Data Analysis

The modernization of key applications for the core business is the focus of this section. Decisive for the company's success is how well these key applications take customer

Fig. 8.18  Costs versus benefits of a cloud migration: operational, one-off benefits such as switching applications on/off and "rightsizing" resources come at low migration cost; the strategic, permanent modernisation of key applications for the core business justifies high migration costs and is the application area for the cloud-native transformation; the switch to cross-sector software services lies in between, while high-cost migrations with merely operational benefits are bad investments

Fig. 8.19  Consistent focus on the customer and his data: the core business application becomes cloud native, and the individual operating model, infrastructure model and context are aligned with the customer

needs into account and how effectively the software translates customer demand as a whole into a functioning business model (see Fig. 8.19). The three major cloud providers consistently emphasize "customer focus". Amazon, for example, lists at the top of its leadership principles (Amazon 2019): "Customer Obsession – 100% Customer Focused". Google writes on its website: "Customer Focus is the key to digitalization" (Merks-Benjaminsen 2017). Satya Nadella, the CEO of Microsoft, puts it this way (Weinberger 2015): "We no longer talk about the lagging indicators of success, [...], which is revenue,



profit. What are the leading indicators of success? Customer love." What do providers mean when they talk about customer focus? In classic IT, companies still had many secondary processes – i.e. employees who did not work directly for the external customer, but for their colleagues. The cloud automates IT and leads to more outsourcing. Dave Hahn, in his talk at DevOpsDays Rockies 2016, vividly describes how Netflix did away with its own data centers and now, thanks to the external cloud partners Akamai and AWS, delivers billions of hours of video per quarter to 81 million customers with a small IT operations staff (Hahn 2018). Companies can therefore focus their staff on primary value creation – for which Netflix employs over 500 developers (as of 2016). Dave Hahn describes what this looks like at Netflix (Chap. 2): no estimates, no gut feeling, no tradition, but "We focus on data". Feature teams focus on the customer, and especially on their data. Zach Bulygo describes in a blog post how Netflix gets to know its customers (Bulygo 2013): How many of its 130 million customers start watching the show "Arrested Development"? How many watch an episode to the end? Where did those users who did not finish the series go? How long did it take a consumer to watch the next episode of the same series? Other data is added: When did they pause, rewind, and skip forward? On which days and at what time did they watch? Where was streaming done and on which device? What was searched for? How were the search results scrolled? There are also statistics about the circumstances under which customers quit: 95% of customers leave Netflix when they consume less than five hours per month. Analyzing customers quantitatively in this way is the method of choice in the digital transformation: the data reflects the customer's actual behavior much more accurately than people's self-statements in interviews.
Customer data can therefore be collected in the various apps in which the user is active. Based on this, their actual behavior can be used to improve the user experience (UX) or show them different content. In addition, companies can incorporate contextual data such as Twitter trends or regional information on weather and traffic jams to become more relevant to the customer: In technical terminology, this is called “increasing engagement” and “improving customer experience”.
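The kind of quantitative engagement analysis described above can be turned into a minimal sketch. The five-hours-per-month threshold is the Netflix statistic quoted in the text; the viewing log and user names are invented for illustration.

```python
# Illustrative sketch: flag customers whose monthly viewing time falls
# below the churn-risk threshold quoted in the text (5 hours/month).

CHURN_THRESHOLD_HOURS = 5.0

# Hypothetical viewing log: (user, minutes watched) per session this month
sessions = [
    ("ada", 95), ("ada", 140), ("ada", 230),   # heavy viewer
    ("bob", 42), ("bob", 31),                  # light viewer -> at risk
    ("eve", 610),                              # binge viewer
]

def churn_risks(sessions, threshold=CHURN_THRESHOLD_HOURS):
    """Return users whose total monthly viewing falls below the threshold."""
    minutes: dict[str, int] = {}
    for user, mins in sessions:
        minutes[user] = minutes.get(user, 0) + mins
    return sorted(u for u, m in minutes.items() if m / 60 < threshold)

print(churn_risks(sessions))  # ['bob'] watched only about 1.2 h this month
```

In a real system the same aggregation would run over billions of events in a data platform, but the principle is identical: observed behavior, not self-statements, drives the analysis.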

8.4.4 Machine Learning and Artificial Intelligence

The amount of data that companies can already collect today quickly exceeds human imagination and control. Let's assume an application has 10,000 users worldwide and the relevant factors influencing "user engagement" are the country, the time, the weather, the type and content of local events, and the religion and relationship status of the user. How can all the interdependencies between these factors and their manifestations be recognized by humans, developed as software, and kept up to date worldwide? Even if this were possible, it would jeopardize the core benefits of such cloud solutions: rapid scalability at zero marginal cost. The solution lies in what are called "systems of intelligence". The model used by Ravi Kalakota (Kalakota 2015) is shown in Fig. 8.20.
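The combinatorial explosion behind this thought experiment is easy to make concrete. The factor counts below are purely illustrative assumptions (the book names the factors but not their number of values):

```python
# Back-of-the-envelope sketch of why hand-coding every rule breaks down:
# even modest counts per influencing factor multiply into millions of cases.
from math import prod

factor_levels = {
    "country": 100,            # assumed number of markets
    "time of day": 24,         # hourly buckets
    "weather": 6,              # e.g. sun, rain, snow, ...
    "local event type": 20,
    "religion": 10,
    "relationship status": 5,
}

combinations = prod(factor_levels.values())
print(f"{combinations:,} distinct rule combinations")  # 14,400,000
```

No team can write, test and maintain millions of hand-coded rules worldwide; a learned model that generalizes over these factors is the only approach that preserves scalability at near-zero marginal cost.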



Levels of enterprise applications according to Ravi Kalakota:

• Systems of Record – historical systems; examples: ERP, CRM, HR, SCM; trend: cloud, IaaS, containers
• Systems of Engagement – customer-oriented systems; examples: marketing, sales, customer service; trend: mobile, social, cloud native
• Systems of Intelligence – insight-based systems; trend: data science, machine learning

Fig. 8.20  Levels of enterprise applications according to Ravi Kalakota

The historical systems are based on classic IT architectures. They can only be changed in long cycles and with a lot of effort; the trend here is merely to rehost them in the cloud. The customer-oriented systems are more often based on cloud-native technologies – it is easier and faster to integrate advanced services such as mobile, social media and IoT. Much more data is produced that can be used to optimize customer-centricity. To be able to use large amounts of data to further improve business models, data science is used in conjunction with machine learning. Again, the public cloud is key to building the "systems of intelligence" for many enterprises. Cloud providers such as Google, Microsoft, AWS and IBM have built up a large offering of platform services (PaaS) for machine learning (ML) in recent years. Most of these belong to the "Narrow AI" category and can solve specific tasks:

• Convert speech to text
• Convert text to speech
• Recognize objects in pictures
• Recognize faces and the emotions they show
• Recognize speech syntax
• Translate languages
• Conduct text dialogues (chatbots)

All these services can be accessed without much effort via an API and used cost-effectively in a pay-per-use model without upfront investment. With some expertise in software development, companies can start to create their own intelligent applications. Customers are not limited to the predefined ML algorithms – they can also create their own models. For this purpose, public cloud providers offer frameworks such as TensorFlow (from Google, now open source; see Geißler and Litzel 2018) or SageMaker (AWS). In this way, companies can develop machine learning models tailored to their needs, for example models for the detection of skin cancer or for genetic analyses in medical technology.
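The pay-per-use economics of such API services can be illustrated with a small cost sketch. The per-character price and the server cost below are invented placeholders, not any provider's actual rates:

```python
# Hedged cost sketch (all prices are assumed placeholders): a pay-per-use
# ML API costs nothing at zero usage and grows linearly with consumption,
# while a self-operated model server costs a fixed amount every month.

PRICE_PER_1K_CHARS = 0.004   # assumed EUR per 1,000 characters synthesised
OWN_SERVER_MONTHLY = 400.0   # assumed EUR/month for a self-operated GPU box

def pay_per_use_cost(characters: int) -> float:
    """Monthly API cost for the given text-to-speech volume."""
    return characters / 1000 * PRICE_PER_1K_CHARS

for chars in (0, 1_000_000, 100_000_000):
    api = pay_per_use_cost(chars)
    print(f"{chars:>11,} chars: API {api:9.2f} EUR vs own server {OWN_SERVER_MONTHLY:.2f} EUR")
```

Under these assumed prices, the API costs nothing when the feature is unused and only reaches the fixed cost of a self-operated server at very high volumes, which is exactly the zero-upfront-investment argument made above.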



8.4.5 People and Culture

The use of the public cloud and modern architectures has a major influence on how quickly new features of a digital business model are developed and what quality they have. Often underestimated in their impact on this development are human factors, such as the composition of a team and the culture of an organization.

Interdisciplinary and Agile Feature Teams
If a company wants to improve its business models with the help of disruptive technologies, interdisciplinary and agile working is an important prerequisite. In Chap. 6, the topic of DevOps and feature teams was already discussed in detail; here, the idea is concretized by means of an example. Feature teams are well composed when they include all the skills that can make a service successful end-to-end. These include, in particular, an understanding of the market and customers, architectural skills for cloud and software development, and skills in development and operations. Teams with ten or fewer members can support and develop large applications thanks to the many prefabricated parts in the cloud, its automation capabilities, and the outsourcing of secondary IT processes. Figure 8.21 shows the composition of such a feature team for the exemplary service "Detect hate messages", which can be used, for example, by a website operator in the

Fig. 8.21  Feature team for "Detecting hate messages" – exemplary composition of a team: ten employees (proofreaders, linguists, historians and political scientists; machine learning specialists/data scientists and software architects; software developers with and without an operations focus) deliver continuous improvements in short cycles and consume infrastructure (IaaS), databases, data and other services via APIs



comments section. The large number of professions involved shows impressively how work in teams is changing as a result of the digital transformation. A few software developers could easily develop the service in their silo, but they would hardly know how to analyze language to detect hate in it. They need political and historical contextual information to be more likely to recognize hate messages as such. This requires integrating linguists and proofreaders into the team. Machine learning specialists know how powerful the current models are and how to build, improve and train them. Data scientists help with data analysis and cleansing, and software architects ensure the scalability, performance, and cost-effectiveness of the overall system. Changes are expected to be implemented quickly: another terrorist attack or a tweet from a president changes the content of hate messages, and the service should be prepared for such developments. If the proofreaders worked in a separate editorial team, the data scientists were employed elsewhere, and the software developers worked for a completely different company, the speed and quality of the service would suffer – without any recognizable positive effects elsewhere. The essential optimization potentials and effects are not achieved by cranking up the workload of software developers or subject matter experts. In software-defined IT, the major economic benefit lies in the combination of a data-based understanding of customer needs and the technology-based creation of scalable solutions. Once both of these requirements are met, the service is able to scale globally thanks to the technologies available.

Culture as a Failure Factor
A difficult, but at the same time important topic when modernizing the business model with the help of disruptive technology is corporate culture.
Difficult because it is hard to grasp and actively manage; important because it influences the implementation of the strategy like hardly any other topic. The importance of corporate culture has already been elaborated in Chap. 6. According to Gerhard Schewe, corporate culture is a "system of shared patterns of thinking, feeling, and acting, as well as the norms, values, and symbols that convey them, within an organization" (Schewe 2018). A strategy is at first just a PowerPoint presentation – how is it supposed to prevail against feelings and habits of action that many people in an organization have shared for years? In concrete terms, the modernization of the business model usually touches the following cultural topics:

• Cooperation between developers and system administrators: In traditional IT, developers and administrators (operations) work in separate departments with very different goals and incentive systems. Developers, who create new software functions, always want to use the latest tools and complete their project quickly. Administrators are responsible for the availability of a system and worry about the security and stability of an application every time a change is made. Under the DevOps approach, operations colleagues must learn to develop software, while software developers must learn to take responsibility for the stability and security of their code.



• How developers deal with customers: In the traditional IT organization, both developers and administrators are largely shielded from the customer by project managers and customer managers. In the cloud era, however, they are supposed to be "customer-focused" – in other words, they are supposed to get to know customers and deal with their interests and requirements. This can be a challenge, both professionally and socially.
• Thinking in terms of automation and data analysis among subject matter experts: Subject matter experts without an IT background often find it difficult when they are asked to develop software together with IT experts for a process that they have previously carried out manually themselves. On the one hand, it is not a pleasant idea for them that their job will be replaced by IT applications. On the other hand, they do not always believe that such a changeover is even possible.
• Making your own services accessible to others: Many people, regardless of their profession or level of education, are reluctant to share the results of their work. They are happy to be consulted on their particular specialty. But if they make their knowledge available via an API, they are no longer asked – the programming interface takes over their function. If they are on vacation or even leave the company, there is no loss in service quality. That their productivity increases enormously when their performance becomes globally scalable through software is an unfamiliar idea for many colleagues.
• Building on the services of others: Another phenomenon of operational activity is the "not-invented-here syndrome" (Möhrle and Specht 2018; Gairing and Weckmüller 2019). Employees refuse to use the developments of external partners or other teams in the same company – either because they were not involved when the company selected them or because they do not trust the quality. After all, they do not know in detail how these developments work.
But the cloud transformation is based precisely on the idea of using the services of others without knowing how they work behind the API.

Leaders are also affected by the cultural transformation:

• Software creation as the primary business process: Developing software becomes the primary business process. In the age of Uber, it may be more important for a taxi company to have a well-functioning app with fast dispatch management than to actually own all the vehicles. Becoming a software company, however, comes with its own challenges. Executives must learn to evaluate software architectures, they must introduce new job families, and they must grant software developers broad rights to use the necessary software and hardware tools.
• Agile thinking: Acting as a software company also means thinking and acting agilely. Section 6.6.1 discusses the advantages of the agile approach in the digital context separately.
• Give the responsible teams freedom to make decisions: An important element of DevOps is the team's overall responsibility for its product (Sect. 6.6.4). However, the teams cannot fulfill this responsibility if they have to obtain approval from higher



hierarchical levels for all decisions. Companies can therefore only use the full power of DevOps if they give their teams sufficient freedom to make their own decisions.
• Inspire instead of control: But how can leaders influence independent teams? There are many suggestions from the world of tech consultants. Simon Sinek suggests clarifying "the why" of a company and leading by that principle (Sinek 2011). Andy Grove developed the OKR model at Intel, and Google picked it up about 20 years later. The model focuses not only on economic outcomes but also on employee engagement and on creating transparency across hierarchies. "New Work" is also regularly mentioned, a concept that emphasizes freedom for employees' personal development (Haufe Akademie 2019). Every manager is free to view these approaches skeptically, but should include the following consideration in the evaluation: the key employees of the cloud transformation – the cloud solution architects – are a sought-after resource on the labor market, and employees in this field are well aware of their negotiating position. After all, if they manage to successfully develop a globally scaling cloud service, they achieve a ratio of value added to working hours that is almost inconceivable in the classic service or manufacturing industry.
• Model collaboration: One of the most important factors is collaboration within teams, between teams within the company, and externally with partners, customers and suppliers. In Chap. 7, the value of networks that are created thanks to low transaction costs was discussed. Employees observe very closely how their superiors work with other superiors. They often have a good intuition for how seriously the legitimate interests of partners and customers are taken within the company. Collaboration must therefore be exemplified – from the CEO to top managers to staff and middle managers.
Companies that want to approach a cultural transformation and agile working systematically will find numerous providers on the market that offer assistance with such projects. A quantitative cultural analysis can be particularly revealing. The company Human Synergistics, for example, analyzes corporate cultures according to 12 criteria ranging from competitive orientation and perfectionism to avoidance behavior and security orientation (Schwitter et al. 2007). The company uses questionnaires to measure these factors and compares the results with those of over 10,000 other companies that have filled out a total of over two million questionnaires. Based on this data, Human Synergistics has identified statistical correlations between certain cultural variants and economic success: companies whose cultures score particularly high on the factors of performance, humanity and sociability are statistically significantly more successful than companies with a strong tendency towards avoidance behavior, perfectionism and power. Such analyses enable a fact-based dialogue in the change process: objective goals can be set and progress becomes measurable.

Conclusion – Real Change in the Company
When operating models are modernized with disruptive technologies, this is almost invariably accompanied by major changes for those affected (see Fig. 8.22). The close



Fig. 8.22  Modernizing the operating model with cloud technology requires real changes in the company: with a focus on the customer, data analysis and machine learning, and interdisciplinary, agile teams, the operating and infrastructure model leads to globally scaling business models with zero marginal costs

alignment of all employees responsible for the success of the business with the customer is the first major change for many. The focus of analysis shifts away from the industry experience of experts and the testimony of customers and toward the evaluation of actual customer data. This leads to the establishment of new best practices in companies, but their introduction is usually viewed skeptically by the employees involved. In addition, there are many cultural and human challenges: the new agile teams are much more heterogeneous than before. They may work in distributed locations and share their work results via IT tools. Experiential knowledge is replaced by technological knowledge; perfectionism is replaced by trial and error within the framework of controlled risks.

8.5 Changing the Business Model

The zone-to-win matrix has already been described in Sect. 8.1.2. The next sections are devoted to the question of what a transformation of the complete portfolio of business models can look like and how a company's priorities should be set during this transition phase.

8.5.1 Transformation as the First Management Task

One assumption underlying all of Geoffrey Moore's advice is that companies – as social entities with strong cultures, ingrained habits, stable processes, and traditional views of the world in their respective zones – are not intrinsically open to a disruptive change in a business model. This is because disruptive technologies do not incrementally change ways of doing things within existing thought models; rather, they completely challenge the division of tasks, incentives, processes and systems. For example, the success factors of Nokia's business model – low manufacturing costs through economies of scale and integrated hardware value creation – were quite different from those that made Apple's iPhone successful (iTunes, the app ecosystem). Sales and delivery in the traditional Microsoft Office




business are radically different from those that make the Office 365 cloud app successful. Cross-enterprise value creation is also unprepared for fundamental change: suppliers do not offer the necessary license agreements and supply services, customers have not yet budgeted for the new technologies or purchasing models, and their buyers still think in old schemes. In his lectures, Moore describes in detail the human dynamics between the different zones within the company (Moore 2017). Representatives of the performance zone regularly remind everyone else who is currently making the money in the company. Employees in the incubation zone consider the thoughts of their colleagues "old-fashioned", and co-workers in the productivity zone feel they are "the only adults in the room". In times of disruption, however, the future of the company emerges through the reordering of its elements – a reorganization that none of the players can handle alone, because none of them has the necessary power to do so. The responsibility then lies with the CEO, because theoretically he has that power (see Fig. 8.23). Practically, the CEO needs a mandate from the shareholders, because he exposes the company to large, unknown risks. Moreover, the CEO has gained most of his professional experience in exactly the world that is being disrupted, and he is more familiar with the old processes than with the new technology. Also, not every CEO is suited for every business challenge: some are better decision makers, others are good at shareholder management, still others are better suited for transformation phases (Tappin 2015). Moore positions himself clearly: times of transformation are "crises of prioritization" (Moore 2017). Companies have to realign the priorities of their zones in order not only to enter the so-called hockey stick curve, but also to navigate it successfully.
The hockey stick curve (or "J-curve") describes a typical profitability trajectory of a new business model. It visually resembles a hockey stick or a "J": profitability initially falls

Fig. 8.23  Transformation as a management task according to Geoffrey Moore: the cross-zonal shift of resources and the alignment of focus must be led by the CEO – between the performance zone (existing products, sustaining innovation), the incubation zone (uncertain future products in research, development and testing; disruptive innovation), the productivity zone (HR, facility management, public relations, finance, training) and the transformation zone



and only rises back into positive territory after some time (Kenton 2019). The short-term goal is to achieve a minimum share of 10% of the overall company's business, if possible, in order to successfully establish the new business internally. Moore suggests two model strategies to achieve this: Zone Offense and Zone Defense.

8.5.2 Zone Offense – Acting as a Disruptor


In the attack strategy, a company acts as a disruptor and tries to conquer a market with a new product or a new service (see Fig. 8.24). Moore cites the examples of Apple with the iPhone and Salesforce with the Marketing Cloud. Neither was active in its respective market (mobile phones and digital marketing, respectively) before entering it. Both had solid footholds in other areas (computers and the Sales Cloud, respectively), but were so convinced of their new business model that they were prepared to put the entire enterprise at risk through the size of their venture. Companies using the Zone Offense strategy therefore need to reprioritize: the transformation zone must take precedence over the performance zone for as long as it takes the new business to reach the 10% target. If management is not sure whether the new business is really worth going through this phase, it should not even consider the transformation. If the transformation succeeds, a significantly higher company valuation is possible: Salesforce's share price has developed significantly better since the launch of the Marketing Cloud than that of its competitors SAP and Oracle, which continued to develop without disruptive transformations. At the same time, the business models of the performance zone of both companies yielded excellent profits in those years. For example, SAP's annual net profit has more than quadrupled since 2001 (SAP 2002) – to currently around EUR 4 billion.

Fig. 8.24  Zone Offense – acting as a disruptor according to Geoffrey Moore: only one product is moved from incubation to transformation; transformation is priority 1 until the new product generates 10% of the company's total revenue, the performance zone receives priority 2, the productivity zone priority 3 and the incubation zone priority 4; refrain from the transformation if the company is not sure whether it wants to go through the "J-curve"



Even though the company's current core business is not threatened under the Zone Offense strategy, the entire management team must support the change. Moore estimates that the transformation zone needs about 15 to 20% of the company's resources to reach the set targets successfully – and these funds have to be provided by the other zones during the transition phase. This necessarily leads to conflicting goals that can only be resolved through a clear prioritization of the transformation zone and the commitment of all executives.

8.5.3 Zone Defense – Countering Disruption

The defense strategy – shown in Fig. 8.25 – depicts the transformation from the perspective of the disrupted company (the disruptee). An example of the implementation of the Zone Defense strategy is Microsoft. The company's three major business models (Office programs, operating systems, back-end servers) were highly profitable until cloud and mobile technologies disrupted the market. Apple and Google took over the market for operating systems for mobile devices, and the cloud threatened the business for licensed software and back-end servers. The central thesis of the defense strategy is: a business model that is existentially threatened cannot simultaneously modernize and deliver the traditionally expected earnings figures and revenue increases. The core business is shifted to the transformation zone, and the passage through a new hockey stick curve must be enabled. The requirement for the old business model is not to overtake the disruptor; it is enough to integrate so many elements of the new technology into its own offering that customers refrain from switching completely to the disruptor's offering. So Azure, Microsoft's cloud solution, doesn't have to be

Transformation

Performance

Economically performing

Disruptive innovation

Move the threatened product to transformation zone

Supportive

Stabilize the disrupted product

HR Marketing Facility Management Public Relations Development Finance Training Testing Research

Incubation

Productivity

Modernize the operating model as much as possible with the help of disruptive technologies. The transformation zone gets priority 1, followed by the Incubation zone. The performance zone receives priority 3, the productivity zone priority 4. Maintain transformation mode until company is on a normal growth path.

Fig. 8.25  Zone Defense – The Disruption Encounter according to Geoffrey Moore

236

8  The Cloud Transformation

better or more disruptive than AWS; it just has to hit the core of the cloud value proposition. Customers then often shy away from switching providers because of the expense and risk involved. This statement by Moore, made in 2016, has been borne out in subsequent years: Azure is now catching up with the competition in terms of market share (Agarwal 2017), but is still listed behind AWS in the corresponding Gartner quadrant (Weaver 2019). Companies must assume that the old performance zone will be negatively affected in its economic performance by the transformation. Again, Microsoft serves as an example: As part of the Azure rollout, the incentivization of the sales employees was changed. Whereas they had previously been financially motivated to sell profitable legacy business. But from the moment the company decided to transform their business model to the cloud they were doubly incentivized to sell the cloud solution – even though it did not even generate a positive contribution margin (Moore 2017). So the disruption was active, from within. The success of these actions was immense: Microsoft’s stock value tripled from 2014 to 2019. Moore’s rationale is that shareholders recognize the threat of disruptive technologies and prefer a company that credibly embraces transformation over one that focuses on short-­ term compliance with legacy financial metrics.

8.6 The Impact of Cloud Transformation on Potential Employees

Chapter 7 already presented the greatest challenges in adapting the corporate culture to the digital challenges. In addition, implementing the transformation steps leads to a new understanding of the roles of (potential) employees in the company. The effects of this new role understanding on the individual employee groups are examined in the following sections.

8.6.1 Developers – The New Paradise

The Swedish truck manufacturer Scania calls its cloud-first initiative a "paradise for developers" (Scania 2017). When talking to cloud developers, one statement comes up again and again: "There has never been a better time to be a software developer than now." No wonder: Chap. 4 described the new power of the cloud as software-defined IT – and this power falls primarily into the hands of software developers. In the days of traditional IT, they depended on project managers and business leaders who prioritized their work on behalf of customers and applied extra pressure to speed up the development of new features. They were even more dependent on the system administrators whom they had to ask for additional resources. But the administrators were far away – in remote locations, in other departments, pursuing other goals. The software development process was slow and cumbersome. Because of the high degree of division of labor in the overall process, developers often could not see the result of their work in live operation until months or years after the start of the project and their first line of code.


With the help of the cloud, software developers are moving away from the role of "small cogs in a big machine". They become key players in small teams of heterogeneous specialists. Transferred to the world of sports, the change becomes clear: these teams play fast, and the results are immediately visible. Just a few years ago, the common wisdom was that development tasks were best outsourced to low-cost offshore locations. Now the tide has turned: developers should work as close to the customer as possible, closely embedded on-site in a small team. The demand for their skills is increasing enormously (Gelowicz 2018). However, with increasing responsibility and proximity to customers, the developer's profile is changing. Whereas he could previously retreat into the larger process, he is now expected to communicate directly with customers, understand their requirements and think more in terms of business models.

8.6.2 Cloud Architects – The Scarcest Resource on the Market

The outstanding importance of software architecture for a company's success was already elaborated in Chap. 6: business applications have always had to meet the criteria of robustness, availability, and performance. Achieving this is the task of the software architect. So what is different today, and why is the cloud architect the scarcest resource on the job market? The public cloud is changing the way applications achieve these requirements. When software is no longer developed and operated entirely in-house but is assembled as far as possible from the ready-made IT components available in the public cloud, the software architect plays a prominent role. He knows and understands the large portfolio of components and uses them to design a solution that meets the business requirements. He also decides which parts of the application are still developed and operated in-house and where a ready-made cloud service can be used. Enterprises are migrating their applications to cloud infrastructures, to edge devices (any type of digital touchpoint), or to software services (SaaS). These migrations are architectural challenges: what needs to be clarified are the requirements of the business and the current workload, which services are available in the cloud for this purpose, and which of these should be programmed in-house. Cloud solutions are created by "cloud solution architects". When 80% of companies are transforming their applications to cloud technologies, it is the cloud architects who are especially needed. One metric that illustrates this relationship is the salaries of cloud architects: they are significantly higher than those of developers. Cloud architects know the worth of their work because they know the cost calculations before and after the migration. They see how the business success of an application changes once they have established the new way of working together in teams.


8.6.3 Traditional IT Specialists – Real Threats and Great Opportunities

From the perspective of IT employees, the transformation brings numerous disadvantages. The increasing automation of IT puts the jobs of many IT specialists at risk. The tasks of employees who have maintained and operated systems built on classic Microsoft technologies could largely disappear in the coming years. Microsoft offers applications such as Exchange, SharePoint, OneDrive and CRM as cloud services, as well as communication solutions such as Skype or Teams. All that remains for on-site employees are administrative tasks, such as creating new users or configuring user profile settings. Security and network specialists are still needed in the IaaS world, but their profile is changing significantly: they are more likely to end up in advisory roles above the API. In addition, many management roles disappear: when installation projects shrink to a few hours or days, a dedicated project manager is no longer needed. If a cloud architect alone controls the interaction of network, server and database, there is no longer any need for a service delivery manager to coordinate service provision. At the same time, the number of vacancies for IT experts reaches record levels every year (Bitkom 2018). To take advantage of the opportunities that come with this demand on the labor market, employees in classic IT roles need to rethink their role. Knowledge of new approaches and tools, as well as the ability to develop software, increases their value to their (future) employers. For many, this is not a huge leap: they have already "scripted" – practiced a simple form of programming – in their old jobs. Others are already excellent developers and use those skills outside their jobs, for example in open source projects or when programming their smart homes.

Moving from secondary to primary value creation presents other challenges: employees must engage directly with customers. This includes travel, translating IT language into presentations that customers can understand, and personal skills such as facilitation or dealing with conflict. So how does the change feel for traditional IT colleagues? Very different. For some it will be a liberation, for others the loss of a world in which they have settled comfortably.
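The step from such "scripting" to simple automation against a cloud API is small. The following is a minimal, illustrative sketch of the kind of administrative task mentioned above – creating new users from a list. All names, fields, and the CSV content are hypothetical; in practice, each prepared request would be sent to the cloud provider's identity API rather than printed.

```python
import csv
import io

# Hypothetical input: in practice this would come from an HR export file.
csv_data = """username,department
jdoe,Finance
asmith,Marketing
"""

def build_user_requests(csv_text):
    """Turn a CSV export into user-creation requests.

    In a real setup, each dict would be posted to the cloud
    provider's identity API instead of being returned.
    """
    requests = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        requests.append({
            "userPrincipalName": f"{row['username']}@example.com",
            "department": row["department"],
            "accountEnabled": True,
        })
    return requests

for request in build_user_requests(csv_data):
    print(request["userPrincipalName"])
```

An IT specialist who can read and write such a script is already halfway into the advisory role "above the API" described above.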

8.6.4 Middle Management – Pressure and Fear of Loss

Middle management has a lot to lose in the digital transformation. The team and department leaders started their careers in the old world. There they know their way around and have established themselves. In the new cloud world, the paradigms are changing: specialist teams involved in overarching processes are becoming interdisciplinary teams with overall responsibility for their respective services. Leadership profiles are changing in several directions: in some cases it is now more about agile coaching, in others about specialist knowledge of cloud architectures, and sometimes the whole team disbands because it is no longer needed in the new world.


In addition, middle management remains under pressure to keep the old business running for a while and generate the margins that enable the company to carry out the digital transformation. The company's top performers may be the first to switch to the new world. As a result, managers' room for maneuver for sustaining the old division narrows more and more.

8.6.5 Specialist Departments – Freedom, Chaos and Responsibility

Cloud transformation frees up resources that were previously tied up in administrative processes and allows companies to digitize their core tasks. The first steps are simple: a SaaS-based CRM system is introduced with a few mouse clicks, and the resulting data is regularly copied into a cloud-based analytics platform. The results are copied into a PowerPoint file and stored in a "Dropbox", for example. Such public cloud solutions offer uncomplicated storage space, and the data can be shared immediately with external partners. Instead of writing countless emails to each other, tasks are prioritized via digital card walls in Trello, a cloud-based tool that easily visualizes lists, cards and boards. All of the tools mentioned can be implemented and used by employees in the specialist departments without any IT knowledge. The problems of cloud transformation only emerge in the business departments over time: departments suddenly have trouble allocating costs. It is not clear who has access to which data or how admin rights can be reassigned when the employee who introduced the respective tool leaves the company. The legal department does not know the tools' contact persons, and its inquiries as to whether the applicable data protection requirements are implemented remain unanswered. The employees' freedom quickly turns into chaos if the new possibilities are not used responsibly. The specialist departments now need know-how in the areas of software development, cloud architectures, security and compliance. Some professions certainly feel threatened by the new proximity of IT to their subject. The German Association of Freelance Translators and Interpreters, for example, felt compelled to inform the public that the machine learning-based translation service DeepL still does not translate perfectly (Bernard 2018).
Overall, the trend towards cloud-based IT can be interpreted as a positive development for the specialist departments: significantly more services become available to their employees, and these can be introduced quickly and used with little investment. The downside of this development is managing the associated control and compliance tasks.
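The manual copy step described above – extracting figures from the CRM for a slide deck – is typically the first thing such a department later automates. A minimal sketch under stated assumptions: the record structure and field names are hypothetical stand-ins for the JSON payload a SaaS CRM's REST API would return.

```python
from collections import defaultdict

# Hypothetical CRM export, e.g. the JSON payload of a SaaS CRM's REST API.
crm_records = [
    {"customer": "Acme GmbH", "status": "lead", "revenue": 0},
    {"customer": "Example AG", "status": "won", "revenue": 12500},
    {"customer": "Muster KG", "status": "won", "revenue": 8300},
]

def revenue_by_status(records):
    """Aggregate revenue per sales status – the kind of figure
    that was previously copied into PowerPoint by hand."""
    totals = defaultdict(int)
    for record in records:
        totals[record["status"]] += record["revenue"]
    return dict(totals)

print(revenue_by_status(crm_records))  # {'lead': 0, 'won': 20800}
```

Even a small script like this raises the governance questions from the text: who runs it, where do the credentials live, and who inherits it when its author leaves the company.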


8.6.6 Top Management – Financial Ratios, Threats of Disruption and New Ways of Doing Things

Much of the burden of cloud transformation rests on the shoulders of top management. That makes its situation anything but easy: the owners continue to insist on meeting revenue and profit targets. Meeting these is becoming more difficult every year; after all, existing business models are coming under pressure from new competitors who have been able to build their processes directly on new technologies, without ballast. Above all, the well-trained employees who are familiar with the new technologies are the most likely to leave the company, because they want to work with the new methods rather than maintain old-fashioned processes. While the transition to the new digital best practices such as cloud native, agile and DevOps is technically comparatively easy, the organizational transformation required to make it happen is not. Power structures exist, processes are entrenched and closely linked to current IT tools and production methods. The corporate culture is not easily changed; leadership models and career paths are closely intertwined with the current business model. The theoretical connection between modern software architecture and competitiveness in the digital business model is quickly understood – but how can enterprise applications be improved if they are provided by external suppliers like SAP and cannot be changed in the first place? In addition, there is the question of access to capital. Many companies are credit-financed, and banks have special termination rights if certain profitability ratios are not met in the short term. Cloud-native solutions scale excellently on the delivery side and only generate costs when customers actually use them. Nevertheless, long, unprofitable growth phases often have to be endured, especially on the sales side.
Sometimes competitors with sufficient venture capital first conquer the market and profit from the digital network effects before they begin to monetize their market share (see Chap. 2). Clayton Christensen already pointed out that it was not the failure of top managers that drove companies into the innovator's dilemma trap (Christensen 1997). On the contrary: management had proven over the years that it could successfully lead the company and develop it evolutionarily. In the era of the cloud, however, top management faces entirely new challenges. In the simplest case, it is just a matter of learning and implementing the new best practices on the infrastructure side. In the worst case, however, the future of the company is at stake – all stakeholders have to be committed to a major change, and a new, risky journey into a completely new market has to be led.

8.7 A Successful Cloud Transformation – Explained in One Picture

The basic models for describing the interrelationships in the context of cloud transformation are the Three Horizons Model by McKinsey, the Innovator's Dilemma by Clayton Christensen, and "Zone to Win" by Geoffrey Moore. The horizons model describes the
three investment horizons that exist in large companies. Looking at the current year, business profit is made in the first horizon. With limited budget and risk, the third horizon develops the innovations that will become relevant in more than three years. The middle horizon should contain the next big growth business. In the case of disruptive technologies, an "innovator's dilemma" may arise between the first and second horizons: the new technology is significantly cheaper but initially has lower margins, so it does not seem attractive for the established company to invest in it. In Zone to Win, the innovator's dilemma is combined with the horizons model in a matrix and supplemented with an analysis of strategic options. Moore describes in detail the case in which a company's business model is threatened in its entirety. In many cases, however, it is enough merely to use the disruptive technology in the corporate infrastructure or to make existing business models more effective with it. Overall, three different levels of cloud transformation can be distinguished (Fig. 8.26).

1. Cloud technologies are disrupting traditional IT. With their help, companies can develop and operate software faster and more cost-effectively, and scale worldwide at virtually zero marginal cost. To keep up with companies such as startups that rely exclusively on the new methods, it is generally advisable to carry out a transformation to the public cloud as well. Depending on the individual case, different migration models are recommended: migrations to lower cloud levels (such as IaaS) are faster and more cost-effective, while migrations to higher cloud levels (such as platform services) are more expensive and possibly more effective. Along the way, the company also builds up expertise in modern software methods and technologies.

2.
If a company's existing business models are to benefit from the new technologies in terms of content, a more far-reaching conversion of the operating model is recommended. The employees relevant to the success of the business model – experts for the business model itself, software developers and operations staff – are brought together

Fig. 8.26  Public cloud transformation – summary of the most important points per level
[Figure content: level 1, infrastructure model (IaaS, PaaS, serverless; 5R migration models; optimize back-end systems, moving away from the own data center); level 2, operating model (SaaS, machine learning, Internet of Things, data analytics; modernize value creation with interdisciplinary and agile teams, customer focus and data analytics); level 3, portfolio of business models (focus the transformation, go through the J-curve, manage the zones until the new business reaches 10% of revenue).]


in a team. They are given end-to-end responsibility and, thanks to their proximity to the customer and their implementation expertise, can develop the business quickly and successfully. They preferably analyze the customer in a data-driven manner and understand them and their interests on the basis of their actual behavior. If sufficient data is available, the optimizations can be supported by machine learning algorithms.

3. The greatest effort for the organization arises when the entire portfolio is to be completely realigned. This can be the case when a company is so convinced of a disruptive product that it wants to enter a new market in addition to its current service portfolio. This is the path Apple successfully took with the iPhone. More often, a company's core business is threatened by a new, disruptive technology. A less successful example of this is Kodak, whose film business was disrupted by the digital camera. In both cases, the CEO should secure a mandate to transform from his shareholders, because the company is exposed to significant risk during this time. Only one new service should be introduced at a time, and the transformation should be top priority until the new portfolio share reaches about 10% of the entire company's revenue. Only then is the new business viable in the organization under normal conditions.

Regardless of which of the three levels a company chooses, the effects for the employees affected are significant in all cases. Familiar processes and tools will change, more processes will be outsourced, teams will be put together differently, proximity to the customer will grow, and the interaction between employees and managers will change. If the transformation succeeds, it is a win-win situation for everyone: services move closer to the customer, companies bear a lower investment risk and can scale globally with less capital investment.
Labor productivity increases and employees can be better compensated.

References

Agarwal, Nitin (2017): Why Microsoft Azure is growing faster than AWS, published in: motifworks.com, http://motifworks.com/2017/03/20/why-microsoft-azure-is-growing-faster-than-aws/, retrieved June 2019.
Amazon (2019): Leadership Principles, published in: amazon.jobs, https://www.amazon.jobs/de/principles, accessed June 2019.
Bagban, Khaled and Ricardo Nebot (2016): Governance and compliance in cloud computing, published in: Reinheimer and Robra-Bissantz (2016), pp. 163–179.
Baghai, Mehrdad, Steve Coley and David White (2000): The Alchemy of Growth, Basic Books, New York.
Bernard, Andrea (2018): DeepL: Appearances are deceiving, published in: dvud.de, https://dvud.de/2018/05/deepl-der-schein-truegt/, accessed June 2019.
Bitkom (2018): 82,000 vacancies: IT skills shortage intensifies, published in: bitkom.org, https://www.bitkom.org/Presse/Presseinformation/82000-freie-Jobs-IT-Fachkraeftemangel-spitzt-sich-zu, accessed June 2019.


Briggs, Barry and Eduard Kassner (2017): Enterprise Cloud Strategy, 2nd edition, Microsoft Press, https://info.microsoft.com/rs/157-GQE-382/images/EN-US-CNTNT-ebook-Enterprise_Cloud_Strategy_2nd_Edition_AzureInfrastructure.pdf, retrieved May 2019.
Braddy, Rick (2018): When Do You Decide To Ditch Your Own Data Center?, published in: forbes.com, https://www.forbes.com/sites/forbestechcouncil/2018/07/05/when-do-you-decide-to-ditch-your-own-data-center/#4e8c802b426e, accessed June 2019.
Bulygo, Zach (2013): How Netflix Uses Analytics To Select Movies, Create Content, and Make Multimillion Dollar Decisions, published in: neilpatel.com, https://neilpatel.com/blog/how-netflix-uses-analytics/, accessed June 2019.
DeCarlo, Matthew (2018): Xerox PARC: A Nod to the Minds Behind the GUI, Ethernet, Laser Printing, and More, published in: techspot.com, https://www.techspot.com/guides/477-xerox-parc-tech-contributions/, accessed May 2019.
Christensen, Clayton M. (1997): The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, Harvard Business Review Press, Boston.
Coley, Steve (2009): Enduring Ideas: The three horizons of growth, published in: mckinsey.com, https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/enduring-ideas-the-three-horizons-of-growth, retrieved May 2019.
Dediu, Horace (2011): Comparing top lines: Apple vs. Microsoft, published in: asymco.com, http://www.asymco.com/2011/09/29/comparing-revenues-apple-and-microsoft/, accessed May 2019.
Dernbach, Christoph (2019): Apple and Xerox PARC, published in: mac-history.de, https://www.mac-history.de/apple-geschichte-2/2012-01-29/apple-und-xerox-parc, accessed June 2019.
Gairing, Fritz and Heiko Weckmüller (2019): Successful approaches to managing organizational change processes, published in: PERSONALquarterly, No. 2/2019, pp. 46–49.
Gelowicz, Svenja (2018): Buhlen um Softwareentwickler, published in: sueddeutsche.de, https://www.sueddeutsche.de/auto/arbeitsmarkt-buhlen-um-softwareentwickler-1.4173881, accessed June 2019.
Grüner, Sebastian (2016): Stress test – Netflix's Chaos Monkey destroys infrastructure more effectively, published in: golem.de, October 20, 2016, https://www.golem.de/news/stresstest-netflix-chaos-monkey-zerstoert-effektiver-infrastruktur-1610-123938.html, retrieved May 2019.
Hahn, Dave (2018): How Netflix thinks of DevOps, published in: youtube.com, https://www.youtube.com/watch?v=UTKIT6STSVM, accessed May 2019.
Haufe Akademie (2019): Whitepaper New Work, published in: haufe-akademie.de, https://www.haufe-akademie.de/l/new-work/, accessed June 2019.
Hawkins, Andrew J. (2019): Apple's secretive self-driving car project is starting to come into focus, published in: theverge.com, https://www.theverge.com/2019/2/20/18233536/apple-self-driving-car-voluntary-safety-report-project-titan, accessed May 2019.
Hill, Andreas F. (2017): Sustaining Growth with the Three Horizons Model for Innovation, published in: medium.com, https://medium.com/frameplay/planning-for-future-growth-with-the-three-horizons-model-for-innovation-18ab29086ede, accessed May 2019.
Hushon, Dan (2018): 6 digital transformation trends for 2019, published in: blogs.dxc.technology, https://blogs.dxc.technology/2018/11/15/6-digital-transformation-trends-for-2019/, accessed June 2019.
Kalakota, Ravi (2015): Customer Engagement Architecture: A Quick Reference Guide, published in: disruptivedigital.wordpress.com, https://disruptivedigital.wordpress.com/2015/03/26/customer-engagement-architecture/, retrieved June 2019.
Kenton, Will (2019): J-Curve Effect, published in: investopedia.com, https://www.investopedia.com/terms/j/j-curve-effect.asp, accessed June 2019.


Krauter, Ralf (2018): Der Hype um die Quantencomputer – Ralf Krauter im Gespräch mit Manfred Kloiber, published in: deutschlandfunk.de, https://www.deutschlandfunk.de/rechnen-mit-qubits-der-hype-um-die-quantencomputer.684.de.html?dram:article_id=422355, accessed May 2019.
Kurtuy, Andrew (2019): What can you learn from Satya Nadella's Rise to CEO, published in: novoresume.com, https://novoresume.com/career-blog/satya-nadella-one-page-resume, accessed June 2019.
Lorusso, Vito Flavio (2016): Overview of Prestashop and Azure integration and Partnership, presentation at Prestashop Day conference, published in: slideshare.net, https://www.slideshare.net/vflorusso/prestashop-and-azure, accessed May 2019.
Merks-Benjaminsen, Joris (2017): Customer focus as the key to digitalization, published in: thinkwithgoogle.com, https://www.thinkwithgoogle.com/intl/en-gb/marketing-resources/content-marketing/customer-focus-key-digitalization/, accessed May 2019.
Microsoft Azure (2019): Framework for Adopting the Microsoft Cloud (Microsoft Cloud Adoption Framework), published in: docs.microsoft.com, https://docs.microsoft.com/de-de/azure/architecture/cloud-adoption/overview, accessed May 2019.
Möhrle, Martin and Dieter Specht (2018): Not-Invented-Here-Syndrome, published in: wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/not-invented-here-syndrom-40808/version-264185, accessed June 2019.
Moore, Geoffrey (2015): Zone to Win: Organizing to Compete in an Age of Disruption, Diversion Books, New York.
Moore, Geoffrey (2016): Zone to win, talk at GoTo conference, published in: youtube.com, https://www.youtube.com/watch?v=fG4Lndk-PTI, retrieved May 2019.
Moore, Geoffrey (2017): Zone to Win – Organizing to Compete in an Age of Disruption, talk at TSIA conference, published in: youtube.com, https://www.youtube.com/watch?v=FsV_cqde7w8, accessed June 2019.
Nadella, Satya, Greg Shaw and Jill Tracie Nichols (2017): Hit Refresh – The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone, HarperCollins, New York.
Nordcloud (2018): Nordcloud is leading "Accelerator" for "Managed Public Cloud" in Germany, published in: nordcloud.com, https://nordcloud.com/de/nordcloud-ist-fuehrender-accelerator-fuer-managed-public-cloud-in-deutschland/, retrieved May 2019.
Orban, Steven (2016): 6 Strategies for Migrating Applications to the Cloud, published in: aws.amazon.com, https://aws.amazon.com/de/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/, retrieved May 2019.
Protalinski, Emil (2019): Microsoft reports $32.5 billion in Q2 2019 revenue: Azure up 76%, Surface up 39%, and LinkedIn up 29%, published in: venturebeat.com, https://venturebeat.com/2019/01/30/microsoft-earnings-q2-2019/, retrieved May 2019.
Rackspace (2019): Unlocking the value of cloud faster with a leader according to Gartner Magic Quadrant, published in: rackspace.com, https://www.rackspace.com/de-de/about/magic-quadrant-leader, retrieved June 2019.
Ryan, Joe (2018): Three horizons for Growth, published in: youtube.com, https://www.youtube.com/watch?v=cwwAswmJ_yE, retrieved May 2019.
Ryland, Mark (2018): AWS Summit Berlin 2018 Keynote, talk at AWS Summit conference, published in: youtube.com, https://www.youtube.com/watch?v=s2Lpkm-jewo&feature=youtu.be, accessed June 2019.
SAP (2002): SAP Annual Report 2001, published in: sap.com, https://www.sap.com/docs/download/investors/2001/sap-2001-geschaeftsbericht.pdf, accessed June 2019.
SA Technologies (2018): The 5 R's of Cloud Migration Strategy, published in: medium.com, https://medium.com/@satechglobal/the-5-rs-of-cloud-migration-strategy-3b6a5676dda2, accessed June 2019.


Scania (2017): Scania's Cloud First initiative is paradise for developers, published in: scania.com, https://www.scania.com/group/en/scanias-cloud-first-initiative-is-paradise-for-developers/, retrieved June 2019.
Schewe, Gerhard (2018): Organizational Culture, published in: wirtschaftslexikon.gabler.de, https://wirtschaftslexikon.gabler.de/definition/organisationskultur-46204/version-269490, accessed June 2019.
Schumacher, Gregor (2018): Cutting Costs with Cloud Native, published in: cloud-blog.arvato.com, https://cloud-blog.arvato.com/kosten-senken-mit-cloud-native/, accessed May 2019.
Schwitter, Konrad, Peter Weissmüller and Christian Katz (2007): Unternehmenskultur als Treiber für den Unternehmenserfolg, published in: SME Magazine, No. 5/2007, pp. 10–33.
Sinek, Simon (2011): How great leaders inspire action, TED Talk, published in: youtube.com, https://www.youtube.com/watch?v=7zFeuSagktM&feature=youtu.be, retrieved June 2019.
Siriu, Stefanie (2018): Corporate Governance, published in: haufe.de, https://www.haufe.de/compliance/management-praxis/corporate-governance/corporate-governance-defintion-und-ziele_230130_479056.html, accessed May 2019.
South, Michael (2018): Scaling a governance, risk, and compliance program for the cloud, emerging technologies, and innovation, published in: aws.amazon.com, https://aws.amazon.com/de/blogs/security/scaling-a-governance-risk-and-compliance-program-for-the-cloud/, accessed May 2019.
Tappin, Steve (2015): The six types of CEO, published in: hrmagazine.co.uk, https://www.hrmagazine.co.uk/hr-in-the-boardroom/article-details/the-six-types-of-ceo, accessed June 2019.
Thrasyvoulou, Xenios (2014): Understanding the innovator's dilemma, published in: wired.com, https://www.wired.com/insights/2014/12/understanding-the-innovators-dilemma/, retrieved May 2019.
Weaver, Marc (2019): As 2018 Ends, AWS Makes Surprise Announcements for 2019, published in: simplilearn.com, https://www.simplilearn.com/aws-makes-surprise-announcements-for-2019-article, retrieved June 2019.
Weddeling, Britta (2018): Fewer iPhones, more profit – Apple figures in flash analysis, published in: handelsblatt.com, https://www.handelsblatt.com/unternehmen/it-medien/quartalszahlen-weniger-iphones-mehr-gewinn-die-apple-zahlen-in-der-blitzanalyse-/23258442.html?ticket=ST-477138-1x2ld5nEdcDuFAfsx46L-ap2, accessed May 2019.
Weinberger, Matt (2015): Satya Nadella: 'Customer love' is a better sign of success than revenue or profit, published in: businessinsider.com, https://www.businessinsider.com/microsoft-ceo-satya-nadella-on-culture-2015-10?IR=T, accessed June 2019.

9 Cloud Transformation – How the Public Cloud Is Changing Businesses

Abstract

“Cloud Transformation – How the Public Cloud Is Changing Companies” bridges the gap between the theoretical foundations of digitization (Chaps. 1, 2 and 3), the technical factors of implementing a cloud strategy (Chaps. 4, 5 and 6) and the organizational implications for corporate management (Chaps. 7 and 8). So that the reader always has the most important points quickly at hand, this chapter provides a compact overview of the central theses.

9.1 Businesses Fail – Even When Managers Seem to Do Everything Right

Disruptive technologies are changing the market logic of entire industries. Products that were previously accessible only to a few customers due to their price and complexity are now becoming available to the broad mass of users. Compared with the traditional product, a disruptive product changes not only in price, but also in target groups, distribution channels, scope of services and degree of standardization. Established providers usually recognize the emerging disruptive technology. However, it is not worthwhile for them to launch the new disruptive products, as their existing, secure business model with its high margins would be negatively affected. The necessary change in the value chain is also exceptionally risky. Following the company's standard decision-making patterns, many of the managers involved do not consider it appropriate to invest in the disruptive business. Once the disruptive technology has changed the market to such an extent that the margins of the old market leaders decrease, it is usually too late for rescue measures. The company has lost the battle for the new business (Figs. 9.1 and 9.2).

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023
R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2_9


[Figure: book roadmap – Chap. 1: companies under pressure, disruptive technologies endanger the existing businesses of previously successful companies; Chap. 2: digital dominates, platforms are the new economic paradigm and certain framework conditions increase the probability of success in digital business; Chap. 3: marginal costs decide, low marginal costs are the key factor in competition and digital business models are zero marginal cost business models; Chap. 4: cloud automates IT, with many ready-to-use IT components, microtransactions, costs aligned with benefits and global scaling; Chap. 5: transform applications, applications that are key for the competitive advantage should be transformed to the cloud; Chap. 6: master the cloud and software value chain with new skills (virtualization layers, sourcing options, software architecture, process flows, people & organizations); Chap. 7: benefit from sinking transaction costs, cloud technologies reduce the costs and risks of outsourcing, companies focus only on their core business and complex network-based value chains are created; Chap. 8: transform companies, disruptive technologies require different types of transformation of IT infrastructure, operating model and business model.]

Fig. 9.1  Cloud transformation – How the public cloud is changing companies

In many cases, the cloud is currently the most important example of such a disruptive technology. It disrupts classic corporate IT by making computing power, storage, networks and virtually all other IT components available to a broad user group in a simple manner. This simplification is achieved by fully automating the ordering and delivery processes. Whereas manual steps, and thus people, are still necessary in traditional IT, all customer-facing processes are automated in the cloud. With the cloud, the IT of a company can itself be digitized. This sounds paradoxical at first, since the IT products and services in a company are already digital before the switch to the cloud. However, this applies only at the level of the products, not at the level of the organization of the work processes. Since powerful IT is essential for the operation of digital


[Figure: business development over time – market leader IBM with mainframe computers (few, highly customized mainframe computers for a few major customers in the B2B business) faces the management dilemma; the new winner Microsoft rises with the personal computer (standardized small computers as a mass business for B2B and B2C, with strong relevance of the operating system business).]

Fig. 9.2  Innovator’s Dilemma according to Clayton Christensen using the example of “Mainframe Computer versus Personal Computer”

[Figure: traditional IT (application, operation, databases, infrastructure, with manual steps between customer and IT) versus cloud IT (serverless, PaaS, IaaS, mobile, with digitized order and delivery processes).]

Fig. 9.3  Cloud as digitization of IT

business models, the cloud has a key role to play in the digitization process: companies that make use of the new functionalities can set up and scale business models faster and more cheaply than their competitors. In this way, they create relevant competitive advantages over companies that pursue classic IT approaches (Fig. 9.3). The cloud disrupts not only traditional IT companies, but potentially all providers of digital business models. Understanding the transformative power of the cloud is therefore essential for all companies that want to operate successfully in the digital age (Fig. 9.4).

[Figure: the four quadrants of IT disruption – users of traditional IT (e.g. Wal Mart, Otto, Disney) are disrupted by cloud IT users with digital business models (e.g. Amazon, Zalando, Netflix); traditional IT companies (e.g. SAP, T-Systems) are disrupted by cloud providers (e.g. Salesforce, AWS).]

Fig. 9.4  Cloud technologies as a multiple disruptive factor

[Figure: drivers of digitalisation – powerful digital devices, omnipresent Internet, mass-data-based individualization, digitization of products and processes, platform-based business models; consequences: increasing economic attractiveness, winner-takes-all markets, changed risk profile of investments.]

Fig. 9.5  Digitalisation is changing economic paradigms

9.2 Digitalisation as a Defining Trend in the Economy

The technical term digitization initially refers only to the conversion of analog signals into digital ones. The great economic relevance of this convertibility has become apparent only in the last three decades, in combination with further technological developments. The personal computer made it possible to work decentrally at every desk in a company. The ubiquitous Internet, combined with the easy-to-use smartphone, led to new forms of social communication, and the ease with which digital data can be processed on all devices led to the convergence of media. All in all, this development leads to a digital omnipresence. Digital technologies are being used in all areas of value creation, as far as possible, and are replacing human work steps (Fig. 9.5). If the product itself cannot be digitized, such as a pair of jeans or an automobile, at least all process steps along the value chain will be digitized, for example production control, quality control and distribution. In cases like taxi services, much of the value creation is mapped by digital platforms like Uber. The fact that customers have easy access to the service via the app, and thus provide a lot of usable data for analysis and further product development, forms the basis for the company's future economic profit, in addition to the actual taxi ride. The digital economy is changing the economic paradigms. Global business models for non-digital products are very investment-intensive; local production and distribution facilities must be set up and complex worldwide logistics organized. Each local market must be supplied and conquered individually. Digital businesses, on the other


[Figure: success factors in the digital platform economy – a modern error culture, to try out new business models and features faster; access to venture capital, to enable high-risk growth phases; cloud and software know-how, to remain competitive in the underlying digital product; regulatory opportunities, to leverage the value creation network and enable data-based business.]

Fig. 9.6  Success factors in the digital world

hand, require significantly less investment in production facilities and logistics. They use existing cloud infrastructures that only need to be paid for when they are successful, i.e. when customers actually access the digital products and services. In turn, higher investments are necessary in the growth phase of a product, because this is when it is decided whether the company reaches the decisive size for a “winner-takes-all” market. This new risk profile of digital businesses requires different economic, cultural and political framework conditions than traditional, non-digital businesses (see Fig. 9.6). Companies can make much better use of the opportunities presented by this development if

• investors allow high-risk growth phases (e.g. venture capital),
• existing IT infrastructures can be used (e.g. public cloud, software services),
• modern software architectures and methods are used (e.g. microservices, DevOps),
• business models are quickly tested under market conditions, discarded and improved (e.g. error culture, minimum viable product).

9.3 Marginal Costs Determine Competitiveness

Marginal costs are the costs a company incurs to produce one additional unit of a good. They have always been one of the determining factors in a business. As a rule, the larger a firm is and the more experience it has in producing a good, the lower its marginal costs. The lower the marginal costs, the better the competitive position, which can then be used either to grow faster or to make higher profits than the competition. In many phases of the global economy, the battle for the lowest marginal costs has produced large corporations, mostly through corporate mergers. In the digital world, on the other hand, even small companies can very quickly operate with very low marginal costs. This is due to the special characteristics of digital goods:

• Provisioning takes place via existing infrastructures, such as Google's cloud data centers and Deutsche Telekom's network infrastructure.


[Figure: cost curves over output quantity – with complete digitization of production (CD), product and sales (streaming), average costs converge towards the near-zero marginal costs; a zero marginal cost business model emerges.]

Fig. 9.7  The emergence of zero marginal cost business models with simultaneous digitization of production, product and sales

• The digital good itself is generated by software. Its actual, individual cost per use stems from the additional power consumption per computing operation. In most cases, this is very low and in fact approaches zero.

If all process steps along the value chain, from production to distribution, are digitized, zero marginal cost business models emerge (Fig. 9.7). The music industry has developed into a zero marginal cost industry in recent years. In the 1990s, the production of music was first digitized with the invention of the CD, while distribution was still physical. The next step was to digitize the distribution channel as well, with Apple's iTunes leading the way. The size of the company itself no longer plays a decisive role in the success of the business model. The low marginal costs arise from the interplay of already existing IT infrastructures (usually the public cloud) and modern software architectures that scale globally in an automated manner. Initially small startups, such as Spotify, can thus successfully conquer global markets and destroy the existing business models of large companies.
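The mechanics described above can be made concrete with a toy calculation: when marginal costs are near zero, the average cost per unit is dominated by fixed costs and falls towards zero as output grows. The numbers below are purely illustrative assumptions, not figures from the book.

```python
# Illustrative only: average cost per unit for a digital good with
# high fixed costs and near-zero marginal costs.
FIXED_COSTS = 1_000_000.0   # e.g. initial software development (assumed)
MARGINAL_COST = 0.0001      # e.g. power per streaming request (assumed)

def average_cost(units: int) -> float:
    """Total cost divided by output quantity."""
    return (FIXED_COSTS + MARGINAL_COST * units) / units

for units in (10_000, 1_000_000, 100_000_000):
    print(f"{units:>11,} units -> {average_cost(units):.4f} EUR/unit")
```

At ten thousand units the average cost is around 100 EUR per unit; at one hundred million units it has collapsed to roughly a cent, which is why scale, once reached, is so hard to attack.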

9.4 Cloud as a Key Technology of Digitization

The three most important core processes of IT value creation are: Creating Software, Operating Software, and Scaling Software. In traditional IT, all three processes involve considerable manual effort:

• Creating software: Requirements are elicited, agreed upon, and specified. A project is started, resources are allocated and the architecture is designed. Then the software is developed, tested, adapted, tested again, and put live.


• Operating software: For software to be used, an IT infrastructure is necessary, i.e. network, storage and computing power. In addition, databases, operating systems, runtimes and other middleware must be made available. The application itself must be kept executable and constantly adapted to new requirements.
• Scaling software: Depending on the usage situation, additional IT resources are added and removed again. Depending on the geographical distribution of use, this is also done globally.

Most of the tasks described require individual, human activities in traditional IT. Programmers develop the software, testers check the functionality, system administrators provide infrastructure and middleware, operations staff take care of downtimes, helpdesk staff deal with customer enquiries. Coordination between the many roles is handled by project managers, service managers and other management functions. Especially when it comes to changes and innovations, traditional IT is therefore expensive, slow and prone to errors.

Cloud computing enables the automation of IT value creation. The key to this is the systematic use of programming interfaces, the so-called APIs (Application Programming Interfaces). Via such interfaces, IT services can be called up by the customer in a standardized manner via a programming call (Fig. 9.8). If, for example, a developer needs a server for his software, he can order it via an “API call” with a simple command. A process that takes several days and involves several people in traditional IT becomes an automatable process step that is completed within a few minutes. The same applies in principle to all conceivable IT components, such as storage, networks, databases and operating systems, but also to facial recognition, translation services, blockchain applications or entire CRM systems, online marketing tools and office software.
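Such an “API call” for a server can be sketched as follows. The endpoint, payload fields and machine sizes are purely illustrative assumptions and do not correspond to any specific provider's API; the point is only that ordering infrastructure becomes a single programmatic step.

```python
import json
import urllib.request

# Illustrative sketch of ordering a virtual machine via an API call.
# Endpoint and payload fields are hypothetical, not a real cloud API.
def build_vm_order(name: str, size: str, region: str) -> bytes:
    """Assemble the JSON body that would be POSTed to a provisioning API."""
    payload = {
        "name": name,
        "size": size,        # machine class, e.g. 2 vCPUs / 4 GB RAM
        "region": region,    # data-center region for the VM
        "image": "ubuntu-22.04",
    }
    return json.dumps(payload).encode("utf-8")

def vm_order_request(body: bytes) -> urllib.request.Request:
    """Wrap the order in an HTTP request object (not actually sent here)."""
    return urllib.request.Request(
        "https://api.example-cloud.invalid/v1/servers",  # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = vm_order_request(build_vm_order("web-01", "small", "eu-central"))
print(req.method, req.full_url)
```

In practice a provider's SDK or CLI hides this HTTP call, but the principle is identical: a few lines of code replace an ordering process that previously involved several people and days of lead time.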
In addition, all of these cloud services can easily be combined with each other and in turn offered via an API. In this way, fast, very powerful value networks are created. If software fully utilizes the advantages of the cloud, the resulting application is called “cloud native”. The economic benefits of the cloud can be summarized as follows:

[Figure: traditional application (many dependencies, human effort, no direct customer benefit, high holding costs, hidden complexity across security & integration, runtimes & libraries, databases, operating system, virtualization, computing power, memory and network) versus cloud application (automated processes, focus on core business, ready-to-use IT components without investment, consumed via APIs for face recognition, translation, databases, network, storage and compute).]

Fig. 9.8  Digitization of software processes through the cloud


[Figure: cloud automates IT – the most important opportunities for companies arise from the many prefabricated IT components (develop software faster, operate it more cheaply, scale it more easily), the direct synchronization of costs and benefits (lower fixed costs, absorb peak loads cheaply, low-risk trial and error), the usability in microtransactions (enabling zero marginal cost business models without own investments) and the possibility of global scaling (worldwide business models also for small companies).]

Fig. 9.9  Concrete economic effects of the cloud transformation

• Ready-to-use IT components: The public cloud in particular offers a comprehensive catalog of ready-to-use IT components. In this way, software can be created significantly faster and more cost-effectively. Since the prefabricated components are maintained by the cloud provider, operating costs are also significantly reduced.
• Costs only when used: In traditional IT, components are purchased in advance in the amount of the expected maximum use. Since the average utilization of the components is usually significantly below the peak load, too many resources are usually held in reserve, resulting in high fixed costs. With cloud-based IT, billing is based on actual usage. Load peaks can be absorbed at short notice with the help of resources made available for a short time.
• Microtransactions: Many cloud services can be consumed and billed in very small units of measure. For some services, this means that billing is based on the amount of memory and computing power used. Prices per gigabyte-second are often in the micro-cent range. This fine granularity enables low marginal costs when scaling the application.
• Global scaling: The large public cloud providers have invested massively in their global infrastructure in recent years. This infrastructure can now be used by customers in microtransactions and without investment costs. Assuming a corresponding cloud-native architecture of the application, even small companies can roll out global business models in a very short time (Fig. 9.9).
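The granularity of such microtransaction billing can be illustrated with the serverless prices quoted later in Fig. 9.10 (€0.000014 per GB-second, €0.169 per million executions); the workload parameters (memory size, execution time, call volume) are assumptions made for the sake of the example.

```python
# Monthly serverless cost at the per-use prices quoted in Fig. 9.10.
# The workload parameters below are assumptions for illustration.
PRICE_PER_GB_SECOND = 0.000014   # EUR per GB-second of execution time
PRICE_PER_MILLION_CALLS = 0.169  # EUR per million executions

def monthly_cost(calls: int, memory_gb: float, seconds_per_call: float) -> float:
    """Cost = execution time billed in GB-seconds + a small per-call fee."""
    gb_seconds = calls * memory_gb * seconds_per_call
    return (gb_seconds * PRICE_PER_GB_SECOND
            + calls / 1_000_000 * PRICE_PER_MILLION_CALLS)

# Assumed workload: 5 million calls, 128 MB memory, 200 ms per call
cost = monthly_cost(5_000_000, 0.125, 0.2)
print(f"{cost:.2f} EUR/month")   # a few euros instead of fixed monthly fees
```

Five million calls cost a low single-digit euro amount per month, and a month with zero calls costs exactly zero, which is precisely the synchronization of costs and benefits described above.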

9.5 Classic Applications Can Be Migrated to Cloud Technologies

When an application is transformed to take advantage of the cloud natively, the economic parameters of the underlying business model change. This enables the economic disruption of companies or entire industries. For example, if a simple website is provided with classic IT, a network, storage, redundant virtual machines and a redundant database are necessary for this. These are set up and

[Figure: a simple example website – with traditional IT, virtual machines, storage, network, database cluster, system operation and backup are reserved according to the expected maximum workload, at fixed total costs of about €1000 per month, with unused resources outside the weekly usage peaks and scaling only by individual agreement; as a cloud-native solution, a serverless service plus a database platform service (PaaS) scales automatically with actual usage, billed at €0.000014 per GB-second of execution time, €0.169 per million executions and €0.0137 per database request unit.]

Fig. 9.10  Conversion of a simple application to Cloud Native

maintained once in their individual interaction. Costs are incurred for setting up the system environment, for purchasing the infrastructure, and for monthly operation, regardless of whether and how much the website is actually used. In the example of an average website shown in Fig. 9.10, a fixed monthly amount of over €1000 is incurred. If more resources are needed, for example for a higher base load, larger peak loads or geographical scaling, the result is an individual project with several people involved as well as additional investments in the infrastructure. If the possibilities of the cloud are used natively, a different picture emerges. Instead of building and providing the complete value chain of infrastructure, middleware and application, the website operator simply uses one of the existing cloud services. Depending on the application, these can be platform services, serverless or container services. All these services also use storage, network and virtual machines in the background. But this complexity, including the associated operating expenses and provisioning costs, remains hidden from the customer. In the example referenced here, a serverless service (Azure Functions) is used for the website in combination with a database platform service (PaaS). Both generate no fixed costs, are globally scalable, payment is based on actual usage, and the marginal cost of a single call is close to zero. An application with fixed costs, which is associated with investments and can only be scaled at great expense when additional demand arises, thus becomes an investment-free application with virtually zero marginal costs. The website presented here has become a “commodity” good, i.e. a good that is freely available on the market and whose various providers compete transparently. The business applications developed for the specific needs of companies are usually much more complicated.
They are based on traditional architecture concepts (monoliths), have grown over years and are sometimes poorly documented. Nevertheless, migrations to zero marginal cost architectures are also possible there (Fig. 9.11). Cloud providers are aware of this challenge and have expanded their IT toolbox accordingly. The cloud transformation is usually designed by a “cloud solution architect”. He analyzes the old application and compares it with the requirements from the new business


[Figure: cloud transformation of a business-model-relevant application – before the transformation: high fixed costs of about €50,000 per month for infrastructure handling a maximum of 3 million transactions (TA), fixed marginal costs of about €20,000 for each additional 3 million TA per month, a long scaling duration of six weeks for setup and delivery of hardware, and higher average total costs of €0.0339 per transaction; as a cloud-native application using platform services (web server, Logic Apps, service bus, database, firewall, storage), only 5 of the 13 components cause fixed costs (about €20,000 per month), scaling is usage-based, unit costs are a low €0.0027 per TA from the beginning, projects are faster thanks to automated ordering processes, and total costs over the entire runtime fall by about 60%; expenditure for the cloud transformation: about €230,000.]

Fig. 9.11  Cloud transformation of a business model-relevant application

[Figure: the software value chain (creating, operating and scaling software on top of security & integration, runtimes & libraries, databases, operating system, virtualization, computing power, memory and network) and the competitive factors it affects in the digital business model: creation costs and time-to-market for the first project and for new functions, error proneness and security.]

Fig. 9.12  Software value creation as a factor in digital competition

model. The requirements are often very individual for different applications. In some cases it is about high availability, in others about frequent changes to the user interface or the global synchronization of certain data. The task of the cloud solution architect is to divide the old monolith into services that are decoupled from one another in such a way that the technical and economic properties fit the requirements of the business model. If he succeeds, the result is a potentially globally scalable application to which new features can be added quickly and securely. Fixed costs are low and marginal costs are virtually zero.
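The figures from Fig. 9.11 allow a rough break-even sketch. The helper below simply compares the fixed-cost model with the per-transaction cloud model and treats the €230,000 transformation expenditure as a one-off; this is a simplification that ignores discounting and migration risk.

```python
import math

# Rough comparison using the figures from Fig. 9.11
# (simplified: one-off transformation cost, no discounting).
TRADITIONAL_BASE = 50_000.0   # EUR/month for up to 3 million transactions (TA)
TRADITIONAL_STEP = 20_000.0   # EUR/month per additional 3 million TA
CLOUD_FIXED = 20_000.0        # EUR/month for the fixed cloud components
CLOUD_PER_TA = 0.0027         # EUR per transaction
TRANSFORMATION = 230_000.0    # one-off transformation expenditure

def traditional_monthly(ta: int) -> float:
    """Stepwise infrastructure cost: capacity is added in 3-million-TA blocks."""
    extra_blocks = max(0, math.ceil(ta / 3_000_000) - 1)
    return TRADITIONAL_BASE + extra_blocks * TRADITIONAL_STEP

def cloud_monthly(ta: int) -> float:
    return CLOUD_FIXED + ta * CLOUD_PER_TA

ta = 3_000_000
saving = traditional_monthly(ta) - cloud_monthly(ta)
print(f"monthly saving: {saving:,.0f} EUR")
print(f"payback after ~{TRANSFORMATION / saving:.1f} months")
```

At the figure's 3 million transactions per month the cloud-native variant saves roughly €22,000 per month, so the transformation expenditure pays back within about a year, and the gap widens further as volume grows.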

9.6 Becoming Competitive for the Digital World with Software and Cloud Skills

The software underlying the business model strongly influences the cost structure and performance of a digital business model (see Fig. 9.12). The relationship is comparable to the development and manufacturing process in the automotive industry: if a car is poorly developed, it will never be able to go fast. If it is developed well but manufactured poorly, it will be able to drive fast but will break down frequently and spend a lot of time in the workshop.


A core thesis of this book is that digital leaders should have a fundamental understanding of the software value chain in order to keep their organizations digitally competitive. The value chain comprises the three steps of software creation, software operation, and software scaling. The sequence is not linear; software may well scale during operation while new functionality is being added. Below the software value creation lies the “stack”, i.e. the infrastructure (network, storage, computing power) as well as further components such as virtualization, operating system and databases. The following factors influence the performance of software value creation:

• Virtualization layers: IT value creation has been increasingly virtualized by cloud technologies, i.e. decoupled from the actual hardware and made available to customers in an automated manner via interfaces as a software service. On the one hand, this virtualization generates advantages in terms of speed, reproducibility, scalability and flexibility at the higher levels of the IT stack. On the other hand, the standardization that accompanies automation also reduces flexibility and transparency at the lower levels of the IT value chain.
• Sourcing options: Cloud technologies influence the make-or-buy decision due to decreasing transaction costs. Strategically relevant questions now have to be decided: whether to continue with an own data center, build a private cloud, or perform a public cloud transformation. Furthermore, there is the possibility of creating hybrid solutions, which are summarized under the term “multi-cloud”.
• Software architecture: Software architectures have undergone relevant further development since the 1990s. While client-server applications were still primarily created in the 1990s, the multi-tier model developed from the beginning of the 2000s. Then, with SOA (service-oriented architecture), an architecture pattern emerged that relied on distributed systems. Currently, new applications are often developed in the microservices model according to cloud-native principles.
• Process flows: Despite their digital nature, software development, operation, and scaling are actual business processes in which, similar to the production of a car, people, tools, suppliers, and machines function in an integrated manner and ultimately produce a product. The ideal of optimal software development processes is usually represented by the idea of the “DevOps team”, a small, interdisciplinary team that takes care of all aspects of the service assigned to it.
• People & organization: The greater the share of automation in the overall system, the greater the responsibility and influence of the remaining people who are responsible for the automation and its further development. On the one hand, these are the cloud solution architects. On the other hand, these are the executives and top managers who are forced by the market to leave their previous patterns of success. Rigid structures and internal silos are being replaced by an agile attitude, thinking in terms of minimum viable products and a new error culture.


[Figure: the five factors around the software value chain (creating, operating, scaling software) – virtualization layers (traditional IT, IaaS, CaaS, PaaS, FaaS, SaaS); sourcing options (private, public, hybrid, multi-cloud, own data center); software architecture (architecture patterns, ReST API, cloud native, security); process flows (continuous integration & deployment, agile/Scrum, DevOps); people & organization (agile mindset, MVP, New Work, skills shortage, feature teams).]
Fig. 9.13  Relevant factors influencing the performance of software value creation

[Figure: transaction costs as the glue between companies – outsourcing generates costs for contract initiation and agreement, control, enforcement and adjustment; the higher the transaction costs, the more “make” (company A, a car manufacturer, prefers to produce a service itself rather than buying it from company B, a tire manufacturer); the lower the transaction costs, the more “buy” (the service is easily and reliably available on the market, so the company decides to outsource).]
Fig. 9.14  Transaction costs are the glue that holds companies together

All five factors can have a positive or negative impact on the competitiveness of the digital business model (Fig. 9.13).

9.7 Sinking Transaction Costs Lead to More Outsourcing and Change the Economy

Transaction costs are the costs incurred by a company as soon as it outsources a good or service. These costs are incurred for the initiation and agreement of the contract, for the execution of the external transaction, for the monitoring of the contract agreement and for the enforcement of performance claims. Every company regularly makes make-or-buy decisions in this regard: it analyzes whether it is more expensive in total (purchase price plus transaction costs) to have the service provided externally or to produce it internally. The lower the transaction costs, the greater the incentive to no longer produce services oneself but to orchestrate suppliers instead. Transaction costs thus have an important effect on companies. They form the glue that holds the traditional organization together (Fig. 9.14). Cloud technologies reduce transaction costs relevantly, because IT services are ordered automatically via programming interfaces (APIs). Complex contract negotiations become rare, controls can be optimized, and enforcement problems are avoided through

[Figure: falling transaction costs are changing the corporate world – integrators (companies that provide many services themselves, with the full traditional stack of security & integration, runtimes & libraries, databases, operating system, virtualization, computing power, memory and network in-house) become orchestrators (companies that orchestrate their value creation network via APIs for facial recognition, translation, databases and more); companies focus strongly on their core, working with complementary but also competing providers of cloud services (“frenemies”); overall, we are moving towards the network economy.]
Fig. 9.15  Falling transaction costs are changing the corporate world

intelligent IT architectures. The development of adaptations to existing applications is significantly faster and cheaper with modern cloud technologies. Accordingly, as transaction costs fall due to cloud technologies, the tendency of companies to outsource their IT services increases. The market analyst Gartner assumes that around 80% of companies will close their traditional data centers in the future and use new forms of automated IT. In addition to reducing capital lockup and providing faster access to innovations, outsourcing IT services to the cloud has one main advantage: companies can focus on their primary processes with their remaining IT staff. Instead of purchasing and maintaining infrastructure themselves, the IT experts can now program new applications and develop functions that are part of the core business of the company. This focus on the core business leads companies to transform themselves from integrators into orchestrators. Instead of providing a large part of the value creation themselves, they orchestrate their value creation network. This makes them smaller and more flexible. As a result, the whole economy is developing more and more into a network economy (Fig. 9.15).
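The make-or-buy logic described above reduces to a one-line comparison: buy only if the purchase price plus transaction costs undercut the internal production cost. All cost figures in this sketch are invented for illustration.

```python
# Make-or-buy decision: buy only if purchase price plus transaction
# costs undercut the internal production cost. All numbers are invented.
def should_buy(internal_cost: float, purchase_price: float,
               transaction_costs: float) -> bool:
    """True if external sourcing is cheaper in total."""
    return purchase_price + transaction_costs < internal_cost

# Traditional IT sourcing: high transaction costs (contracts, control, ...)
print(should_buy(internal_cost=100_000, purchase_price=70_000,
                 transaction_costs=40_000))   # -> False, stays in-house

# Cloud sourcing via APIs: transaction costs shrink drastically
print(should_buy(internal_cost=100_000, purchase_price=70_000,
                 transaction_costs=5_000))    # -> True, outsourcing pays off
```

The purchase price is identical in both calls; only the transaction costs change, which is exactly the lever the cloud pulls and why falling transaction costs shift companies from "make" towards "buy".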

9.8 Cloud Transformation Affects All Companies with Digital Value Creation – On Three Different Levels

According to McKinsey's “Three Horizons” model, companies operate in three different time horizons in parallel. The first horizon, with an outlook of 12 months, is about maintaining and expanding the core business profitably. The third horizon, with an outlook of more than five years, is about inventing and trying out possible products and services of the company's future. The second horizon covers the time between the first and third horizons, usually the period of one to five years in the future; in this period new products are tested and integrated into the portfolio. Geoffrey Moore adds the factor of disruptive technologies to this model. As described by Clayton Christensen, it is particularly difficult for successful companies to use

9  Cloud Transformation – How the Public Cloud Is Changing Businesses

Fig. 9.16  Levels of disruption according to Geoffrey Moore (copyright Frank/Schumacher/Tamm) – three levels of action in the company, each triggered by disruptive technologies: optimize infrastructure (IaaS, PaaS, cloud, serverless; example: migrating the SAP system to the cloud), modernize the operating model (machine learning, Internet of Things, social media, mobile; example: ordering a cab more easily with a mobile app) and transform the portfolio (SaaS; example: from operating system developer to cloud provider)

Fig. 9.17  Changing the infrastructure model – from the traditional data center to the cloud (IaaS, PaaS, container, cloud native, serverless, mobile) with modern software architectures and approaches: microservices, REST APIs, DevOps, automated, distributed, elastic, isolated state, loosely coupled
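The move from the traditional data center to the cloud is decided per application. A purely illustrative sketch of such a per-application decision might look as follows; the attributes and decision rules are invented for illustration, only the strategy names (rehost, refactor, replace, retire) come from common cloud migration vocabulary:

```python
# Hypothetical per-application migration assessment. The decision rules
# are invented for illustration; a real cloud transformation would weigh
# business requirements, effort and risk in far more detail.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    business_critical: bool   # does the core business depend on it?
    cloud_ready: bool         # e.g. stateless, externalized configuration
    end_of_life: bool         # planned to be decommissioned anyway?

def migration_strategy(app: Application) -> str:
    if app.end_of_life:
        return "retire"       # do not migrate at all
    if app.cloud_ready:
        return "rehost"       # lift and shift as-is
    if app.business_critical:
        return "refactor"     # invest in a cloud-native rework
    return "replace"          # switch to a SaaS offering instead

apps = [
    Application("legacy-reporting", False, False, True),
    Application("web-shop", True, False, False),
    Application("cms", False, True, False),
]
for app in apps:
    print(app.name, "->", migration_strategy(app))
```

Even this toy version shows the point of the exercise: the same cloud target leads to very different transformation paths depending on an application's status quo and its importance to the business.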

disruptive innovations in their own business: such innovations can threaten the attractiveness of the existing business, and they bring about major changes in sales, marketing and service delivery. In his book "Zone to Win", Moore outlines model strategies for how companies should deal with disruptive technologies. He focuses in particular on cloud technologies, mobile services, data analytics and machine learning functions. He identifies three levels at which companies can use disruptive technologies for their own benefit (Fig. 9.16). For Moore, the ability to reduce marginal costs to effectively zero is the main reason for recommending the systematic use of cloud technologies. The first level of cloud transformation is the transition of the infrastructure model: moving within a company from traditional IT, which is characterized by many human interactions, to automated IT in the cloud. The main effort in this transformation

Fig. 9.18  Modernizing the operating model – with a focus on the customer, data analytics, machine learning, and interdisciplinary and agile teams, towards globally scaling business models with zero marginal costs

lies in the application transformation. Each application is analyzed individually for its status quo, the requirements from the business and the effort required for the transformation (see Fig. 9.17). At this level, the business models remain unchanged for the time being, but the changes in the architecture of the applications improve the economic conditions for the businesses based on them: less capital tied up, lower marginal costs, lower transaction costs, focus on the core business. The second level is about exploiting the opportunities that disruptive technologies offer for the business itself. For example, IT components from the cloud can be used to easily perform Big Data analyses and deploy machine learning models (see Fig. 9.18). In order to keep up with the digital competition, however, the transformation of the technology is only a first step; additional changes in the internal organization of the company are necessary. Agile and interdisciplinary teams can act much faster than traditional teams. This form of organization is called DevOps or feature team. A prerequisite for the use of such teams, however, is that the relevant employees are actually closely involved in the process; otherwise they will be slowed down by the existing hurdles of the old organization. The challenge at the third level is by far the greatest. Two situations predominate there:

• Zone Offense: A company is so convinced of a disruptive product (e.g. Apple with the iPhone) that it is willing to shift the focus of the entire company to this product and to lower the priority of the existing business at the same time.

• Zone Defense: A company's profitable existing business is threatened by a disruptive technology (e.g. IBM's mainframe computers).
The previously profitable, traditional product must now be provided with the help of new technology as quickly as possible so that the existing customers do not switch to the new competitor. In both cases, it can be assumed that a “period of transition” must be traversed with the new, disruptive product. Moore calls this the “J-curve”, in reference to the shape of the

Fig. 9.19  Transforming the total portfolio – the CEO takes charge of the transformation of the entire company himself; the prioritization of the horizons follows the conditions of the transformation; the valley of tears is passed through until the new product reaches 10% of total sales

letter. The key to the success of the overall portfolio transformation is the right prioritization between the three horizons (or zones). Only when the new business accounts for about 10% of the company's total turnover can the transformation be considered successful (Fig. 9.19).
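The 10% threshold can be made concrete with a toy calculation (all growth figures invented for illustration): the new, disruptive product starts tiny while the existing business stays roughly flat, and only after several periods does its share of total turnover cross the mark at which Moore would consider the transformation successful.

```python
# Toy illustration of the 10% threshold. The revenue and growth figures
# are invented; the point is how long a fast-growing new product needs
# to reach a meaningful share of a large existing business.

def first_year_above_threshold(old_revenue, new_revenue, growth, threshold=0.10):
    """Return the first year in which the new product's share of
    total turnover reaches the threshold."""
    year = 0
    while new_revenue / (old_revenue + new_revenue) < threshold:
        new_revenue *= growth
        year += 1
    return year

# Existing business: 1000 per year; new product starts at 20, grows 60% p.a.
print(first_year_above_threshold(1000.0, 20.0, 1.6))  # prints 4
```

Even with 60% annual growth, the new product needs four years to reach a 10% revenue share – a simple way to see why the J-curve demands patience and sustained prioritization.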

9.9 Conclusion

In summary, there is much more to the cloud transformation of a company than the inflationary use of the associated buzzwords in management meetings would suggest. The transfer of traditional IT into a cloud-native architecture is becoming a prerequisite for aligning a company with the digital future. A historical comparison makes this even clearer: the transfer of on-premises IT systems from the enterprise to the cloud is as revolutionary as connecting companies to the public power grid at the beginning of the twentieth century. The parallels between these two infrastructure revolutions are striking. Before electricity grids existed, businesses ran huge steam engines that had to be constantly fired so that the steam could drive the pistons. The resulting mechanical energy was used to manufacture goods such as textiles. The maintenance tasks alone were immense. As the operator of the steam engines, the company was responsible for ensuring that the manufacturing plants were permanently supplied with mechanical energy. To this end, entrepreneurs hired specialists who took care of the maintenance and operation of the steam engines. If you had asked an entrepreneur at that time whether he could imagine connecting his factory to the electrical grid, he would certainly have objected vehemently. Make use of a decentralized service? And then pay for it? Unthinkable. After all, the machines are there and they work. It took a new generation of corporate leaders before the use of the electrical grid became commonplace on factory floors in Europe and the United States. The advantages were enormous. Thanks to the power grid, companies no longer had to worry about their energy supply. This service was now provided centrally by third-party providers, the electricity producers or power plants. As a result, manufacturing plants


only used as much electricity as they needed, and the inefficiencies of in-house energy production were eliminated in one fell swoop. Best of all, the power grid was much more reliable than the company's own maintenance staff who monitored the operation of the steam engines. You see the parallels? Right. A similar transformation is repeating itself right now in the transition to the public cloud. The decision for or against entering the public cloud is a decision with great consequences. If this step is not considered, the consequences could be serious. In the worst case, the company will disappear into the business history books. Then the question will arise: "Do you still remember Company X?"

Index

A Abstraction, 91, 93, 94, 98, 99, 138, 139, 224 Adjustment costs, 171, 194 Administration costs, 172, 191 Agility, 66, 107, 150 AI, see Artificial Intelligence (AI) Analytics, 95, 96, 142, 196, 208, 239, 260 API, see Application programming interface (API) Application programming interface (API), 10, 87–94, 96, 97, 105, 139, 148, 156, 181, 227, 230, 238, 253 API manifest, 90, 91, 93 Artificial intelligence (AI), 27, 39, 62, 67–71, 76, 87, 95, 107, 184, 188, 226, 227 Automation, 48, 58, 59, 75–110, 117, 118, 148, 155, 157, 158, 160, 162, 178–185, 194, 221, 228, 230, 238, 253, 257 Autoscaling, 122 Average cost, 51, 52, 54, 57, 121, 122, 128, 130, 132, 173 B Billing, usage-based, 88, 89 BizDevOps, see Feature Team Blockchain, 15, 60, 62, 96, 253 Boston Consulting Matrix (BCG Matrix), 36, 47, 48 Bottleneck, 121, 123, 137, 138, 223 Business model, 2–5, 7, 8, 10, 11, 17, 23, 26–28, 33, 34, 37, 38, 40, 47, 49, 52–56, 58–61, 64, 67, 75, 76, 78, 81–85, 87–89, 93, 94, 100, 101,

108–110, 115, 116, 119, 122, 130–133, 135, 138, 143, 147, 149, 186, 187, 189, 203–205, 207–211, 213, 219–222, 224, 225, 227–229, 232–237, 240, 241, 247, 249–252, 254, 256, 258, 261 C CAP theorem, 146 Centaur team, 70, 71 Change mentality, 37 Chaos Monkey, 224 Chapters, 11, 20, 30, 40, 64, 65, 67, 121, 134, 141, 147, 164, 195 Client separation, 104, 117 Client-server architecture, 144 Cloud, 5–11, 24, 60, 62, 71, 75–110, 115–132, 134, 135, 138, 139, 141–143, 147–149, 156, 158, 159, 162, 178–180, 184–186, 188–196, 198, 206, 208, 209, 211–213, 215–222, 224–228, 230–241, 248–262 Cloud-native architecture, 147–150, 220, 254, 262 Cloud solution architect, 122, 193, 209, 215, 218, 231, 237, 255–257 Cloud strategy team, 208–210, 212, 213 Cloud transformation, 7, 9–11, 85–96, 106, 122–125, 150, 163, 185, 199, 207, 208, 210, 212, 214, 216, 219, 220, 230, 231, 236–242, 247–263 level, 105 Communication costs, 178, 179, 181

© The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2023 R. Frank et al., Cloud Transformation, https://doi.org/10.1007/978-3-658-38823-2




Compliance, 79, 102, 106, 107, 122, 137, 142, 193, 213, 216, 219, 236, 239 Compound effect Container, 94, 98, 100, 107, 139, 141, 142, 255 Continuous deployment (CD), 17, 155, 156, 252 Continuous integration (CI), 155, 156 Control costs, 171, 172, 175, 194 Convergence, 17, 18, 20, 21, 250 Core assets, 186, 187 Core competence, 133–165, 186, 187 Corporate culture, 159, 161, 229, 231, 236, 240 Cost-income ratio, 60 Customer focus, 224–226 Customisability, 34, 61

G Governance, 107, 138, 213, 216 Governance, Risk, Compliance (GRC) Program, 216 Graphical user interface (GUI), 84, 87, 88, 105, 151, 181, 188, 194 GUI, see Graphical user interface (GUI)

D Daily scrum, 153 Data analysis, 224–226, 229, 230, 261 Data leveraging, 26, 28, 30, 37, 65 DDoS attack, 104, 106 Decision costs, 172, 190, 191, 193, 194 Dedicated IT, 94 Design thinking, 38 Designed for failure, 224 Development sprint, 153, 154 Development team, 137, 152–154, 156 DevOps, 150, 157–159, 161, 163, 182, 193, 228–231, 240, 251, 257, 261 Digitalization, technical Dilemma analysis, 205

I IaaS, see Infrastructure-as-a-Service (IaaS) Identity management, 84, 104–106, 215, 219 IKEA costs, 172, 174 Information costs, 172, 190, 193 Information technology (IT), 5–10, 59, 75–110, 115–132, 134–136, 138, 139, 141–143, 145, 147–149, 156–159, 162, 164, 178, 184, 185, 187, 189–195, 197, 205, 207–209, 211, 212, 214–216, 219–222, 224, 226–230, 232, 236–241, 248, 249, 251–255, 257–262 added value, 22, 25, 231 landscape, 82, 94, 95, 100, 106, 118, 142, 189, 193, 216, 219, 220 operation, classic, 215 process cloud-based, 76, 85, 88, 93, 97–99, 108, 109, 239, 254 classic, 10, 76–83, 93, 99, 115–132, 221, 224, 226, 227, 238, 249, 254 Information Technology Infrastructure Library (ITIL), 82 Infrastructure model, 207–213, 215–217, 219, 220, 260 Infrastructure-as-a-Code, 135 Infrastructure-as-a-Service (IaaS), 94, 100, 123, 141, 184, 211, 215, 221, 238, 241 Innovator’s dilemma, 2–5, 203, 206, 240, 241, 249 Insourcing, 129, 141, 143, 174, 184, 196, 197

E Economies of scale, 30, 47–49, 65, 66, 98, 99, 107, 139, 172–174, 177, 188, 232 Ecosystem, digital, 34 Elastic, 126, 148, 211 Employee management, 159, 160 Encryption, 58, 106, 152, 213, 215–217 Enforcement costs, 171, 194 Experience curve, 47–49, 52, 173 F Feature team, 157–159, 182, 193, 226, 228, 261 Frenemies, 196, 197 Function-as-a-Service, 94

H Hardware, 76, 78, 80–82, 98, 99, 101–106, 109, 117, 135, 138, 142, 143, 148, 162, 193, 221, 230, 232, 257 Hybrid cloud, 142 Hyperscaler, 6, 94, 95, 196

Integrator, 174–178, 195, 197, 259 Intelligence, artificial (AI), 27, 62, 67–71, 76, 87, 95, 107, 184, 188, 226, 227 Internet of Things (IoT), 15, 25, 59, 62, 76, 96 L Layer player, 174–178, 195, 196 Leadership, 10, 134, 163, 165, 208, 225, 238, 240 Log & Report, 215, 219 Loosely coupled, 148 Loyalty loop, 28–30 M Machine learning (ML), 25, 27, 59, 62, 94, 95, 106, 108, 183, 208, 226, 227, 229, 242, 260, 261 Make-or-buy decision, 141, 187, 198, 257 Marginal cost analysis, 10, 49–52 Marginal cost curve, 49 Marginal costs, 7, 10, 45–71, 101, 109, 121, 122, 129, 130, 132, 147, 148, 174, 181, 185, 224, 226, 241, 251, 252, 254–256, 260, 261 Markowitz portfolio theory, 37 Microservice, 10, 145–150, 223, 251, 257 Microtransaction, 88, 89, 92, 93, 100, 108, 138, 147, 149, 254 Migration, 80, 94, 116, 122–125, 207, 210–214, 217–225, 237, 241, 255 scenario, 211, 212 Minimum viable product (MVP), 34, 133, 152, 251, 257 Monolith, 136–138, 149, 150, 223, 255, 256 Monopoly formation, 32, 46, 177 Multi-cloud, 142, 257 Multi-factor authentication (MFA), 105, 126 Multi-tier architecture, 144 MVP, see Minimum viable product (MVP) N Negative cash flow, 34 Network economics Network effect, 30, 31, 33, 38, 65, 66, 240

O On-premises system, 80 Operating costs, 64, 76, 98, 118–120, 122, 126–127, 148, 170, 222, 254 Operating model, 207–209, 220–232, 241, 261 Orchestrator, 174–178, 195, 196, 259 Outsourcing, 10, 11, 106, 115–117, 119–121, 123, 125, 141, 143, 174, 175, 177, 178, 183–185, 187, 195–197, 211, 215, 226, 228, 258, 259 P PaaS, see Platform-as-a-Service (PaaS) Perception threshold, 184 Pivoting, 37, 38 Platform-as-a-Service (PaaS), 11, 94, 96, 123, 141, 184, 213, 224, 227, 255 Platform, digital, 22–24, 26–40, 53, 54, 56, 61, 250 Platform economy, 22–26, 37 Preemptive shipping, 28 Private cloud, 100, 101, 106, 142, 257 Product, 2, 3, 5, 7, 10, 20–25, 28, 30–37, 39, 46–62, 64, 66, 67, 96, 133–135, 150–154, 161, 164, 165, 170–177, 182, 184–188, 195–197, 205–207, 209, 215, 230, 234, 242, 247, 248, 250–252, 257, 259, 261 backlog, 154 increment, 153, 154 owner, 152–154, 215 Public cloud, 5–7, 80, 91, 100–106, 108, 142, 149, 184, 185, 188, 217, 227, 228, 237, 239, 241, 247–263 R Ready made, 84, 89, 92, 93, 96, 147, 158, 237 Rebuild, 212, 213, 215, 220, 222, 224 Refactor, 211–213, 215, 220, 222 Rehost, 94, 123, 211–213, 215, 220–223, 227 Remain, 8, 18, 22, 47, 57, 58, 61, 68, 76, 91, 97, 101, 104, 106, 108, 116, 125, 126, 133, 146, 148, 150, 165, 175, 182, 184, 185, 194, 207, 211, 212, 215, 221, 238, 239, 255, 261

Replace, 3, 5, 68, 70, 211, 212, 215, 221 Resilience, 148, 150 Retire, 212, 221 Revise, 211–213, 215, 220, 222, 224 Rightsizing, 223 S SaaS, see Software-as-a-Service (SaaS) Scaling, global, 88, 89, 149, 215, 222, 254 Scrum, 152–155, 215 master, 152–155, 214, 215 Security, 58, 78–82, 84, 85, 96, 102–106, 122, 125, 141, 143, 157, 164, 180, 213, 216, 217, 219, 229, 231, 238, 239 Serverless, 11, 94, 98, 100, 224, 255 Server, virtual, 83 Service, 4–6, 10, 18, 20–28, 30, 32–35, 39, 47, 50, 52, 54, 55, 57–62, 64, 66, 78, 80–82, 84, 85, 87–108, 115–120, 122–131, 138, 139, 141–148, 152, 157, 164, 170, 172, 175, 178, 181, 184–187, 189, 191–197, 206–209, 211–215, 217–221, 223, 224, 227–231, 233, 234, 237–239, 241, 242, 247, 248, 250, 251, 253–260, 262 distributed, 147, 148, 211, 213, 232, 257 level agreements, 78, 82 Oriented Architecture, 144, 145 Silo Thinking, 158, 159 Simple Object Access Protocol (SOAP), 144, 147 Sizing, 80, 81 Software, 2, 6, 7, 10, 11, 17, 18, 25, 28, 35, 38, 53, 54, 62, 68, 70, 71, 75–85, 87, 88, 90–94, 96–100, 102, 104–106, 108, 109, 117, 125, 127, 129, 132–165, 178, 180, 181, 183–186, 188–195, 197, 198, 208, 211, 212, 215, 220, 221, 223–230, 235–241, 251–254, 256–258 architecture, 104, 108, 125, 129, 134, 136, 138, 143–151, 159, 162, 219, 230, 237, 240, 251, 252, 257 monolithic, 136 creation, 78, 97, 108, 109, 134, 155, 230, 257

defined network, 215 development, agile, 161 operation, 78, 134, 257 scaling, 82, 257 Software-as-a-Service (SaaS), 6, 96, 98, 105, 141, 142, 184, 185, 189, 192–194, 211, 221, 237 Sourcing option, 138, 141–143, 257 Sprint, 38, 153, 154 backlog, 154 planning, 153, 154 retrospective, 154 review, 153, 154 Squads, 163–165 Stack, 83–85, 88, 91, 93, 98, 99, 134, 139, 149, 162, 190, 191, 257 Stateless, 148 Subadditivity, 174 System, distributed, 144–147, 257 T Taylorism, 85 Technology, 2–7, 9–11, 17, 18, 31, 40, 46, 56, 59–62, 64, 66, 68, 71, 76, 77, 80, 87, 90, 91, 96, 97, 134, 139, 156, 160, 178–180, 198, 203, 205–213, 220, 221, 224, 227–229, 231–233, 235–238, 240–242, 247, 248, 250, 252–261 disruptive, 3, 5–7, 10, 11, 207, 210, 220, 228, 229, 231, 232, 236, 241, 242, 247, 248, 259–261 maintaining, 50, 147, 150, 154, 204, 259 The Big Bang, 213 Three Horizons of Growth, 203–205 Time-to-market, 35, 107 Toolchain, 215 Transaction costs, 7, 10, 169–199, 231, 257–259, 261 internal, 170, 172–174, 182, 190, 192, 194, 195 Transformation, digital, 40, 60, 134, 159, 160, 165, 203–208, 226, 229, 238, 239 Tribes, 163–165 Twin, digital, 21 Two-sided market, 31

V Value chain, 2, 18, 24, 46, 49, 53, 56–62, 64, 65, 67, 68, 71, 75–110, 134, 135, 162, 169–172, 174, 175, 177, 178, 181, 186, 195, 197, 221, 247, 250, 252, 255, 257 Virtualization, 10, 83, 84, 91, 94, 100, 138, 139, 141, 147, 159, 220, 257 Virtualization layer, 106, 138, 139, 141, 257 Virtualized IT, 94 Virtual machines, 83, 91, 107, 118, 139, 218, 254, 255

W Waterfall model, 77, 150 Z Zero marginal cost business model, 52–61, 63–68, 71, 196, 252 Zone, 3, 126, 204–207, 232–236, 260–262 Defense, 234–236, 261 Offense, 234, 235, 261 to win, 204–206, 232, 240, 241, 260