
Future of Business and Finance

Bharat Vagadia

Digital Disruption

Implications and opportunities for Economies, Society, Policy Makers and Business Leaders

Future of Business and Finance

The Future of Business and Finance book series features professional works aimed at defining, describing and charting the future trends in these fields. The focus is mainly on strategic directions, technological advances, challenges and solutions which may affect the way we do business tomorrow, including the future of sustainability and governance practices. Mainly written by practitioners, consultants and academic thinkers, the books are intended to spark and inform further discussions and developments.

More information about this series at http://www.springer.com/series/16360

Bharat Vagadia

Digital Disruption: Implications and opportunities for Economies, Society, Policy Makers and Business Leaders

Bharat Vagadia London, UK

ISSN 2662-2467 ISSN 2662-2475 (electronic)
Future of Business and Finance
ISBN 978-3-030-54493-5 ISBN 978-3-030-54494-2 (eBook)
https://doi.org/10.1007/978-3-030-54494-2

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

I dedicate this book to my late father Khimji Karsan Vagadia for showing me the light

Foreword

Dr Vagadia brings a breadth of experience in the telecommunications industry to his examination of the evolution of digital technologies and their impact on the economies and societies that they are rapidly reshaping. His review and analysis of the policy, regulatory and legal implications arising from these digital technologies is timely and thought-provoking. These issues cut across borders, and the book brings a unique vantage point to analysing the different policy and regulatory responses across the globe.

Dr Vagadia cuts through the hype surrounding many of these digital technologies, whilst illustrating that a real data revolution is firmly under way. He argues for a more holistic policy response across the various layers of the data ecosystem. As he puts it, our digital economy will be driven by data and will require an ecosystem built around it, which will in turn depend upon an enabling policy and regulatory environment.

The book blends, in an easy-to-read style, developments in technology with their likely impact, for better or worse, on economies and societies. Going beyond these macro-level issues, it examines the transformations that will be required within organisations to compete in an increasingly digital era.

The GSMA represents the interests of mobile operators worldwide, uniting more than 750 operators with almost 400 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and Internet companies, as well as organisations in adjacent industry sectors.

GSMA, London, UK

John Giusti


Preface

Writing a book on digital disruption was a daunting task: where exactly does one start, given its vast canvas? It was the magnitude of the challenge that actually awakened my interest. How does one look at digital disruption in a holistic manner? How does one make sense of what it means for us as a society, and what must we do to ensure we maximise its value, not just economically, but in creating a world that benefits all its inhabitants?

There has been a lot of hype around the various technologies and elements that make up the digital economy, from 5G in the telecommunications sector, to Blockchain in the context of cryptocurrencies, to AI and its potential to find cures for illnesses that have plagued humanity. There have also been concerns about how these technologies are fuelling the underworld to launder ill-gotten money, about the use of AI to manipulate public opinion, and about the vast global enterprises that have been built on the back of this data revolution and that make money from personal data, some freely provided and some gained and used without consent. There have been concerns that AI is becoming too powerful, with the potential to be used against humanity, or simply to accentuate the biases that are built into our societies.

I wanted to get beyond the hype and hysteria and dive deep into these digital technologies: to understand what real-world applications could be delivered and what potential they offer to drive economic value, and to explore what governments and regulators must do to balance enabling real innovation to thrive against the need for appropriate safeguards, and where that balance might lie. Having understood the scale and potential of digital technologies and a world very much driven by data, I wanted to understand what that means for existing enterprises and what it may take for them to join the digital revolution.

In writing the book, I have reminded myself of who the audience might be. That audience is likely to include policymakers, regulators and executives from a variety of sectors. I have therefore been minded to explain the key concepts with a sufficient level of history and background, whilst not being too technical, with a focus on what is actually happening today and what it actually means for society, policymakers, regulators and business leaders. In doing so, I did not want to simply impose my own ideas and beliefs on others.


I have therefore been careful to strike a balance between a book that is easy to read and one that retains some level of academic rigour. I hope I have fulfilled this self-imposed discipline.

This is not a how-to book. That would be impossible, and quite deluded, given the sheer complexity involved and the incredible pace of development in these key digital technologies. Instead, I hope to provide a framework for thinking about these issues and to start the debate. I pose the choices that we are faced with and suggest possible answers to them.

London, UK

Bharat Vagadia

Road to Paradise (or Hell)

I was woken up a couple of weeks ago on a Monday morning, not by the sound of an alarm clock set at a predetermined time, but by beautiful music just as I was coming out of deep sleep (thanks to the ingestible miniature sensors), which meant I was incredibly fresh for the rest of my day. It was a busy schedule, with a couple of meetings set up in Dubai, then one in Singapore the following day and a final few days in Bangalore, where the family were to join me for a short vacation.

As I awoke, the curtains opened, the room was already preheated, so I had not noticed that it had been snowing outside, the coffee was brewing and my virtual personal assistant spoke out loud: 'You've got around 30 minutes to get ready. There's traffic on a section of the road leading to the airport; there was an accident just two minutes ago. The plane is likely to depart thirty minutes late, and there is also likely to be traffic on the road from Dubai airport to DIC. I have sent a request for the meeting to be delayed by thirty minutes and have also sent a request for the subsequent meeting to be pushed back. A summary of the meeting and background documents have been uploaded to your PDA.'

I drank the coffee; it was sweeter than normal. This happened sometimes when my blood sugar level was lower than normal and the coffee maker adjusted for it. It even added vitamins sometimes, although I could not taste them.

As I got out of my door, the car, driverless of course, was waiting outside. As I got in, it started driving off. 'Hello Bharat, I understand we're off to Heathrow T5. There is a bit of traffic on the way, and I've got a new route that saves a few minutes. I've set the in-car entertainment to BBC news.' As I sat back, there was a summary of the news of the day in Dubai, the latest politics, weather information and business news. After flying automated cars were introduced a few years back, the airspace was getting more congested, although flying was still much faster than the normal street cars.

At Heathrow T5, slightly later than originally expected, I walked straight through the security gate. This new technology that T5 introduced was great: no more checking in, passport control and all that. The digital ID tags inserted in the skin, together with remote monitoring of my retina and brain waves, were enough to determine who I was, to check the flight I was scheduled on and the immigration status of the countries I was booked to visit.


From arriving at the airport to reaching the gate took less than 10 minutes. We departed as expected. The entertainment system, a pair of glasses placed on my seat, had preloaded the films and documentaries my digital PA had added to the wish list.

I landed in Dubai and walked straight out of the airport: no waiting in queues, no getting your passport stamped and all that nonsense. Dubai was among the countries that had joined the passport Blockchain. On arriving at the hotel, the robot porter recognised my digital ID: 'Hello Bharat, you've been checked into room 102. We've changed the digital walls to your preference, the coffee has been freshly brewed and a taxi has been ordered to take you to your meeting for 2.30 pm. We've been told you still prefer paper printouts, so documents have been printed for the meeting at 2.30 pm and the following one at 4.30 pm. Your business acquaintance from the UK is also in town and is scheduled to leave tomorrow; he is also on his own, so perhaps I can set up dinner between you? I will contact his PA if you wish. Your other colleague from Qatar, who emailed you last month, had suggested you contact him next time you are in Dubai; unfortunately, his country's policies and legal systems have prevented him from sharing his digital ID or allowing his digital PA to interconnect with the digital platform in Dubai, so if you would like to meet him, I will email him.' 'Forget it, maybe next time', I said.

On the way to the meeting, about halfway to the office, my digital PA said she had received a special offer from an up-and-coming coffee shop, the one I had visited last time: 'They noticed you were in the area and would like to offer you a complimentary coffee for the kind review you left last time.' I was right to give them a fantastic review; their coffee was just great.

The meeting proceeded as planned, although one of the attendees could not make it for family reasons; his digital avatar was projected and he joined remotely. These meetings were much quicker than in the old days and, frankly, had lost some of the enjoyment. With the business intelligence and big data systems in place, getting access to the latest financial or market information was instantaneous and there was little room for negotiation. We did not really need this face-to-face meeting; we could have done it using avatars, but being human, it is still useful to see the whites of the eyes. Once the deal was done, a digital smart contract was set up immediately (the poor lawyers were not really needed anymore) and all the supply chain partners were set up, together with all the linkages with the tax authorities, in a matter of a few seconds. Unfortunately, there were no more opportunities to dodge taxes; mind you, I am not complaining. The government was now much better managed and efficient, and fraud was a thing of the past.

I met my old colleague in the evening. Dietary preferences and requirements were sent by the respective digital PAs and the menu was sent back, so on my way to the restaurant, I chose what I wanted.

The trip to Bangalore was great; the family joined, and the kids spent some time visiting their lecturer, whom until then they had seen only as a digital avatar teaching them mathematics.


Advertising had become a lot cleverer. As we passed billboards on the streets in Bangalore, they noticed that we had been to Bangalore a few times and had visited a few museums. India was one of the few countries that had not switched to virtual and augmented reality, which elsewhere had made going to museums somewhat irrelevant; however, I was old-fashioned and liked to see the real thing. These street signs started advertising the 'Museum of Vedic India'. They had noticed that my daughter was the right age to be studying Indian history as part of the UK curriculum, and they cleverly displayed that it would help towards the Year 5 examination taking place in June that year in the UK. How could we refuse to go?

Life was pretty good, at least for the few lucky ones like me. There was mass unemployment: lawyers, consultants, medics, public servants and engineers. Major industries had been streamlined. The telecommunications operators that once employed thousands of people now had a few hundred; their margins were razor thin now that roaming rates had disappeared, there were no voice revenues, and data was seen as a commodity, or actually a utility.

What I have described is probably a day in the life of my daughter ten years from now. It is no longer science fiction. Whilst there has been a lot of hype around 5G, Blockchain, AI, drones and many other digital technologies, we are slowly getting to a stage where many of these are maturing and becoming real-world applications. And whilst some, like my daughter, may be the lucky ones, there will be millions, possibly billions, who will not be that lucky. The demise of entire industries, professions and even nations left behind by the digital revolution is on the horizon. Many have called this the fourth industrial revolution.

An Inflection Point

The original Cambrian Explosion began some 500 million years ago and was a brief period during which most of the major forms of life on Earth appeared. Almost all the body types present on the planet can trace their origin back to this burst of intense evolutionary innovation. Many believe we are about to experience something similarly transformative with the so-called fourth industrial revolution. One of the most important enablers of the Cambrian Explosion was vision: the moment when biological species first developed the ability to see the world. We are now at a similar threshold with machines and computing.

The first industrial revolution (the first machine age, during the period 1775 to 1840) saw the transition from hand production methods to machines. The rise of machines was made possible by the convergence of a number of developments in mechanical engineering, chemical manufacturing, metallurgy, iron production processes and the use of water and steam power. During this period, a sharp and sustained jump in human population and progress was seen. It was the most profound time of transformation our world has ever seen. From 8000 BCE to 1775 CE, the population was less than half a billion people, and human social development still hovered around an index of zero.


Two centuries later, the human population had increased to seven billion and the human social development index approached 700.1

The second machine age came around 1870, with the mass manufacturing of steel, the electrification of industry and further advances in chemical processes and petroleum production. The second machine age saw gasoline-powered engines in automobiles and airplanes, mass-produced on assembly lines. Life went from being all about the farm to all about the factory, and people moved from the countryside into towns with the introduction of mechanical production.

The third machine age came with the introduction of semiconductors, mainframe computing, personal computing and increased telecommunications connectivity. Things that used to be analogue moved to digital, like an old television replaced by an Internet-connected tablet.

The fourth machine age is just emerging with the advance of digital technologies and computing and, more importantly, their convergence, driven by the Internet of Things (IOT), Artificial Intelligence (AI), high-speed ubiquitous connectivity and the miniaturisation of machines.

Economists call innovations like steam power and electricity general purpose technologies (GPTs): deep new ideas or techniques that have the potential for important impact on many sectors of the economy. Impact here means significant boosts to output due to large productivity gains. GPTs need to be pervasive, improving over time and able to spawn new innovations (as was seen during the previous three machine ages). The new digital technologies emerging in the fourth machine age meet these requirements: they improve with Moore's law,2 are used in every industry in the world and lead to innovations like autonomous vehicles, digital twins and drones, to name a few.

Another way to think about these waves of innovation is as recombinant innovation: each development becomes a building block for future innovations. The fourth revolution, what some call the digital revolution, collects, distributes and makes available massive amounts of data in almost any situation, data which can be reproduced infinitely at virtually zero cost. As a result, the number of potentially valuable building blocks is exploding. The impact this digital revolution will have on society and economies is likely to be as dramatic as, if not bigger than, that of the previous revolutions. We are at an inflection point.

This digital revolution is driven by data. Many claim that data is the new oil. I would agree that data is the new oil; however, one forgets that oil typically lies deep beneath the ocean or desert. To turn it into value, it must be extracted, refined, safely distributed and capable of being used in a variety of use cases seamlessly.

1 Morris, I. (2011). Why the west rules - for now: The patterns of history, and what they reveal about the future. Picador.
2 Moore's law: the prediction made by American engineer Gordon Moore in 1965 that the number of transistors per silicon chip would double every year (a rate he later revised to every two years), while the cost of computing falls.


It has a whole ecosystem built around it. It has the support of governments and regulators, with appropriate policies and investment incentives. In a similar sense, to turn data into value, it will need to be captured/extracted from a variety of devices/sensors; it will need to be refined/cleaned/anonymised, distributed securely and be capable of being used in a variety of use cases to drive insight. It will need an ecosystem built around it and the support of government and regulators, with appropriate policies to ensure value is extracted ethically and distributed equitably amongst society, guaranteeing its safe use, and with government investment to ensure its economic impact is nurtured to its full potential.

In each of the stages described above, there are in fact many technology revolutions taking place:

• Data capture/extraction is being made possible through advances in miniaturisation and cost improvements of sensors. The tablet today includes receivers to participate in cellular as well as WiFi networks, high-definition cameras, a GPS receiver, a digital compass, an accelerometer, a gyroscope and light sensors, all within a handheld device costing less than US$1,000. Compare that with the Cray-2 supercomputer (introduced in 1985), which cost more than US$35 million (in 2011 dollars) and which by comparison was deaf, dumb and immobile.3 Apple was able to cram all this functionality into the iPhone because a broad shift has taken place in recent decades: sensors like microphones, cameras and accelerometers have moved from the analogue world to the digital one. As they did so, they became subject to exponential improvement, thanks to Moore's law (the doubling of computing power every two years for half the cost; see the rough compounding sketch below). The combination of cheap raw materials, mass global markets, intense competition and large manufacturing scale economies is essentially a guarantee of sustained steep price declines and performance improvements. The technology today is also the peace dividend of the smartphone wars between Apple, Google and others, as well as of the significant investments made by governments into such technologies. These technologies were essentially unobtainable ten years ago. What was military industrial technology a decade ago is today being used in toys.

• Data refinement/cleansing/anonymisation is being made possible by advances in big data, largely driven by improvements in computing power, database technologies and information processing algorithms.

• Secure data distribution is being made possible through recent innovations such as digital identities, Blockchains and smart contracts (see the data-integrity sketch below).

• Data insight is being made possible through advances in computing power at much lower cost and mathematical algorithms that can find patterns or clusters in data to provide insight (Artificial Intelligence).

Whilst individually each of the above mini technology revolutions may not be seen as disruptive, in combination they will have a substantial impact on business, the economy and society over time.

3 Company News. (1988). Cray to introduce a supercomputer. New York Times, 11 February 1988.
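To make the compounding behind Moore's law concrete, here is a back-of-the-envelope sketch in Python. It is purely illustrative: the two-year doubling period and the 35-year span from the Cray-2 (1985) to a modern tablet (around 2020) are assumptions chosen for the example, not figures taken from this book.

```python
# Illustrative Moore's-law compounding.
# Assumption: performance per dollar doubles every two years.

def performance_per_dollar(years_elapsed: float, doubling_period: float = 2.0) -> float:
    """Relative improvement in performance per dollar after `years_elapsed` years."""
    return 2 ** (years_elapsed / doubling_period)

# Hypothetical span: Cray-2 (1985) to a modern tablet (~2020) is 35 years.
factor = performance_per_dollar(35)
print(f"Improvement factor over 35 years: {factor:,.0f}x")  # ~185,364x
```

At that rate, a capability that once cost tens of millions of dollars falls to consumer prices within a few decades, which is the dynamic behind the observation that yesterday's military industrial technology ends up in toys.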

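On the secure-distribution bullet, here is a minimal sketch of hash-based data integrity, one of the primitives underpinning Blockchains and digital signatures, using Python's standard hashlib module. The sensor record is invented for the example; the point is simply that any change to the data, however small, produces a completely different digest.

```python
import hashlib

# A record whose integrity we want to verify (hypothetical payload).
record = b"sensor-42,2020-04-01T09:00Z,21.5C"
digest = hashlib.sha256(record).hexdigest()

# A tampered copy of the record: a single character changed.
tampered = b"sensor-42,2020-04-01T09:00Z,22.5C"
print(digest == hashlib.sha256(tampered).hexdigest())  # False: tampering detected
```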

Disruptive technologies are not simply a driver of growth and opportunity; they are fundamentally changing the global economy, the way firms gain competitive advantage and potentially the security of nations. According to the UK industry body Tech Nation, the digital technology sector contributed £149 billion to the UK economy in 2018, with digital tech sector GVA growing nearly six times faster than that of the rest of the UK economy, accounting for 7.7% of UK GVA and employing nearly three million people in the UK, an increase of 40% from 2017 (now accounting for nine percent of the national workforce).4

It would be wrong to assume that this new revolution is driven only by data. Whilst data is a fundamental enabler of this fourth revolution, disruption through engineering and physics, such as miniaturisation, nanotechnologies and energy storage, cannot be overlooked. The most significant advances are happening where these two disruptive forces intersect, such as the Internet of Things, where improved connectivity and improvements in devices are combined with data-driven disruption such as Artificial Intelligence. Figure 1 illustrates the constituent parts of this disruption pie.

The digital economy has benefited consumers by creating entirely new categories of products and services, or new means for their consumption, through entirely new business models and players. These new business models take the form of 'platform businesses'. Many of the products and services delivered through these platforms are of high quality and low price, in many cases a monetary price of zero. These new forms of business have themselves only been possible through the lower cost of starting a business and scaling up through things like cloud computing, access to global markets through the Internet and a consumer base that is now always digitally connected, with the ability to transact securely.

Whilst this new digital revolution will bring significant benefits to the material world, it will bring with it some serious challenges. These will include major disruption to traditional enterprises, job losses for some, privacy threats to many and changes to psychological behaviours as people interact through digital media rather than face-to-face; the very fabric of society will change. Many are also concerned about how these advances could be used against humanity through exploitation by businesses or individuals. Some are concerned about how these advances will affect the distribution of wealth and military might across nations.

Those of a positive disposition point out that the previous industrial revolutions were accompanied by soot-filled London skies and horrific exploitation of child labour; over time, however, governments realised that some aspects of the industrial revolution were unacceptable and took steps to end them. They claim that the new challenges of the digital revolution can also be met, once we know what they are likely to be.

4 https://technation.io/news/tech-nation-report-2020/

Fig. 1 The intersection of technology and data disruption [Figure: disruptive technologies sit at the intersection of data-driven disruption (Internet of Things; platforms, including smart cities; autonomous vehicles; Blockchain, tokens and smart contracts; virtual and augmented reality; Artificial Intelligence) and engineering-driven disruption (devices and sensors; connectivity, including 5G; quantum computing; nanotechnologies; 3D printing; cloud and edge computing).]


Others fear that we do not yet know the extent of the challenges the world will face with these advances, and that it may simply be too late by the time we discover these threats. The jury is out. What is clear, however, is that we cannot simply ignore the tsunami that is on its way; that would be economic and social suicide.

One study estimates that Artificial Intelligence (AI) could generate an additional US$15.7 trillion in economic value by 2030, close to the current annual economic output of China and India combined; however, forty percent of this value is likely to accrue to China and the USA alone.5 The EU estimates its digital market could contribute US$472 billion per year to the economy,6 whilst projections for ASEAN economies are around US$1 trillion by 2025.7 Whilst AI is likely to generate new wealth, some analysts suggest it could make inequality worse8 and even increase the risk of nuclear war.9 There are also potential environmental and social costs of the technology revolution. Bitcoin,10 for example, requires a network with energy consumption roughly equal to that of Singapore,11 and the recent concern over 'fake news' has been connected to the proliferation of 'bots' (automated accounts driven by algorithms).12

Disruption does not take place in a vacuum. Just as railroads gave rise to standardised timekeeping, new forms of industrial organisation gave rise to antitrust and labour legislation, and earlier information technology innovations gave rise to new privacy and intellectual property frameworks, so the latest wave of technologies will require reconsideration of both national and international regulatory regimes. These frameworks will need to adapt to realise the potential benefits and opportunities from disruption; to manage sweeping change in an equitable manner; and to enable new kinds of property and new kinds of markets to function.

To turn these disruptive technologies into a force for good requires that we do not look at technology disruption in isolation, but consider the wider transformation within society and within businesses that is needed to turn new technologies into economic and social opportunities. Policymakers and other stakeholders need to orchestrate an ecosystem and regulatory environment that magnifies the positive externalities associated with such disruptive technologies whilst minimising the potential negative externalities. To date, there is no overall regulatory authority covering the entire digital sphere. Regulation is fragmented, with overlaps and gaps.

5 PwC. (2017). Sizing the prize: What's the real value of AI for your business and how can you capitalise?
6 https://ec.europa.eu/commission/priorities/digital-single-market_en
7 Bain & Company. (2018). Advancing towards ASEAN digital integration: Empowering SMEs to build ASEAN's digital future.
8 Korinek, A., & Stiglitz, J. (2017). Artificial intelligence and its implications for income distribution and unemployment. NBER Working Paper No. 24174, December 2017.
9 RAND Corporation. (2018). How artificial intelligence could increase the risk of nuclear war.
10 A cryptocurrency primarily used as a means of payment.
11 https://digiconomist.net/bitcoin-energy-consumption
12 Temming, M. (2018). How Twitter bots get people to spread fake news. Science News.


The Digital World Does Not Merely Require More Regulation but a Different Approach to Regulation

A balanced policy approach needs to be considered: a combination of self-regulation, voluntary and market-driven standards and the sharing of best practices, together with the application of existing regulations and updated policy and regulatory frameworks. Achieving trust in the supply and demand of these technologies is vital to realising the growth and social progress that these disruptive technologies promise to deliver. Policymakers and business leaders are at the heart of nurturing such trust. Figure 2 illustrates the interconnectedness of these disruptive technologies and also provides a glimpse of what this book aims to cover.

The role of government funding for innovation must not be overlooked. Many of the revolutions seen in previous incarnations were stimulated through government investment, either for military use or for scientific exploration. The payoff from government investment can be substantial for economies. Take the Internet, for example: it was born out of US Defense Department research into how to build bomb-proof networks. GPS, touchscreen displays, voice recognition software and many other digital innovations also arose from basic research sponsored by governments. It is safe to say that the hardware, software, networks and robots we know today would not exist in anything like their volume, variety and forms without sustained government funding.13 Much of that funding came from the USA, and the USA today leads the world in the digital space, with many of the digital giants powering global social media and other platforms. Governments across the world need to take a good, hard look at their funding programmes and determine whether they are nurturing locally grown digital talent, enabling digital enterprises to flourish and investing in digital technologies whose spillover benefits can reach broader society. Unfortunately, when it comes to digital, many governments focus only on regulation and not enough on investment.

Mariana Mazzucato has suggested that many of the current technology leaders in the USA benefited from US government investment, and indeed unfairly so.14 In a sense, government bore the downside risk, but private investors captured the upside. Google's algorithm was developed with funding from the National Science Foundation, and the Internet came from DARPA funding. The same is true of touchscreen displays, GPS and Siri, to name a few. From this, the technology giants have created de facto monopolies whilst evading the type of regulation that would rein in monopolies in any other industry.

13 Mazzucato, M. (2013). The entrepreneurial state: Debunking public vs. private sector myths. Anthem Press.
14 Mazzucato, M. (2018). The value of everything: Making and taking in the global economy. Public Affairs.

Fig. 2 Interconnectedness of disruptive technologies [Figure: a map linking digital policy issues to the technologies they affect. Policy issues include connectivity and latency, ex-ante vs ex-post regulation, liability regimes, the gig economy and worker rights, spectrum fees and taxes, infrastructure deployment rights, token economics, safety, standards, privacy and data protection, interoperability, open data, funding models, encryption, cloud/data localisation, device security, reliability, digital identity, critical infrastructure, Internet access and USO, fake news, regulatory frameworks, competition policy, dominance and tipping, open APIs, business model disruption, training data, bias, ethics and standards, education/re-education and the impact on jobs. Technologies include 5G and connectivity, platforms, drones, Blockchain and smart contracts, the Internet of Things, smart cities, devices and wearables (including medical certification), cryptocurrency, AR/VR/digital twins, autonomous vehicles and Artificial Intelligence.]

She goes further, suggesting that these technology giants are now attempting to minimise their taxes when it was tax dollars that made them a success, and that they are succeeding by exploiting a public good, namely the public's personal data, whilst shirking the corporate and social responsibilities that would apply to most other businesses. Digital disruption calls into question broader policy choices and requires public seed funding, refinements to competition policy and rules to safeguard consumers.

Another factor behind the success of the USA in the digital field (and, it could be argued, more generally) has been its more liberal regulatory regime. A key contributor to depressed entrepreneurship is excessive regulation. Innovation researcher Michael Mandel has pointed out that any single regulation might not do much to deter new business formation, but each one is like another pebble in a stream: their cumulative effect can be increasingly damaging as opportunities to work around them diminish. Economists Leora Klapper, Luc Laeven and Raghuram Rajan found that higher levels of regulation reduce start-up activity.15

This is not to say that government should not regulate. Many point to the lack of regulatory oversight by the US government over the financial sector that led to the credit crunch and the recession of the 2000s. What I am saying, and it is not exactly rocket science, is that a fine balance must be struck: enough regulation to protect citizens and the competitive process, but not so much that it dampens the entrepreneurial process and limits innovation. To get that balance right, you need to look at the impact of regulatory accumulation on the digital ecosystem and what that means for innovation and consumer protection.

A New Source of Wealth Creation at the Expense of Employment

Most countries do not have extensive mineral wealth or oil reserves and thus cannot simply get rich by exporting them. The only viable way for these nations and their societies to become wealthy and to improve the standard of living available to their people is for their companies and workers to keep getting more output from the same inputs; in other words, more goods and services from the same number of people. Innovation is how this productivity growth happens. Digital technology has the potential to drive real innovation and real productivity growth.

Advances in technology, especially digital technologies, are also driving an unprecedented reallocation of wealth and income. Digital technologies can replicate valuable ideas, insights and innovations at very low cost. This creates bounty for society and wealth for innovators, but diminishes the demand for previously important types of labour, which can leave many people with reduced incomes. This is the story around outsourcing and offshoring, where many countries have introduced policies to stop or reduce the extent of offshoring by firms, for fear that it will lead to a decline in wages and to job losses more generally.

15 Klapper, L., Laeven, L., & Rajan, R. (2006). Entry regulation as a barrier to entrepreneurship. Journal of Financial Economics, 82(3).


Many point to data suggesting that median income in the USA has increased very little since 1979 and has actually fallen since 1999, and many suggest these effects are the result of increased offshoring. However, there is no real evidence of a causal relationship between the extent of offshoring and median wages. In reality, growth in overall income and productivity in the USA has not stagnated; GDP and productivity have been on impressive trajectories. Instead, the trend reflects a significant reallocation of who is capturing the benefits of this growth and who is not. The disparity between the rich and the poor is widening whilst the overall economy has been growing. These economic inequalities are only likely to widen as a result of digital disruption.

In any single market, competition will tend to bid the prices of the factors of production, such as labour or capital, to a single common price. Over the past few decades, lower communication costs have helped create one big global market for many products and services. Businesses can identify and hire workers with the skills they need anywhere in the world. If a worker in India can do the same work as a US worker, then what economists call the law of one price demands that they earn essentially the same wages. That is good news for the Indian worker and for overall economic efficiency, but not for the US worker.

However, as digital technologies and automation pervade more sectors, more output will be achieved with less labour input. Over time, the impact of automation is likely to be felt more heavily in developing countries than in developed countries. If you take most of the cost of labour out of the equation by installing robots and other types of automation, the competitive advantage of low wages largely disappears. This may mean that manufacturing that was sent overseas comes back, something that appears to have already started on a small scale. The fear is that the digital revolution will follow a similar trajectory: overall economic growth will be seen (at least in those countries directing resources and policies towards digital innovation), but the distribution of that wealth will be skewed towards those who are digitally savvy.

Technology creates possibilities and potential, but ultimately the future we get will depend on the choices we make. We can reap unprecedented bounty and freedom, or face greater disaster than humanity has ever seen before. The digital technologies we are creating provide vastly more power to change the world, but with that power comes greater responsibility. The solution to these problems is not going to be a tweak here or there, but a policy response coordinated across multiple policy areas. Law and policy are the tools by which we simultaneously express the values of our society and the mechanisms by which we seek to achieve real-world outcomes consistent with those values. Before we get to policies and regulation, we need to think much more deeply about what it is we really want and what we value, both as individuals and as a society.

Governments need to proactively seek a deeper understanding of the potential implications of these emerging digital technologies for society, as well as of the critical challenges they pose. The European Union (EU) is attempting to do this with its White Paper on Artificial Intelligence, released in February 2020, where the focus of EU policy is very much on developing AI in a way that is aligned with European values.


Unfortunately, it is not clear what these European values are, or whether they are in fact shared across the members of the EU.

We have been here before, with previous so-called industrial revolutions; however, this time it is different. Digital technologies pay no regard to national or jurisdictional boundaries. Digital businesses today have global reach, and they may not be able to be adequately regulated by national regulators. At the same time, their impacts will be felt nationally, in terms of employment, tax policy, data protection, societal attitudes and even the very democratic process. This calls for a global solution.

I have written this book explicitly to begin this conversation. I do not claim to provide concrete solutions, for that would be both unwise and most likely wrong given that we are at the early stages of this revolution, the digital revolution.

Acknowledgements

Writing a book on digital disruption would simply not have been possible without the help of the many people who provided insight and feedback on the concepts and recommendations described in the book. It goes without saying that I owe a debt of gratitude to the academics and business colleagues I have had the privilege of meeting over the many years spent studying and working in the diverse fields that have contributed to the development of my thinking in this area; this book stands on the shoulders of many giants in the field, and there are simply too many for me to try to list. I am also immensely grateful to a number of leaders who gave up their valuable time to share their perspectives and experience in this area.

I would like to specifically thank the following people, who provided invaluable insight and helped shape my thinking: Dr Mike Short (chief scientific advisor at the Department of International Trade, UK); Steve Unger (former head of Ofcom); Hans Kuropatwa (group chief M&A officer at Ooredoo and ex-CEO of Vodafone); Jacky Wright (chief digital officer at Microsoft); Haroon Shahul Hameed (former CEO and COO at a number of telecommunications operators and lead investor in digital start-ups); Gordon Moir (a TMT Partner at Shepherd and Wedderburn); Ara Margossian (a TMT Partner at Webb Henderson); Chris Watson (a TMT Partner at CMS); John Buyers (a Partner in AI and Machine Learning at Osborne Clarke); Frederik Bisbjerg (chief digital acceleration officer and digital transformation specialist); Simon Torrance (a leading digital transformation consultant, advisor to the World Economic Forum and author of Fight Back); Helie Dhatefort (founder of Globcoin, a stablecoin working with a number of central banks); Raja Sharif (CEO of Farmatrust, a Blockchain-enabled pharmaceuticals supply chain enabler); Deepak Sharma (management consultant to a number of Indian corporations); Shalin Teli (automation and AI advisor at Wipro Digital); Mustapha Bengali (a leading information security expert, ex-CISO of Ooredoo); Namrita Mahindro (chief digital officer, Aditya Birla Chemicals, Fertilizers and Insulators); and Omar Shaath (a TMT industry expert based in Qatar).

Another special mention goes to my colleague Bertrand Alexis, a leading scholar on LegalTech and an alumnus of Harvard Law School, a fact he reminds me of on a regular basis. He provided a much-needed sounding board for many of the ideas in the book.


Finally, I would like to thank my wife Bhavna, who has supported and encouraged me in writing this book, and my daughters Divyamayi and Trividya, who bring a ray of sunshine every time the dark shadows of despair hover above. My heartfelt thanks go to my parents, who instilled in me a thirst for knowledge and wisdom, and to the other friends and family members who have supported me along the journey.

London, April 2020 www.DigitalDisruption.xyz

Bharat Vagadia

Contents

1 Introduction
  1.1 A Precipitous Cross Road
  1.2 The Fourth Industrial Revolution
    1.2.1 What is at Stake?
    1.2.2 The Need for Refreshed Government Policy

2 A Framework for Understanding Digital Disruption
  2.1 The Digital Ecosystem
  2.2 A Framework for Understanding Data and Technology Convergence

Part I Data Connectivity

3 Data Connectivity and Digital Infrastructure
  3.1 Unprecedented Growth: Limited Opportunities
  3.2 Fit for Purpose Regulatory Policy
  3.3 Promoting Investment
  3.4 Rethinking Regulation
  3.5 Clear, Nuanced Network Neutrality Policies
  3.6 Rethinking Competition Policy
  3.7 Unpicking the Hype Around 5G
  3.8 Spectrum Policy to Drive Investment
  3.9 New Players and Priorities for International Connectivity
  3.10 The Re-emergence of Satellite Connectivity
  3.11 Delivering IOT Connectivity
  3.12 The Relevance of Universal Service Access
  3.13 Facilitating Access to Public Land
  3.14 Rethinking Network Sharing Policies
  3.15 Re-emergence and Importance of Fixed Infrastructure
  3.16 Numbering Relevant for Digital World
  3.17 Progressive Cloud and Data Centres Policies
  3.18 Customer Registration Regulations Fit for the Digital World
  3.19 Recalibration of Sector Taxation Policies
  3.20 Changing Operating Models

Part II Data Capture and Distribution

4 Data Capture and Distribution
  4.1 The Rise of Internet of Things
  4.2 The Emergence of New Business Models Due to IOT
  4.3 The Importance of IOT Interoperability
  4.4 IOT Security and Trust Policies
  4.5 Wearable and Medical Sensors: Potential and Hype
  4.6 Energy Storage Innovations Driving IOT
  4.7 Connected Home
  4.8 Connected Car
  4.9 Connected Government
  4.10 Connected City
  4.11 Connected Health
  4.12 Connected Education
  4.13 Connected Enterprise
  4.14 Connected Agriculture/Mining

Part III Data Integrity, Control and Tokenization

5 Data Integrity, Control and Tokenization
  5.1 Data Value and Tradability
  5.2 Data/Cyber Security Risks
    5.2.1 Protocol Vulnerabilities
    5.2.2 Network and System Threats
  5.3 Creating Trust in the Data Ecosystem
  5.4 Data Confidentiality: Cryptography and Encryption
    5.4.1 Symmetric Key Encryption
    5.4.2 Asymmetric Key Encryption/Public Key Cryptosystems
  5.5 Data Integrity: Hash Functions
  5.6 Data Availability and Access: Digital Signatures
  5.7 National Digital Identities
  5.8 Blockchain
  5.9 Alternative Implementations to POW Blockchains
    5.9.1 Lightning Network
    5.9.2 IOTA
    5.9.3 Ethereum
    5.9.4 Cross-Chain Technologies
  5.10 Regulating Blockchain
  5.11 Smart Contracts
  5.12 Token Economics, Cryptocurrencies and Initial Coin Offerings (ICO)
    5.12.1 Token Classification
    5.12.2 Regulating Cryptoassets, ICOs and Cryptocurrencies
    5.12.3 The Debate Between Utility and Securities Tokens
  5.13 Privacy and Data Protection
    5.13.1 Anonymization Techniques
    5.13.2 Reliability and Accuracy Standards
    5.13.3 Data Mobility
    5.13.4 Open Data

Part IV Data Processing and AI

6 Data Processing and AI
  6.1 The Rise of Artificial Intelligence
    6.1.1 Why the Sudden Excitement?
  6.2 Impact of AI on Industry
    6.2.1 AI in Health and Medicine
    6.2.2 AI in Financial Services
    6.2.3 AI in Public Sector
    6.2.4 AI in Retail
    6.2.5 AI in Agriculture
    6.2.6 AI in Manufacturing/Logistics
    6.2.7 AI in Education and Training
  6.3 Impact of AI on Economies
  6.4 Impact of AI on Society
    6.4.1 AI's Impact on Employment
    6.4.2 AI Likely to Put Downward Pressure on Wages
    6.4.3 The Need to Rethink Education
    6.4.4 The Risks of Further Inequality
    6.4.5 Social Bias and Misuse
    6.4.6 AI May Become a Threat to Humanity
    6.4.7 Cyber Security Threats Likely to Increase
    6.4.8 Mental Health
    6.4.9 Political Manipulation and Fake News
    6.4.10 The Risk of Creating Data Monopolies That Have Immense Power
    6.4.11 Policy Responses to Date
  6.5 Understanding How AI Works
    6.5.1 Artificial Neural Networks
    6.5.2 Deep Learning
    6.5.3 Limitations of AI

Part V Disruptive Data Applications

7 Disruptive Data Applications
  7.1 Digital Assistants
  7.2 Virtual and Augmented Reality
  7.3 Digital Twins
  7.4 Platforms
    7.4.1 Size Matters
    7.4.2 Platform Business Models
    7.4.3 Platform Revenue Management and Governance
    7.4.4 APIs Crucial for Platform Business Models
    7.4.5 Data Protection Concerns
    7.4.6 Competition Concerns
  7.5 Autonomous Vehicles
    7.5.1 V2X Communication
    7.5.2 Ridesharing and Autonomous Vehicles
    7.5.3 Cyber Security
    7.5.4 Testing Environments
    7.5.5 Licensing
    7.5.6 Liability Regimes
  7.6 Drones

Part VI Other Enabling Disruptive Technologies

8 Other Disruptive Technologies
  8.1 Nanotechnology
  8.2 Quantum Computing
  8.3 3D Printing
  8.4 Genome Editing
  8.5 Renewable Energy

Part VII Enterprise Strategies

9 Enterprise Digital Transformation
  9.1 The Need to Metamorphose into Ambidextrous Digital Organisations
  9.2 New Digitally Driven Operating Models
  9.3 Reviewing Your Business Model: A Five Step Plan
  9.4 A Vision with Purpose
  9.5 Agility and Adaptability: The DNA of the Digital Firm
  9.6 Good Governance at the Heart of Digital Transformation
  9.7 Leading the Transformation
  9.8 A Roadmap for Digital Transformation
  9.9 Learn Fast, Act Faster
  9.10 Concluding Remarks

Part VIII Policy Responses

10 Global Policy Responses: A Snapshot
  10.1 Shift in Regulatory Focus
  10.2 Government Led Versus Private Sector Led Approaches
  10.3 Europe
  10.4 USA
  10.5 Asia
  10.6 Middle East

About the Author

Bharat Vagadia is a seasoned international executive in the digital space. He has a diverse background, having spent over twenty years in the TMT and digital field, advising governments, regulatory authorities and businesses in over forty countries. He has advised many governments on sector liberalisation, public policy and regulation. He has advised organisations on strategy, operating models and governance frameworks to deal with, and thrive in, a rapidly changing environment. He has most recently been responsible for public policy and regulatory affairs in an international telecommunications operator with oversight over ten country operations across Asia and MENA. He has taught, mentored and coached many aspiring leaders in the field and is a regular speaker on topics as diverse as globalisation, offshoring, outsourcing, telecoms policy, digital policy, data protection, enterprise governance, high-performance organisations and enterprise agility. He is also the author of (1) Enterprise Governance, (2) Strategic Outsourcing and (3) Outsourcing to India—A Legal Handbook. He has a PhD researching the interplay of legal contracts and trust in alliance relationships. He has been awarded an LLM in Commercial Law, an MBA from Imperial Business School, a First Class (Hons) degree in Engineering from King's College London, a CIM Diploma in Marketing and qualifications in corporate governance.

List of Figures

Fig. 1 The intersection of technology and data disruption
Fig. 2 Interconnectedness of disruptive technologies
Fig. 1.1 A precipitous cross road/roundabout
Fig. 2.1 A holistic approach to policy development
Fig. 2.2 Framework for understanding digital disruption
Fig. 3.1 Cloud due diligence
Fig. 4.1 IOT applications
Fig. 4.2 IOT economic value
Fig. 4.3 IOT trust
Fig. 4.4 Creating trust in devices
Fig. 4.5 Connected city applications
Fig. 5.1 Data privacy and security
Fig. 5.2 Creating trust with data
Fig. 5.3 The CIA triad
Fig. 5.4 The workings of Blockchain
Fig. 5.5 Blockchain—decentralised trust
Fig. 5.6 General data protection principles
Fig. 5.7 Creating trust in data usage
Fig. 5.8 Liability across the value chain
Fig. 6.1 Rules based versus statistical AI technologies
Fig. 6.2 The workings of multiple layer neural networks
Fig. 7.1 V2X communications
Fig. 9.1 Digital transformation
Fig. 9.2 Joined up governance

List of Tables

Table 2.1 Snapshot of digital plans
Table 3.1 Core principles in reforming regulations
Table 3.2 Substantive and procedural discrimination
Table 4.1 IOT enablers
Table 6.1 Global AI policies
Table 6.2 AI and related standards
Table 6.3 AI design principles
Table 7.1 Acquisitions in the platform/digital world
Table 7.2 Drone regulation examples
Table 9.1 Traditional versus emergent strategies
Table 9.2 Pillars of high performance organisations

1 Introduction

This is not a 'how-to' book, nor one that details comprehensive business strategies or specific policies for the digital ecosystem. To do so would be delusional given the unbelievable change and uncertainty at present. Instead, this book details the technology and digital disruption that is taking place today, beyond what is hype; examines the developments foreseen; and considers the impact these changes will have on industries, economies and society at large. In light of these changes, it details, as far as is possible at this juncture, the actions governments and regulators need to consider in order to ensure these changes bring about positive benefits to society at large without stifling the innovations that may well be the source of value creation.

1.1 A Precipitous Cross Road

To realise the full benefits of digital disruption, governments need to reach across traditional policy silos and across different levels of government to develop a 'whole of government' approach to policy making. At the same time, business leaders need to explore the new possibilities such digital disruption opens, whilst becoming more ethical in terms of how their products and services impact society at large. The true (equitable) impact of digital disruption can only be realised where the regulatory environment and wider ecosystem enable frictionless digital trade, where cost-effective and trusted digital services are utilised by all, and where digital content (in all its forms) is produced and consumed with appropriate safeguards covering antitrust behaviour, privacy, cyber security and interoperability standards. If fundamental and lasting economic benefits are to be realised at a national level, digital entrepreneurship also needs to be nurtured; otherwise nations will find value flowing from their countries to those where innovation and entrepreneurship lead (which currently include the USA, Israel and South Korea, and increasingly will include countries like China and India). Digital entrepreneurship includes a range of


Fig. 1.1 A precipitous cross road/roundabout

factors that need to be addressed, from digital and technology skills and education, to government promotion agencies, to universities, to financiers such as venture capitalists, to R&D agencies, to the attitudes and legal systems that accept failure as a learning process rather than a curse to be avoided. Many may say that there have been significant strides made in technology innovation in the past, and that this will just be another gradual process of adaptation by societies and enterprises. I however believe that the pace of technology innovation, together with the convergence and integration of technological innovations which to date have been treated as separate streams, makes this juncture very different. We are at a precipitous cross road. Actually, it would be better described as a precipitous roundabout with many roads to choose from—some of which lead to dead ends, some of which lead to a cliff edge, whilst some may lead to the 'promised land'. Choosing the right road is a major challenge and one that will require much thought and many difficult choices. However, that choice must start to be detailed today, for many of the roads that may lead to the 'promised land' may be cut off if we leave it too late. Figure 1.1 lists some of the decisions we need to start thinking about now. As Max Tegmark remarks, technology is giving life the power to flourish like never before or to self-destruct. We might create societies that flourish like never before, on Earth and perhaps beyond, or a Kafkaesque global surveillance state so


powerful that it could never be toppled.1 The potential dominance of a few key organisations, with their own self-interested goals, could give them control over governments and society. Policy makers, regulators, industries and civil society need to begin the conversation. We need to have an open dialogue about how technology, policies and leadership can be used to turn the world into a better place—a better place for all, rather than the few; rather than those lucky enough to be in the right profession, or those lucky enough to be born into countries that are net beneficiaries of this digital disruption. This book explores the digital technologies that will shape the future world—a world in the next decade or two. It explores the ingredients required to nurture such technologies to bring economic and societal benefits, the policies that will help drive such innovation and wide-spread adoption, as well as the moral questions that we, as humanity, must ask ourselves. The book also considers possible controls that we need to place on such technologies to ensure they are not used for harm, either by unscrupulous individuals, businesses or rogue nations, or by the technologies themselves (sorry, this book will not be discussing Terminator-style artificial intelligence developments).

1.2 The Fourth Industrial Revolution

The production, delivery and consumption of goods and services within the economy is changing, and changing fast. The availability of technologies such as smart phones, mobile connectivity, artificial intelligence, cloud computing, analytics and platforms such as Facebook is dramatically altering the way we live, work and interact—in what has been termed the fourth industrial revolution and which some people refer to as 'digitalisation'. Society as a whole will see a fundamental shift in the way governments, businesses and citizens communicate, collaborate, share information or buy products/services as a result of this digitalisation. Virtual visits to museums on another continent, driving safely thanks to autonomous vehicles, making transactions and commerce without the need for a centralised bank, or indeed the optimisation of cities and improvements in production, manufacturing and interaction with customers are just some of the changes we are starting to see today. At the beginning, the technology and telecommunications sector was focused on connecting people; then, in the 2000s, when smart phones and tablets came around, the industry focused on capacity. Today, we are at the beginning of the fourth wave; one that is about creating a digital economy, connecting not only people but things (the 'Internet of Things' or 'IOT'), and connecting these at speeds that were never dreamed possible just a decade ago. That connectivity is being complemented by massive strides in data processing capabilities, giving rise to Artificial Intelligence

1 Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence.


('AI'), Virtual Reality ('VR') and Augmented Reality ('AR'). All of these are being combined with advances in mechanical engineering as we enter a realm where robotics are developing a level of dexterity that allows them to perform human-like functions. An intermediation layer between the connectivity layer and the AI/VR/AR layer is data aggregation and distribution. Technologies such as Blockchain, designed to provide data integrity, distributed security and governance of data, are emerging that may help to securely distribute and govern the use of data, including giving data owners/controllers the ability to monetise their personal data, reducing data monopolies and potentially tokenising physical assets into the digital world. Many commentators have captured this wide array of technologies under the heading 'digitalisation'. This is a very broad term and I can see its appeal in trying to define what is otherwise a diverse set of technologies. It does not, however, adequately define the specific functions that different technologies play in the technology stack, and it blurs the very different technology trajectories and our understanding of how they actually complement and converge. I therefore try not to use this broad-brush definition within this book, although I do make reference to it when detailing other sources or commentary from other learned colleagues. It would be wrong for me to portray that digital disruption is only now beginning to happen. Even without sophisticated technologies like IOT or AI, there is already a lot of digital disruption happening today, through digitising processes, retail channels and payments. These are already changing how services are delivered and disrupting the old order. This book, however, seeks to look beyond these current digital initiatives and to the next digital wave.

1.2.1 What is at Stake?

The World Economic Forum (WEF) estimates point to more than US$10 trillion of value from digitisation in five key global industries2 over the next decade being directly dependent on essential infrastructure, applications and productivity improvements delivered by the telecommunications industry.3 IDC forecasts that IOT spending will experience a compound annual growth rate (CAGR) of 13% over the 2017–2022 period and reach US$1.2 trillion in 2022.4 McKinsey estimates IOT has a total potential economic impact of US$3.9 trillion to US$11.1 trillion per year in 2025; the value of this impact, including consumer surplus, would be equivalent to about 11% of the world economy in 2025.5

2 The five key global industries are: E-commerce (US$3.1 trillion), Automotive (US$2.6 trillion), Logistics (US$2.1 trillion), Electricity (US$1.5 trillion), Media and Entertainment (US$0.7 trillion).
3 World Economic Forum. (2017). Digital transformation initiative telecommunications industry.
4 IDC Worldwide Semiannual Internet of Things Spending Guide (2H17), published on 18 June 2018.
5 McKinsey. (2015). The internet of things: Mapping the value beyond the hype.
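For readers less familiar with compound growth figures, the arithmetic behind the IDC forecast above is easily unpacked; the back-calculated 2017 base below is my own illustration derived from the quoted numbers, not a figure taken from the IDC report. A CAGR links a start value V(0) and an end value V(n) over n years via V(n) = V(0) × (1 + CAGR)^n, so that

V(2017) = V(2022) / (1 + 0.13)^5 = 1.2 / 1.842 ≈ US$0.65 trillion

—in other words, the forecast implies IOT spending of roughly US$0.65 trillion in 2017, growing by over 80% across the 5 year period.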


I am not saying that I believe all these forecasts, but what they all show is an upward trajectory in value from the various elements that make up digital technologies. The value that can be derived from digital technologies is wide and varied. Analysis has shown that new and emerging digital technologies affect productivity through mechanisms that are many and varied.6 Some of these include:

• Speed and strength: by being faster, stronger, more precise and consistent than workers, robots have vastly raised productivity on assembly lines in the automotive industry. They will do so again in an expanding range of sectors and processes;
• Productivity enhancement: the combination of new sensors and actuators, big data analytics, cloud computing and IOT is enabling autonomous, productivity-enhancing machines and intelligent systems;
• Predictability: automated maintenance scheduling, enabled by new sensors, artificial intelligence and machine to machine (M2M) communications, will reduce disruptions to production caused by breakdowns. Condition-based maintenance in airplanes could reduce maintenance spending by between 10 and 40% for air carriers, by shifting from rules-based maintenance routines to predictive maintenance based on actual need, which is made possible by real-time monitoring;
• 3D printing: which can remove the need for assembly in some stages of production by printing already assembled mechanisms.

Aside from the economic benefits, digital disruptive technologies have the potential to significantly improve consumer lives and create broader societal good, while providing businesses with new opportunities for value creation and capture. Digital disruptive technologies can promote social inclusion by creating better access to quality education and offering new opportunities for skills development.7 Digital learning environments can enhance education in multiple ways, for example by expanding access to content for people from low income backgrounds or disadvantaged areas; supporting new teaching methods with learners as active participants; fostering collaboration between educators and between students; and enabling faster and more detailed feedback on the learning process. New digital technologies can better connect disadvantaged groups.8 For example, mobile connectivity is helping reach remote populations as well as those with lower incomes. Digital technological innovations in the financial and health sectors can also promote social inclusion. Digital lending innovations and innovative financing like peer-to-peer lending and crowd funding platforms are already helping fill a bank

6 OECD. (2016). Enabling the next production revolution: The future of manufacturing and services. Interim Report. Meeting of the OECD Council at Ministerial Level, 1–2 June 2016.
7 OECD. (2014). Trends shaping education 2014: Spotlight. OECD.
8 OECD. (2016). The productivity-inclusiveness nexus. OECD.


lending gap and improving access to finance for both households and small enterprises, allowing for the participation of smaller investors. Digital disruption fuelled by the rise of IOT and Smart Cities9 has the potential to make a meaningful contribution to the achievement of the UN Sustainable Development Goals (SDGs).10 In tackling global development challenges, digitally disruptive technologies are being utilised across the full spectrum of development activities. The ITU report 'Harnessing the Internet of Things for Global Development', released in 2016, highlights many IOT applications that are having a material impact on lives in many developing countries and contributing to the achievement of the UN Sustainable Development Goals. Sensors in agricultural fields are monitoring soil conditions and moisture levels; Radio Frequency Identification (RFID)11 tags are helping farmers provide more personalised care for their livestock; connected thermometers are monitoring vaccine delivery and storage in real-time; cameras and sensors in smart phones and tablets are allowing healthcare workers to provide remote diagnosis of disease; and off-grid solar systems monitored via SMS are bringing affordable electricity to lower income families. Advances in nanotechnologies are making the diagnosis and delivery of medicine much more effective, with less need for intrusive surgery and with real-time diagnosis and treatment. There have been numerous studies quantifying the societal benefits from digital disruption, and in particular the benefits from connected IOT devices. PwC has estimated that mHealth (health services delivered through the use of smart phones) could save 1 million lives in Sub-Saharan Africa over the next 5 years; traffic telematics could help Chinese commuters reclaim nearly 2 h each week and add US$20 billion to Chinese GDP; technology enhanced learning could save South Korean families between US$8k and US$12k on private tuition for their children; eHealth initiatives (health services delivered through the use of computers and smart phones) could save €99 billion and add €93 billion to the EU GDP; mEducation could provide 180 million children with the opportunity to stay in school; and smart meters could save enough electricity through efficiency gains and reduced theft to

9 A smart city is an innovative city that uses information and communication technologies and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects.
10 The Sustainable Development Goals (SDGs) are the blueprint to achieve a better and more sustainable future for all. They address the global challenges we face, including those related to poverty, inequality, climate, environmental degradation, prosperity, and peace and justice. The SDGs interconnect, with targets set for achievement by 2030.
11 Small electronic devices that consist of a small chip and an antenna. The chip typically is capable of carrying 2000 bytes of data or less. The RFID device serves the same purpose as a bar code or a magnetic strip on the back of a credit card or ATM card; it provides a unique identifier for that object. RFID technology has been available for more than 50 years. It is only recently that the cost of manufacturing RFID devices has fallen to the point where they can be used as a 'throwaway' inventory or control device, largely as a result of standardisation efforts. EPCglobal, the standards body for the RFID industry, has set a goal to reduce the cost of an RFID tag, now 15 cents, to 5 cents.


power more than 10 million homes in India.12 A 20% reduction in Person to Government (P2G) transactions in Australia (from the approximately 800 million conducted at federal and state levels) through the use of digital technologies could deliver over AU$17 billion to the Australian government and over AU$8 billion to its citizens.13
Unfortunately, alongside the rosy economic and societal benefits, there is significant potential for adverse impacts on society, with an accompanying need for government policies, including:

• Employment: jobs lost to automation and the use of robotics and artificial intelligence (AI) enabled decision making, with the accompanying need to redesign government social security schemes in light of possible mass unemployment, and to re-skill vast swathes of the population—which could be a slow and expensive process requiring a redesign of the whole education system;
• Privacy and freedom: increased threats to personal and state security, and risks to privacy from developments in AI, with more opportunities for mass surveillance and a 'big brother' society run either by governments or by large private sector organisations;
• Health: less human interaction, which may lead to more loneliness and depression in society, and potentially adverse health effects from the use of nanotechnology;
• Inequality: increasing the digital divide within society and between nations;
• Investment: a massive need for investment in digital infrastructure, including 5G and fibre;
• And many that we just cannot fathom at this stage...

1.2.2 The Need for Refreshed Government Policy

Governments face a delicate balancing act in nurturing these digital technologies whilst also trying to limit the adverse impact they may have on society and the economy at large. Digital disruption also brings into sharp focus government policy around digital versus non-digital businesses: how they are licensed, taxed and more broadly controlled. Some governments have sought to develop policies towards the development of National Broadband Infrastructure, National Digital Economy, National Cybersecurity, National Smart Cities and Artificial Intelligence, as well as policies that seek to help develop a digital start-up ecosystem allowing start-ups to disrupt the market with new value propositions. The Dublin Digital Hub demonstrates how such regulatory overhauls can maximise the potential of the digital start-up ecosystem.14 The hub focuses on coordination and facilitation between Dublin's start-up centres

12 PwC Report. (2015). Realising the benefits of mobile enabled IOT solutions.
13 GSMA. (2016). Advancing digital societies in Asia.
14 www.thedigitalhub.com


and key regulatory players (such as lowering the administrative burden and enhancing the visa regime for entrepreneurs). Today, many of these policies are disjointed or overly regulation-focused, and not holistic in their outlook. And when I say some governments, it is really just a handful—the vast majority firmly have their heads in the sand. Some have also developed policies addressing the demand side, with a focus on protecting consumers and citizens. In an ever more digital world, data privacy and protection are a critical dimension of broader cyber security regulations—however, such policies need to tread a fine line between making data available for use in innovative applications (including AI and smart cities) and protecting the privacy of data subjects. This is where data and AI policies are likely to diverge between the EU on one side and the USA and China on the other. We are back to that question of values.

2 A Framework for Understanding Digital Disruption

A well-thought-through and holistic policy framework which reflects the changing digital landscape can deliver the best outcomes for society and the economy. How policy makers look at the digital landscape in developing such a holistic policy is, however, vital. Looking at each element of the digital ecosystem in isolation would be a major mistake, given that value creation will come through the convergence of the different elements of these disruptive technologies. This framework needs to look at both the supply and demand sides of the digital ecosystem more than ever before, for one will not flourish without the other. If regulatory policies and institutions fail to acknowledge these separate, yet converging, digital technologies and adapt, markets will become distorted in ways that harm competition, slow innovation and ultimately deprive consumers of the benefits of technological progress, whilst doing irreparable harm to employment, tax receipts and the general well-being of citizens and residents. Whilst existing policies will need to be adapted, new policies will also be needed. Some of these policies go beyond policy making in the traditional technology areas. A fundamental re-examination of the wider digital ecosystem is required, looking at a whole of government approach including education policies, government investment and innovation policies, social protection systems, frictionless trade, equality, privacy, anti-trust policies, as well as trust building mechanisms.

2.1 The Digital Ecosystem

Not only are policy makers being called upon to update and strengthen policies to protect the privacy and property rights of businesses and consumers, but they will also need to regulate entirely new forms of activity in the public sphere. Autonomous vehicles are the most obvious example of this challenge. Even as carmakers and technology companies edge closer to road-ready autonomous cars, regulators have not created clear rules that would allow widespread use of these vehicles. Figure 2.1 illustrates the areas that require focus.

Fig. 2.1 A holistic approach to policy development. The figure maps the policy enablers for digital evolution across supply conditions (access infrastructure—communications sophistication, coverage, security and spectrum availability; transaction infrastructure—access to financial institutions, electronic payment options and digital ID infrastructure; fulfilment infrastructure—quality of transportation infrastructure, logistics performance and smart cities), demand conditions (digital uptake—device prevalence and density, internet and mobile connections and digital consumption; digital payment uptake—financial inclusion and use of digital money; consumer capacity to consume—ability and willingness to spend and the digital divide), the institutional environment (institutions and the digital ecosystem—government uptake of ICT and digital technology, cyber security and privacy laws; institutions and the business environment—legal environment, including dispute resolution, IP protection and ease of doing business; institutional effectiveness and trust—transparency, rule of law, competence of regulatory authorities and taxation), and culture and change across inputs (financing options and opportunities, start-up capacity, talent pool and VC funding/exit opportunities), process (sophistication of business practices, R&D, tax incentives and educational institutions) and output (depth of mobile engagement, reach of innovation, use of social networks and digital entertainment)

Government policies, regulations and standards permeate every aspect of the economy and society. The use of the Internet of Things (IOT) and Artificial Intelligence (AI) within industries falls under some form of government regulation. For example, governments set rules for aircraft and railway equipment maintenance that may need to be modified with the advent of AI driven maintenance; aviation policies and regulation will need to be adapted with the advent of drones; and healthcare policies and regulations will need to be modified with the advances in nano-technologies, IOT and AI.

For the wider digital eco-system to deliver its maximum economic impact, certain conditions need to be in place and several obstacles need to be overcome. Some of these issues are technical; some are structural and behavioural: consumers, for example, need to trust IOT and AI based systems, and companies need to embrace the data driven approaches to decision making that IOT and AI enable. In addition, a number of regulatory issues need to be resolved, such as determining how autonomous vehicles can be introduced to public roadways and how they will be regulated, liabilities apportioned and insured. With an interconnected nation, privacy and security become of paramount concern. Governments will be required to address security risks, convene and fund multi-stakeholder centres to set standards and share information, model good security practices and craft thoughtful rules to encourage security management and punish bad actors.

At the time of writing, as illustrated in Table 2.1, most national digital policies simply do not go far enough. They focus primarily on broadband infrastructure, or pay some lip service to education or enabling e-transactions. Whilst there will be a number of reasons for this, and there may well be other policies in each of the countries that try to deal with other aspects of digital disruption, I have not seen many government policies that attempt to examine and deal in a holistic manner with the whole digital ecosystem (the USA and the UK have made progress, and the EU is attempting to do this as I write this book). I suspect this is because it has simply been too complicated to understand and to instil appropriate policies.

Table 2.1 Snapshot of digital plans

Australia: National Digital Economy Strategy—lists eight goals for Australia to become a leading digital society by 2020. Digital First and the Australian Public Sector ICT Strategy 2013.
Bahrain: Digital Strategy (2022)—focuses on eight pillars.
Bangladesh: Digital Bangladesh (2015–2020)—mainly focuses on ICT related industries and services; Digital Bangladesh is incorporated under the plan.
China: Next Generation AI Development Plan and Three Year Action Plan, January 2018—sets a goal for the country to become the world's primary innovation centre by 2030, with output of AI industries passing RMB 1 trillion. A number of significant policy initiatives for rolling out 5G across the country.
Egypt: Egypt ICT 2020 Strategy—focuses on three main pillars: the transformation of Egypt into a digital society, the development of the ICT industry and the establishment of Egypt as a global digital hub. National Cyber Security Strategy.
India: Digital India Program of the Government 2015—focuses on empowering society and moving towards a knowledge economy. IOT Policy Document 2018—focuses on standards and governance.
Indonesia: MP3EI (Economic Master Plan 2011–2025). Indonesia Broadband Plan 2014–2019.
Japan: Declaration to be the World's Most Advanced IT Nation, June 2013. Japan Revitalisation Strategy, June 2013—draws a roadmap defining objectives for R&D related to AI technologies and their commercialisation.
Jordan: REACH2025: Jordan's Digital Economy Action Plan—seeks to streamline the digital transformation across the entire Jordanian economy.
Qatar: e-Government 2020—focuses on three main pillars: better serve individuals and businesses, create efficiency in government administration and develop a more open government with enhanced participation of citizens and residents.
Oman: Digital Oman Strategy (eOman), published in March 2013—focuses on six main pillars: society and human capital development; enhanced e-government and e-services; ICT industry development; governance, standards and regulations; national infrastructure development; promotion and awareness.
Saudi Arabia: National Transformation Program 2020—lists various digital initiatives, such as 'improve the efficiency and effectiveness of the healthcare sector using information technology and digital transformation' and 'establish emerging technology companies with added value to contribute to the increase of local content'.
Singapore: Smart Nation Vision and Infocomm Media 2025—initiatives include a smart nation operating system, an Internet of Things scheme targeted at homes and pilot trials at a designated residential-business estate. Infocomm Media 2025 sets out to create a globally competitive infocomm media ecosystem that enables and complements Singapore's Smart Nation vision.
South Korea: AI R&D Strategy—aims to train 5000 AI personnel over the following 5 years with a fund of US$2 billion, to establish six AI graduate schools from 2019 to 2022 and to nurture 1400 students by strengthening support for AI research at existing university research centres. On February 16, 2017, the Ministry of Science, ICT and Future Planning released its regulatory reform plan in the areas of artificial intelligence, Virtual Reality and FinTech. The Regulatory Reform Plan—raises questions about security, IPR, legal liability, ethics and trust in AI, as well as regulation of virtual currency and foreign exchange remittance.
Thailand: Thailand Information and Communication Technology Policy Framework (2011–2020). ICT2020 National Digital Economy Master Plan—beginning with building a nationwide advanced broadband network, it focuses on promoting a digitally enabled economy and society, creating public e-services, developing a digitally skilled consumer base, and boosting public confidence in the use of digital technology.
UAE: Smart Dubai and Abu Dhabi—Smart Dubai aims to introduce strategic initiatives and develop partnerships to contribute to its Smart Economy, Smart Living, Smart Governance, Smart Environment, Smart People and Smart Mobility dimensions. Similarly, the Abu Dhabi e-government programmes and digital transformation initiatives discuss key digital transformation plans and projects.
UK: Robotics and Autonomous Systems 2020—a national strategy to capture value, published in July 2014. UK Digital Strategy 2017 policy paper, published in March 2017—looks at connectivity (including releasing public spectrum for connectivity), digital skills, training and inclusion, key digital sectors (including £100 million of funding to support R&D and real-world demonstrations of connected and autonomous vehicles), entrepreneurship, cyber security, digital government and data sharing, APIs and protecting the rights of data subjects; the policy paper specifically states a focus on innovation-friendly regulation. The Digital Economy Act 2017—addresses policy issues related to electronic communications infrastructure and services, data sharing between government departments and updating universal service obligations to a minimum of 10 Mbps. 'Growing AI in the UK' report, published in October 2017—recommended establishing the Alan Turing Institute as a national institute for AI and data science to work together with other public research entities or councils to coordinate demand for computing capacity for AI research and to negotiate on behalf of the UK research community. Industrial Strategy: building a Britain fit for the future, white paper published in November 2017.
USA: The Federal Big Data Research and Development Strategic Plan, published by the Subcommittee on Networking and Information Technology Research and Development (NITRD) in May 2016. Federal Automated Vehicles Policy, published by the Department of Transportation in September 2016. Preparing for the Future of Artificial Intelligence, published by the National Science and Technology Council Committee on Technology in October 2016—makes 23 recommendations. Artificial Intelligence, Automation and the Economy, published by the Executive Office of the President in December 2016—makes a number of recommendations: invest in and develop AI for its many benefits; educate and train Americans for the jobs of the future; aid workers in the transition and empower workers to ensure broadly shared growth. The National AI R&D Strategic Plan, published by the National Science and Technology Council in June 2019—recommends eight strategic initiatives: make long-term investments in AI research; develop effective methods for human-AI collaboration; understand and address the ethical, legal and societal implications of AI; ensure the safety and security of AI systems; develop shared public datasets and environments for AI training and testing; measure and evaluate AI technologies through standards and benchmarks; better understand the national AI R&D workforce needs; and expand public-private partnerships to accelerate advances in AI. Ten AI principles, published by the White House's Office of Science and Technology Policy (OSTP) in January 2020.

Nevertheless, national digital policies can become a key reference point for regulators, for investors and for service providers, and can determine the nature and scope of investment that may be apportioned to these technology sectors. National digital plans can provide valuable insight into governments' digital priorities, degree of commitment and capacity to deliver on the aspirations of the country and its people. As the table shows, most of the current policies generally focus on the promotion and growth of the ICT sector, strengthening trust within society for digital transactions, enabling e-government services, driving further e-inclusion and improving digital skills and education. However, national digital strategies need to be much broader in their range, with a need to focus on enabling the positive economic and social conditions necessary for the development and growth of digital technologies. As such, they must be cross-sectoral by nature and designed to boost countries' competitiveness, economic growth and social well-being.1 The USA and the UK are two (with South Korea not far off) that are probably ahead of the rest. The EU has started to catch up, although the focus of the EU appears to be largely regulatory rather than on using digital technologies to provide economic growth.

2.2 A Framework for Understanding Data and Technology Convergence

In order to develop a more holistic understanding of the key technology innovations that are driving digital disruption and the wider digital ecosystem, we need a reference framework to understand these digital developments and fathom how they may converge. Such a reference framework is needed for policy makers, regulators and business leaders. I propose a reference framework, detailed in Fig. 2.2, in which I concentrate on how data moves along the value chain, from data capture to the applications that make use of data. In the end, digital disruption is, after all, about the use of data to create value. I believe the framework allows policy makers and business leaders to think more holistically about how the various digital disruptive technologies fit together and the impact each individual technology layer has on the wider ecosystem. The fundamental underlying layer is the data connectivity layer. Here, technology improvements in telecommunications and connectivity for IOT enable new ways to connect things and people. This layer is essentially about connectivity and nationwide digital infrastructure—and ultimately about investment incentives. The data capture and distribution layer is all about innovations in device technologies that enable new ways to capture data, in all its forms. IOT enabled developments such as self-driving cars have captured the popular imagination, and with fitness bands to monitor physical activity and Internet-connected devices to manage Heating, Ventilation and Air conditioning (HVAC) systems, connected appliances, entertainment and security systems, consumers are getting a glimpse of what an IOT-enabled future may look like. This layer is all about IOT and the interoperability and security of this extremely diverse set of devices.

1 OECD. (2015). OECD digital economy outlook 2015. OECD.

Fig. 2.2 Framework for understanding digital disruption. The figure depicts a five-layer pyramid: at the base, data connectivity—telecommunications and the Internet of Things (policies, regulation and competition, scarce resources, infrastructure investment incentives); above it, data capture and distribution (Internet of Things devices, consumer IOT, industrial IOT, smart cities); then data integrity, control and tokenization (security, encryption and cryptography, digital signatures, digital IDs, Blockchains, smart contracts, tokens, ICOs and currencies); then data processing and artificial intelligence (history of artificial intelligence, AI's day of reckoning, machine learning, deep learning, ethical issues); and at the peak, disruptive applications (digital assistants, digital twins, virtual reality, augmented reality, platforms, autonomous vehicles, drones). Data protection requirements and investment requirements cut across the layers

The data integrity, control and tokenisation layer is about how data moves around securely, how it can be used in multiple use cases and how it can potentially be monetised. A common understanding of control rights to data produced by various connected devices will be required to unlock the full potential of IOT. Who has what rights to the data from a sensor manufactured by one company, forming part of a solution deployed by another, in a setting owned by a third party, will have to be clarified. For example, who has the rights to data generated by a medical device implanted in a patient's body? The patient? The manufacturer of the device? The health-care provider that implanted the device and is managing the patient's care? This layer is essentially about trust.

The data processing and artificial intelligence layer is all about how data can be used to deliver new insights, improve predictions/forecasts and make automated decisions based on a level of intelligence that may be beyond the capabilities (or speed) of humans. This layer raises serious questions about ethics. It is all about the smart use of data. But it is more than just this: it is about the consequences for employment, equality, ethics and privacy, to name a few of the areas that will be affected by artificial intelligence (and the applications it enables).

The final layer—the peak of the pyramid—looks at the applications and use cases that utilise these new digital technologies. These include digital assistants, digital twins, virtual reality, augmented reality, platforms, autonomous vehicles and drones, for instance. This layer is all about new applications and new business models created through these new digital technologies. It requires appropriate policy and regulatory oversight to enable these innovative use cases to be developed, but to do so without damaging the competitive process or the long-term interests of consumers. This layer is about creating a fair competitive process, robust consumer protection and safety.

The above mainly concentrates on the supply side of the equation. Surrounding all these layers, one also needs to look at the demand side and the barriers to access and effective use of digital technologies. These demand side factors typically include the lack of access to high-quality and affordable infrastructure; a lack of trust in digital technologies and activities; a shortage of the skills needed to succeed in the digital economy; the ability to participate in trade, including digital ID; and the general levels of trust needed for consumers to provide ecosystem players with access to their personal data. I have attempted to cover these issues within this book, as far as is possible without creating an encyclopaedia that becomes unwieldy to the reader.

The rest of the book focuses on each of these layers in turn, detailing recent developments, their impact on economies and society and the policy and regulatory implications they raise for governments and society. Beyond these layers, there are also other technology developments that are not necessarily driven by the data revolution, but which may well shape the future and are therefore also briefly discussed in a separate chapter. I have also included a chapter towards the end that details how organisations, specifically existing organisations, will need to adapt and reformulate themselves in order to ride this new digital tsunami and become ambidextrous. It would be


impossible to provide a 'how-to' guide given the immense diversity of organisational types, their management approaches, or indeed the likely impact digital disruption may have upon them. However, there are generic management and organisational approaches to dealing with the uncertainty and change; it is these that are detailed in the chapter. As a reader, you do not necessarily need to read the book from the beginning, chapter by chapter. Each chapter has been written to be read standalone, so you could go straight to the elements of the digital disruption framework you see as most relevant to you. However, as I have cautioned, it is the holistic understanding of how these separate technologies converge which is the really fascinating story.

Part I Data Connectivity


3 Data Connectivity and Digital Infrastructure

The availability of ICT infrastructure that provides reliable and affordable connectivity is the first step on every country's road to digital transformation. Connectivity is the bedrock for the development of a digital ecosystem. It is the basic plumbing that allows the life-saving water to be delivered. Initially, connectivity was about connecting phones; then it was about connecting people to the Internet; now it is about connecting a whole host of devices in various consumer, industrial and government contexts—the so-called Internet of Things (IOT). This connectivity is a combination of what some label the first mile, middle mile, last mile and invisible hand. The first mile is the international connectivity of data; the middle mile is the backbone and metro networks, along with the transmission and core network, that provide nationwide connectivity; and the last mile is the final connection to users (fibre, wireless or satellite). The invisible hand is the policies and the legal and regulatory frameworks that enable investment in and maintenance of infrastructure, covering matters such as spectrum, fees and taxation. Policy makers need to stimulate the deployment and use of these three miles using the invisible hand, such that they are capable of delivering the new digital applications and supporting the IP traffic they will generate.1 This chapter details the key developments in communication technologies and explores the policy and strategic requirements for building a connectivity layer that underpins the digital revolution. It looks at the key government and regulatory policy choices that will determine the investment strategies and priorities of telecommunications operators and their decisions in terms of what services are provided, how these services are rolled out and how competition might evolve. The policy priorities are no longer the same as those enacted in 1998 (largely across the EU), when comprehensive telecommunications regulatory frameworks

1 WIK-Consult, Ecorys and VVA Consulting. (2016). Support for the preparation of the impact assessment accompanying the review of the regulatory framework for e-communications. Study for the European Commission.


were adopted. At the time, the main priority was to bring competition to existing fixed (copper) infrastructure and increase market efficiency. Today, the policy priorities must seek to stimulate the penetration and take-up of new broadband infrastructure, ensuring that citizens and firms use the new digital applications delivered over such infrastructure and reap their full economic and social benefits. Governments need to consider policy choices which incentivise private operators to invest in new and/or upgraded networks to deliver ultra-high speed, reliable broadband connectivity. They need to support public investment in an efficient manner, without crowding out private investment, for those areas (mainly rural) where there is no business case for investment; and finally, they must ensure that citizens and firms can enjoy always-on access to those infrastructures in the most efficient and least costly manner.

3.1 Unprecedented Growth: Limited Opportunities

Fixed and mobile networks will need to scale to match the demands of the billions of devices or 'things' that are expected, whilst guaranteeing the level of quality required to enable many IOT applications. There is already an explosion in connectivity and data: a 400-million-fold increase in total mobile data traffic in the past 15 years2 and forecast annual data flows of 44 zettabytes in 2020 (equivalent to 44 trillion gigabytes).3 By 2020, projections suggest that there will be around two zettabytes of data in the Middle East alone, greater than the estimated number of grains of sand covering the entire Arabian desert.4 The growth in data consumption and the surge in the number of connected devices are likely to require future networks to have a thousand times more capacity than is available today. At the same time, a growing number of real-time applications will demand that end-to-end network latency is reduced to milliseconds, to enable a seamless and lag-free experience in watching videos or even remotely controlling robots or vehicles. However, colliding pressures on revenue and costs, overlaid with operators' failure to monetise the digital opportunity, have resulted in profit and value pools shifting away. Operators' share of the industry profit pool declined from 58% in 2010 to 47% in 2015 and is forecast to fall further.5 Establishing the infrastructure that can provide ubiquitous nation-wide connectivity, with ever-increasing levels of quality, is a costly and slow process. Telecommunications operators have to invest billions of dollars in infrastructure, in an ever-decreasing period. The technology refresh cycle, especially for mobile

2 Visual Networking Index, Cisco, 3 February 2016.
3 Visual Networking Index, Cisco, 3 February 2016—report reference to IDC data.
4 Gantz, J., & Reinsel, D. (2013). The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the Far East. IDC country brief.
5 World Economic Forum. (2017). Digital transformation initiative telecommunications industry.


operators, is less than 10 years. That means each mobile operator needs to spend billions roughly every 10 years, yet consumers are starting to view connectivity as a commodity, unwilling to pay the premiums that operators would hope to charge for the continual investment in their networks (spectrum, as well as equipment). The impending investment required to deliver 5G is likely to exceed that spent on 4G. Most operators saw revenues flatline after their 4G investments, and the prospects for revenue uplift after 5G investment are likely to follow a similar pattern. Connectivity in many countries is also held back by poor enabling infrastructure, such as the lack of grid electricity in rural areas,6 limited or difficult access to duct infrastructure, or insufficient or costly access to the spectrum necessary for the delivery of mobile services. Transformational change in networks and business models will have to be accompanied by greater flexibility in regulation and some thought given to the policy and regulatory environment required to incentivise both the supply and demand of digital services. While society has embraced many facets of the digital society, government must do more to ensure that the stimulus required to kick-start the digital transformation is injected into society. This means that governments must be full advocates and users of digital products and services and should push e-government services as a primary feature of a digital society. The significance of broadband connectivity to the advancement of a digital society is demonstrated by the higher level of digital offerings and customer adoption in countries with an established broadband infrastructure. South Korea has one of the most developed broadband infrastructures globally in terms of speed and coverage and is also one of the most advanced digital societies. National broadband plans are an important tool for creating a policy environment conducive to promoting digital infrastructure development and deployment. In September 2016, the United Nations' Broadband Commission for Sustainable Development reported that over 80% of countries have established or are planning to introduce national broadband plans or digital strategies. In September 2016, the European Commission (EC) proposed new targets for a European Gigabit Society by 2025. Under these proposals, all schools, transport hubs, main providers of public services and digitally intensive enterprises should have access to Internet connections with download/upload speeds of 1 Gigabit of data per second (Gbps). In addition, all European households should have access to networks offering a download speed of at least 100 Megabits per second (Mbps), and all urban areas as well as major roads and railways should have uninterrupted 5G wireless broadband coverage. The USA aims for 100 Mbps to 100 million homes by 2020, while Canada's plan is focused on boosting coverage in underserved areas by investing CDN$500 million over 5 years. In Asia, South Korea's goal was 1 Gbps to 90% of urban areas

6

E.g. It was reported in 2013 that Myanmar’s national grid reached only 25–30% of the population, and the per capita power consumption was the lowest in the region at 160 KWh per year.


(85 cities) and 100 Mbps to 100% of households (including rural areas with 50 households) by 2017, while by 2020 Australia aims for speeds of 50 Mbps to 90% of households and businesses and at least 25 Mbps to the whole population. As well as setting targets, many national broadband plans also include plans for public investment in infrastructure projects. In certain cases, public support and investment may be needed to ensure rural rollout or high-speed backhaul infrastructure where these cannot be addressed adequately by private initiatives. Some private sector firms are also seeking innovative ways to fill such gaps: Google's Project Loon is a network of balloons travelling on the edge of space, Facebook's Connectivity Labs is looking at deploying satellites and drones, and Microsoft has been involved in research into the dynamic use of spectrum in television white spaces to provide Internet access in underserved areas. Many countries have realised the need for a national broadband plan, with an understanding that a path towards economic development, as well as distribution of the benefits to wider society, requires all citizens and businesses to have access to high-speed broadband connectivity through both fixed and mobile channels.

However, caution is needed against the development of national broadband plans that simply aim to replicate other markets, or against the whole process becoming a political one. Many countries have introduced such plans trying to replicate the 'Singapore model' as an exemplar, only to find it failed miserably in their own countries. These countries sometimes forget to pay attention to the wider factors that drove success in Singapore, namely the degree to which the Singaporean government has control over economic and sector participation, as well as the geographic advantages a nation the size of a city has. Sometimes the process is politicised, as was the case in Australia, where government and citizens must in the end pay the price, literally. Notwithstanding the concerns with such plans (primarily with implementation approaches), and given the impact of digital disruption on economies and societies, a similar policy is required for the wider digital ecosystem. Such policy considerations must be cognisant of the needs of the telecommunications industry as a fundamental layer in the wider ecosystem, with policies that free up more spectrum and make it available at a more affordable price, together with federal and regional policies that streamline access to sites for the deployment of technology (e.g. site approvals for installing mobile towers).

3.2 Fit for Purpose Regulatory Policy

Digital development has policy and regulatory implications across a number of areas, including licensing, spectrum management, standards, competition, security and privacy. Good, predictable regulation is a critical enabler of efficient markets and consumer safeguards. Telecommunications regulation usually seeks to protect broad national interests through three principal means:


• Market efficiency: regulation to ensure the sustainability of industry players and to promote market competition and efficiency;
• Scarcity management: regulation to manage access to, and the utilisation of, sometimes-scarce national resources, such as spectrum;
• Safeguarding customer welfare: regulation that seeks to protect customer rights and interests and to maximise consumer surplus.

These three principles remain as valid today as they have ever been. However, regulation can no longer be a static affair. Twenty years ago, regulators set out regulations which sought to define the market, and things moved gradually. That old model needs to be reconsidered in an era of digital disruption. Innovation drives regulation, not vice versa: regulation tends to respond to industry evolution, because policy makers cannot anticipate how and when innovations may change an industry through the introduction of new business models or new services that become substitutes for existing services. Regulators must intervene when non-regulated activities start negatively affecting customer welfare and distorting the competitive process. When markets and the activities within them change in a manner that is counterproductive, regulators need to redefine the market and/or adopt specific regulations to protect consumer welfare. Non-regulated activities which create new risks in regulated markets also warrant the attention of regulators. In innovation-driven environments, regulators should be prepared to act when non-regulated services threaten the sustainability of regulated entities through disproportionate competition, or in order to protect consumers.

Ensuring a twenty-first century approach to the connectivity sector will involve removing regulation where it is no longer necessary, or extending the scope of regulation to new service providers. It may also entail creating converged regulators and/or adjusting regulatory powers so they can oversee all elements of bundled services and ensure consistent consumer protection. Maximising the benefits of digital disruption will require more coordinated regulation across all sectors, with telecommunications regulators working closely with their counterparts in data protection and competition authorities, but also with other sectoral regulators in health, agriculture, aviation, highways and many other areas that will equally need to examine and adapt their regulations, given the impact digital technologies will have on their markets.

The rest of this chapter examines the evolving nature of telecommunications business models and the policy and regulatory changes that the digital revolution requires.

3.3 Promoting Investment

The communications sector is dynamic and capital intensive, with high levels of sunk costs and long payback periods. Ongoing investment to meet the ever-changing demands of consumers is constant and critical to ensuring that industry participants are able to deliver high-quality and high-value communications services to consumers, businesses and governments, and to deliver reliable IOT connectivity.

The nature of the oligopoly mobile market, where services are becoming commoditised, means consumers are attracted through competitive pricing and 'below the line' offers. In an attempt to stop what can become a downward spiral in mobile prices, many mobile operators are attempting to sell multiple products (fixed broadband and TV) in order to maximise their share of the consumer's telecommunications wallet. Selling such bundled offerings at a modest discount typically drives lower churn and increases ARPA (revenue per account, rather than per product). However, doing so requires further investment in developing new fixed product sets. In addition, barriers to exit in the telecommunications industry are extremely high, so the competitive reaction from operators has typically been to adopt a similar strategy, adding further pressure to pricing and margins.
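The economics behind this bundling logic can be made concrete with a stylised example. The figures below (standalone prices, the 10% discount and the churn rates) are illustrative assumptions, not data from the text; the point is simply that a modest bundle discount can pay for itself through lower churn and a higher revenue per account.

```python
# Stylised bundle economics: all numbers are illustrative assumptions.
# Compares a mobile-only account with a discounted triple-play bundle
# on ARPA and expected customer lifetime value (LTV).

def lifetime_value(monthly_revenue: float, monthly_churn: float) -> float:
    """Expected revenue over a customer's lifetime.
    With a constant churn probability p per month, expected tenure = 1/p."""
    return monthly_revenue / monthly_churn

# Assumed standalone prices (EUR per month) and a 10% bundle discount.
mobile, broadband, tv = 20.0, 30.0, 15.0
bundle_discount = 0.10

arpa_solo = mobile                                    # one product per account
arpa_bundle = (mobile + broadband + tv) * (1 - bundle_discount)

# Assumed churn: bundled customers churn at roughly half the rate.
churn_solo, churn_bundle = 0.020, 0.010               # per month

print(f"ARPA solo:   EUR {arpa_solo:6.2f}  "
      f"LTV: EUR {lifetime_value(arpa_solo, churn_solo):8.0f}")
print(f"ARPA bundle: EUR {arpa_bundle:6.2f}  "
      f"LTV: EUR {lifetime_value(arpa_bundle, churn_bundle):8.0f}")
```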

The scale of investment required to build the digital infrastructure, in particular the connectivity required to connect billions of devices with the low latency demanded by new applications, will require significant investment by the telecommunications industry, including the deployment of 5G technologies. The value realised by telecommunications operators is, however, not believed to be proportionate. Connectivity is estimated to form less than 10% of the value of IOT technology spending, with 20–40% spent on integration services, 20–35% on software, 5–20% on software infrastructure and 20–30% on hardware.7

7 McKinsey. (2015). The internet of things: Mapping the value beyond the hype.

Much of the value derived from the digital ecosystem will reside outside the traditional telecommunications domain, with significant value captured by currently non-regulated entities.

This is not to say that telecommunications operators will not see benefits in deploying 5G. For a start, it delivers efficiency improvements of up to seven times those of 4G. Spectral efficiency and new technologies such as software-defined networking (SDN) and network function virtualisation (NFV) may help operators deliver a gigabyte of data at a much lower unit cost, albeit with significant upfront investment.

Policy makers and regulators need to recognise and harness the relationship between investment, competition and innovation. This is a particularly complex task in an increasingly converging market. Competition between communication networks and service providers generally leads to greater consumer choice, better quality communication services and lower prices. However, some argue that Over-The-Top ('OTT') provision of voice and video services discourages infrastructure operators from investing in further network expansion. Others believe that it spurs innovation and competition in communication markets and generates traffic and demand for broadband services, and hence collectively encourages more investment. The jury is still out on which one it is; however, just looking at the valuations of telecommunications operators and OTT players, it is clear that value is flowing to the OTT providers at the expense of telecommunications operators.


With these complexities at work in converged markets, traditional approaches for assessing and regulating telecommunication and media markets may not be optimal.

Telecommunications operators fear that the path to the digital world will be similar to that seen when OTT providers entered the scene. Then (and still today), telecommunications operators faced discriminatory regulatory treatment vis-à-vis OTT providers. Not only did the telecommunications industry have to withstand the worst of the investment risk in sunk assets, but it also continues to be heavily regulated, whilst the OTTs have enjoyed a more laissez-faire approach to regulation.

The regulatory framework has a key role to play in encouraging and protecting these investments. Investors need to know that the regulatory environment in which they operate will remain predictable and subject to checks and balances that limit the ability of regulators to change regulatory settings and industry structures without reasonable cause. At the same time, investors need comfort that regulators will protect these investments by introducing regulations when needed to create a level playing field, especially when new unregulated entrants come into the market.

I am not calling for more regulation—far from it. A key requirement for promoting investment is a regulatory framework that is proportionate and carefully targeted to achieve legitimate policy objectives, such as addressing genuine market failures. Market-based mechanisms are generally preferred over regulation to the extent possible, since they are best at facilitating quality economic decision making and stimulating competition, both of which are needed to promote investment and guarantee high-quality communications services to consumers. Regulation often imposes costs and creates distortions in the market, which can hinder the development of the communications sector. This is not to say that abuse of a dominant position in any relevant market should be tolerated—it should not, and must be addressed. What I am calling for is a level playing field between the traditional investment-heavy telecommunications operators and the investment-light OTT players, or indeed other digital players now entering the market. Such a level playing field will mean lifting many ex-ante regulations from telecommunications operators and allowing them to compete with new digital players on a level footing.

The value of a proportionate, light-touch regulatory framework that prioritises investment in delivering pro-competitive outcomes for consumers is demonstrated by comparing the development of the mobile and fixed communications markets in the USA and the EU (especially in the early days of 4G and fibre rollout). Regulators in the USA have focused on infrastructure-based competition and observed regulatory forbearance, while the EU has emphasised service-based competition through more extensive and intrusive wholesale access obligations. The observable outcomes for consumers have been striking and very different. The USA approach has delivered significantly higher levels of investment, more innovation and better quality services compared to the EU approach. While prices of


services are, on average, modestly more expensive in the USA, consumption of communications services (measured by bandwidth usage) is higher in the USA than in the EU, ultimately highlighting that end-users in the USA are deriving greater value from communications services than their EU counterparts.8 For example:

• Fixed telecommunications: The United States has outperformed the EU in the fixed sector due to policy and regulatory considerations that focus on infrastructure competition: (i) more flexible regulatory settings have contributed to the USA significantly outperforming the EU in next-generation access (NGA) network coverage, particularly fibre-to-the-premises (FTTP). Only 54% of EU households were connected to NGA networks in 2012, compared to 82% in the United States; FTTP coverage was almost double in the USA (23%, compared to 12% in the EU)9; (ii) several European fixed telecom incumbents have refrained from investing aggressively in next-generation access networks due to regulation affecting the business case for fibre investments. In fact, in 2011 and 2012 more miles of fibre were installed in the USA than Europe-wide. The 'ladder of investment' unbundling approach taken by the EU does not suit the challenge of promoting investment in fibre-based infrastructure for NGA networks10; and (iii) the EU approach of excessive reliance on wholesale obligations on the incumbent has contributed to keeping prices so low in the EU that a creeping investment malaise has resulted. Per capita investment in telecommunications networks in the USA was more than 50% higher than in the EU (US$197 versus US$129 in 2009).11
• Mobile telecommunications: The USA has outperformed the EU in the mobile sector, also due to policy and regulatory considerations that incentivise investment: (i) mobile wireless markets in the EU are today characterised by lower prices, lower intensity of use, lower revenues, lower quality (at least along some significant dimensions), less product differentiation and consumer choice, a slower pace of innovation and lower rates of capital investment than the mobile wireless market in the United States12; and (ii) the USA had significantly higher LTE coverage than the EU—86% in 2012, compared to 27% in the EU.13

8 Yoo, C. (2014). U.S. vs. European broadband deployment: What do the data say? University of Pennsylvania Law School.
9 Yoo, C. (2014). U.S. vs. European broadband deployment: What do the data say? University of Pennsylvania Law School.
10 Copenhagen Economics. (2013). How Europe can catch up with the US: A contrast of two contrary broadband models.
11 Copenhagen Economics. (2013). How Europe can catch up with the US: A contrast of two contrary broadband models.
12 GSMA. (2013). Mobile wireless performance in the EU & the US.
13 Yoo, C. (2014). U.S. vs. European broadband deployment: What do the data say? University of Pennsylvania Law School.


Ultimately, the differences between the performance of the fixed and mobile communications sectors in the USA and the EU demonstrate the role that proportionate, light-touch, pro-investment regulation can play in facilitating high-quality services for end-users and maximising the value generated by the communications sector for a country's broader economic development. Nevertheless, the task of balancing the need for proportionate regulation with investment-friendly policies is not an easy one. Some genuine market failures (such as fibre access) may need regulation, but those same regulations may inhibit investment by the private sector. Certain regulatory solutions may work well in urban areas but fail miserably in rural areas. The ultimate regulatory choice must align with broader policy objectives. Does government want nationwide, uniformly priced connectivity, even if that means overall lower speeds? Or does government want investment to go to those areas that are economically feasible and may spur further economic activity, even if that means other areas get left behind?

These complex issues primarily revolve around fibre connectivity and how policy makers can incentivise further fibre rollout. Many are questioning whether facilities-based competition is appropriate when it comes to highly capital-intensive fibre-to-the-home investments. At the same time, with significant investment requirements, many mobile operators, either voluntarily or through regulation, are doing more network sharing. There may well be a case for a 'single publicly funded network' in rural areas, whereas multiple operators could roll out or share fibre as deemed necessary in more built-up areas.

With the rather haphazard manner in which policies have evolved, many telecommunications operators have taken the decision that it is better to separate their businesses into those that will continue to be heavily regulated and those where regulators are likely to loosen the regulatory noose. Many telecommunications operators globally have sold off their passive infrastructure (which attracts the most intrusive regulations). In the mobile space, this includes towers. In the fixed space, many are considering sharing their passive infrastructure, such as ducts and fibre. This is also supported by valuation metrics, where tower companies are attracting higher valuation multiples than integrated operations. Investors believe an independent tower company, for instance, will generate higher tenancy rates for towers and thereby sweat those assets more than telecommunications operators are capable of doing. The sale and leaseback of towers can also inject much-needed cash into telecommunications businesses. At a broader level, this may well be a correct strategy, where a correctly regulated monopoly of passive infrastructure may encourage investment. Many operators within the EU, for instance, have hesitated to invest in fibre rollout given the uncertain regulatory environment they face. Investors are worried that, having invested significant sums, regulators may force sharing of such infrastructure at regulated prices. A fundamental question this raises for telecommunications operators is determining what business they are in and what their core competences really are, when they sell off their critical network assets, outsource the management of the network to vendors and outsource many other functions, such as their call centres, to others.


This hollowing out of the telecommunications operators may well reduce their ability to compete in the future, where their only real asset becomes the scarce spectrum they hold and little else.

3.4 Rethinking Regulation

Many policy makers and regulatory authorities around the world are struggling to confront the urgent need to reform policies and remake institutions in virtually every area of regulation. The same questions arise in each case: should regulators try to achieve a level playing field by applying the same rules to entrants that were traditionally only applied to incumbents? Or should equality be achieved by reducing regulation on incumbents? Given the realities of the new market, how can regulatory goals and objectives best be achieved? How can policies and institutions be future-proofed so that they are flexible enough to accommodate continuing change? And to what extent has dynamic competition in the digital ecosystem reduced the need for regulation in the first place?

The answer until recently has been a policy of relaxing regulation as applied to telecommunications operators, allowing them to compete with digital players. However, as nationalism seems to be taking hold globally, many are now looking at the prospect of going the other way and applying regulation to digital players that are, in many cases, based out of the USA.

Clearly, countries are at various stages along the path towards a digital society, driven by a range of factors. These factors include the pervasiveness of the underlying telecommunications infrastructure, the availability and adoption of digital products and services, the ability to interact with and pay for such goods and services electronically, as well as the suitability of the legal and regulatory environment within each country. Emerging digital societies (e.g. Myanmar, Palestine, Bangladesh, Iraq and Pakistan) are primarily focusing on extending mobile connectivity to unconnected citizens. Transition digital societies (e.g. India, Indonesia, Thailand and Tunisia) are interconnecting networks to enable broader use. Advanced digital societies (Australia, Japan, Singapore and the UK) are already laying the foundations for the next generation of digital infrastructure in the form of smart, interoperable technologies and IOT-enabled applications. The right approach to regulating telecommunications operators and other digital players very much depends on where you are along the journey.

The GSMA back in 2016 advocated a set of principles to guide efforts to reform what may be outdated regulatory regimes14; these are summarised in Table 3.1. I believe these principles are still relevant today for most countries, irrespective of where they are along the development path.

14 GSMA. (2016). A new regulatory framework for the digital ecosystem.

Table 3.1 Core principles in reforming regulations

Market mechanisms preferred; where regulation is called for, it must be cost effective: While markets are generally the most effective way to foster innovation, enhance prosperity and promote consumer welfare, they do not always deliver optimal outcomes at every moment in time. In cases of sustained monopoly power, externalities, public goods and asymmetric information, government intervention has the potential to increase overall welfare. If market conduct is harming consumer welfare and regulatory intervention would create a net benefit, then regulations should be designed to achieve the greatest possible benefit at the lowest possible cost. That is, they should be cost effective.

Functionality-based regulation: A direct corollary of the cost-effectiveness principle is that regulatory policy should be functionality based, rather than structure or technology based. By this, the GSMA means that regulatory policy should be designed to achieve the desired objective (e.g. protecting privacy, promoting universal adoption, providing incentives for investment and innovation) in the most efficient way, regardless of the technology, industry structure or legacy regulatory regime. This principle is sometimes called 'same service, same rules'.

Dynamic regulation: Information technology markets are characterised by dynamic competition, meaning that companies largely compete through innovation, rather than price. This competition leads to rapid changes in markets and technologies, so regulation must be flexible enough to accommodate these changes while creating the regulatory certainty and predictability that companies need to take risks ('dynamic regulation'). Regulations should be designed to be durable and consistent over time, in order to enhance the ability of market players to engage in long-term and risky investments.

The main challenge facing telecommunications regulation today is the entry of 'edge' providers (suppliers of applications, content and devices) into markets previously served by vertically integrated, infrastructure-based communications providers. Services provided by companies like Amazon, Facebook, Google, Microsoft and Netflix are successfully competing directly with services provided by traditional telecommunications companies like BT, Vodafone, AT&T, Comcast, Bharti Airtel and others like them. The first group of companies and the services they provide are typically regulated under horizontal antitrust and consumer protection regimes, while the second group of companies and their offerings are subject to industry-specific rules and institutions. Thus, telecommunications carriers (but not other voice and data communications providers) are still subject to rules designed for telephone companies; traditional audio and video distributors (but not OTT providers) are still subject to rules designed for 'broadcasters'; and mobile carriers and their services face many of the same rules as wireline telephone companies (and often even more that come attached to their spectrum licences), while other wireless ecosystem participants face much lower burdens. As was mentioned earlier, Professor Mariana Mazzucato makes a good point that many of these now successful digital companies are a product of government funding:


they are utilising a public good, personal data, for their business models, whilst evading regulatory scrutiny and evading taxes. A long hard look at these digital companies is required to evaluate the true economic benefit they are delivering, not to mention where that economic value actually flows: usually outside of the country.

Many countries are looking into the role OTT providers (such as WhatsApp and others) are playing in the telecommunications sector, especially given the increase in the uptake of Voice over IP (VoIP, digital voice) services, which would indicate that users consider it more and more a substitute for the narrowband voice services typically provided by traditional operators. In New Zealand, VoIP is viewed as fully equivalent to PSTN voice telephony (traditional analogue) and identical regulatory frameworks apply to both. India recently consulted on the matter and has proposed to license such providers. Within Europe, the EU Commission has proposed to divide Electronic Communications Services (ECS) into four main categories15:

(a) Internet Access Service (IAS), which covers the Internet connections offered by Internet service providers via fixed or non-fixed technologies;
(b) Number-based Interpersonal Communications Service (ICS), which covers the traditional telecommunications services;
(c) Number-independent Interpersonal Communications Service (ICS), which covers the new OTT communications services such as Skype, WhatsApp or Viber;
(d) Services consisting wholly or mainly of the conveyance of signals, such as transmission services used for Machine-to-Machine (M2M) communications and for broadcasting.

15 Under the current regulatory framework, the status of OTT is not totally clear and most of them are not covered by the electronic communications rules: see BEREC. (2016). Report on OTT services, BoR(16) 35.

Discriminatory regulatory treatment of traditional communications companies is not, however, limited to 'economic' regulation of prices and entry (i.e. what is generally thought of as 'public utility' regulation). Sector-specific regulation of communications providers, and the resulting disparity in treatment, extends across the entire scope of regulatory issues, including consumer protection, competition regulation, privacy/data protection, security/law enforcement and even taxation.

Whilst there may still be some discrimination applied between the traditional telecommunications operators and the new set of service providers (typically referred to as OTT players), licensing these operators can allow relevant regulatory constraints to be imposed, which is currently lacking in many jurisdictions. This would allow the regulatory authorities to enforce lawful intercept and other security measures, allow users access to emergency services, enforce privacy and data protection obligations, enforce other consumer protection measures, as well as provide the ability to tax such companies locally: things not really possible today.


Table 3.2 Substantive and procedural discrimination

Substantive discrimination: Substantive discrimination occurs when specific regulatory mandates are applied differently, such as when infrastructure-based communications providers face mandates for universal service that are not imposed on OTT competitors, or are subjected to different privacy and data protection regulations. Structure-based regulation not only imposes costs on consumers and the economy, it frustrates public interest objectives by creating a bias in favour of some types of regulatory interventions over others. While it is understandable that regulators tend to look first at markets they are familiar with (and where regulatory institutions are already in place) during the decision-making process (what some refer to as the spotlight syndrome), this approach can artificially limit regulatory options in ways that raise costs or even prevent regulators from achieving their regulatory goals altogether.

Procedural discrimination: Procedural discrimination occurs when different market sectors or technologies have different degrees of freedom to innovate or adjust their business models without having to seek approval or incur other kinds of regulatory risk. In markets where competitive success is determined by the speed of innovation, procedural discrimination may be even more distorting than substantive discrimination. A prime example is the need for telecommunications operators to obtain price approvals from their regulatory authorities, sometimes with detailed cost justification and sometimes with 30 days' advance notice, whereas unregulated OTT providers can change their prices or introduce new services at will. It is impossible for traditional regulated operators to compete in a market where their hands are tied in such a manner. The effect is sub-optimal competition, reduced innovation and possibly higher prices.

Note: The ability to regulate effectively is also affected by underlying statutes and, because the digital ecosystem operates on a global scale, by international law. For example, Microsoft is currently in litigation with the U.S. government over the government's ability to subpoena information stored in Microsoft servers located outside the U.S. See Shelkey, D., & Archer, C. (2015). Microsoft Ireland Case—Status and What's to Come. National Law Review.

The idea is to get away from regulation that is 'structure-based',16 i.e. determined by the nature of the company supplying the product or the technology being used. Structure-based regulation is inherently discriminatory. It harms both competition and consumers and makes it more difficult and costly to achieve legitimate public interest objectives. It distorts economic incentives, causing economic resources to flow away from their highest-valued uses; harms competition by raising barriers to entry for some types of companies but not others; slows innovation by limiting the ability to create new products and platforms; creates consumer confusion about what types of protection apply to which products; and raises the costs of regulation by distributing regulatory burdens in an inefficient way. Table 3.2 describes the two sources of discrimination faced by telecommunications operators.

16 GSMA. (2016). A new regulatory framework for the digital ecosystem.

With the explosion expected within the digital space, a plethora of new businesses emerging and an equal number of commercial models, it is appropriate


for policy makers and regulatory authorities to take a long hard look at their regulatory policies and examine whether they are fit for purpose in an interconnected, digital world driven by innovation. A regulatory regime that is applied non-discriminatorily must be given serious consideration. When Google bought Nest, a maker of digital thermostats, in early 2014 for US$3.2 billion, it was a clear indication that digital transformation is spreading across even the most traditional industrial segments, and that regulators will need to catch up.

3.5 Clear, Nuanced Network Neutrality Policies

The debate around network neutrality is one that appears to change with the political winds blowing at any point in time within a country. The USA originally sought to impose strict network neutrality regulations, premised on the fact that the main content and OTT providers were USA-originated. The EU policy makers originally took the opposite view, whereby the emphasis was on the ability of telecommunications operators to shape traffic as required, given that the economic benefits of network neutrality would accrue to the USA rather than the EU. However, positions have changed over time and continue to change.

There is no unified approach towards network neutrality, so policy frameworks vary from country to country. A number of countries have introduced regulation to ensure network neutrality and have prohibited blocking and unreasonable discrimination against services. Guidelines were adopted in Canada (2008) and South Korea (2011), and network neutrality rules were included in Brazil's 2014 Civil Rights Framework for the Internet. Most recently, rules have been put in place in the United States and the European Union. In the United States, the 2015 Open Internet Order established three 'bright line' rules prohibiting blocking, throttling and paid prioritisation. It also introduced transparency requirements. Although the rules were challenged by operators, the order was upheld by a 2016 court decision, then changed again when the Trump Administration exerted its influence over the FCC. EU rules came into force in 2016, requiring ISPs to treat all traffic equally and establishing a right for all end users to access and distribute lawful content, applications and services of their choice.

Each country must consider the policy objectives of network neutrality and the benefits each position is likely to provide, and to whom in the digital ecosystem value chain. Whilst there are clearly citizen rights to non-discriminatory access to basic Internet, what precisely is meant by basic Internet needs to be looked at thoroughly. In any case, network neutrality needs to be called into question when used in the context of IOT, where the basic premise of citizen rights to basic Internet appears irrelevant when most of the communication is between machines. Within the EU, the Body of European Regulators for Electronic Communications (BEREC) has clarified that machine-to-machine (M2M) connectivity services, which are mainly provided in relation to the IOT, are not necessarily in scope of the network neutrality


regulations. If the number of reachable endpoints of the Internet is limited due to the nature of the device (e.g. because a smart meter communicates only with dedicated host servers), the connectivity services are outside the scope of the regulation, unless they are used to circumvent it.17 This appears a sensible, flexible approach.

17 BEREC. (2016). Guidelines on the implementation by National Regulators of European Net Neutrality Rules.

Aside from IOT, the very foundation of network neutrality needs to be aligned with societal values (if these are even evident). In the EU, these are more socialist in nature than in other countries.

Another practice closely associated with network neutrality is zero-rating, where data consumption for access to some services is effectively zero-rated and not charged for, or excluded from the customer's data allowance. This practice is on the rise in developing and emerging economies. A well-known example is Free Basics, originally launched as Internet.org, a service offered by Facebook and six technology companies in partnership with local ISPs. With the stated aim of bringing affordable Internet access to users in less developed countries, it provides free access to a limited range of Internet content and services. Its effect on Internet openness is potentially both positive and negative. On the one hand, it increases access by giving people who may otherwise not be able to afford Internet access a way to connect. On the other hand, what they are connecting to is only part of the Internet, and that part is determined by the commercial operator; the rest is sealed off unless the user upgrades to a paid access plan. While Free Basics is currently operating in over 50 countries, it is noteworthy that India's regulator banned it in February 2016, ruling that differential pricing for data services was unacceptable.

Zero-rating must not be a substitute for well-defined rollout targets for telecommunications operators, well-functioning competitive markets and government investment in areas where there is simply no business case for rolling out networks. Zero-rating nevertheless becomes less of an issue with increased competition and higher or unlimited data allowances. Indeed, it can be a tool to increase competition in markets for data transit services, so prohibiting zero-rating may harm competition and reduce the effectiveness of peering.

3.6 Rethinking Competition Policy

Whereas previously telecommunications operators deemed to hold significant market power had burdensome regulations placed on them, today that market power is shifting to digital players. The likes of Google and Facebook today control access to significant value and, as a result, could harm competition and consumers in the longer term. There is a rise of digital businesses that offer 'free' services in exchange for customer data. Data is becoming a key resource in the era of digital disruption and could be considered the primary competitive asset in future competition enforcement matters.


Competition concerns centre around two central questions: first, whether a dominant operator is abusing its position and harming consumers or the competitive process; and second, whether two companies that propose to merge would have so much combined power as to create a dominant entity that could abuse such a position. In traditional telecommunications competition cases, merger concerns typically centred on the combined customer base, combined infrastructure or combined scarce resources such as spectrum, such that a dominant entity would be created that would impede the competitive process and ultimately harm consumers. In digital mergers, the focus shifts to the combined data that the merged entity would have access to, such that no competitor would be able to match them (and whether existing laws could do anything about it). Perhaps a relevant question might also be whether privacy should be a consideration in merger reviews when two companies that hold substantial amounts of personal data propose to join. Would the merging companies have so much market power that users would have no alternative but to submit to unfavourable privacy terms? These issues are discussed in more detail in subsequent chapters.

To date, there appears to have been greater scrutiny when it comes to traditional telecommunications mergers, but few digital mergers have been investigated with the level of scrutiny required to protect the competitive process and enable a fair level playing field between telecommunications operators and the increasingly large, dominant digital players.

Defining the relevant market is a necessary step in virtually all competition law cases. Traditional markets are defined using the small but significant non-transitory increase in price (SSNIP) test, which helps to locate the market's boundaries in both geographic and product space. But when products are free and markets are two-sided, as almost all digital platform markets are, the SSNIP test does not perform well. One possible alternative, discussed by the OECD, is a small but significant non-transitory decrease in quality (SSNDQ) test.18 But at this point in time, this is more of a theoretical possibility than a well-developed economic test; the sketch below illustrates why the traditional test breaks down for free services.

18 OECD. (2013). The role and measurement of quality in competition analysis. Background note for OECD policy roundtables.
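This is a minimal sketch of the hypothetical-monopolist logic behind the SSNIP test; the demand figures and elasticities are invented for illustration. The key observation is that the test is literally undefined when the base price is zero.

```python
# Minimal sketch of the hypothetical-monopolist logic behind the SSNIP
# test. All demand figures and elasticities are invented for illustration.

def ssnip_profitable(price, quantity, unit_cost, elasticity, rise=0.05):
    """Would a small but significant (5%) non-transitory price rise be
    profitable, given a constant own-price elasticity of demand?"""
    if price == 0:
        raise ValueError("undefined for free products: 5% of zero is zero")
    new_price = price * (1 + rise)
    new_quantity = quantity * (1 + elasticity * rise)  # elasticity < 0
    return (new_price - unit_cost) * new_quantity > (price - unit_cost) * quantity

# Inelastic demand: the rise is profitable, so the candidate group of
# products already constitutes a relevant market.
print(ssnip_profitable(price=10.0, quantity=1000, unit_cost=4.0,
                       elasticity=-0.8))   # True

# Elastic demand: too many customers switch away, so the candidate
# market must be widened and the test repeated.
print(ssnip_profitable(price=10.0, quantity=1000, unit_cost=4.0,
                       elasticity=-3.0))   # False

# A free, ad-funded platform service: the test cannot even be formulated.
try:
    ssnip_profitable(price=0.0, quantity=1_000_000, unit_cost=0.0,
                     elasticity=-1.0)
except ValueError as err:
    print(f"SSNIP fails: {err}")
```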


Incorporating the concept that data has value, and that it can be a currency with which services are 'bought', is new territory in competition policy. Platforms (discussed in further detail in a subsequent chapter) that connect participants in two-sided (or multi-sided) markets (e.g. Google, Uber, Airbnb, Facebook, LinkedIn) form new bottlenecks and must be managed carefully by regulatory/competition authorities. A wide-ranging reassessment of platform markets is required and, where market power is found, regulators should ensure that other players within the market are granted effective and efficient access to such platforms (or elements of such platforms). These other players could be traditional telecommunications operators. Maintaining competition in platform markets requires active effort to make it easier for consumers to move their data across digital services, to build systems around open standards and to make data available for competitors, offering benefits to consumers and also facilitating the entry of new businesses.19 These issues are discussed in further detail in subsequent chapters.

19 Unlocking digital competition: Report of the Digital Competition Expert Panel, UK, March 2019.

Notwithstanding this, regulators must fully understand the consequences of their actions. In late 2018, the EU found Google to be dominant in the search engine market and found it had abused that dominance by bundling its search engine with all Android phones. It required Google to unbundle. A small search engine based in France called Qwant complained that the fee structure Google has put in place for handset OEMs disincentivises them from installing any alternatives, thereby continuing the dominance of Google.20 Google did as requested and unbundled the apps that earn it a return (Search and Chrome), allowing handset makers to install its other ecosystem services without these two installed or set by default. However, to compensate for the loss of revenues that would ensue, Google has put in place a charging structure for access to its proprietary software. This varies from country to country but is typically in the range of US$10–US$20 per unit in most European markets. Given that most handset makers make less than US$10 in operating profit from their devices, it is unlikely that OEMs will want to adopt the new Google model.21 Regulators need to understand the business and commercial models of the wider ecosystem in which a subject under regulatory scrutiny operates, and be clear about how such entities could sidestep regulatory obligations using their intricate commercial relationships.

20 The French National Assembly and the French Ministry of the Armed Forces announced in 2019 that they had switched to Qwant instead of Google as their default.
21 See https://techcrunch.com/2018/12/18/google-still-claimed-to-be-blocking-search-rivals-on-android-despite-europes-antitrust-action/?renderMode=ie11

Google's Project Fi, a service that brings Wi-Fi and cellular together, is another example of an offering that enhances customers' experience by doing two things that an individual telecommunications operator would find intrinsically difficult: (i) first, Project Fi integrates Wi-Fi into the coverage proposition, initially seeking available Wi-Fi connections and turning to cellular only if these are unavailable. This requires automatically managing the Wi-Fi login process, which is handled by Google's Wi-Fi Assistant software. Tools enabling smartphone users to access Wi-Fi have been available for some time, but Google's pre-eminent position in developing the Android operating system has the potential to give it much greater reach. In effect, this would be a case of global scale (i.e. software, handsets) trumping national scale (i.e. telecommunications operators); (ii) second, Project Fi is capable of roaming on more than one cellular network. As a consequence, it offers better coverage than an individual cellular network could provide. Competition regulation would make it very difficult for operators to get together to provide anything similar. Project Fi clearly provides customer appeal, but it comes at the cost of traditional operators, who are saddled with regulatory obligations which do not apply to Google. Communications regulators tend to ignore these anomalies and resist


regulating the likes of Google, whilst claiming it would be difficult to relax regulations on mobile operators, given the oligopolistic market structures in which they operate. I find these approaches far too simplistic.

3.7 Unpicking the Hype Around 5G

The digital ecosystem increasingly relies on mobile technology for Internet connectivity. Mobile broadband is not only taking the lead for overall usage in developed countries but is also essential in providing universal access in developing nations. It is these universal access desires (as well as device penetration) that have meant mobile operators have had to operate 2G, 3G and 4G networks simultaneously, at considerable expense (2G has better coverage reach than 3G and 4G). Over the previous few years, there have been ridiculous expectations of magic from 5G; however, in many senses it is a natural evolution of 4G. As already explained, it is likely that operators will need to continue operating a range of previous technologies at the same time as offering 5G. The use of so-called millimetre waves for 5G has also been over-hyped. However, it is safe to say that the hype over 5G is largely over, as networks start to be deployed across many countries.

5G, whilst having even worse coverage reach, will bring a number of enhancements over 4G, including higher speeds, lower latencies, enhanced reliability, lower power consumption and greater terminal device densities (the number of devices served on each cell site). While these speed and latency improvements will appeal to some, primarily gaming enthusiasts, the impact on the general population in the short term may not be sufficiently appealing for mobile operators to be in a position to command a premium for 5G connectivity. In the longer term, as advances in, and adoption of, Virtual Reality and Augmented Reality become widespread, 5G may come into its own.

5G networks can deliver between 1000 and 5000 times more capacity than 3G and 4G networks today and will be made up of cells that support peak rates of between 10 and 100 Gbps. It will provide ultra-low latency, meaning it will take data 1–10 ms to get from one designated point to another, compared to 40–60 ms today. Another goal is to separate communications infrastructure and allow mobile users to move seamlessly between 5G, 4G and WiFi, which will be fully integrated with the cellular network. According to the recent 3GPP Release 15 standard that covers 5G networking, the first wave of networks and devices will be classed as Non-Standalone (NSA), which is to say the 5G networks will be supported by existing 4G infrastructure. Here, 5G-enabled smartphones will connect to 5G frequencies for data-throughput improvements but will still use 4G for non-data duties such as talking to the cell towers and servers. The 5G Standalone (SA) network and device standard is still under review and is expected to be signed off by 3GPP in 2020 (Release 16). The advantage of the 5G Standalone standard is expected to be simplification and improved efficiency, which will lower cost and steadily improve performance in


throughput up to the edge of the network, while also assisting the development of new cellular use cases such as ultra-reliable low latency communications (URLLC). Once the SA standard is approved, the eventual migration from 5G NSA to 5G SA by mobile operators should be invisible to the user. The 5G SA standard will involve a number of technology improvements, including core network functioning and Radio Access Network (RAN) functioning. Important technologies that will supply the flexibility and adaptability required to satisfy the expanded set of 5G use cases include22:

• Network Functions Virtualisation (NFV), which replaces dedicated network hardware appliances, such as routers, with commercial off-the-shelf hardware running virtualised instances of network functions;
• Software Defined Networking (SDN), which permits rapid, dynamic reconfiguration of networks and allows 5G to be controlled by software;
• Network Slicing, which separates a physical network into several virtual networks that support different radio access networks;
• Multi-access Edge Computing (MEC), which brings data and computing closer to the end user and thus provides low latency for certain applications; and
• Cloud Radio Access Network (C-RAN), a cloud-based network architecture replacing distributed signal processing units at mobile base stations and reducing the cost of deploying large numbers of small cells.

Two other technologies that will prove critical in enabling 5G are massive Multiple-Input Multiple-Output (MIMO)23 and beamforming. By featuring dozens of antennas in a single array, massive MIMO could increase the capacity of mobile networks by a factor of 22 or even greater.24 Beamforming allows a large number of users and antennas on a massive MIMO array to exchange more information at once by improving signal-to-noise ratios. Moreover, it results in a focused data stream that can reach greater distances, thereby increasing the capacity of cell towers in terms of the number of subscribers served.

Perhaps most important, 5G will offer new network management possibilities that could enable a single physical network to support a number of virtual networks with different performance characteristics. The fact that network elements are virtualised via NFV makes them much more flexible and amenable to dynamic software control. This enables network slices to be defined, each with different characteristics catering to different usage requirements. This 'network slicing' creates the possibility of tailoring mobile data services to the particular characteristics of specific users, as the sketch below illustrates.

22 ITU Telecommunications Development Bureau. (2018). Setting the scene for 5G: Opportunities & challenges.
23 Multiple-input and multiple-output is a method for multiplying the capacity of a radio link using multiple transmission and receiving antennas to exploit multipath propagation.
24 Nokia. (2017). Beamforming for 4.9G/5G networks: Exploiting massive MIMO and active antenna technologies.
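A rough way to picture network slicing is as a set of per-slice quality targets enforced over shared physical resources. The sketch below is purely illustrative: the slice names follow the common eMBB/URLLC/mMTC grouping, but the numeric targets are indicative assumptions, not values from the 3GPP specifications.

```python
# Illustrative model of network slices sharing one physical network.
# Slice names follow the common eMBB/URLLC/mMTC grouping; the numeric
# targets are indicative assumptions, not 3GPP-specified values.

from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    peak_rate_mbps: float        # throughput target
    max_latency_ms: float        # end-to-end latency bound
    reliability: float           # target packet-delivery probability
    device_density_per_km2: int  # supported terminal density

SLICES = [
    NetworkSlice("eMBB (consumer mobile broadband)", 1000.0, 20.0, 0.99, 10_000),
    NetworkSlice("URLLC (remote-controlled robots)", 10.0, 1.0, 0.99999, 1_000),
    NetworkSlice("mMTC (dense IOT sensor network)", 0.1, 1000.0, 0.99, 1_000_000),
]

def admits(slice_: NetworkSlice, required_latency_ms: float) -> bool:
    """Crude admission check: can this slice host an application
    that needs a given latency bound?"""
    return slice_.max_latency_ms <= required_latency_ms

for s in SLICES:
    print(f"{s.name}: {s.peak_rate_mbps} Mbps, <= {s.max_latency_ms} ms, "
          f"reliability {s.reliability}, {s.device_density_per_km2:,}/km2")

# An e-health application needing <= 50 ms could map onto eMBB or URLLC:
print([s.name for s in SLICES if admits(s, 50.0)])
```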


For example, a dense IOT sensor network might prioritise low power consumption of terminals over connection speed; at the same time, a separate network slice on the same infrastructure could deliver high-speed mobile broadband for consumers, whilst industrial applications using 5G to remotely control robots may need ultra-low latency, without necessarily high peak rates, provided over their own network slice. Some applications, such as e-health, will require high reliability, modest latency and modest bandwidth, again provided over a dedicated slice. The service sold to an individual customer will amount to a mixed basket of the various network slices tailored to fit that client's particular requirements. For instance, a customer in the construction industry looking to use the technology onsite would have a variety of different use cases (from security to drone inspection to remote operation of machinery) that would each require a different basket of network slicing categories.

This new ability to differentiate services without having to build different physical networks raises the possibility of services targeted at particular economic or industrial sectors, so-called 'verticals', as well as at specific user groups.

5G is also likely to change the industry structure. No longer would operators alone provide connectivity services. Many large corporate clients or manufacturing plants could provide their own connectivity services, be it WiFi or even localised 5G, without fear of interference from neighbours (given the technology's shorter range). Regulators are considering whether spectrum should be issued not only to conventional mobile operators, but also to industry verticals (for instance, the automotive or manufacturing sectors). The hope of operators that network slicing's ability to offer targeted solutions to different industry sectors will boost their businesses may therefore well be scuppered if localised networks become successful and widely deployed.

Network slicing also poses interesting questions in terms of whether it falls foul of network neutrality regulations, which seek to ensure that all packets are treated equally and deny any form of prioritisation. It is to be hoped that there is sufficient flexibility in the shape of the 'specialised services' category created by European regulation to enable this type of innovation within the EU. Elsewhere, network neutrality is unlikely to be a major problem, given that operators unable to charge for this type of differentiation will find it difficult to make the 5G investment case stack up.

For all the hype around 5G, its immediate use cases are not fully clear. Today, 4G is an excellent technology capable of delivering to handsets more bandwidth than almost any consumer is going to need in the near term, and 4G capacity is not being hugely constrained. 5G is unlikely to make a difference to the user experience, other than to increase the size of the handset as well as the price paid for the handset and possibly for data (that is the hope of mobile operators, at least). Whilst latency improvements from 5G will matter when it comes to VR and AR, outside of these, latency improvements will not mean much. They may make sense for industrial IOT and robots that need adjustments in real time, but again, these are applications that are not yet mainstream. So the demand side of the equation is not that promising in the short term.

Coming to the supply side, the promise of achieving 1 Gbps over 5G can only really be delivered with the use of millimetre waves: spectrum around 28 GHz. However, at these frequencies, getting the signal to the device and back again is a real issue, as almost anything will absorb the signal. Even if a customer is less than 100 metres from the base station, the use of 28 GHz spectrum may mean the consumer does not receive sufficient signal strength if they are behind a window, or if they happen to turn around and the signal is blocked by their own body. This means that the base stations (small cells) will need to be much closer to the customer, which means a magnitude increase in the number of cell sites and, correspondingly, a magnitude increase in costs, with potentially little revenue upsell, especially in the short term (use cases are still being developed and adopted more generally).
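The propagation problem can be quantified with the standard free-space path loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The comparison below, 28 GHz millimetre wave versus a typical 2.6 GHz 4G band at 100 metres, is a simplified sketch: real links suffer additional penetration and body losses, which is precisely the problem described above.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard Friis-derived formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 0.1  # 100 metres from the small cell
loss_4g = fspl_db(d, 2600.0)     # typical mid-band 4G carrier (assumed)
loss_28g = fspl_db(d, 28000.0)   # 5G millimetre-wave carrier

print(f"Path loss at 100 m, 2.6 GHz: {loss_4g:5.1f} dB")
print(f"Path loss at 100 m, 28 GHz:  {loss_28g:5.1f} dB")
print(f"mmWave penalty: {loss_28g - loss_4g:4.1f} dB "
      f"(~{10 ** ((loss_28g - loss_4g) / 10):,.0f}x weaker received power)")
```

Even in free space, the 28 GHz carrier arrives roughly 100 times weaker than the 2.6 GHz one at the same distance, before any walls, windows or bodies are taken into account, which is why cell sites must be so much denser.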

Coming to the supply side, the promise of achieving 1 Gbps over 5G can only really be delivered with the use of millimetre waves: spectrum around 28 GHz. At these frequencies, getting the signal to the device and back again is a real issue, as almost anything will absorb the signal. Even if a customer is less than 100 metres from the base station, the use of 28 GHz spectrum means the consumer may not receive sufficient signal strength if they are behind a window, or if they happen to turn around and the signal is blocked by their own body. This means that the base stations (small cells) will need to be much closer to the customer—which means an order-of-magnitude increase in the number of cell sites and, correspondingly, in costs, with potentially little revenue upsell, especially in the short term (use cases are still being developed and adopted more generally).

When it comes to the much-anticipated IOT explosion, the connectivity speed demands of most IOT applications are likely to be modest. It is difficult to see IOT contributing significantly to mobile network volume growth in the near term. Ironically, IOT need not even await 5G, for 4G already incorporates standards such as narrowband IOT (NB-IOT), while according to the Vodafone IOT Barometer for 2017/18, WiFi is actually the most popular IOT connectivity option today.

Given the meagre volumes generated via IOT, operators may also need to adapt their revenue models. Tim Höttges, CEO of Deutsche Telekom (DT), has suggested that IOT could drive a resurgence in the sector's top line, with the industry shifting from charging on the basis of connectivity towards a model based on revenue sharing.25 This might help to mitigate the risk of commoditised pricing, but investors will note that the operators' track record of providing systems integration and end-to-end solutions does not inspire enormous confidence—especially as most operators are hollowing themselves out by outsourcing and relying on third parties for almost all their network deployment and maintenance needs. Nor are mobile operators likely to have this space to themselves—inevitably the digital technology giants are already extremely active in this territory.

An understandable misgiving about IOT is that it has been very slow to reach critical mass. Contributing to this sluggish rate of uptake may be the perceived complexity of IOT tariffs. A new start-up, '1nce', is seeking to make the whole matter as simple as possible, with a one-off payment providing up to 0.5 GB of connectivity over 10 years. However, the price point—just EUR10, or EUR1 per annum—underlines the challenge ahead if IOT is to move the needle for the mobile industry's revenues. Vodafone had EUR700m of sales from IOT (FY17), equivalent to one and a half percent of group revenue, derived almost entirely from the corporate market. It estimated that this represented a 12% market share (suggesting an overall market of around €5.8 billion—not a huge amount considering all the markets in which Vodafone operates). Management hopes that this opportunity might evolve into a €22 billion market by 2025, but taking a proportionate share of this growth would imply less than 0.5 ppt of incremental group growth.

25 Bloomberg, 21 January 2018.
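As a rough check on the millimetre-wave point above, the textbook free-space path loss relation shows how much link budget the move to 28 GHz costs (the 100 m distance is an illustrative assumption):

```latex
\mathrm{FSPL}\,(\mathrm{dB}) \;=\; 20\log_{10}\!\big(d_{\mathrm{km}}\big) \;+\; 20\log_{10}\!\big(f_{\mathrm{GHz}}\big) \;+\; 92.45
```

At d = 0.1 km this gives roughly 69 dB at 700 MHz but about 101 dB at 28 GHz: around 32 dB more loss, a factor of roughly 1,600 in power, before the poor penetration of walls and bodies at these frequencies is even considered.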


Many industry experts have acknowledged that use cases for 5G remain highly uncertain. Much of the current excitement about 5G in the USA and many other countries relates to what is known as Fixed Wireless Access (FWA)—an alternative to laying fibre to the premises, which may be an even more costly solution. In the USA, the fixed last mile is not very competitive, creating an economic incentive for anyone who can cost-effectively offer reliable and fast last-mile access. Having a 5G receiver fixed on the outside of the building solves many of the technical problems that occur when using millimetre waves and may well be the only real business case for 5G today.

So why is there so much hype for 5G today—even from the mobile operators themselves? Unfortunately, the short-termism of the capital markets is driving mobile operators' fear of being left behind, which in turn is driving the rollout of commercially unfeasible 5G, just as it drove the rollout of expensive (read: spectrum) 3G. That did not turn out well for anyone except the governments who collected significant license fees—the current drive for 5G is clearly not demand led.

5G network deployments will need significant network investment—especially given the very large number of small cell sites needed to deliver ultra-high speeds. The speed of rollouts, quality of service and coverage levels will all be compromised without sufficient investment. Governments and regulators need to encourage high levels of investment through:

• Long-term broadband plan: producing a national broadband plan encompassing 5G which details activities and timeframes;
• Long-term spectrum plan: creating a spectrum roadmap, detailing which spectrum bands and amounts will be made available over time, and avoiding artificially high 5G spectrum prices, which risk spectrum going unsold or reducing subsequent network investment;
• Predictable licensing: supporting exclusive, long-term 5G mobile licenses with a predictable renewal process and ensuring all mobile licenses are technology and service neutral to encourage 5G upgrades;
• Fibre access: supporting access to the underlying fibre connectivity required to connect the increasingly large number of cells (this is covered later).

3.8 Spectrum Policy to Drive Investment

Spectrum is a scarce natural resource, essential for providing wireless mobile services among many other things (e.g. broadcasting, aviation, defence). The allocation and use of spectrum is therefore tied up with important social and economic trade-offs that need to be carefully considered. With the growth of mobile voice and data traffic, as well as the expanding market for smartphones and smart devices such as sensors and RFID tags, the need for efficient allocation of spectrum is becoming more acute. There are a number of ways in which spectrum could be more efficiently managed to enable deployment of more and better high-speed wireless services.


Certain policies promote good use of spectrum, such as transparency in terms of assignment procedures, conditions of use and renewal, statistics on actual use, and allocating spectrum in a way that promotes competition (e.g. providing a level playing field through a balanced assignment of spectrum between different market players). Another mechanism is allowing for the flexible use of spectrum so as not to hamper competition and innovation, i.e. service and technological neutrality. The allocation of spectrum can also be used to meet important policy objectives, such as requiring a certain amount of coverage by mobile operators as a condition for acquiring spectrum, to better connect underserved areas.

Using auctions is an important way to capture the market value when allocating spectrum. However, these auctions have typically led to unsustainable prices, primarily driven by operators' perceived need to acquire spectrum at almost any cost: the stock market disproportionately punishes, through lower valuations, an operator that fails to acquire spectrum, without truly understanding the spectrum's intrinsic value. This short-term, market-driven pursuit of spectrum by all listed operators results in prices that no reasonable person would pay. Another, possibly better way of promoting more efficient use of spectrum is to create incentives that mimic market-based incentives (without the testosterone-fuelled need to acquire spectrum). A good example is an Administrative Incentive Pricing regime, where fees reflect the opportunity cost of the spectrum, or the costs avoided as a result of being assigned the spectrum—a reasonable proxy for its value. Such an approach balances the need for governments to earn a fair return on a public asset whilst enabling operators to acquire spectrum at prices that do not hamper their ongoing ability to invest in their networks.

In addition, sharing of spectrum is an important mechanism to extract more value. This has until recently been accomplished by operators dividing the country into zones, each the responsibility of one operator or the other, with each then sharing access to their networks in the respective zones. Other models may split the technology (3G and 4G) by operator (although these are less economically efficient). More recently, it can also be done through shared licensed access, which allows spectrum that has been licensed to be used by more than one entity, introducing additional users on a given band to unlock spectrum capacity (Licensed Assisted Access). Another promising innovation uses underutilised spectrum, known as 'white spaces'. These intentionally unused parts of spectrum were historically important (for analogue TV broadcasting, for instance) to avoid interference, but technological developments are enabling the use and sharing of these white spaces.
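Before moving on to 5G spectrum layers, here is a minimal sketch of the Administrative Incentive Pricing logic described above. Every figure is an invented assumption, purely for illustration; real regimes model the opportunity cost far more carefully.

```python
def aip_annual_fee(avoided_sites: int, annualised_cost_per_site: float,
                   bandwidth_mhz: float) -> float:
    """Illustrative Administrative Incentive Price per MHz per year.

    The fee proxies opportunity cost: roughly what holding the spectrum
    saves the assignee (e.g. extra cell sites it would otherwise build),
    or what it would be worth to the next-best user.
    """
    avoided_cost = avoided_sites * annualised_cost_per_site
    return avoided_cost / bandwidth_mhz

# Example: a 2 x 10 MHz assignment that saves ~200 sites at EUR 20k/year each
fee = aip_annual_fee(avoided_sites=200, annualised_cost_per_site=20_000,
                     bandwidth_mhz=20)
print(f"AIP fee: EUR {fee:,.0f} per MHz per year")  # EUR 200,000
```

Anchoring the fee to avoided cost, rather than to auction bidding, is what removes the winner's-curse dynamic the text describes.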


As operators move towards delivering 5G, a multi-layer spectrum approach is required to address the coverage and capacity problems inherent in mobile networks. The coverage and capacity layer relies on spectrum in the 2–6 GHz range (e.g. what is typically called the C-band) to deliver a compromise between capacity and coverage. The ultra-high-speed data layer relies on spectrum above 6 GHz (typically 24.25–29.5 GHz and 37–43.5 GHz) to address specific use cases requiring extremely high data rates. The coverage layer exploits spectrum below 2 GHz (e.g. 700 MHz), providing wide-area and deep indoor coverage.

Furthermore, spectrum assignment for specific mobile technologies (e.g. 2G, 3G and 4G), and in some countries for specific services (e.g. voice, data, broadband access), can no longer keep up with the speed of market demand for new network capabilities and for new services with enhanced performance. Where spectrum is technology and service neutral, it has allowed operators to respond swiftly to changes in market demand, with tangible benefits for end users. Regulatory frameworks must embrace the principle of technology and service neutrality for the smooth introduction of the latest available technologies and services in existing and new bands (spectrum re-farming)26 that will be made available for 5G.

One other area that will need to be re-examined in the context of 5G is spectrum caps. Whilst 5G spectrum in the C-band is usually assigned in 100 MHz blocks, this spectrum complements existing lower-frequency spectrum, which provides much-needed coverage. Spectrum caps are used by regulatory authorities to restrict the maximum amount of spectrum an operator is allowed to hold, and are particularly used in spectrum auctions, where a bidder may be restricted from participating if its spectrum holdings are likely to exceed the cap. A spectrum cap can be imposed on a single band or on a group of spectrum bands. Given that a significant asymmetry of spectrum holdings can provide undue competitive advantage, caps are one regulatory tool that many regulators have utilised to equalise such competitive disadvantages, as well as to reduce the risk of certain operators acquiring spectrum and hoarding it in an attempt to disadvantage others. Whilst spectrum caps remain a vital regulatory tool, their application and relevance will need major re-evaluation when substantial additional spectrum is being made available in a range of bands from 700 MHz to 3500 MHz.

Policy makers and regulators must also consider the impact that levels of spectrum fees will have on the viability of particular segments of the sector. Spectrum fees set too high, combined with income taxes, VAT, excise duties, regulatory fees and other charges, negatively impact growth opportunities and attractiveness, sector valuations and investment levels. Whilst communications ministers are trying to help the sector, the treasury/finance ministers are not really doing much to help.

The basis upon which mobile operators value spectrum is by determining the costs they would incur in rolling out their networks with a specific spectrum assignment. It is inherently assumed that the spectrum assigned will be entirely useable. Unfortunately, this has sometimes not been the case, and the spectrum has been found to suffer interference from others using spectrum near the assigned band, or from the illegal use of repeaters by consumers. This interference can have dramatic effects on an operator's business and cost structure, as it needs to install additional filters at each of its cell towers. It is the role of policy makers and regulators to ensure the spectrum that is assigned is free of interference.

26 Spectrum re-farming is the process of re-deploying spectrum from existing users and re-allocating it to others.


Technology improvements are helping reduce spectrum interference, including so-called beam-forming techniques, which can target a signal towards its destination with the result that it is possible to reuse the spectrum multiple times. The use of beams means less interference for other users, thereby improving one of the factors influencing the Shannon Limit.27

If I were to sum up the story of 5G, I would say that for most operators and governments it is an unnecessary cost and distraction. There are many not-spots or grey spots across many countries where customers are not receiving even basic 3G or 4G connectivity. 5G investment where no real business case exists for operators is a distraction that adds little value to operators or indeed society at large. The endless talk of 5G by governments needs to be tempered with solutions towards more immediate connectivity needs.

27 The Shannon limit or Shannon capacity of a communication channel is the theoretical maximum information transfer rate of the channel for a particular noise level.
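For reference, the Shannon capacity mentioned in the footnote is given by the standard formula below, where B is the channel bandwidth and S/N the signal-to-noise ratio; beam-forming improves S/N, whilst additional spectrum increases B (the worked numbers are purely illustrative):

```latex
C \;=\; B \log_{2}\!\left(1 + \frac{S}{N}\right)
```

For example, a 100 MHz channel at a signal-to-noise ratio of 15 (about 11.8 dB) supports C = 10^8 × log2(16) = 400 Mbit/s.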

3.9 New Players and Priorities for International Connectivity

International gateways are the facilities through which international telecommunications traffic enters or leaves a country. Whilst most countries have liberalised this market, allowing licensed operators to build their own facilities and land international submarine fibre-optic cables, some countries still retain a monopoly on this facility. As with all monopolies, the effects are inflated international prices compared to a competitive market, reduced innovation and reduced incentives to bring more international connectivity to the country. With digital products and services being global in nature, cost-effective access to international connectivity becomes vital to the success of the digital economy. Higher prices for international connectivity may make some digital products or services uneconomic to deliver; innovation that would have taken place were international connectivity a competitive market will not take place; and in the end, consumers and society will suffer. Policy makers will need to question whether such monopolies serve the long-term interest of end users or the country's economy.

Whilst traditionally it was the operators themselves (through consortia) that built submarine cables to connect their networks with other operators across the oceans, this has changed dramatically over the past two decades. In the 1990s, this market attracted investment from private companies, who saw the massive growth of data as an opportunity to make a profit. Massive investment was made and significant submarine cable capacity was laid—in fact too much, such that the prices of international connectivity fell so fast that these private investors had to take a haircut. In the last decade, a significant portion of submarine cables has been financed by Google, Facebook, Amazon and Microsoft to connect their data centres. These companies owned or leased more than half of the undersea bandwidth in 2018. The emergence and scaling of cloud computing, together with a massive explosion of video content (which requires lower latency), has resulted in significant demand for international connectivity through fibre-optic cables.


As low-cost, secure international connectivity becomes a fundamental component for digital players, it is likely that many more of them, including platform players, will join this trend and invest in their own submarine cables (or portions of them). This is just another area where the telecommunications players are being overtaken by other digital players, who are increasingly encroaching on what was a lucrative monopoly. The battle between telecommunications operators and OTTs is not a new one. Just as both sides are concluding that they need to collaborate rather than fight each other, the same is true for international connectivity. Sometimes these digital players lease capacity installed by the telecommunications operators, and sometimes it is the other way round.

3.10 The Re-emergence of Satellite Connectivity

Traditionally, satellite systems have only been used as a last resort for providing connectivity, given their higher deployment and usage costs. However, innovation in satellite technology means what was once an expensive last resort may become a useful alternative connectivity medium for mobile and fixed operators, and a useful technology for providing universal broadband access.

Satellite systems for communications are essentially of two types: Geosynchronous Equatorial Orbit (GEO) and Low Earth Orbit (LEO). The classical type of satellite used for communications is the GEO. It is positioned in an orbit at an altitude of 35,786 km above the earth's surface and rotates at the same speed as the earth's rotation about its axis. Hence the satellite appears fixed at a certain position in the sky when viewed from earth. GEOs also rotate in an equatorial orbit, meaning that the satellite's position is above the equator. The footprint of one GEO satellite therefore doesn't cover the whole globe but a geographical width of up to 60°. The satellite's position must be held accurately, and for that complex control systems exist onboard. A unique advantage of this geometry is that there is no need for a moving antenna on earth. Also, high-gain antennas can be mounted onboard the satellite as well as at ground stations to compensate for the free-space loss through space and the correspondingly weak link budget (the amount of power that is lost whilst the signal travels from the satellite to the receiver on earth, or vice versa, through space).

LEO satellites rotate in much lower orbits with no fixed radius, typically in a range of 100–2,000 km. Since LEO satellites rotate much faster than the earth, they are only visible for a relatively short time, leading to a need for a constellation of many LEO satellites. Here, synchronisation of rotation timing and distance between satellites is required to achieve the desired satellite network. Handover between satellites is further needed for communication scenarios, exactly as is the case in terrestrial mobile networks. System capacity can always be enhanced by adding satellites, much as small cells do for terrestrial mobile networks. Because of the low altitude, free-space loss is much lower and there is no need for mounting high-gain antennas.


In fact, free-space loss for frequencies below 10 GHz is similar to that of a terrestrial mobile network. Several thousand of these small LEO satellites (which, at a cost of US$10,000, are relatively inexpensive) are sufficient to provide worldwide broadband connectivity with transmission rates of up to 1 Gbps in high-frequency bands above 10 GHz. Because LEOs are closer to earth than GEOs, latency also drops dramatically, from around 280 ms to 20 ms, which widens the potential use cases. Given these characteristics, LEOs are particularly relevant in use cases where large capacity is required. The more ambitious LEO projects focus on broadband applications, such as providing mobile backhauling in areas where terrestrial backhauling is too costly to deploy; providing fixed broadband access to businesses and consumers in remote, un-served or underserved areas; broadband for high-mobility applications (trains, planes, boats); or providing fixed-link connections, like leased lines, for enterprise connectivity.

Because 5G will use frequency bands similar to satellites (mmWaves), satellites could also be more tightly integrated into 5G. A 5G device could, for instance, seamlessly switch to any available satellite connectivity once connection with the terrestrial infrastructure is lost. This would enable the ubiquitous coverage goal of 5G to be achieved more cost-effectively, and may well spur further collaboration and partnerships between mobile operators and satellite operators in the future. The key issue for policy makers is the allocation of spectrum which sits in a range that is useful for both cellular operators and satellite operators. Inevitably, compromises will need to be made. However, these decisions are made at the World Radiocommunication Conference (WRC) under the auspices of the ITU, to harmonise global acceptance, rather than being left to national decisions. National government support for international coordination with other satellite operators is nevertheless vital.
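The latency figures above follow from simple geometry. A back-of-envelope propagation check (signal up to the satellite and back down at the speed of light, ignoring processing and routing delays; the 1,000 km LEO altitude is an illustrative assumption):

```latex
t_{\mathrm{GEO}} \approx \frac{2 \times 35{,}786\ \mathrm{km}}{3\times10^{5}\ \mathrm{km/s}} \approx 239\ \mathrm{ms}
\qquad
t_{\mathrm{LEO}} \approx \frac{2 \times 1{,}000\ \mathrm{km}}{3\times10^{5}\ \mathrm{km/s}} \approx 6.7\ \mathrm{ms}
```

Processing and routing overheads on top of these propagation delays take the end-to-end figures towards the roughly 280 ms and 20 ms quoted above.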

3.11 Delivering IOT Connectivity

As suggested earlier, one of the key short-term use cases touted for 5G is IOT connectivity and general machine-to-machine (M2M) communications. Here, much lower data rates are needed, often only in short bursts, with an accompanying requirement for the remote device or machine to draw only low levels of power. IOT applications require the embedded sensor or device to communicate in varying models: device to device (e.g. using Bluetooth, Z-Wave, ZigBee); device to cloud/fog service (e.g. a wired Ethernet or wireless connection); and device to application-layer gateway, which provides a bridge between the device and the cloud services (again via wired Ethernet or wireless connections) along with security, data protocol translation and other functionality. There are a number of networks simultaneously being built that can support IOT connectivity. These include narrowband IOT (NB-IOT), Low Power Wireless Access networks (LoRa, Sigfox, etc.) and high-speed wireless networks (LTE-based 4G/5G networks).


LoRa and Sigfox operate in the 868 MHz band, offering low speeds (a few Kbps) but with much lower energy consumption compared to what standard 4G requires. These LPWA networks have several features that make them particularly attractive for IOT devices and applications requiring low mobility and low levels of data transfer, including: low power consumption that enables devices to last up to 10 years on a single charge; optimised data transfer that supports small, intermittent blocks of data; low device unit cost; few base stations required to provide greater levels of coverage; easy installation of the network; dedicated network authentication; and greater indoor penetration and coverage, given the lower-frequency spectrum bands. For LTE-based mobile networks28 that wish to compete with LoRa networks, LTE-based IOT needs to have similar characteristics:

• Long battery life: many M2M devices will need to be left unattended for long periods of time in areas where there may be no power supply. Maintaining batteries is a costly business, and devices should therefore be able to go up to 10 years between battery changes. This means that the LTE-based IOT system must be capable of draining very little battery power;
• Wide spectrum of devices: any LTE M2M system must be able to support a wide variety of different types of devices. These may range from smart meters to vending machines, and from automotive fleet management to security and medical devices. These devices have many differing requirements, so any LTE-based IOT system needs to be flexible;
• Low cost of devices: most M2M devices need to be small and fit into equipment that is very cost sensitive. With many low-cost M2M systems already available, LTE-based IOT needs to provide the benefits of a cellular system, but at low cost;
• Large volumes–low data rates: as the volumes of remote devices are anticipated to be enormous, LTE-based IOT must be structured so that the networks are able to accommodate vast numbers of connected devices that may only require small amounts of data to be carried, often in short peaks, but with low data rates; and
• Enhanced coverage: LTE-based IOT applications will need to operate in a variety of locations—not just where traditional reception is good. They will need to operate within buildings, often in positions where there is little access and where reception may be poor.

To enable the requirements of these devices to be met using LTE, a new LTE category was developed. Referred to as LTE Category 0, or simply LTE Cat 0, this new category has a reduced performance requirement. Cat-0 optimises for cost, as it eliminated features that supported the higher data rate requirements of Cat-1 (the dual receiver chain and duplex filter).

28 LTE is an abbreviation for Long Term Evolution. LTE is a 4G wireless communications standard developed by the Third Generation Partnership Project (3GPP) that is designed to provide up to 10x the speeds of 3G networks.


The modem complexity of a Cat 0 modem is around 50% that of a Category 1 modem.29 Cat-M (officially known as LTE Cat-M1) is often viewed as the second generation of LTE chips built for IOT applications. It completes the cost and power-consumption reduction that Cat-0 set the stage for. By capping the maximum system bandwidth at 1.4 MHz (as opposed to 20 MHz for Cat-0), Cat-M really targets Low Power Wireless Access Network (LPWAN) applications like smart metering, where only small amounts of data transfer are required. But the true advantage of Cat-M over other options is that it is compatible with the existing LTE network. Mobile operators don't have to spend money to build new antennas; they simply need to upload new software within their LTE network. Associated with the introduction of LTE-M are new categories introduced in Release 13 of the 3GPP standards, including LTE Cat 1.4 MHz and LTE Cat 200 kHz.

Narrowband IOT (NB-IOT, also called Cat-M2) has a similar goal to Cat-M, but it uses a different technology (DSSS modulation vs. LTE radios). NB-IOT therefore doesn't operate in the LTE band, meaning that network providers face a higher upfront cost to deploy it. Nevertheless, NB-IOT is touted as a potentially less expensive option as it eliminates the need for a gateway. Other infrastructures typically have gateways aggregating sensor data, which then communicate with the main server; with NB-IOT, sensor data is sent directly to the main server. With 5G on the horizon and its explicit design goal of supporting M2M, it remains to be seen what happens to existing deployed LPWA networks. We are likely to find that these will coexist alongside each other.

It is important that policy makers and regulatory authorities support a technology- and service-neutral regulatory framework for licensed spectrum that facilitates the development and growth of IOT, and does not impose service or technological restrictions that hold back innovation. Operators should not be prevented from deploying the latest cellular IOT technologies in their licensed spectrum bands due to technological restrictions. For example, cellular standard 3GPP Release 1330 allows GSM and LTE networks to support LPWA IOT applications in almost all licensed mobile bands. This includes the ability to support personal and IOT connectivity in the same frequency band at the same time. The regulatory environment should be designed to nurture this evolution in the capabilities of mobile networks and allow the market to decide which solutions thrive. As such, within sub-1 GHz spectrum, operators should be free to use the 700 MHz and 900 MHz bands as they wish for new cellular IOT technologies.
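A rough duty-cycle calculation shows why the ten-year battery-life target mentioned earlier is plausible. All figures in this sketch (sleep current, transmit current, burst length, battery size) are illustrative assumptions, not measured values:

```python
# Rough battery-life estimate for a duty-cycled IOT sensor.
SLEEP_CURRENT_A = 5e-6      # 5 uA while asleep
TX_CURRENT_A = 0.040        # 40 mA during a transmit burst
TX_SECONDS_PER_HOUR = 1.0   # one short uplink burst per hour
BATTERY_AH = 2.0            # a 2,000 mAh primary cell

sleep_seconds = 3600 - TX_SECONDS_PER_HOUR
avg_current_a = (SLEEP_CURRENT_A * sleep_seconds
                 + TX_CURRENT_A * TX_SECONDS_PER_HOUR) / 3600
life_hours = BATTERY_AH / avg_current_a
print(f"average current: {avg_current_a * 1e6:.1f} uA")   # ~16 uA
print(f"battery life:    {life_hours / 8760:.1f} years")  # ~14 years
```

Real designs must also budget for battery self-discharge and temperature effects, which is why 10 years, rather than 14, is the practical target.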

29 The new LTE Cat 0 was introduced in Release 12 of the Third Generation Partnership Project (3GPP), a standards organisation which develops protocols for mobile telephony.
30 Work has already started on 3GPP Release 16.

3.12 The Relevance of Universal Service Access

Governments can take a longer-term and broader view of investment returns than the private sector, and might identify positive societal externalities not taken into account by private investors in investment decisions. Some countries have therefore concluded that there will not be sufficient private investment to build high-speed networks in a way that meets public policy objectives, such as speed or coverage, and have made public infrastructure investments, either directly or in partnership with private investors. Others believe that they should enable the private sector to do as much of the 'heavy lifting' as possible, promote infrastructure competition and use public funds only when there is a demonstrated market failure. Which model serves the interest of each country will depend on a range of factors, primarily the existence and appetite of private operators with sufficient incentives to invest. Incentives to invest are influenced by the adequacy of the regulatory regime, its predictability and confidence in the ability of the regulatory authority to implement fair, investment-friendly policies. This is true more generally, but becomes even more important when operators are asked to invest in traditionally uneconomic areas.

In developing a country's universal service regime, two key considerations are required: (i) the extent to which geographic or population coverage is desired and feasible; and (ii) the types of services that are determined to be utility-like, such that access to them becomes a fundamental right for a resident. It becomes necessary to question what services are encapsulated within a universal service obligation. Many traditional licensing regimes advocated minimum coverage requirements by the mobile service providers, with a set of stringent rollout obligations. These coverage requirements were traditionally based on either a population measure or geographic coverage. The choice largely depended on the geographic terrain of each country, the urbanisation, density or population clusters, as well as the investment incentives offered. As we move towards connectivity of IOT, it becomes necessary to revisit these coverage obligations and ask whether they are still relevant for meeting government policy objectives. In many cases the economic rationale for rolling out networks to remote areas will be questionable, especially as telecommunications operators will see little value capture from an IOT solution where the transmission speeds and durations may be limited—i.e. some remote sensors only send daily updates consuming a few kilobytes of data, what many analysts call 'devices that chirp'.

Given the evolution towards all-IP networks and the increased use of applications (e.g. OTTs) through the Internet connection, it may be appropriate to focus Universal Service Obligations ('USO') on Internet access and not voice communication. Moreover, given the importance of having access to some e-services, it may be more appropriate to define the capability of the Internet connection in terms of its ability to support those e-services, rather than just as having an Internet connection. It may also be better to define the USO in terms of the data volume necessary to support the basic e-services and not in terms of speed: monthly prices depend not only on the maximum possible speed, but also on the amount of data included/consumed.


Both the geographic and service scope of a universal service regime in the end boil down to funding. Governments need to strive to find a realistic funding model that is sustainable, does not impose heavy burdens on the private sector, but equally does not waste public funds when private entities would have been capable of delivering the USO services. Given the desired social benefit that governments seek from a USO, it may be more useful for USO commitments to be financed with public funds and not with sector-specific funds. A sector fund has several disadvantages: (i) it may be costly to run; (ii) it is unfair, as it forces the telecom sector to finance an infrastructure that benefits the whole society; and (iii) it is generally economically inefficient. Economic literature31 has shown that unless the national tax system is very distorted, public funds are more efficient and less distortive than sector funds for financing a USO. In the many countries that have imposed sector USO funding obligations, very few have actually utilised such funds appropriately. Many of the funds sit in government accounts with bureaucrats arguing over which projects to fund. Many of these funds have simply encouraged further corruption, as politicians attempt to utilise them to start programmes in areas and constituencies that are important for their political future.

If the USO is financed with public funds, using the USO for network deployment is not problematic and probably inevitable in some rural areas. However, if the USO is required to be financed by a sector fund, then the sector may be forced to finance significant network deployment costs, which may well backfire, as heavy taxes on the sector may decrease competition—one of the main tools to stimulate investment and network deployment in rural areas. Whatever funding sources are utilised, it is absolutely vital that a robust, transparent governance structure is enforced, with those that contribute to such funds being intimately involved in how the money is distributed, whether that is government departments in public funding scenarios or operators in sector funding scenarios.

Good examples of the utilisation of USO funds include Indonesia, where the government has used these funds to roll out a fibre backbone across what is a very difficult and expensive terrain. The UK has also, as I write this book, just announced a Shared Rural Network, wherein the four mobile networks have committed to legally binding contracts and to investing £532m to close almost all partial not-spots (areas where there is currently coverage from at least one but not all operators). This investment will then be backed by more than £500 million of government funding to eliminate total not-spots: hard to reach areas where there is currently no coverage from any operator. Alternative models, possibly public-private partnership models, need to be seriously examined in an era where the digital divide may only be exacerbated by the uneconomic rollout of IOT networks. With the deployment of localised networks with 5G, it may be possible to encourage such network operators to extend coverage within their vicinity and provide connectivity where this may previously have been uneconomic.

31 European Commission (1999). Liberalisation of network industries: Economic implications and main policy issues. See also Hausman, J. (1998). Taxation by telecommunications regulation. Tax Policy and the Economy, 12, 29–48, NBER and MIT Press.


Incentives could be devised to encourage more localised networks where corresponding societal benefits can be generated at incremental cost.

3.13 Facilitating Access to Public Land

As policy makers and regulatory authorities are aware, the ability of telecommunications operators to roll out their networks and offer services depends on their ability to install masts and antennas, to install fibre connectivity, and to do so cost-effectively. Gaining access to land, be it public or private, is challenging in most markets, but becomes even more so when land registration documents and title documents are not available or their authenticity is not guaranteed. Significant effort can be expended in investigating such matters. Even when land ownership is clear, operators face challenges with public perception of Electromagnetic Field (EMF) radiation. Whilst many studies to date suggest there is nothing to fear, the public perception is one of fear, uncertainty and doubt. Governments must do more to alleviate such fears, whilst continuing to fund research into the field. Operators facing these challenges still need to meet stringent coverage requirements, usually imposed as license conditions, and may well face customer criticism for poor coverage levels in their vicinity.

Having gained access to appropriate land, the approvals process for installing masts or rooftop sites is usually time-consuming and very bureaucratic, with decision making delegated to local authorities. Without an appropriate governance framework in place, operators are typically faced with landowners looking to make a quick buck or with expensive beautification requirements.32 It is quite normal to see landowners charge rents of the order of ten times the charges that would otherwise be levied on other utility providers.33 With the introduction of 5G, the number of sites, albeit smaller ones, will increase exponentially. Policy makers can help operators significantly by mandating that other infrastructure owners rent space to telecommunications operators, within a governance framework that ensures cost-effective and efficient access to such infrastructure. Policy makers should also be careful to avoid imposing additional taxation on masts or sites (some countries charge a regulatory fee per tower), as such action may well slow down the rollout of 5G networks, to the detriment of the digital economy more broadly.

32 Installing cell sites that blend into the environment, such as trees, rather than the traditional structures used, incurs additional costs.
33 E.g. in the UK, mobile operators typically need to pay farmland owners 10 times what is charged to electricity companies for their pylons.

3.14 Rethinking Network Sharing Policies

Given the significant scale of investment required in rolling out networks, together with the constraints placed on each operator in accessing land and electricity and in managing public perceptions around EMF, many operators believe sharing of networks may be a useful (if not life-saving) strategy for faster and more effective rollout. An added advantage is the sharing of investment costs between the sharing parties, which hopefully allows the operators to pass such savings on to consumers, or to invest those sums in other areas of the network, rather than simply passing the savings to shareholders.

There are typically two forms of sharing: passive sharing and active sharing. Passive sharing includes sharing of the actual physical site, or of the mast hosting each operator's antenna, and the associated facilities such as electricity supply, air conditioning units and indeed the conduits to such infrastructure (ducts). In the passive sharing model, each operator continues to build its own radio access network, utilising the spectrum allocated to it. Within the active sharing model, operators may share the antennas, transceivers, microwave radio equipment, base stations and indeed switches; sharing is deeper than in a passive sharing solution. It is claimed there may be regulatory concerns when operators start sharing active networks, as this potentially gives rise to anti-trust issues. However, to date there have been many hundreds of active sharing deals between operators globally, with few anti-trust concerns being raised. Regulators usually take a competition-based approach to assessing requests for sharing approval, based upon an analysis of efficiencies versus competitive harm and considering national market conditions. Almost all passive sharing arrangements are approved by the regulatory authorities. In most cases, regulatory approval is given to Radio Access Network (RAN) sharing, as network operators maintain separate logical networks, so the impact on network competition is assessed to be neutral.34 Some countries approve RAN and spectrum sharing.35 Increasingly, active sharing is being allowed in many countries.36 With the significant investment required for 5G, the case for network sharing becomes even more compelling. Policy makers and regulatory authorities must leave the choice of sharing to market participants, allowing competition law to deal with anti-trust issues if they arise.

34 RAN sharing has been approved in Austria, Belgium, France, the Czech Republic, Greece, Poland, Romania, the UK, etc.
35 RAN and spectrum sharing has been approved in Denmark, Finland, Hungary, Sweden and Poland.
36 Active sharing has been permitted in Hong Kong, Poland, Russia, Spain, the UK, Australia, Canada, Brazil, Azerbaijan, South Africa and, more recently, India.

3.15 Re-emergence and Importance of Fixed Infrastructure

There was a time when investors rewarded and were more interested in mobile operators than fixed operators. In fact, it was this difference that led many incumbent operators to spin off their mobile operations and extract much-needed cash. That story continued for over a decade. However, we are today seeing a reversal of fortunes. Fixed infrastructure is now becoming sexy. As governments push their national broadband plans and as fixed operators start rolling out fibre infrastructure, their potential is again being recognised and rewarded. However, the scale of investment required in rolling out fibre to the home is such that many operators have struggled to finance these rollouts, except in affluent, dense urban areas. This has left a large swathe of the population behind. Alternative investment models are popping up for rolling out fibre, many including local municipalities, some private investors and some involving pension funds. However, the perennial problem of making a business case outside of these affluent, dense areas is resulting in duplicate infrastructure in the 'rich' areas, whilst the rest of the country remains unserved by fibre. That is bad news for the digital divide, and also a poor use of resources. This is a major market failure that governments need to address, and to do so quickly.

In addition, the growth of mobile networks and their usage requires investment in high-capacity backhaul networks: linking mobile switching centres on a local basis, mobile networks to Internet exchanges and data centres, and mobile networks to long-haul backbone networks on a national basis. Many mobile operators find the costs of building fibre networks prohibitive, and find the many levels of permissions necessary, such as rights-of-way and permissions to build ducts, a time-consuming and unnecessarily expensive process. Policies must encourage such investment in fibre, whether by giving network operators an option to build such networks for themselves or by sharing such infrastructure and facilities with other mobile or fixed operators. An equally basic requirement is to streamline the approvals process for installing such infrastructure—this is easier said than done, given the condition of duct infrastructure tends not to be fully known.

From a policy perspective, many regulatory authorities are mandating access to fibre infrastructure, whether that fibre infrastructure is owned by a licensed telecommunications operator or another entity. To many, the discussion has moved on from mandating access to such infrastructure to how to strike a balance between mandating open access whilst still incentivising continued investment in fibre. The traditional cost-based approaches to pricing regulated wholesale access are giving way to more innovative models. These include incorporating a risk premium in the access pricing; some countries encourage co-investment, whilst in others, governments have come to the view (whether rightly or wrongly) that only the public sector is capable of investing in and building nationwide fibre infrastructure. More nuanced approaches see governments facilitate cooperative fibre investment models, where only a single entity builds the duct and fibre infrastructure and shares this with others.

3.16 Numbering Relevant for Digital World

With an explosion in the number of connected devices as a result of IOT, there will be significant demand on numbering resources. In many cases, it would not be desirable for countries to expand the number ranges of the entire national numbering plan; however, it may be possible to set aside a number range for IOT applications, where the numbers for M2M applications could be significantly longer than those normally assigned to human-oriented devices. A parallel issue that needs to be dealt with is the cost of such numbering resources. The price for numbers (where they are charged for) is usually set with reference to the cost of the numbering resource in the context of the overall value derived by the telecommunications operator from the use of that number. In the context of traditional services, that value-to-cost ratio was sustainable. However, in the context of IOT/M2M, where the actual value derived from use of the number resource may be significantly less than in the traditional connectivity context, the value-to-cost ratio will simply not be feasible. Regulatory authorities need to re-examine the need for charging for numbers that are to be used in IOT/M2M applications.

One issue that follows from the assignment of numbers used in IOT/M2M applications is the extra-territorial use of such numbers. Typically, regulatory and/or security authorities have sought to limit the duration a number is permitted to be outside of the country (i.e. roaming). Before considering how to regulate a service, it is important to assess whether that service should in fact be regulated in the first place. The public policy rationale that may apply to regulation of roaming voice and data services simply does not apply to the majority of M2M services. In M2M, connectivity is merely a delivery mechanism, with the key service being the value-added functionality offered by the M2M platform, which is likely provided by an unregulated company and not the licensed mobile operator. The nature of the IOT/M2M business means that many numbers allocated for IOT/M2M will be permanently roaming outside of the country—take the example of automobiles equipped with a SIM card configured with a locally allocated IOT/M2M number. Such a vehicle may well be sold outside of the country, and the SIM card will therefore be permanently roaming abroad. Enabling such permanent roaming is fundamental to the digital eco-system. The advent of eSIM and over-the-air provisioning may in time help resolve the current issues in terms of IOT roaming.37

In addition to telephone numbers, IP addressing will be very important as a complementary addressing resource for IOT services. Where devices are connected via fixed line or WLAN, IP addresses are already used today. The hitherto commonly used IPv4 address format supports a relatively limited number of globally addressable devices; however, many connected devices sit behind a single IPv4 address using Network Address Translation (NAT).

37 Remote provisioning and management of M2M connections can be done through 'over the air' provisioning of an initial operator subscription, and the subsequent change of subscription from one operator to another on the embedded electronic SIM.


Given the expected growth of IOT services and the number of Internet-connected devices generally, this limited address space (even with NAT) could quickly be exhausted. The IPv6 standard has a significantly larger address space and can support a considerably higher number of devices; however, its adoption has been limited to date. Policy makers and regulatory authorities should encourage the deployment of IPv6 equipment; however, they must also take into consideration the costs of ripping out old IPv4 equipment and replacing it with IPv6-compliant equipment. It may be more appropriate for a country to plan a migration towards IPv6, mandating that organisations implement IPv6-compliant equipment whenever they undertake a technology refresh. Clearly it may take considerable time before all organisations and all equipment are IPv6 compliant; nevertheless, that may be the more cost-effective solution.
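The scale difference behind that recommendation is easy to verify; a quick comparison of the two address spaces:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(f"IPv4:  {ipv4_space:,} addresses")        # 4,294,967,296 (~4.3 billion)
print(f"IPv6:  {ipv6_space:.2e} addresses")      # ~3.40e+38
print(f"Ratio: {ipv6_space // ipv4_space:.2e}")  # ~7.92e+28 IPv6 per IPv4 address
```

IPv4's roughly 4.3 billion addresses fall well short of even today's device counts without NAT, whereas IPv6's address space is, for practical purposes, inexhaustible.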

3.17 Progressive Cloud and Data Centres Policies

A significant enabler for the digital economy and IOT solutions is the ability to utilise cloud computing and data centres that may be located within or outside of the local jurisdiction. Cloud computing enables ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned with minimal management effort or service provider interaction. Cloud services can be grouped into three categories: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). SaaS refers to the provisioning of applications running in cloud environments; applications are typically accessible through a thin client or a web browser. PaaS refers to platform-layer resources (operating system support, software development frameworks, etc.). IaaS refers to providing processing, storage and network resources, allowing the consumer to control the operating system, storage and applications.

The fundamental raison d'être for utilising today's cloud platforms is that they are elastic: they dynamically determine the amount of resources an application requires and then automatically provision and de-provision the computing infrastructure to support it. This relieves developers and IT teams of many operational tasks, such as hardware and software setup and configuration, software patching, operating a distributed database cluster and partitioning data over multiple instances in order to scale. The cloud customer only pays for resources actually used. Effectively, the cloud gives organisations the right, but not the obligation, to scale up if and when demand increases. The value of this sort of real option can be substantial. The mass emergence of digital-native companies today would not be possible without easy, immediate and affordable access to the scalable computing resources available through the elastic cloud infrastructure. Policies that restrict or constrain the ability for organisations to use the cloud could seriously jeopardise a country's digital ambitions.
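A minimal sketch of that elastic scale-up/scale-down behaviour is shown below. The thresholds and the `Provisioner` class are illustrative assumptions, standing in for whatever IaaS API a cloud vendor actually exposes, not any particular provider's interface:

```python
# Illustrative autoscaling loop: resize capacity up or down with demand.
class Provisioner:
    def __init__(self, instances: int = 2):
        self.instances = instances

    def scale_to(self, n: int) -> None:
        print(f"scaling from {self.instances} to {n} instances")
        self.instances = n

def autoscale(p: Provisioner, cpu_utilisation: float,
              target: float = 0.60, min_n: int = 2, max_n: int = 50) -> None:
    """Keep average CPU near `target` by resizing the instance pool."""
    desired = round(p.instances * cpu_utilisation / target)
    desired = max(min_n, min(max_n, desired))   # clamp to the allowed range
    if desired != p.instances:
        p.scale_to(desired)

pool = Provisioner()
autoscale(pool, cpu_utilisation=0.90)   # demand spike  -> scales out to 3
autoscale(pool, cpu_utilisation=0.20)   # quiet period  -> scales back in to 2
```

The 'right but not obligation' framing in the text is visible here: capacity follows demand in both directions, and the customer pays only for the instances held at any moment.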


The cloud model is itself, however, also changing. IOT's biggest transformation will be in shifting power in a network from the centre to the edge. Rather than devices and users communicating through central hubs, mainframes or cloud-based management servers, IOT will allow devices to communicate with each other over the cloud or, in some cases, directly with each other, such as devices on automobiles that need to detect possible collision scenarios with absolutely minimal latency. As IOT matures, there will be an acceleration in the emergence of edge computing (or local clouds). Many IOT applications depend on central cloud services, which unfortunately also means increased response times. To address this, edge and fog computing are becoming important, as data analysis occurs closer to the device than a central cloud. Edge computing means that data processing occurs at the edge of a network, where networked devices capture data and process it close to the source. A big problem with edge devices is that they may lack their own computing resources to run IOT operations independently. Edge computing in this sense complements centralised cloud computing, since analytic models or rules are created in the cloud and then pushed out to edge devices. There could well be opportunities for service providers, or indeed a new revenue stream for telecommunications operators, in introducing cloud computing that is local to the IOT device—i.e. at each cell site.

Multi-access edge computing (MEC), formerly mobile edge computing, is an ETSI-defined network architecture concept that enables cloud computing capabilities and an IT service environment at the edge of the cellular network, and more generally at the edge of any network. Since MEC must 'cohabit' alongside network infrastructure, operators will be uniquely placed to deploy, operate and sell it (assuming mobile operators have not sold off their tower infrastructure, that is). The most obvious application is of a familiar variety: caching popular content to improve response times. The value inherent here is well known to app owners, and especially gaming platforms, and is something that operators should therefore be able to monetise. Historically, OTT players have been able to largely disregard the operators, given the ready availability of commoditised hosting facilities (and in view of the fact that end-users pay for their own bandwidth). MEC could now empower mobile operators, as they will own the relevant infrastructure and integration capabilities, and there will be genuine scarcity (and hence pricing power) in terms of the processing resource available—with the proviso that mobile operators have retained ownership of these tower assets.

Aside from MEC, one general problem for enterprises using cloud infrastructure today is achieving cloud portability, i.e. the ability to easily replace various underlying cloud services that an application uses with services from another cloud vendor. This is a key issue that tends to get overlooked by many organisations, with the inevitable result that organisations get tied into what may be a less economic or useful cloud service. Figure 3.1 lists the typical questions organisations need to be asking before embarking on the use of a cloud provider. This is not something that governments can do much about—it is very much a case of 'caveat emptor'.

Fig. 3.1 Cloud due diligence. Typical questions to ask a prospective cloud provider:

• Will our data, or data about our customers, be shared with third parties or across other services the cloud provider may offer?
• Will the cloud provider itself be honest and not peek into the data, or mine the data for analytics (whether anonymised or not)?
• Is all communication in transit encrypted? Is it appropriate to encrypt your data at rest? What 'key' management is in place?
• What are the data deletion and retention timescales? Does this include end-of-life destruction?
• Will the cloud provider delete all of the data securely if we decide to withdraw from the cloud in the future?
• How quickly will the cloud provider react if a security vulnerability is identified in their product or there is a breach?
• What are the timescales and costs for creating, suspending and deleting accounts?
• Does the security assessment comply with an appropriate industry code of practice or other quality standard?
• Can the cloud provider provide an appropriate third-party security assessment covering: (i) limiting access of Company Information to Authorized Persons; (ii) securing business facilities, data centers, paper files, servers, back-up systems and computing equipment, including, but not limited to, all mobile devices and other equipment with information storage capability; (iii) implementing network, device application, database and platform security; (iv) securing information transmission, storage and disposal; (v) implementing authentication and access controls within media, applications, operating systems and equipment; (vi) encrypting Highly-Sensitive Company Information stored on any mobile media; (vii) encrypting Highly-Sensitive Company Information transmitted over public or wireless networks; (viii) strictly segregating Company Information from information of the provider's other customers so that it is not commingled with any other types of information; (ix) implementing appropriate personnel security and integrity procedures and practices, including, but not limited to, conducting background checks consistent with applicable law; and (x) providing appropriate privacy and information security training to employees?



organisations need to be asking before embarking on the use of a cloud provider. This is not something that governments can do much about—it is very much the case of ‘caveat emptor’. Containerisation and micro services have also revolutionised cloud environments and architectures for some enterprises. At their core, containers break down applications and provide a flexible, portable platform to organise and distribute applications that are typically developed through micro services. This model is a departure from traditional, monolithic architectures that use virtual machines and physical servers to run the entire application ecosystem on one centralised machine. They are a form of operating system virtualisation. A single container might be used to run anything from a small micro service or software process to a larger application. Inside a container are all the necessary executables, binary code, libraries and configuration files. Compared to server or machine virtualisation approaches, containers do not contain operating system images.38 This makes them more lightweight and portable, with significantly less overhead. Something that is important for IOT applications and edge processing. Another more recent technology that is facilitating IOT is WebRTC. It has been standardised by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) and defines an application programming interface (API for short) in a web browser that enables native real-time communication for data, such as voice, video and messaging between multimedia terminals. Originally developed for peer-to-peer applications from browser to browser, WebRTC has been evolved for integrated utilisation in mobile applications. This means data can be transmitted via a direct peer-to-peer connection with the shortest possible latency period. Sensor data from a device, for instance, can be transmitted independently of a video or voice connection, something that is useful for IOT. A final policy consideration in the context of cloud infrastructure is data centre licensing. There are varying degrees of regulation of data centres. Many countries, including the USA and European countries do not impose licensing or additional regulatory conditions for data centre businesses, although horizontal regulations including environmental rules and data protection still apply. However, some countries do impose licensing restrictions for data centre hosting within the country. It is in the interest of countries to make the building of local data centre hosting businesses compelling. With increasing concerns with regard to security and data privacy, there is a move towards keeping local traffic local. That can only be possible

38 People sometimes confuse container technology with virtual machines (VMs) or server virtualisation technology. Although there are some basic similarities, containers are very different from VMs. Virtual machines run in a hypervisor environment where each virtual machine must include its own guest operating system, along with its related binaries, libraries and application files. This consumes a large amount of system resources and overhead, especially when multiple VMs are running on the same physical server, each with its own guest operating system. In contrast, each container shares the same host operating system or system kernel and is much lighter in size, often only megabytes. This often means a container might take just seconds to start (versus the gigabytes and minutes required for a typical VM).
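The start-up speed the footnote describes is easy to observe. A minimal sketch using the Python Docker SDK (docker-py), assuming a local Docker daemon and network access to pull the image on first run (which dominates the first timing):

    import time
    import docker  # the docker-py SDK

    client = docker.from_env()  # connect to the local Docker daemon

    start = time.time()
    # Run a throwaway Alpine container; with the image already cached
    # locally, start-up is typically well under a second.
    output = client.containers.run(
        "alpine:3.19", ["echo", "hello from a container"], remove=True)
    print(output.decode().strip(), f"({time.time() - start:.2f}s)")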


A final policy consideration in the context of cloud infrastructure is data centre licensing. There are varying degrees of regulation of data centres. Many countries, including the USA and European countries, do not impose licensing or additional regulatory conditions on data centre businesses, although horizontal regulations, including environmental rules and data protection, still apply. However, some countries do impose licensing restrictions for data centre hosting within the country. It is in the interest of countries to make the building of local data centre hosting businesses compelling. With increasing concerns about security and data privacy, there is a move towards keeping local traffic local. That can only be possible if businesses are given the opportunity to host data locally. Adding extra regulatory burdens onto local businesses only distorts the competitive market, when international data centre operators may be able to compete without such regulatory burdens.

3.18 Customer Registration Regulations Fit for the Digital World

Many governments have mandated compulsory Know Your Customer (KYC) requirements as part of the SIM card registration process. This is relevant for normal SIM cards, but should not need to be mandated for SIM cards used for IOT purposes. The primary driver behind SIM registration, and the stringent procedures that follow, is to enable the security apparatus within a country to track an individual's communications. In an M2M/IOT scenario, there are no communications that would need to be intercepted per se, so the very reasons for implementing strict SIM registration do not apply. In addition, number portability39 obligations should not apply to M2M/IOT, since these numbers are not relevant to machines and competition can be enhanced via other mechanisms such as embedded SIM (eSIM) and Over The Air (OTA) technologies.

The impact of the eSIM and OTA is yet to be fully realised. When customers no longer need to go to the local mobile operator's retail store to acquire a SIM card, but can choose an operator and their service plan from their phone, the relationship between the mobile operator and the customer will change. It is not clear who will own the relationship with the customer. It is quite conceivable that customers, given the ability, will have multiple software-based SIMs from different operators installed on their phone, and that a clever app on the phone will decide which operator and service plan to use for the service in use at any point in time (a toy illustration follows the footnote below). Why would the device manufacturer not offer this facility built into the device? That would mean the device manufacturer would effectively own the relationship with the customer, further reducing the role of the mobile operator. This is an interesting development that has the potential to seriously disrupt existing mobile operator business models. The impact of eSIM and software SIMs is only just beginning to be examined and will only become apparent over the next few years.

39 Number portability is a service allowing you to switch from one service provider to another while keeping your original mobile number.
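Purely as an illustration of the kind of on-device selection logic imagined above (the profiles, prices and rule are invented for the example):

    # Hypothetical eSIM profiles installed on a handset.
    PROFILES = [
        {"operator": "OperatorA", "price_per_gb": 2.0, "video_score": 3},
        {"operator": "OperatorB", "price_per_gb": 3.5, "video_score": 5},
    ]

    def choose_profile(service: str) -> dict:
        """Pick the best installed profile for the service being used right now."""
        if service == "video":
            # Favour network quality for streaming.
            return max(PROFILES, key=lambda p: p["video_score"])
        # Default: favour the cheapest data plan.
        return min(PROFILES, key=lambda p: p["price_per_gb"])

    print(choose_profile("video")["operator"])      # OperatorB
    print(choose_profile("messaging")["operator"])  # OperatorA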

3.19 Recalibration of Sector Taxation Policies

Many governments in developed and developing countries have seen the telecommunications industry as an easy target for generating revenue and filling holes in the public finances, through the imposition of sector-specific taxes. These taxes include import duties on mobile devices, sector-specific VAT, higher corporate taxes on profits, as well as various contributions that telecommunications operators are required to make, such as USO, R&D or education contributions. In many countries, high spectrum prices are also seen as a useful contribution to the state budget.

Whilst these taxes may be seen as useful short-term solutions for the finance ministry, the longer-term impact on the sector and the wider economy must not be overlooked. The UK government's ravenous approach to 3G spectrum pricing forced the telecommunications operators to pay incredible amounts of money, which had the longer-term effect of subduing investment in 4G rollout, with the country falling behind many others in its introduction, coverage and speed. The impact was felt not only by the telecommunications sector but by the wider economy, as businesses in nations that introduced 4G networks sooner enjoyed productivity gains that UK businesses were denied. The economic rationale for imposing sector-specific taxation must be considered thoroughly by each country, for the longer-term consequences can be dire.

3.20 Changing Operating Models

The ever-increasing challenges facing telecommunications operators, from endless investment cycles to intense competition with each other and with unregulated digital players, are having a damaging impact on their margins and cash flow. Telecommunications operators have had no choice but to consider asset sales as a means to raise much-needed cash. With financial fundamentals declining and the costs of maintaining passive fixed assets draining their pockets, it would seem an easy option. However, the key question is how strategic these assets are to the operator. Many believe capital-intensive, passive fixed infrastructure assets are no longer a strategic priority, given that they have limited influence on the offerings and service levels operators provide to their customers. By monetising these fixed assets (especially mobile sites), mobile operators can build financial muscle for further investments in what are perceived to be more value-yielding technologies. However, one must start questioning: if telecommunications operators are selling off their passive infrastructure to someone else and outsourcing the installation and maintenance of their networks to others, such as equipment manufacturers, then what is it that is strategic and core to their business? Many telecommunications operators have given long and hard thought to these issues; however, the short-term needs of the stock market mean most CEOs and


boards succumb to the pressures, knowing full well that the long-term damage to the business may be irreparable.

Turning to outsourcing as an operating model: mobile infrastructure is just not seen to be the differentiator it once was. Operators must demonstrate their commitment to reducing cost and increasing profitability in the near future (and simultaneously shareholder value). Until recently, the standard organisational model adopted by telecommunications operators was a vertically integrated one, where the access network (the last mile and base stations), core network (the switches) and backhaul (transmission) were all owned and managed internally. A lot of what was perceived as peripheral to the network, such as customer service, was outsourced. In fact, the telecommunications sector was the innovator and is today one of the two leading sectors for outsourcing these so-called 'back office' functions.

The integrated network model worked well in an age where the key differentiator for mobile operators was network coverage. Operators with deep pockets continued building their networks, providing quality coverage to an ever larger proportion of the population, hoping to capture market share. Gaining coverage and delivering network service was seen as the key core competence, and thus control over the strategic asset that gave this edge, the network, was regarded as imperative. Outsourcing was considered blasphemous and network control the gospel.

Once mass coverage was achieved, the fight was over 'blind spots'. Which operator had the most blind spots became one of the priority evaluation criteria used by end-users (certainly business users) in discounting an operator from the list of possible candidates. Operators built out their networks further and deeper to capture the tail end of market share. All the operators had streams of negotiators hitting the streets to secure land sites for their masts. Radio masts suddenly popped up and mushroomed everywhere, blighting the landscape and getting the general population up in arms about what it all meant in terms of harmful radiation and the price of their properties (the regulators kept a watchful eye from a distance).

The old model worked well: operators enjoyed a period of bliss, with growth, pretty good margins, good bonuses, and the status of darling of the stock market. However, it did not last. The sector suffered an economic shock, a significant part of it created by the greed of politicians wishing to be seen as the Robin Hood of the twenty-first century, taking from the rich telecommunications sector and giving to the poor public sector (e.g. the 3G spectrum auctions). The scramble to gain competitive advantage through ever-expanding coverage could not continue forever, certainly not in an era of 4G, where the investment requirements for coverage were an altogether different equation. Operators realised that investment in coverage provided minimal additional competitive benefit and could, in any case, be matched easily by competitors. At the same time, talk of convergence and content started making headlines. With the entry of virtual operators, happily reaping the benefits of investment made by the old-school operators, the game changed. Competition was no longer driven by coverage, but by how good an operator was in terms of distribution, branding, service design and bundling.


Given these evolving competitive dynamics, operators started examining different operating models and competitive strategies. With the network increasingly regarded as the underlying 'plumbing' for telecommunications services, all were asking whether they wanted to be a plumber or something else (you will have noticed that plumbers can actually earn good money, so it is not that you should not be a plumber, but it is a different trade and you may need different skills and scale to compete in that game). If you are anything other than a plumber, you need to ask whether it is still important to own, invest in and control the network in-house (which requires you to invest in your tools and keep them maintained).

Operators today face a similar set of challenges, as market saturation, slow uptake of new services and the economic downturn increase the pressure to cut costs and improve efficiency. Industry deregulation has led to intense competition and rising customer expectations. In addition, 4G and soon 5G have transformed customer expectations, while the rise in volumes of data services and the opening of the telecommunications value chain as a result of convergence have changed the competitive landscape and the types of competitive levers the old guard can pull. Put simply, the business changed. Whilst new technologies (5G and IOT) and service opportunities potentially offer operators new revenue streams to offset (to a limited degree) the declining margins of traditional services, they require significant investment in spectrum and infrastructure. To succeed in this environment, operators need to automate and streamline business processes, increase efficiency and enhance relationships with customers and partners.

Even as operators' revenues have held relatively steady, the long-term revenue outlook is less secure, and operators must keep looking for ways to free up capital for strategic investments in new technologies or to make acquisitions as they drive to build scope and scale. To that end, operators large and small are increasingly turning to outsourcing both network and field services operations. Outsourcing can (it is a 'can' with a big question mark) deliver those all-important cost savings, but at what cost? What does this mass outsourcing mean for operators' ability to compete in a world that requires them to compete with new digital players, where the business they are in may need to expand as IOT becomes prevalent, and where value may lie in integration and intelligence rather than just connectivity? Operators driven by the short-term needs of investors rather than the long-term health of the organisation may well be putting their future prospects in jeopardy by adopting these models and hollowing out their organisations.

Part II Data Capture and Distribution

[Part divider figure: the book's layered framework, from Data Connectivity (Telecommunications and Internet of Things) up through Data Capture and Distribution, Data Integrity, Control and Tokenization, Data Processing and AI, to Disruptive Applications.]

4 Data Capture and Distribution

The Internet of Things (IOT)1 takes sensor technology to the next level by connecting up all the sensors and communicating the data for tracking, analysis and response. From fridges to freight trucks, sensors can track how equipment is running, how it is being operated, and what weather and other external factors are affecting its operations. IOT-enabled developments such as self-driving cars have captured the popular imagination, and with fitness bands to monitor physical activity and Internet-connected devices to manage HVAC systems, appliances, entertainment and security systems, consumers are getting a glimpse of what the IOT-enabled future may bring. IOT will become the bridge between the physical, digital, cyber and virtual worlds.

This section details the key developments in IOT and associated technologies that are driving growth in new services as well as new business models. It explores the policy and strategic requirements for sustained growth and the nurturing of trust, including standards development, interoperability and security. The opportunities in the connected home, connected car, connected government, connected city, connected health, connected education, connected enterprise and connected agriculture/mining are briefly explored and sector-specific policy choices discussed.

1 Note there is not a commonly agreed definition of IOT or M2M. In this book, the terms are used largely interchangeably, but note IOT is typically a broader category that encompasses Machine to Machine, Machine to Person and Person to Person (which is typically enabled by smart phones). M2M is often used for telemetry, robotics, status tracking, road traffic control, logistic services and telemedicine. Wearables are included in the definition of the IOT.


Table 4.1 IOT enablers

Cheap sensors: Sensor prices had dropped to an average of 60 cents in 2014, from approximately US$1.30 ten years earlier, and are continuing to fall. For widespread adoption of IOT, the cost of basic hardware must continue to drop. Low-cost, low-power sensors are essential, and the price of microelectromechanical system sensors, which are used in smart phones, has dropped by 30 to 70 percent in the past 5 years.

Cheap bandwidth: The cost of bandwidth over fixed and particularly mobile networks has declined by a factor of nearly forty over the past 10 years.

Cheap processing: Processing costs have declined more than 60-fold over the past 10 years, enabling more devices to be not just connected, but smart enough to know what to do with all the new data they are generating or receiving.

Smart phones: Smart phones are now becoming the personal gateway to the IOT world, serving as a remote control or hub for the connected home, connected car, or health and fitness devices.

Big data: As IOT will by definition generate voluminous amounts of unstructured data, the availability of big data analytics has been a key enabler.

Cloud storage and computing: Cloud computing and storage pricing have reduced storage costs. The price of storing a gigabyte of data on a public cloud service fell from 25 cents in 2010 to 0.024 cents by late 2014 and continues to fall.a

a Alex Teu. (2014). Cloud storage is eating the world alive. TechCrunch.
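As a back-of-envelope check on the trends in Table 4.1, the implied compound annual decline rates can be computed from the quoted endpoints (a sketch; only the endpoints come from the table):

    # Implied compound annual decline rate between two price points.
    def annual_decline(start_price: float, end_price: float, years: int) -> float:
        return 1 - (end_price / start_price) ** (1 / years)

    # Sensors: ~US$1.30 to ~US$0.60 over roughly ten years.
    print(f"Sensors: {annual_decline(1.30, 0.60, 10):.0%} per year")    # ~7% p.a.
    # Cloud storage: 25 cents/GB (2010) to 0.024 cents/GB (late 2014).
    print(f"Storage: {annual_decline(0.25, 0.00024, 4):.0%} per year")  # ~82% p.a.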

4.1 The Rise of Internet of Things

IOT devices include everyday objects such as smart phones, tablets and wearables, and other machines such as vehicles, monitors and sensors equipped with Machine to Machine (M2M) communications allowing them to send and receive data. There is also ongoing work on implantables, injectables and ingestibles: smart devices that are inserted, injected or swallowed by patients as part of e-health initiatives, largely becoming possible thanks to advances in miniaturisation and, increasingly, nanotechnology. IOT devices are sensors on steroids, whose value is really only derived when they interact with other devices. The real power and potential of IOT derives from the fact that computing is rapidly becoming ubiquitous and interconnected, as microprocessors steadily get cheaper and more energy efficient and networks get faster. A number of significant technology changes have come together to enable the rise of the IOT; Table 4.1 lists these.2

The ability to sense the environment and to formulate and execute an action in response is central to all vertebrates and increasingly so to all things artificial. This is the driving force behind IOT. Today, inexpensive supercomputers the size of a credit card are being deployed in more devices, such as cars, drones, industrial machinery and buildings.

2 Goldman Sachs. (2014). IOT primer: The Internet of Things: Making sense of the next mega-trend.


Fig. 4.1 IOT applications. [The figure places the Internet of Things at the centre of eight application areas: Connected Education (access, e-education, remote delivery); Connected Health (e-health care, health monitoring and prevention, wellness); Connected Home (safety and security, entertainment, energy efficiency); Connected Car (safety and security, convenience, live navigation, infotainment); Connected City (smart meters, smart traffic, connected community); Connected Enterprise, including Industrial IOT (real-time analytics, connected workforce, smart processes, robotics); Connected Government (connected public administration, e-government, connected civil protection); and Connected Agriculture and Mining (agriculture, supply chains).]

As a result, cloud computing is effectively being extended to the network edge, i.e. to the devices where data are produced, consumed and now analysed. NVIDIA's TX2 is an example of such an edge supercomputer: it can process streaming video in real time and use AI-based image recognition to identify people or objects. The NVIDIA Tesla V100 graphics processing unit (GPU) can execute up to 15.7 trillion single-precision floating-point operations per second. It is this power that is driving today's digital disruption. We can now store and analyse all the data we generate, regardless of its source, format, frequency, or whether it is structured or unstructured. However, the significance of this big data phenomenon is less about the size of the data set than about its completeness and the absence of sampling errors.

Homes, businesses, governments and systems are increasingly deploying sensors that gather data which help make or inform intelligent decisions. These developments are enabling a safer, more sustainable, more comfortable, healthier and more economically viable world. Figure 4.1 shows the pervasiveness of IOT


applications across the eight primary areas where its impact is expected to be the greatest.

Whilst IOT enables automation, this per se is not new. It is the amorphous, pervasively connected, ad-hoc, distributed, easy-to-design, easy-to-deploy, easy-to-'mash-up' and massively commoditised nature of the sensing, computing and actuation enabled by IOT that makes it a new and explosive capability. Governments and citizens, in many ways, have the most to gain from IOT in the pursuit of comfort, convenience, safety, security and sustainability. Whether it is water quality or detecting leaky water pipes, traffic safety or emissions reduction, plant performance or equipment maintenance, sensors and intelligent actuation can have a significant impact.

As the Internet has evolved, value has consistently flowed to users, often in the form of consumer surplus, such as better products and services and greater convenience.3 IOT will hopefully follow the same pattern. It will change how business processes are executed, resulting in faster, more accurate and less expensive decision-making. It will change the way products are differentiated in the marketplace. We are likely to see a new level of individualisation of products and services enabled through IOT.

IOT's largest impact will, however, be on companies operating in asset-intensive industries. Approximately fifty percent of IOT spending will be accounted for by three industries: discrete manufacturing, transportation/logistics and utilities. Many of these companies are increasingly facing competition from powerful technology companies. Closely following these industries will be business-to-consumer (B2C), healthcare, process, and energy and natural resource companies.4

IDC estimates that between 2016 and 2020, corporate spending on new technologies will grow at thirteen percent CAGR to US$2.4 trillion per year.5 They claim that growth will be led by investments in IOT, which is estimated to contribute forty-two percent of total new technology spend in 2020 (~US$1.0 trillion). Investments in mobile/social media are expected to remain almost stable, sending their share of total investments down from thirty-five percent to twenty-five percent over time. Gartner had predicted that IOT spending in 2016 would exceed US$2.5 million per minute and projected one million IOT devices to be installed hourly by 2021.6 HP similarly estimates an eighteen percent compound annual growth rate in M2M connections, reaching 27 billion by 2024.7 Machina Research forecasts that IOT connected devices will reach 15 billion by 2020.8

3 McKinsey. (2011). Internet matters: The Net's sweeping impact on growth, jobs, and prosperity.
4 McKinsey. (2014). The Internet of Things: Sizing up the opportunity.
5 World Economic Forum. (2018). Digital Transformation Initiative: Maximizing the Return on Digital Investments.
6 Gartner. (2015). Top Strategic Predictions for 2016 and Beyond: The Future Is a Digital Thing.
7 Hewlett Packard Enterprise. (2016). Smart cities and the Internet of Things. HPE business white paper.
8 Machina Research. (2016).


Research firm Gartner estimated that there were 8.4 billion connected 'things' in 2017, up over thirty percent from 2016, and forecast over 20 billion by 2020,9 whilst Ericsson puts it at 28 billion connected devices by 2021.10 Gartner has also forecast that one in five cars will be connected by 2020, leading to more than 250 million networked vehicles on the roads worldwide. GSMA Intelligence estimates that the total number of IOT connections will grow from just over nine billion in 2018 to 25 billion by 2025, with the total IOT revenue opportunity worth US$1.1 trillion by 2025. A chorus of predictions by analysts such as McKinsey and Cisco echoes the same message: IOT will see astronomical growth in the next decade.

IOT will force a change to the strategies of almost all firms. Consider car rentals: the typical rental complex is usually near an airport and houses hundreds of cars ready for pick-up. The reason is simple: keys need to be distributed and collected, and mileage and fuel levels need to be recorded. With IOT, however, keys can be transferred electronically and mileage and fuel levels can be accessed remotely. Suddenly, cars can be positioned where customers want them: in the car park across the street from their apartment, or near their office block. This is the business model of Zipcar, which was essentially the first mass-market consumer-facing IOT company.11 IOT forces us to question the traditional concepts of products, services, user interfaces and capabilities.

The move to a new generation of 4G and 5G mobile connections will open further IOT possibilities, by turning everything from a TV to a toothbrush into a constantly monitoring and communicating smart device. Over ninety percent of the M2M market uses short-range, unlicensed connections (e.g. Wi-Fi and ZigBee), while the wide-area market is heavily reliant on mobile connectivity. Licensed cellular IOT connections (cellular M2M and licensed LPWA) are expected to grow from 1.1 billion in 2018 to 3.5 billion by 2025. Whilst estimates of the number of connected devices vary depending on which analyst one speaks to, what is clear is that the forecasts expect exponential growth. This explosion will enable and drive the digital transformation of business models and digital societies.

McKinsey claims that IOT has a total potential economic impact of US$3.9 trillion to US$11.1 trillion per year in 2025, equivalent to about eleven percent of the world economy in 2025 (including consumer surplus)12 (see Fig. 4.2). IOT technology has significant potential in urban public health and safety, which could have an economic impact of about US$700 billion per year in 2025.13 These applications include using video cameras for crime monitoring, improving emergency services and disaster response, and monitoring street lights and the structural health of buildings. Many cities already have security cameras.

9 Gartner Press Release, 7 February 2017.
10 Ericsson Mobility Report, November 2015.
11 http://www.zipcar.com
12 McKinsey. (2015). The Internet of Things: Mapping the value beyond the hype.
13 McKinsey. (2015). The Internet of Things: Mapping the value beyond the hype.

Fig. 4.2 IOT economic value (source: McKinsey & Co). [The figure shows the potential economic impact of IOT in 2025, including consumer surplus, of between US$3.9 trillion and US$11.1 trillion per year, with low and high estimates across application settings, including: operations optimisation, predictive maintenance, inventory optimisation, and health and safety; equipment maintenance and IOT-enabled R&D; logistics routing and autonomous vehicles; monitoring and managing illness and wellness; automated checkout, smart CRM and personalised in-store promotions; condition-based maintenance and reduced insurance premiums; public safety and health, traffic control and resource management; home energy management; and worker monitoring and training.]


IOT will enable these cameras and sensors to automatically detect unusual activities, such as someone leaving a bag unattended, and to trigger a rapid response. Such solutions are already in use in Glasgow, Scotland, and in Memphis, Tennessee, in the United States. Cities that have implemented such systems claim a 10% to 30% decrease in crime.14

4.2 The Emergence of New Business Models Due to IOT

IOT will enable, and in some cases force, new business models. For example, with the ability to monitor machines in use at customer sites, makers of industrial equipment can shift from selling capital goods to selling their products as services. This 'as-a-service' approach can establish for a supplier a more intimate relationship with customers that competitors would find difficult to disrupt. This is precisely what Uber and other autonomous vehicle manufacturers are attempting to do, moving away from selling cars to selling a service. When businesses move towards an 'as-a-service' model, it opens up additional value-creating opportunities. For instance, using sensors and predictive algorithms, smart thermostats can detect when no one is home and adjust the temperature to conserve energy. Over time, a smart thermostat could learn about usage patterns and adjust heating or cooling so that the home is at the right temperature when residents are due home (a toy sketch of this occupancy logic appears at the end of this section). Connected washers and dryers could get information about energy prices and delay cycles during peak energy consumption periods. These additional value-creating activities also open up opportunities to extract economic value from power companies through the intelligence now residing with the provider of the 'as-a-service' activity.

GE's wind farm deal with the global energy giant E.ON demonstrates how IOT is being used to change business models. In the past, as demand for power increased, GE would try to sell more turbines and associated equipment to power generation companies. In its partnership with E.ON, GE used E.ON's extensive operational data to run advanced analytics and simulations and came up with a different scenario: instead of increasing capacity by adding more wind turbine hardware, E.ON could meet demand with a relatively modest purchase of equipment to connect all the turbines through software that allows dynamic control and real-time analytics. GE creates value by extracting useful data from the sensors on its turbines and other wind energy equipment and using that information to optimise equipment performance, utilisation and maintenance. It captures that value by charging a percentage of the customer's incremental revenue from improved performance. So although GE sells less hardware, it has developed a mutually profitable long-term partnership.

A few years ago, very few Honeywell executives saw Google as a competitor. That changed in January 2014, when Google bought Nest, the digital thermostat and smoke detector company, for US$3.2 billion. The Nest thermostat creates value by digitising the entire home temperature control process, from fuel purchase to temperature setting to powering the heating, ventilation and air-conditioning system and

14 McKinsey. (2015). The Internet of Things: Mapping the value beyond the hype.


connecting it to Nest's cloud data services. The thermostat aggregates its data on real-time energy consumption and shares that data with utilities, which can improve their energy consumption forecasts and thus achieve greater efficiency. Nest can push cost data back to customers ('current demand is high, so the price you pay is going up—we will turn down your air conditioning for the next two hours'), reducing their energy bills—something that Nest doesn't currently do, but which may well be its next move as other connected thermostats enter the market. Nest captures value by charging two or three times the price of conventional thermostats, and it makes money from electric utilities on the basis of outcomes. Thus Nest will not only play in the US$3 billion global thermostat industry; it will help shape the US$6 trillion energy sector. It can also jump into other sectors by opening up its digital cloud platform to devices and services from other providers. For example, the platform now connects with advanced Whirlpool laundry systems to schedule wash and dry cycles during non-peak hours. It works with the wearable technology company Jawbone to detect when someone has just awakened and then dynamically adjust the home temperature. Over time, we may see it connect with home security and consumer electronics.

Many other technology giants are also attempting to catch the IOT wave. Microsoft's FarmBeats provides an end-to-end approach to enable data-driven farming. Information on soil temperature and moisture levels can help better predict the timing for planting seeds, so the farmer gets a more productive harvest. Microsoft claims that a farmer can use up to thirty percent less water for irrigation and forty-four percent less lime to control soil pH using their solution. One of the key issues for many rural farming communities is the lack of electricity and high-speed connectivity; creating digital solutions without addressing a fundamental issue like this would render any solution useless. The FarmBeats system is powered by solar panels. FarmBeats uses TV white space and also incorporates machine learning to create aerial maps of a farm from discrete sets of data taken by a farmer. This machine learning can use the maps to make predictions on soil temperature and moisture levels for the entire farm.
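The occupancy-aware setback logic described above for smart thermostats can be caricatured in a few lines (a toy sketch; the thresholds and inputs are invented for illustration):

    from datetime import datetime, timedelta

    AWAY_SETBACK_C = 4.0  # how far to relax the setpoint when the home looks empty

    def setpoint(base_c: float, last_motion: datetime,
                 expected_return: datetime, now: datetime) -> float:
        """Relax the setpoint when nobody seems to be home; restore it
        shortly before residents are due back so the home is comfortable."""
        away = (now - last_motion) > timedelta(hours=1)
        returning_soon = (expected_return - now) < timedelta(minutes=30)
        if away and not returning_soon:
            return base_c - AWAY_SETBACK_C  # conserve energy while away
        return base_c

A real product would, of course, learn the expected return time from usage patterns rather than take it as an input, which is exactly the learning behaviour described above.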

4.3 The Importance of IOT Interoperability

IOT devices communicate with IOT platforms, which are typically deployed within cloud infrastructure and serve as virtual representations of real things; for example, they offer services to retrieve raw readings generated by networked sensors, or they implement specific actions to trigger actuation functions on specific networked devices. IOT platforms are also responsible for the management of deployed things and serve as sources of raw data for data analytics and/or the provisioning of higher-level services offered to end users.

For this IOT ecosystem to work, standards need to be developed that allow heterogeneous IOT devices to communicate with the IOT platform and leverage


common software applications. Interoperability among IOT systems is required to capture some forty percent of the potential value from the IOT ecosystem.15

There are a number of different entities inputting to the standards development process, each with their own standards. The mobile sector is more unified in its standardisation and approach, but many of the other large users of IOT are too disorganised or disaggregated to push a unified standardisation. The Message Queuing Telemetry Transport (MQTT), an open OASIS16 and ISO standard (ISO/IEC PRF 20922), is a lightweight, publish-subscribe network protocol for transporting messages between devices that is becoming popular, especially with the use of Mosquitto (a free, self-hosted broker/server on both Linux and Windows) and MQTT clients (using Python, Java or JavaScript). It appears to be becoming a forerunner among IOT communication standards, allowing IOT devices to be communicated with through APIs17 (a minimal client sketch appears at the end of this section). The website ProgrammableWeb.com lists over twenty-two thousand APIs, many of which can be used for working with IOT devices.

Whilst the benefits of IOT are clearly apparent, for many firms this means a fundamental redesign of their business models and technology interfaces. Whilst we are at an early stage in the development of IOT, questions remain about the interoperability of IOT systems, meaning many firms face a perilous question of timing. The trade-offs between starting too early and building disconnected islands, or starting late and forgoing benefits, are difficult to negotiate.18 The uncertainties in the future development of IOT may be grouped into five categories:

• Architecture and standards: Scalable, future-proof and cost-effective architectural choices are essential to IOT's long-term success.19 Architectural choices made today may lock companies into long-term trajectories that can make or break IOT businesses. Reference architectures enable the development of standards with dependable interfaces and touch points, which in turn are necessary for a healthy vendor ecosystem, and enable best practices that ensure both widespread adoption and an ecosystem relatively free of performance or safety issues. Whilst work on these standards has started, there is as yet no universally accepted reference architecture.

• Security and privacy: When we think about IOT applications, we should differentiate between enterprise IOT and consumer IOT. Many of the significant security issues are concerned with consumer IOT, where the scale of the challenges is significant, whereas within the enterprise space,

15 McKinsey. (2015). The Internet of Things: Mapping the value beyond the hype.
16 The Organization for the Advancement of Structured Information Standards (OASIS) is a global nonprofit consortium that works on the development, convergence and adoption of open standards for security, IOT, energy, content technologies, emergency management and other areas.
17 Microsoft Azure IoT Hub uses MQTT as its main protocol for telemetry messages.
18 World Economic Forum. (2019). Realizing the Internet of Things: A Framework for Collective Action.
19 Gubbi, J., Buyya, R., Marusic, S., & Palaniswami, M. (2012). Internet of Things (IOT): A Vision, Architectural Elements, and Future Directions. The University of Melbourne.


these appear to be a little more manageable. For IOT applications, security and privacy must be designed and 'baked in' from the start; however, building in security has costs attached to it. Consideration must therefore be given to the need for end-to-end security: some IOT applications do not need, or cannot afford, that level of security. For IOT solutions that are more sensitive, you also need to think of security for the long term. How will advances in cyber-attacks and AI create further security threats that need to be addressed today? You need to think about device security and, more importantly, end-to-end security. Most enterprises that utilise IOT devices have to start with the mindset that the devices will not have any trusted security built in, so it is up to the enterprise to secure its network. To a large extent, security for enterprise solutions is being dealt with (some more successfully than others); however, we are far from such a situation when it comes to consumer IOT devices. It is an unavoidable feature of modern systems that as more connected devices are added, there are more and larger attack surfaces, data sets and data points at risk, and it is therefore not unusual to see a bias towards doing things smaller and not at scale, given these security concerns.

• Shared value creation: The anticipated benefits of new technologies often look rather bleak in the early days.20 In a Dell report, forty-eight percent and twenty-seven percent of survey respondents list budgetary constraints and unclear financial benefits, respectively, as barriers to IOT investment.21 Value created through IOT and automation must be shared: consumers must see the benefits of a smart meter installed by a utility rather than feeling as if they are the victims of pricing games. Buy-in across the value chain is necessary to ensure that IOT is seen as a tide that lifts all boats rather than a technology for changing the dynamics between players.

• Organisational development: IOT is not IT. Using and benefiting from IOT often requires a fundamental rethinking of the business. GE Aerospace famously claimed to be in the business of thrust, not engines. This was predicated in part on the idea of instrumenting engines and monitoring them remotely, an early example of IOT. Such rethinking requires three components: executive leadership, a realignment of incentives across the organisation and, perhaps most importantly, a massive upskilling of the organisation to learn the new design language.

• Ecosystem governance: The IOT ecosystem today suffers from too little governance at a holistic level and yet too many competing governance mechanisms at the level of individual standards. New technologies often create new ecosystems with little governance, either self-imposed or external. This cycle tragically repeats itself and usually involves competing technologies, competing vendors, varying public opinion and national and, increasingly, international regulators. A number of

20 The Economist. (1999). Cutting the cord. The Economist, 7 October 1999.
21 Dell and IDG Research Services. (2015). Internet of Things: A Data-Driven Future for Manufacturing.


questions in the IOT space, including standards, privacy, security, architectures, business cases, etc., urgently require collective attention in the form of the development of industry best practices and self-governance.

Open, voluntary standards, grounded in bottom-up and market-led approaches, are an important tool in the context of fast-developing technologies. Such standards and related guidelines are needed to maintain current levels of safety, ensure trust based on enhanced levels of digital security and privacy, improve energy and resource efficiency, and address the emerging social and organisational challenges brought about by digital disruption. The key to success lies in inclusive standards development, built on collaboration and co-operation among the many players that make up the standards ecosystem. Advanced governance frameworks, building upon both existing public and private sector led processes and new multi-stakeholder initiatives for the benefit of all, are necessary.

The diversity of potential IOT applications and device technologies may lead many to conclude that it would be detrimental to be tied, at an early stage of technological development, to one-size-fits-all standards or standards that might prove burdensome or conflicting. However, a certain level of standardisation and interoperability will be necessary to achieve a successful IOT ecosystem and, over time, technological maturity will help identify the most promising standardisation approaches. Today, many protocols and only limited interoperability exist across the disparate technologies and markets of the IOT, and many bodies, specialised in relatively narrow fields of technology, are striving to establish themselves as leaders in the space. This could result in a continued lack of interoperability among existing systems and would significantly increase complexity and cost.

Today, what we see are voluntary standards, which have helped create ecosystems that promote economies of scale and healthy competition. This is essential to help ensure that markets remain open, allowing consumers to have choice and new entrants to successfully enter markets. However, over time we need a more robust standardisation process, which may not necessarily mean a single interoperability standard, but designated interface standards that enable disparate systems to communicate through APIs. The most common protocols supported by cloud IOT platform services today are MQTT, AMQP and HTTP, over TLS-encrypted connections.

A fully functional digital ecosystem requires at least seamless data sharing between machines and other physical systems from different manufacturers. In general, digital systems can be made interoperable in two broad ways: by creating widely accepted interface standards that provide a common language for different systems on a data network (difficult, given the plethora of IOT applications in virtually all sectors and an almost equal number of standards cropping up), or through the use of translation or aggregation systems (for example, middleware that sits between an operating system and applications). It is the latter that is seen as the most appropriate technology to connect disparate IOT systems together and deliver a digital ecosystem where data can flow relatively freely between systems.
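As promised earlier, here is a minimal sketch of an MQTT telemetry publisher, using the open-source paho-mqtt Python client (1.x style API); the broker hostname and topic are illustrative assumptions:

    import json, time
    import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

    client = mqtt.Client(client_id="soil-sensor-01")
    client.tls_set()                            # TLS, as most cloud IOT platforms require
    client.connect("broker.example.com", 8883)  # hypothetical broker
    client.loop_start()

    reading = {"sensor": "soil-01", "temp_c": 21.4, "moisture": 0.32, "ts": time.time()}
    info = client.publish("farm/field1/soil", json.dumps(reading), qos=1)
    info.wait_for_publish()                     # block until the broker acknowledges

    client.loop_stop()
    client.disconnect()

A subscriber simply registers an on_message callback on the same topic; the broker decouples publisher and subscriber, which is what makes the publish-subscribe pattern attractive for fleets of constrained devices.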


Another reason why mandating APIs might be more appropriate comes from a competition policy perspective. Where participants within the digital ecosystem have significant market power, it is possible that they will seek to abuse such power by denying access to vital information required by other players in the ecosystem. It is therefore important that policy makers and regulators pay sufficient attention to ensuring that access to such 'bottleneck data' is provided to others, and that such access is provided in a form that can be readily used. Common standards for APIs may be required to ensure that the wider ecosystem benefits are realised and that competition across and within various parts of that value chain is enabled.

4.4 IOT Security and Trust Policies

Trust and usability will be very important success factors for IOT, whose security and privacy need to be addressed across all the IOT architecture layers and across domain applications. Security needs to be designed into IOT solutions from the concept phase and integrated at the hardware level, the firmware level, the software level and the service level. Security by design, akin to the privacy by design advocated by data protection authorities and the GDPR, will be necessary. IOT applications also need to embed mechanisms to continuously monitor security and stay ahead of the threats posed by interactions with other IOT applications and environments.

IOT security concerns are twofold. In the first instance, consumers and businesses are aware of the risks to their own networks and data posed by an unsecured network of IOT devices. The security of individual devices is often severely lacking when they come out of the box; IOT developers often prioritise getting their devices to market ahead of ensuring that they are stable and secure. The second security issue surrounding IOT is the harnessing of IOT devices, for their IP addresses, for use in large-scale distributed denial-of-service (DDoS) attacks. The largest DDoS incident ever recorded was orchestrated by malicious agents hacking into IOT devices (Internet-connected CCTV cameras) and co-ordinating those devices to target specific websites with enormous amounts of traffic, forcing them down.

IOT also heightens the risks to personal privacy and the confidentiality and integrity of organisational data. Some IOT consumer applications, particularly those related to health and wellness, collect sensitive personal data that consumers might not wish to share. With IOT applications, consumers may have no idea what kind of information is being acquired about them, and they have more than possible embarrassment to worry about from the misuse of private data: personal data about health, wellness and purchasing behaviour can affect employment, access to credit, and insurance rates. Figure 4.3 illustrates the need for trust in the IOT ecosystem.

Many governments have been late to address these security and trust concerns, which in turn has slowed the uptake of IOT. Countries are at different stages of policy and regulatory development when it comes to IOT; for instance:

Fig. 4.3 IOT trust. [The figure makes four points: the rapid explosion of technology, products and services is leaving very few people and businesses untouched; the Internet of Things (IoT) is changing the way we live, work and transact business; IoT is really a combination of products, technology, data, analytics, services and outcomes, part of a larger communication fabric in which data is generated, captured, transported, analysed and processed, insights derived, actions taken, and value captured and reported; and for such an intricate system to work with multiple hand-offs, trust is fundamental: trust in the safety, security and resiliency of data between the different participants and processing chains.]

• In Australia: the regulatory environment surrounding IOT security is limited to the obligations placed upon data controllers by the Privacy Act 1988, in particular the Australian Privacy Principles (APPs). For businesses, many of the APPs will be relevant in the context of operating an IOT network. While the use and management of data gathered using IOT networks will be subject to the governance of the APPs and the Privacy Act, these principles do not represent a dedicated approach to managing the unique security issues associated with IOT.

• In Hong Kong: the Office of the Privacy Commissioner for Personal Data (PCPD) encourages IOT device manufacturers to adopt the practice of 'privacy by design' and consumers to deploy only those IOT devices which have incorporated such design. The PCPD has provided non-binding guidance and recommendations.

• In the USA: IOT-specific legislation is limited; however, California recently introduced legislation that works to address the issue of IOT security. The California Consumer Privacy Act of 2018 (CCPA) is the first attempt by any state within the USA to impose security requirements on IOT devices. The new law requires manufacturers of IOT devices to install minimum reasonable security features in Internet-connected devices in a manner that is appropriate for the nature and function of the particular device. The Act requires manufacturers to provide authentication credentials that are unique to each device, or to install prompts for users to change the password before they can make use of the device.

• In the UK: there is long-standing data protection law, and the UK has now adopted the GDPR to further protect the privacy rights of individuals. The UK government announced in mid-2019 plans to give consumers better information on how secure their devices are. It opened a consultation on a new mandatory labelling scheme that would include smart TVs, toys and other connected appliances. The idea is that retailers will only be able to sell items with an IOT security label.


Given the possible economic benefits to governments and citizens alike, governments need to give considerable thought to how they can facilitate the adoption of IOT in a safe and secure manner. It is particularly important to take into consideration the forthcoming integration of new and emerging technologies on the IOT horizon, including those associated with artificial intelligence, robotics and wearables.

Governments need to develop holistic yet heterogeneous policies that facilitate the use of IOT in multiple sectors. For example, in agriculture, policy makers should support the development of standardised frameworks to provide transparency of supply and demand information to farmers. In healthcare, regulators should establish standards and guidelines which support the interoperability of health management systems and devices, and encourage standards for the sharing of electronic healthcare data, as well as enabling healthcare institutions to access national health data to conduct research into improving health outcomes across society. In transportation, regulators should ensure that data collection is allowed to enable real-time traffic management for cities and other transportation authorities, taking into account data privacy and anonymisation requirements. Just forcing IOT interoperability at the technical layer is not enough.

This is not to say that work has not started on developing IOT standards. The IOT Security Foundation, a collaborative, non-profit organisation, has been set up to develop compliance standards for IOT, including: ensuring the integrity of software; the use of hardware-rooted trust chains; authentication and integrity of data in the device, server and other systems, including time stamping of data; identification of compromised or malfunctioning devices; data compartmentalisation; system testing and calibration; building trust into device metadata; re-using existing good security architectures, including cyber security; and securely transferring data.22

22 The organisation has more than 100 members, from telecommunications operators, infrastructure and device manufacturers to a number of universities.
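Two of the items on that list, data integrity and time stamping, can be illustrated with a few lines of standard-library Python. This is a sketch of one common approach (a per-device symmetric key and an HMAC over each reading), not the Foundation's prescribed mechanism:

    import hmac, hashlib, json, time

    DEVICE_KEY = b"per-device secret provisioned at manufacture"  # illustrative only

    def sign_reading(reading: dict) -> dict:
        """Time-stamp a sensor reading and attach an integrity tag."""
        reading["ts"] = time.time()
        body = json.dumps(reading, sort_keys=True).encode()
        reading["mac"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
        return reading

    def verify_reading(message: dict) -> bool:
        """Recompute the tag server-side and compare in constant time."""
        received_mac = message.pop("mac")
        body = json.dumps(message, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(received_mac, expected)

    signed = sign_reading({"sensor": "bp-07", "systolic": 118})
    assert verify_reading(dict(signed))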

4.5 Wearable and Medical Sensors: Potential and Hype

Wearables, a name used to describe a wide range of devices, can be confusing to fathom. The term includes smart watches, fitness trackers, sensors in shoes, trainers, clothes and eyewear, and even medical-grade sensors that may be attached to the body or possibly consumed. The use cases, and indeed the potential, vary hugely. Nevertheless, wearables, by their body-worn nature, have the potential to transform sectors as diverse as healthcare, wellbeing, work safety, public safety and leisure. Fitness tracking is the biggest application today, and this opens the opportunity for watches capable of tracking blood pressure, glucose, pulse rate and other vital parameters, measured every few seconds over a long period of time, to be integrated into new kinds of healthcare applications.

Some of the smart wearables already on the market or in progress for the healthcare industry include: asthma monitoring and management devices; devices attached to a person's back for lower


back pain treatment; knee braces with electrodes inside the brace; reusable biosensors embedded in a disposable patch with ECG electrodes; wearable wireless ECG monitors; pills with ingestible sensor technology; smart devices helping people quit smoking; smart contact lenses measuring glucose levels in tears or helping restore the eye's natural autofocus; diabetes sensors placed in clothes; hospital ulcer monitors; and smart watches with medical-grade sensors. What this list shows is the sheer diversity and enormous potential IOT has within the healthcare sector. The market for wearable computing was expected to grow six-fold, from 46 million units in 2014 to 285 million units in 2018.23

Today's biggest challenges for wearable technology are the reticence to use wearables because of privacy or data protection concerns, and the fatigue users experience with devices that are not yet quite small or light enough. Healthcare IOT applications require a careful balance between access to and sharing of health information, versus security and privacy concerns. Some information might be acceptable to share with a physician, while other types of information would not be acceptable to divulge. For some applications, a paradigm shift in human behaviour is needed for patients to evolve, adapt and ultimately embrace what IOT technologies can provide.

For all the potential offered by wearables, the start has not been hugely inspiring for smart watches, the earliest incarnation of wearables. The smart watch market, a subset of wearables, has not seen the huge successes originally hoped for. By 2021, Gartner estimates, sales of smart watches will total nearly 81 million units, representing sixteen percent of total wearable device sales.24 Even the Apple Watch has struggled. The overall wearables market is growing at around twenty percent per annum,25 but it is the cheap and cheerful pedometers and basic fitness trackers that are making all the running. Fitness tracking is already a commodity, and one of which many users rapidly tire. The problem today is that wearables look very much like a solution looking for a problem, with many players in the ecosystem not actively supporting the field: Amazon, eBay and Google (Maps) have ceased supporting their apps on the Apple Watch. Outside of fitness tracking, wearables are little more than remote controls for a smart phone.

The real impetus for the wearables market will come from devices capable of delivering medical-grade health measurements (for blood pressure and blood glucose alone this would open up a market of some 1.4 billion users). Wearable technology has the potential to evolve and help solve some of healthcare's biggest problems.

23 Market research group Canalys.
24 Gartner Press Release, 24 August 2017, Egham, UK.
25 Shipments reached 33.9 million units in the fourth quarter of 2016, growing 16.9% year over year (Business Wire, 2017).


Fig. 4.4 Creating trust in devices. [The figure makes six points: (1) IOT devices, the sensory organs of the new smart world, are becoming ubiquitous, manufactured by many sources, from start-ups to established players, some for as little as a dollar; (2) can security be built into a device that retails for $1?; (3) can security patches be released to such devices?; (4) this may call for a certification process, product classification or labelling; in pursuing mass-market, affordable IOT applications we also run the risk of compromising on reliability and on the accuracy of data capture, data transport and local compute capabilities; (5) this may call for the need to enforce better reliability of devices, e.g. healthcare devices; (6) you do not want an IOT device or service monitoring your blood sugar to suddenly develop a fault and release insulin when you do not need it.]

In November 2017, the FDA approved for the first time a drug with a digital ingestion tracking system. The product, by Otsuka Pharmaceutical and Proteus Digital Health, called Abilify MyCite (aripiprazole tablets with sensor), has an ingestible sensor embedded in the pill which registers that the medication was taken. The product is approved for the treatment of schizophrenia, the acute treatment of manic and mixed episodes associated with bipolar I disorder, and for use as an add-on treatment for depression in adults. The system works by sending a message from the pill's sensor to a wearable patch, which transmits the information to a mobile application so that patients can track the ingestion of the medication on their smart phone. Patients can also permit their caregivers and physician to access the information through a web-based portal.26

The next wave of innovation in wearables is likely to come from the integration of key technologies (e.g. nanoelectronics, organic electronics, sensing, actuating, communicating, low-power computing, visualisation and embedded software) into intelligent systems that bring new functionality to clothes, fabrics, patches and other body-mounted devices. However, creating a seamless user experience is essential for the success of wearable applications. In the future, wearable devices will need to be more pervasive and more multifunctional (smart watches that open doors, start cars, etc.). And when we are talking about life and death, these systems will need to be ultra-reliable. Figure 4.4 illustrates the need for device trust.

From a policy perspective, it is important for government to mandate sufficient safety standards for medical-grade wearables; however, at the same time it should

26 World Economic Forum. (2019). Health and Healthcare in the Fourth Industrial Revolution. Global Future Council on the Future of Health and Healthcare 2016–2018.


not stifle innovation. The current regulations are very strict and really serve those with deep pockets. It is unlikely that innovation will thrive when the costs of gaining regulatory approvals are out of reach of most start-ups and small scale companies. A testing framework needs to be developed that allows for robust and rapid testing of new products at low cost. Such facilities, akin to the regulatory sandboxes used in the FinTech sector, need to be funded by government.

4.6 Energy Storage Innovations Driving IOT

As was highlighted earlier, many of the advances in IOT are driven by parallel advances in technologies that can function with much lower power consumption, as well as advances in energy storage that allow sensors to be placed in environments where they can run on battery power for 10 years or more without any intervention. Many of the IOT solutions today would not have a business case if the sensors required a permanent power source or required the battery to be replaced. It is the recognition that digital disruption cannot in many ways progress without advances in energy storage technologies that is driving investment in this area. A report by Mercom Capital Group indicates that smart grid, battery and storage, as well as energy efficiency companies raised up to US$1.7 billion in VC funding in 2015.27 A significant part of innovation in battery technology is being driven by firms seeking to capture the ‘electric car or electric bike’ market. Lithium-ion batteries represent a landmark technology that has made the current generation of electric vehicles possible. There has been a dramatic fall in the cost of the lithium-ion batteries that are used to power pure electric vehicles. (Hybrids favour less powerful, but cheaper, nickel metal hydride batteries.) Between 2010 and 2015, the price of lithium-ion batteries fell from US$1000 per kilowatt-hour of storage capacity to US$350, according to Bloomberg New Energy Finance, a market research firm. Since then they have tumbled to around US$200. However, the day of their demise, while still years in the future, is within view. Lithium-ion chemistries have a certain maximum energy density, dictated by the laws of physics, and today’s batteries are not so far from that theoretical maximum. Engineers have been pushing the limits of Li-ion technology for decades. Today’s best Li-ion cells can reach an energy density of about 300 watt-hours per kilogram, which is getting close to the theoretical maximum. Battery expert Naoaki Yabuuchi of Tokyo Denki University expects lithium-ion technology to reach its limits around the year 2020. Safety is also an issue. Li-ion batteries have to be handled carefully and the necessary safety features add complexity and cost to battery packs. A new chemistry that is safer could also prove to be cheaper.

27 www.metering.com
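To put the ten-year battery-life claim above in perspective, a rough worked example (the cell capacity and current draws are illustrative assumptions, not figures from the text): a long-life lithium AA cell stores roughly 2.4 Ah, so a duty-cycled sensor drawing an average of 25 µA could run for

$$ t = \frac{2.4\ \text{Ah}}{25\ \mu\text{A}} = \frac{2.4}{2.5\times 10^{-5}}\ \text{h} = 96{,}000\ \text{h} \approx 11\ \text{years}, $$

whereas a device drawing an average of 1 mA would drain the same cell in about 100 days. It is this duty-cycled, ultra-low-power operation, as much as battery chemistry itself, that makes decade-long unattended deployments feasible.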


Researchers around the world are working on ‘beyond lithium’ projects. One technology that has been getting a lot of attention from researchers is the solid-state battery, which uses a solid electrolyte instead of the liquid electrolyte used today. Solid-state batteries could theoretically have double the energy density of current batteries and last several times longer. They also use a non-flammable electrolyte—usually glass, polymer, or a combination—so they would eliminate the safety issues that plague Li-ion cells. Toyota has been focusing on solid-state and Li-air batteries that may allow electric cars to go 500 km on a single charge. That would be two to three times the energy density of today’s best Li-ion batteries. Panasonic, Tesla’s battery supplier, is also taking a hard look at solid-state technology. Tesla is driving costs down faster than any other maker. The firm standardised from the beginning on battery packs which incorporate thousands of little AA-type cells that are cheaper and easier to manufacture, while rivals adopted designs that employ smaller numbers of proprietary cell modules. Tesla has also invested heavily in battery-making facilities. When it is in full swing, the US$5 billion ‘Gigafactory 1’ that Tesla and its battery partner, Panasonic, have built in Nevada, USA will be able to produce 500,000 lithium-ion packs a year. By then, Tesla’s battery costs should be down to US$125 per kilowatt-hour. The question is whether fast-maturing lithium-ion or solid-state batteries can yield enough further improvement to replace the internal combustion engine when it is banished from the road, as Britain has promised to do by 2035. The only motoring option then will be the electric vehicle; the question is whether it will be powered by a battery or by some other source of electricity. To hedge their bets, Honda, Toyota, Hyundai, General Motors, Mercedes-Benz and Volkswagen have dusted down designs for hydrogen fuel-cells that were put aside when the powerful lithium-ion battery burst on the scene a quarter of a century ago. Three of them already have fuel-cell vehicles on the market: the Honda Clarity, the Toyota Mirai and the Hyundai Tucson FCEV. Mercedes is planning to introduce a plug-in hybrid SUV that combines a battery pack with a fuel-cell generator rather than an internal combustion engine (the Mercedes-Benz F-Cell). The fuel-cell’s main attraction is that it can propel an electric vehicle for 500 km or more on a tank full of fuel and then be refilled, like a conventional car, in a matter of minutes rather than the hours a battery vehicle needs. Proponents believe drivers will demand similar convenience when petrol and diesel cars are outlawed. Like its battery powered equivalent, a fuel-cell vehicle produces no nasty emissions; the only waste that comes out of its exhaust pipe is water vapour. A fuel-cell functions like electrolysis, only in reverse. Instead of using electricity to split water into hydrogen and oxygen, those two gases are combined to produce water and electricity. Compressed hydrogen from a tank is fed to the anode side of the fuel-cell, while oxygen from the air is pumped to the cathode side. Sandwiched between the two is a catalyst that splits the hydrogen atoms into electrons and protons and a special membrane that selectively blocks the electrons but lets the protons pass through to the cathode. The electrons are forced to follow an external circuit, where they deliver current to an electric motor, before arriving at the cathode. Here, they rejoin the protons and combine with oxygen to form water.
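The electrochemistry just described can be summarised in two textbook half-reactions (standard chemistry, added here for clarity rather than taken from the original text):

$$\text{Anode: } 2\mathrm{H_2} \rightarrow 4\mathrm{H^+} + 4e^- \qquad\qquad \text{Cathode: } \mathrm{O_2} + 4\mathrm{H^+} + 4e^- \rightarrow 2\mathrm{H_2O}$$

The net reaction, $2\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\mathrm{H_2O}$, releases electrical energy (plus some heat), which is exactly the electrolysis of water run backwards.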


Fuel-cell vehicles on the market today have fuel tanks considerably larger than those of their petrol equivalents and are far more expensive. Prospective owners face other drawbacks: fuel-cells do not take kindly to extreme conditions, especially temperatures below the freezing point of water, and they are not that durable. While internal combustion engines can provide anything up to 10,000 h of service, the best fuel-cells around today are good for little more than 4000 h. Nevertheless, hydrogen is the most abundant element in the universe, while the lithium used in batteries is a strategic material found mainly in Chile and China. On top of that, two-thirds of the world’s supply of cobalt, a critical ingredient of lithium-ion cells, comes from the Democratic Republic of Congo, a place with a long history of political instability. However, despite its abundance, hydrogen has to be manufactured. Because of its propensity to combine with other elements to form compounds like water and methane, hydrogen on Earth is rarely found on its own. It would seem fuel-cells are the way forward (for cars at least); however, further work is required to ensure the cells themselves can be produced cost-effectively and deliver a service life comparable to the combustion engine. These investments in battery technologies for electric cars are providing, and will continue to provide, windfall gains for the wider IOT ecosystem.

4.7 Connected Home

Smart home devices are beginning to penetrate households in many advanced economies. In the UK, penetration has reached almost a quarter of households, with smart speakers such as Amazon Echo and Google Home leading the way at more than ten percent penetration.28 Mobile phone Apps are now being used to remotely control connected home gadgets through the Internet, offering households a combination of physical comfort, security and potential energy saving benefits (a minimal sketch of how such remote control typically works appears at the end of this section). Households are installing and connecting a number of devices/applications including:

• Lighting: which can be made more efficient by switching lighting off through scheduling or by detecting whether anyone is home;
• Utility bill management: helping reduce energy costs by automatically managing demand to take advantage of pricing differentials;
• Smart thermostats: tracking residents’ activities and routines and controlling temperatures in response;
• Smoke detectors: which not only distinguish steam from smoke but have the ability to shut off stoves and other appliances that may be causing the problem;

28 YouGov Whitepaper. (2018). The dawn of the connected home: YouGov current and future analysis of the smart home market.


• Smart door locks and video surveillance: where alarm systems not only detect intrusions and call contact centres or law enforcement authorities, but also trigger cameras to capture photo or video footage;
• Other appliances: such as refrigerators which alert homeowners to power outages, while washers and dryers can start or stop automatically and send notifications if problems arise.

By 2016, the global connected home market was expected to reach US$235 billion, with the largest revenue generating segments being home security (US$110 billion), home entertainment (US$68 billion) and smart utilities (US$33 billion).29 While this market is already large, it is in its early stages of growth, with connectivity popping up everywhere, from smart refrigerators to smart carpets that can provide notification of unauthorised entry. As technology matures and adoption increases, we may see refrigerators that incorporate RFID sensors to track the quantity of produce inside, effectively creating shopping lists or even automatically ordering replenishments as they integrate with online shopping carts. As more and more devices become connected, we will see many of them integrated in an intelligent manner, e.g. motion sensors triggering a ceiling fan when someone walks into a room, lights that switch off when someone leaves, or face detection technologies opening door locks or changing the lighting settings depending on an individual’s preference. As connectivity spreads, however, we will also face more interoperability issues. I had such an experience with a smart socket that was incompatible with the operating frequencies of my home broadband wireless hub; I am sure many others will be equally frustrated. The latest trend is for connected devices to be integrated with smart assistants, such as Amazon Echo and Google Home, in an attempt to simplify the ecosystem, reduce configuration confusion and reduce possible interoperability issues.

Unfortunately, connected homes mean households are subject to what amounts to mass surveillance by those organisations that have access to their data. Each individual device may not raise much privacy concern, but combine these with other devices and suddenly these entities know when you are at home, what TV channels you watch, when you switch your kettle on, what your preferences are for lighting and heating, as well as what you look like. There are ongoing concerns about smart assistants which may be constantly listening and recording every conversation.

One sector that is looking into connected homes as a means of developing new offerings, albeit slowly, is the insurance sector, where IOT creates opportunities for insurers to lower costs and improve operational efficiency. Insurers can leverage data from connected home devices to assess and mitigate risk, increase pricing sophistication and offer new products/services, all of which help drive operational efficiency and top-line growth. Some of these services could be a managed service, where the insurance provider offers home monitoring solutions which could offset the insurance premium, or a subscription to appliance maintenance services, which can track the age, maintenance records and general condition of major appliances and systems. Whilst the opportunity certainly looks interesting, the grand old insurance companies with their old operating models and even older cultures will find it hard to adapt. More agile, nimble players are likely to take a significant part of the insurance industry pie, as they combine insurance IOT with artificial intelligence to offer much more personalised insurance products, better matching risks and therefore potentially offering a price advantage compared to the incumbent players. This is not to say that incumbents cannot effectively respond; they have a huge amount of customer data, case histories and financial muscle, such that they would be a formidable competitor, if they can just shrug off their old ways. That however will be challenging and take time. A sensible approach might be to create separate divisions with the autonomy to develop digitally driven insurance products, having access to the customer data, but without the governance straitjackets that accompany incumbent providers.

29 GSMA. (2011). The Role of Mobile in the Home of the Future: Vision of Smart Home.
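As promised above, here is a minimal sketch of how App-driven remote control of a home gadget commonly works under the hood, using the MQTT publish/subscribe protocol that many connected home platforms build on. The broker address, topic and payload are hypothetical assumptions; each vendor defines its own scheme.

```python
# Minimal sketch: switching a smart plug on via MQTT (paho-mqtt 1.x style).
# Broker, topic and payload names are illustrative assumptions.
import paho.mqtt.client as mqtt

BROKER = "broker.example.home"       # hypothetical hub or cloud broker
TOPIC = "home/livingroom/plug1/set"  # hypothetical command topic

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)  # default unencrypted MQTT port
client.publish(TOPIC, payload="ON", qos=1)  # at-least-once delivery
client.disconnect()
```

The device subscribes to its command topic and applies whatever state arrives, which is why a phone App, a smart assistant and an automation rule can all drive the same plug, and also why a single incompatibility (as with the smart socket anecdote above) breaks the whole chain.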

4.8 Connected Car

Connected cars will increasingly offer many safety and convenience benefits to consumers. For example, sensors on a car can notify drivers of dangerous road conditions, and software updates can occur wirelessly, obviating the need for consumers to visit the dealership. Connected cars can also offer ‘real-time’ vehicle diagnostics to drivers; Internet radio; navigation, weather and traffic information; automatic alerts to first responders when airbags are deployed; and smart phone control of the car. According to data from Machina Research, the number of factory-fitted connected vehicles worldwide is expected to reach 366 million by 2025. In the future, cars will drive themselves, something discussed in further detail in later chapters. In Europe, eCall regulation means that, as of March 2018, all new car models must support eCall—in the event of an accident, an eCall-equipped vehicle automatically calls the nearest emergency centre and sends the exact location of the crash site, allowing for a rapid response by emergency services. In the United States alone there are some 5.6 million car accidents per year. Property damage from collisions in the United States is estimated at US$277 billion per year, which is factored into insurance premiums.30 By using IOT technology to avoid low-speed collisions and trigger automatic braking, twenty-five percent of the annual property damage caused by low-speed collisions could be avoided.

30 Traffic Safety Facts 2012, US Department of Transportation, National Highway Traffic Safety Administration, 2012.

For connected cars to really take off, the mobile ecosystem needs to be further developed with 5G deployment, concerns over device trust need to be overcome and security fears need to be alleviated. Smart sensors need to be installed within roads and traffic control infrastructure needs to be overhauled, both massive, investment-intensive projects that may take decades to complete. It also requires robust sensors and actuators which are able to reliably deliver information to the control systems in all weather conditions and in a variety of circumstances. Procedures and standards will need to be developed to guarantee the reliability of such devices. Regulations in many countries will need to be developed or tweaked to protect the privacy of motorists whose movements can be tracked. New regulatory frameworks will be needed to enable autonomous vehicles to coexist with other cars and trucks. For example, it will need to be determined whether the owner or manufacturer of a self-driving vehicle (or another party) is liable if the vehicle injures a person or causes property damage.

There has also been a battle going on between the traditional vehicle manufacturers and the global digital giant Google over who is best placed to drive and grow digital revenues from connected cars today and autonomous vehicles in the future. This primarily involves a decision on whether Google’s services would be embedded within the car’s infotainment system or whether these would be accessed by pairing with the driver’s smart phone that uses Android—as has traditionally been the case. The Renault-Nissan-Mitsubishi Alliance signed a deal with Google that will see it give up seventy percent of its potential digital revenue and all of its digital differentiation in return for a higher probability that it will at least get something. From 2021, the alliance expects to debut vehicles where Google’s services such as Maps, Assistant and the Google Play app store are embedded in the infotainment unit directly. This is very different from Android Auto, where the Apps still run on the smart phone but are projected onto the infotainment screen and can be remotely controlled using the vehicle’s buttons and knobs. With Android Auto, Google has no access to the rest of the vehicle and indeed the data generated. This is why it moved from Android Auto to trying to convince manufacturers to deploy full Android in the head unit instead of using their own proprietary software. Google, it is claimed, is providing its software for ‘free’ to the alliance, but in return Google will most likely now have access to the sensor data that alliance vehicles generate. Legally, it will be Google software that is generating the data, to which the alliance has a perpetual licence, meaning that in reality it is Google’s data. This means that this data will be sent off to Google, enhancing its AI and data monetisation capabilities.
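To make the eCall mechanism above concrete, the sketch below shows the kind of ‘minimum set of data’ an eCall-equipped vehicle transmits to the emergency centre. The field names are simplified illustrations only; the normative format is defined by the European standard EN 15722.

```python
# Illustrative sketch of an eCall-style minimum set of data (MSD).
# Simplified field names; the real MSD schema is defined in EN 15722.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EcallMSD:
    timestamp: datetime         # moment the call was triggered
    latitude: float             # position of the crash site (WGS84)
    longitude: float
    heading_deg: float          # direction of travel, to identify carriageway
    vin: str                    # vehicle identification number
    automatic_activation: bool  # airbag-triggered vs. manually triggered
    occupants: int              # number of fastened seatbelts, if known

msd = EcallMSD(
    timestamp=datetime.now(timezone.utc),
    latitude=51.5074, longitude=-0.1278,
    heading_deg=92.0, vin="VF1EXAMPLE000000",
    automatic_activation=True, occupants=2,
)
print(msd)
```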

4.9 Connected Government

Connected government is not really about systems and specifications; it is about how our society develops and functions. In many senses, connected government is not a new concept. Many governments have been working on e-government for many years, trying to make the functioning of government more efficient and the interaction between society and government more streamlined.


Governments have been doing so not only for altruistic motives, but to reduce the costs of delivering public services through the use of information and communications technology. Connected government has the ability to streamline and bring about efficiencies in policing and justice by sharing digital evidence seamlessly; tracking suspected criminals through real-time monitoring; and speeding up justice through digital systems that link police, prosecutors and courts. Those looking beyond efficiency are looking at how connected government can deliver ‘citizen centricity’. A citizen-centric model of government is about making the expectations of citizens (and residents) the pre-eminent design principle in all government programs, solutions and initiatives. Doing so requires government to be seen, and to work, as a single enterprise, so that citizens feel they are being served by one organisation rather than a number of different public entities. Unfortunately, connected government has been a slow process, with most of the enabled services to date being pretty rudimentary. The complexity of government structures and processes has evolved over decades with different uncoordinated legacy systems. A key reason for the development of these disparate systems is a lack of coordination and consistency across different government departments. Connected government is not merely an automation of existing government services; it is a convergence of government and technology that has the potential to transform public administration and the citizen’s experience of it. This convergence requires interoperability between different systems, processes, software and networking technologies, combined with major administrative reform and the sharing of common processes and information. Delivering connected government requires a tremendous effort in robust standardisation across the whole of government. In delivering connected government services, there must be confidence in the trustworthiness of online services. This has to date been a key focus for many governments: how to have consistent authentication policies and technologies that enable both individuals and agencies to have confidence in the identity of each other when transacting over the Internet. Many countries have embarked on digital ID programs as a precursor to connected government. This is discussed further in subsequent chapters. Truly ‘connected government’ is likely to be many years away.

4.10 Connected City

A connected city (also called a smart city) is a physical or digital environment in which humans and technology enabled systems (and processes) interact in increasingly open, connected, coordinated and intelligent ecosystems. As technology becomes an integral part of our daily lives, community members or citizens of a city will expect more and more services to be delivered through technology. At the same time, local governments or city mayors are keen to ensure connected cities are more efficient, sustainable and more attractive to citizens/businesses, and to encourage economic growth. The hope is that a smart city will improve the availability and ease with which citizens can access services, drive greater efficiencies and generate greater economic growth from the integration of what were previously separate and diverse services (and data). There are a number of key elements needed to form a connected city, including smart society, smart buildings, smart energy, smart mobility, smart water management and smart government, to name a few. An ecosystem of smart city services consists of services built using the IOT stack. Figure 4.5 illustrates some of these smart applications. In the USA, the White House has recognised that ‘an emerging community of civic leaders, data scientists, technologists and companies are joining forces to build ‘Smart Cities’, communities that are building an infrastructure to continuously improve the collection, aggregation and use of data to improve the life of their residents’.31 Among many smart city definitions, the following two, specified by standardisation bodies, are widely used as they are both precise and comprehensive. The ITU-T Focus Group on Smart Sustainable Cities analysed nearly 100 definitions and used them to conclude: “A smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects”.32 Another definition, perhaps the most comprehensive, is given by ISO, where a smart city is recognised as “a new concept and a new model, which applies the new generation of information technologies, such as the Internet of things, cloud computing, big data and space/geographical information integration, to facilitate the planning, construction, management and smart services of cities”.33 A smart city is more than the simple usage of technology to facilitate city services. Smart cities require a completely new management approach to their infrastructure and services, as well as novel communication mechanisms with their citizens. In such a dynamic environment, pilot projects are typically used to test the benefits of digital transformation in practice. In its worldwide semi-annual smart cities spending guide, IDC forecasts worldwide spending on smart cities initiatives to reach US$95.8 billion in 2019, an increase of nearly eighteen percent over 2018, with Singapore, New York City, Tokyo and London each investing more than US$1 billion in smart city programs in the year. The smart cities use cases that will receive the most funding in 2019, according to the report, include fixed visual surveillance, advanced public transit, smart outdoor lighting, intelligent traffic management and connected back office.

31 The White House. (2015). Fact Sheet: Administration Announces New ‘Smart Cities’ Initiative to Help Communities Tackle Local Challenges and Improve City Services; Office of the Press Secretary: Washington, DC, USA.
32 International Telecommunications Union (ITU). (2014). Smart Sustainable Cities: An Analysis of Definitions; ITU: Geneva.
33 International Standards Organisation (ISO). (2014). Smart Cities Preliminary Report; ISO: Geneva.

[Fig. 4.5 Connected city applications. The figure groups applications into five domains. Water & Waste: smart water meters; distribution network control, leak detection and GIS; storm and flood management; consumption visualisation and behaviour change; new water purification methods. Transport: intelligent transportation and smart parking; tolling and congestion charging; public transport system information sharing; car and public transport sharing; low emission vehicles and new public transport. Energy: smart meters and demand response; electric vehicle infrastructure; distributed generation integration; consumption visualisation and behaviour change; renewable and co-generation. Social: digital-based systems/apps/services; green hospitals; social city apps; safety and security; remote social infrastructure (health, education); e-government. Buildings: energy efficient building design and refurbishment; P2P room sharing portals; smart consumer appliances and devices; home entertainment and communication; home, building and energy management systems.]


Together, these five use cases represent thirty-four percent of worldwide spending in the year. Strong investment growth in intelligent traffic management solutions will make it the third largest use case in 2020, overtaking smart outdoor lighting, according to the report. Unfortunately, smart cities to date have proved overhyped and not that viable: the data that was hoped to be monetised did not materialise and getting a return on investment has been difficult. Whilst there are many positive externalities from smart cities, these have not been/cannot be monetised. The result is that many smart cities are today much smaller in scale than had been anticipated. There are also many (too many) players in the smart city ecosystem, and much consolidation is expected as the market fails to deliver the returns expected. The perennial problem of standards and interoperability that plagues the IOT landscape more broadly is also evident in the smart city space. Over time, however, we are likely to see the adoption of three key smart city applications in many of the world’s cities:

• Public health and safety IOT applications, which could have an economic impact of about US$700 billion per year in 2025. These applications include using video cameras for crime monitoring, improving emergency services and disaster response and monitoring street lights and the structural health of buildings.34 Many cities already have security cameras and some have gunshot recognition sensors. IOT will enable these cameras and sensors to automatically detect unusual activities, such as someone leaving a bag unattended, and to trigger a rapid response. Such solutions are already in use in Glasgow, Scotland and in Memphis, Tennessee in the USA. Cities that have implemented such systems claim a ten to thirty percent decrease in crime.
• Transportation applications, ranging from traffic control systems to smart parking meters to autonomous vehicles. Adaptive traffic control uses real-time data to adjust the timing of traffic lights to improve traffic flow (a simple illustration appears after this list). A centralised control system collects data from sensors installed at intersections to monitor traffic flow. Based on volume, the system adjusts the length of red and green lights to ensure smooth flow. Abu Dhabi recently implemented such a system, which covers 125 main intersections in the city. The system can also give priority to buses, ambulances or emergency vehicles. For example, if a bus is five minutes behind schedule, traffic signals at the intersection are adjusted to prioritise passage for the bus. Use of adaptive traffic control has been shown to speed traffic flow by between five and twenty-five percent. Boston, for example, has established a data-sharing partnership with the navigation App Waze. HubCab tracks over 170 million taxi trips taken annually in the City of New York to understand taxi users’ travel patterns. In Cebu City, in the Philippines, the Open Traffic platform optimises the timings of traffic signals in peak hours using GPS data from the smart phones of drivers for the taxi service Grab.35
• Intelligent buildings, to tackle the approximately thirty percent of world energy consumed by buildings. Buildings carry significant weight in the energy challenge. IOT can be leveraged to make buildings smarter, improve the lives of occupants, make buildings more energy efficient and facilitate the management and maintenance of a building during its whole lifecycle. The real challenge is that this has to be done not only with newly constructed buildings, but also with the existing stock of buildings. Given that new construction represents less than two percent of the total installed base each year, the scale of the investment required for making existing buildings smart is gigantic. The market for systems in buildings will grow from US$111 billion in 2014 to US$182 billion in 2020, with physical security, lighting control and fire detection and safety representing the three largest segments.36

The expected potential of smart cities has been hindered by the lack of return on investment, as well as the lack of well defined and accepted standards. IOT applications are currently based on multiple architectures, technology standards and software platforms, which has led to a highly fragmented IOT landscape. This fragmentation directly affects the smart city area, which comprises several technological silos. IOT systems to date have been developed and deployed independently for smart homes, smart industrial automation, smart transport and smart buildings—typically using proprietary solutions that are not compatible with each other, many implemented as closed solutions. This fragmentation has been identified as one of the biggest challenges for the growth of the smart city ecosystem.37 Most IOT services are deployed as vertical, standalone solutions, restricted to an ecosystem created around a single IOT platform.38 Such vertical solutions share neither infrastructure nor the data generated, even though the same infrastructure and data could be used by and incorporated into different services. To overcome the traditional silo-based organisation of cities, where each utility provider is responsible for their own functionalities, interoperable solutions are needed. A key element in achieving interoperability in such environments, with many existing systems and heterogeneous devices and information models, is the use of a horizontal platform that allows loosely-coupled integration of the various vertical systems that make up a smart city.

34 McKinsey. (2015). The Internet of Things: Mapping the value beyond the hype.
35 World Economic Forum. (2017). Data Driven Cities: 20 Stories of Innovation. Prepared by the Global Future Council on Cities and Urbanization.
36 Memoori Report. (2014). The Internet of Things in Smart Buildings 2014 to 2020.
37 Vermesan, O., Friess, P. (2014). Internet of Things: From Research and Innovation to Market Deployment; River Publishers.
38 Soursos, S., Zarko, I., Zwickl, P., Gojmerac, I., Bianchi, G., Carrozzo, G. (2016). Towards the Cross-Domain Interoperability of IOT Platforms. In Proceedings of the European Conference on Networks and Communications (EuCNC) 2016, Athens, Greece, 27–30 June 2016.
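The adaptive signal timing described in the transportation bullet above can be illustrated with a toy proportional green-split, in which each approach to an intersection receives green time in proportion to its sensed demand. This is a deliberately simplified sketch of the concept, not any city’s actual control algorithm.

```python
# Toy adaptive traffic control: split a fixed cycle's green time across
# approaches in proportion to sensed vehicle counts, with a minimum green
# so light-demand approaches are never starved. Illustrative only.
MIN_GREEN_S = 10   # safety floor per approach, in seconds
CYCLE_S = 120      # total signal cycle length, in seconds

def green_splits(counts: dict[str, int]) -> dict[str, float]:
    approaches = list(counts)
    spare = CYCLE_S - MIN_GREEN_S * len(approaches)
    total = sum(counts.values()) or 1  # avoid division by zero at night
    return {a: MIN_GREEN_S + spare * counts[a] / total for a in approaches}

# Example: evening peak with heavy northbound demand
print(green_splits({"north": 42, "south": 18, "east": 7, "west": 5}))
# -> the northbound approach receives the largest share of the 120 s cycle
```

A real deployment layers much more on top (pedestrian phases, bus priority, coordination between adjacent intersections), but the core idea, sensed demand reshaping the signal plan in each cycle, is the same.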


However, beyond these interoperability issues, there are also wider organisational issues that hamper the development of smart cities. As an example, the split responsibility for roads and public transport between federal, national, regional and local levels limits the scope for coordination. This has limited the capacity of local authorities to install smart traffic management systems to address traffic flows—a task that is difficult when traffic signals are controlled at a federal level by a different government agency, whilst local roads are managed by local public authorities—each building their own empires and refusing to share information. These factors lead to fragmented cities, where data is stored and managed in silos by the different entities involved in delivery of the smart city. This fragmentation also leads to some interesting questions in terms of IOT data: who owns the data and who has the right to use it? Do citizens, the city administration, the IOT platform provider or someone else own the data? One way around this is to develop policies for what has been termed ‘open data’. The term open data in the context of smart cities generally refers to public policy that requires public agencies and their contractors to release key sets of government data (relating to many public activities of the agency) to the public for any use, or reuse, in an easily accessible manner. The value of releasing such (non-personal) data lies in the combination of this and other data from various sources. This value can be dramatically increased when the data is discoverable, actionable and available in standard formats for machine readability. The data is then usable by other public agencies, third parties and the general public for new services and for even richer insight into the performance of key areas like transport, energy, health and environment. McKinsey research39 found that open data could unlock some US$3 trillion in annual value by giving rise to innovative start-ups and giving established companies opportunities to develop products and services and increase the efficiency of their operations. Open data is traditionally in conflict with IPR/licensing approaches. A data ownership and licensing framework, one that reconciles traditional IPR frameworks, is required to truly enable a smart city.

4.11 Connected Health

The scale of the global healthcare industry is mind-boggling. In the United States alone, health expenditures accounted for US$3.2 trillion in 2015, nearly eighteen percent of the country’s total GDP, according to a 2016 report by the Obama Administration published in the journal ‘Health Affairs’. With a growth rate of around six percent per year, health spending is predicted to top twenty percent of GDP by 2025 in the USA.40

39 McKinsey. (2013). Open data: Unlocking innovation and performance with liquid information.
40 See: https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2016.1330


However, healthcare economists estimate that about one-fifth of healthcare spending is wasted on wrong or unnecessary treatments.41 Other estimates, including a 2012 report published by the Institute of Medicine (renamed the National Academy of Medicine in 2015), put healthcare waste closer to thirty percent, making it a trillion dollar problem, or opportunity.42 There have been a lot of improvements in technology, but the basic delivery of healthcare hasn’t changed much in the last 50 years. With advances in IOT, nanotechnologies and energy storage, digital health holds significant hope for early detection of symptoms and recognition of adverse changes in a patient’s health status, facilitating timely medical interventions, as well as monitoring and managing human health and fitness more broadly. Analysts estimate that 130 million consumers worldwide use fitness trackers today.43 With the rise of smart watches and other wearable devices, the number of connected fitness monitors is expected to exceed one billion units in 2025.44 The basic technology for fitness monitoring devices—sensors and low-power chips—is well established, and prices are expected to decline as volumes rise. The next leap in the technology is to ensure these devices are ‘medical grade’ and are authorised for use by the healthcare industry. There is also likely to be rapid growth in devices and systems for in-home monitoring of patients, particularly those with chronic conditions such as diabetes. These devices have already demonstrated potential to improve health outcomes and reduce health-care costs among patients with acute forms of chronic heart failure, diabetes and COPD (chronic obstructive pulmonary disease). In developing economies, the greatest benefits of IOT applications could be in expanding delivery of health-care services to the underserved. Whilst today some health monitors may be prohibitively expensive, such devices can be used to evaluate patients remotely at rural health clinics. As technology evolves, costs will continue to fall, enabling broader adoption and use by a wider range of patients. In advanced economies, some of the greatest benefits of IOT in health care will result from improving treatment of chronic diseases, unfortunately a significant and growing problem affecting vast swathes of the population.

Wearable technology has become a trendy consumer item in advanced economies. In the United States, an estimated 31 million wrist monitors and other wearable devices that track physical activity were sold in 2014, and the market was estimated to be growing by more than sixty percent per year.45 Fitness trackers range from simple pedometers that calculate the distance a runner covers during a morning jog (the majority) to more advanced devices that measure indicators such as heart rate, skin temperature and sleep.

41 OECD. (2017). Tackling Wasteful Spending on Health. OECD Publishing, Paris.
42 https://business.cornell.edu/hub/2017/08/18/the-digital-revolution-takes-on-healthcare/
43 IHS estimates that there were just under 130 million connected sports and fitness trackers in 2015.
44 Using smart phone and e-reader adoption patterns as proxies.
45 Consumer Electronics Association Press Release dated 6 June 2015. Industry revenues to reach all-time high of $223.2 billion in 2015.

Most of the more advanced applications monitor the physiological data of older people and individuals with chronic conditions; detect falls, epileptic seizures and heart attacks in older people and susceptible individuals; and then send alarm signals to caregivers or emergency response teams. In limited cases, sensing technology is used in combination with interactive gaming, virtual reality environments and augmented feedback systems to facilitate home based rehabilitation for physiotherapy. The scope of IOT enabled healthcare is simply unimaginable. Two of the biggest killers in the developed world are hypertension (blood pressure) and diabetes (blood sugar), which are currently diagnosed with expensive, one-off and, in many scenarios, invasive medical devices. Small, low-cost medical grade devices that can continually measure/diagnose these conditions in a non-invasive manner would be a welcome innovation. It is estimated that around seventy percent of healthcare costs in the developed world are lifestyle related, meaning that prevention of these two conditions would have a huge impact. However, for the most part, advice based on the data collected remains limited, such as the notification that the wearer has not logged the recommended 10,000 steps per day. As wearables technology evolves, it is likely to broaden in scope and impact. Beyond simply measuring activity, inexpensive wearables could measure a broad range of indicators (for example, blood oxygen, perspiration, blood glucose and calories consumed). Apps that offer advice based on user data could also expand to include recommendations for workouts (suggesting specific exercises) and diet tips based on measured food consumption.46 The UK Digital Strategy, published in 2017, claims the UK will invest £4.2 billion over the next 5 years in areas such as electronic patient records, apps and wearable devices, telehealth and assistive technologies. There are a number of important converging factors driving the evolution of the IOT healthcare market, including: the decreasing costs of sensors and the integration of miniaturised physiological sensors into consumer end devices and accessories; widespread smart phone penetration, with the phone’s ability to monitor healthcare metrics; the rise in home and remote patient monitoring, creating greater awareness of the benefits of such technologies; as well as a general increase in consumers’ health and fitness awareness. At the same time, most countries are experiencing a rising share of ageing population and increasing incidences of chronic and lifestyle diseases, growing the budget requirements for healthcare to levels that are becoming unsustainable. This is bringing more scrutiny to healthcare organisations and insurance providers, with the entry of big digital players such as Apple, Google, Microsoft and Amazon into the field promising massive potential for savings and improved transparency. However, for all the potential these advances offer, there are a number of significant factors inhibiting the growth of the market to date. These factors include:

46 Sofge, E. (2015). Google AI is training itself to count calories in food photos: Popular Science, 19 May 2015.


major privacy and security concerns for patients using these digital devices and services; the lack of clarity in health communication protocols and standards for the sector and ecosystem; major regulatory hurdles that need to be overcome by entities wishing to enter and promote their products and services; as well as the perennial problem of interoperability between different systems (a generic IOT problem today). For IOT applications to deliver their full potential impact in health and wellness, advancements in technology and changes in the health-care industry will be required. The maximum benefits from IOT will depend not only on falling costs, but also on investments in communications infrastructure to bring connectivity to underserved areas—primarily rural locations where high speed communication infrastructure is limited. The success of IOT in health care also depends on further refinements and expanding use of predictive analytics. Use of predictive analytics across health care has been limited to date. Sophisticated algorithms are needed for such applications, which can only be developed and refined when large amounts of training data are made available—not an easy task given privacy concerns, and something covered in more detail in subsequent chapters.

Perhaps more challenging than the technical issues are the structural changes necessary to create incentives for behaviour changes among health-care funders, providers and patients. Compensation models that pay for wellness and quality of care could create incentives for providers to use IOT to provide high-quality care while simultaneously managing costs. For example, a shift to payment regimes in which providers are paid for a course of treatment for a particular condition, rather than for every examination and procedure, would provide an incentive to use IOT technology to improve patient adherence and reduce the need for additional procedures or hospitalisations. Lower-cost systems and experimentation (sandboxes) are required to demonstrate the cost-effectiveness needed for insurance companies and public health systems to justify paying for IOT devices and digital health services.47 Furthermore, the success of IOT in improving human wellness requires people to change their behaviours. Will people use technology capable of helping patients live healthier lifestyles and follow prescribed treatments more closely? Achieving these behavioural changes will require innovations in financial models (for example, insurance companies reducing premiums for patients that exercise regularly) and psycho-social models (using relationships and behavioural ‘nudge’ psychology to encourage patients to change their habits). Policy makers may need to get involved in creating incentives to use IOT monitoring as part of routine care for specific types of patients. Government programs can also encourage use of IOT by providing incentives for specific outcomes, such as paying hospitals that are able to reduce the re-admission rate for heart disease patients. In developing economies, policy makers may need to allocate more resources to improve telecom infrastructure to enable IOT use, particularly in rural areas.

47 Henderson, C., Knapp, M., Fernández, J. L., Beecham, J., Hirani, S. P., Cartwright, M., ... Newman, S. P. (2013). Cost effectiveness of telehealth for patients with long term conditions (Whole Systems Demonstrator telehealth questionnaire study): Nested economic evaluation in a pragmatic, cluster randomised controlled trial. BMJ, 346(7902), [f1035].

Further benefits in IOT enabled healthcare will be seen when implantables, ingestibles and injectables, such as nanobots (which can clear arteries or help detect early-stage cancer), become available and trusted. These devices have not yet reached the clinical trial stage; however, when they are ready for widespread adoption, their impact could be as large as or larger than the current e-health technologies. As technology matures and wearables and sensors are further miniaturised, more novel applications for healthcare will be developed. Big health providers are firmly on the bandwagon. One Medical Group, a healthcare provider based in San Francisco, was founded on the premise that using telehealth and other IT tools would enable it to cut administrative costs by two-thirds relative to traditional insurers. Through its locally hosted video platform, Cloudbreak, which re-launched in its current form in 2015, it offers patients the ability to quickly access a certified medical interpreter by audio in 250 languages and by video in 60 languages. Cloudbreak has recently expanded beyond interpretation into telepsychiatry, telestroke and tele-ICU. The USA Food and Drug Administration recently updated the 2017 draft guidance to clarify the categories of clinical and decision support software subject to FDA oversight under the twenty-first Century Cures Act based on risk, and released final guidance on the types of software no longer considered medical devices under the Act’s amended definition of device.48 Certain digital health technologies—such as mobile Apps that are intended only for maintaining or encouraging a healthy lifestyle—generally fall outside the scope of the FDA’s regulation. Regulators will need to work closely with the industry to drive innovation whilst guaranteeing the safety of patients. Given that data is the raw material of diagnosis and analysis, it will be increasingly critical that the sensors that collect that data are reliable and accurate. Nowhere is this more true than in eHealth, where inaccurate data is useless at best and deadly at worst. Fitbit, Apple, Xiaomi, Garmin and others generally generate data of too low a quality, such that it can really only be used for recreational fitness and not for medical purposes. There are two ways in which the data that these sensors generate can be improved. First, improve the quality of the sensors themselves; e.g. if an optical heart rate sensor can gather data as reliably and as accurately as an electrocardiogram (ECG), this would have substantial ramifications for cardiac medicine. Second, create intelligent software that improves the quality of the data. Artificial intelligence, a key enabler for this, is discussed further later in the book.

48 Guidance for Industry and Food and Drug Administration Staff, issued on September 27, 2019. Changes to Existing Medical Software Policies Resulting from Section 3060 of the Twenty-First Century Cures Act. https://www.fda.gov/media/109622/download
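As an illustration of the ‘intelligent software’ route to better data quality described above, here is a toy example of cleaning a noisy optical heart-rate stream. Real products use far more sophisticated, often machine-learned, signal processing; the thresholds below are arbitrary illustrations, not clinical values.

```python
# Toy cleanup of a wrist-sensor heart-rate stream: drop physiologically
# implausible readings, then smooth motion-artifact spikes with a
# rolling median. Thresholds are illustrative, not clinical values.
from statistics import median

def clean_heart_rate(samples: list[float], window: int = 5) -> list[float]:
    plausible = [s for s in samples if 30 <= s <= 220]  # reject junk readings
    smoothed = []
    for i in range(len(plausible)):
        lo = max(0, i - window // 2)
        smoothed.append(median(plausible[lo:i + window // 2 + 1]))
    return smoothed

raw = [72, 74, 180, 73, 75, 8, 76, 74, 210, 75]  # spikes = motion artifacts
print(clean_heart_rate(raw))
```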


Apple has taken wearables a step closer to replacing medical devices. KardiaBand is a strap for the Apple Watch which incorporates a sensor capable of producing a full ECG. Critically, this accessory has been approved by the FDA, meaning that it is good enough to be a medical device producing medical data that can be relied on by a doctor. The Apple Watch App that comes with KardiaBand can use the heart rate sensor on the Apple Watch to detect abnormalities and recommend that the user record an ECG. Atrial fibrillation is a leading cause of stroke, and it is thought that sixty-six percent of strokes could be prevented with early detection. It is these signs that the KardiaBand App is looking for via the Apple Watch sensor, which can then be confirmed through the recording of an ECG. It does require a large metal plate to be present in the device’s strap—maybe a small price to pay. It demonstrates, however, that change in healthcare has started and will only accelerate—something that will be welcomed by health sufferers, but also potentially by finance ministers looking for healthcare budgets to be cut. The incumbent pharmaceutical sector and insurance industries are likely to be the only ones pushing against these advances.

4.12 Connected Education

Education is widely accepted to be a fundamental resource, both for individuals and societies. Indeed, in most countries basic education is perceived not only as a right, but also as a public duty. Despite progress in the long run, large inequalities remain, notably between sub-Saharan Africa and the rest of the world. In Burkina Faso, Niger and South Sudan, African countries at the bottom of the ranking, literacy rates are still below thirty percent.49 In the majority of developing countries, net enrolment rates are higher than attendance rates. This reflects the fact that many children who are officially enrolled do not regularly attend school. Education also remains an inaccessible right for millions of children around the world. More than 72 million children of primary education age are not in school and 759 million adults are illiterate and do not have the awareness necessary to improve both their living conditions and those of their children.50 A report by Vodafone showed that digital learning in Vodafone’s markets and territories could benefit over 85 million people by 2025 and create economic benefits to society of US$7.3 billion annually.51 Outside of providing access to education and reducing education inequality in developing countries, there is significant change afoot in developed markets.

49 https://ourworldindata.org/global-rise-of-education
50 https://www.humanium.org/en/right-to-education
51 Vodafone Connected Education, a report prepared by AD Little in 2017.

Connected education has the potential to make sharing high-quality education resources easy, enriching teaching methods and ultimately widening student access to education. The traditional classroom has stayed the same for the past few decades. The curriculum has changed, but the most advanced technology in most classrooms is often still a whiteboard. Bringing digital technologies to education is a trend that is already well under way, as traditional methods of learning and teaching are increasingly being replaced or enhanced by digital technologies. This paradigm shift is not only facilitated by the availability of digital technologies, but is also partly a response to the increasing costs of education and the accompanying inefficiencies. Digital content means easier sharing and more collaboration amongst teachers. Teachers can build on each other’s knowledge and move through the material more quickly because they are not wasting time writing letter by letter on a chalkboard or whiteboard. It also means a teacher can share lecture notes with students with very little effort. As education becomes connected, teachers and students will be equipped with augmented reality, allowing them to see, touch and hear situations whilst sitting in the classroom. Outside the classroom, many students are signing up to personalised, interest based courses. It is these trends that led to LinkedIn’s US$1.5 billion acquisition of tutorial and training site Lynda in April 2015, and then Microsoft acquiring LinkedIn for US$26.2 billion in mid-2016. Many leading universities are also making many of their leading courses available online. These universities use smart classrooms to record and broadcast interactive teaching resources. Real-time live broadcasts can be accessed anytime, anywhere, with playback making it easier to share high-quality resources. Massive Open Online Courses (MOOCs) are delivered online and accessible to all, some for free. Whilst MOOCs are created by universities, they are distributed by course providers such as Coursera, edX, FutureLearn and Udacity.

While governments must assume primary responsibility for the provision of education to all children, technology has the potential to overcome many divides, such as: overcoming low-income versus high-income differentials; reducing the rural versus urban divide; delivering education to crisis affected regions, including refugees; delivering education to out of school or out of reach children; improving the low ratio of female learners who may face cultural barriers to attending a school; reducing illiteracy in the population, including through the education of adults; and reducing barriers to education because of language. Access to education enabled by digital technology will reduce information asymmetries and equalise access to wider social networks and opportunities. The use of technology and education platforms will enable individualised learning experiences that accommodate the learner’s individual pace or preferred style of learning. The old formula of teaching the same material in the same way is giving way to more tailored education that is aligned with the learner’s preferences. This will increase engagement levels across the population and improve education outcomes for society. Soon virtual reality will enable teachers to be connected with students online, creating an immersive classroom experience that is not dependent on a fixed, physical location. This has the potential to slash travel costs and save time for schools, making teaching and learning more convenient, and maximising the


effective use of resources. Even within schools, technology will improve the safety and security of learners. Wristbands, smarter doors, smarter ID badges and even wearable devices will allow teachers and staff members to track the whereabouts of students. Devices may be able to track student attention levels and help identify areas where additional resource or teaching is needed, or drive changes to teaching methods. The use of gaming, virtual reality and augmented reality will help find new (and potentially more exciting) ways to help students latch onto specific lessons and assignments. Increased access to and use of technology is, however, a tool, not the end game. When answers to virtually any question can be searched online or asked of Google or Amazon digital assistants, the very nature of teaching and the education curriculum will need a major overhaul. Minerva Schools in San Francisco is an extreme example of this. No lectures, no labs, no football teams and no buildings. Everything is done remotely online. Professors have a time limit for talking, and proprietary software tracks which students have spoken in class, making participation compulsory. The curriculum is designed to ensure teachers do less teaching and are more inspiring. Whilst this will be covered in more detail in subsequent chapters, governments need to start redesigning their education systems and developing the skills and talent required for the future. Data scientist is the most sought-after job in the USA today, a trend that will only continue and be replicated in other nations. Nations will need an increasing number of researchers to improve computational behaviour and understanding. We’ll need even more software developers, mathematicians and ethical hackers. Governments must start developing policies that will fill all of these jobs, and ones still unseen.

4.13 Connected Enterprise

Some organisations are relying on IOT to make the journey to becoming intelligent enterprises, using data from across the organisation to make decisions. As a result, they are better able to delight their customers and operate more efficiently. For the intelligent enterprise to make good decisions, good data is needed. This data must come from the supply chain, operations, the manufacturing centre, distribution and logistics, and products in operation. Much of it will originate from IOT devices in these locations. Good data from these devices is made possible by great connectivity, which is foundational to any intelligent enterprise.

Whilst it is accepted that almost all organisations will benefit from improved connectivity, the primary beneficiary of IOT is expected to be the manufacturing sector, where it will help digitalise industrial production methods, including manufacturing plants (what some are calling smart factories) and the way in which people, equipment, machines, logistic systems and products communicate and co-operate with each other. Wireless IOT solutions are well suited to parts of manufacturing/plant facilities that are hard to reach, cost prohibitive to connect with wired installations or subject to frequent


modifications to workflows and layouts. In factory automation, wireless IOT can be used to control devices such as cranes and automated guided vehicles in material handling applications. It can also be used in process automation to connect instruments that enable plant operators to monitor and optimise processes in hazardous areas, which helps to ensure worker safety. The smart digital connectivity of supply chains and the availability of comprehensive, real-time information systems will enable companies to make manufacturing more efficient and flexible.

Within the retail sector, connectivity and IOT again have the potential to completely change the delivery of services to consumers. In the retail space, IOT can automate checkout by scanning the contents of shopping carts and automatically charging the sale to the customer's mobile payments account, allowing a consumer to walk out of a store without pausing, as is already possible in the Amazon Go store in Seattle. The system relies on cameras and sensors to track what shoppers remove from the shelves and what they put back. Shoppers are required to download the Amazon Go smart phone app and scan it as they pass through the gated turnstile. Further advances in RFID technology and cost reductions will in future allow the checkout turnstile to read the electronic tags on the items in the shopping cart; the checkout system will then tally up the prices of the items and relay the information to a wireless payment system that debits the customer's smart phone mobile account as the cart passes.

4.14 Connected Agriculture/Mining

Sustainable farming, producing more with less and with a smaller environmental footprint, is an inevitable demand which requires new technologies. IOT will be crucial for meeting the challenges of tomorrow's sustainable farming, supporting the implementation of smart/precision farming techniques aimed at improving the processes of food production. One crucial aspect that cannot be overlooked, and which traverses the whole agri-food value chain, is food safety and traceability through the whole lifecycle, from farm to fork. IOT applications have the potential, together with new technologies like Blockchain, to provide the food safety and traceability demanded by consumers (Blockchain is discussed in detail later in the book).

Agriculture faces severe uncertainties in terms of weather conditions, soil fertility, irrigation facilities etc. Precision farming can help make farming procedures more accurate and controlled. With the help of sensors, autonomous vehicles, control systems and robotics, connected devices can be used for soil moisture probes, variable rate irrigation optimisation and improving yields. IOT enabled sensors can also allow farmers to keep track of the location, well-being and health of their livestock. This data allows them to identify sick cattle, separate them from the herd and provide them with care.

A key technology finding important use in agriculture is the drone. Agricultural drones are finding major applications in large organised farms. Generally, there are two types of drones used in agriculture: ground-based


and aerial-based drones. Depending on the farm conditions, these drones are used for integrated geographic information system mapping, crop health assessment, irrigation, planting and soil analysis. Farmers can punch in the location of their farms, select an altitude or ground resolution and start the automated survey. This derives insights regarding plant counting, yield prediction, plant height measurement, canopy cover, fertiliser content, drainage and many other factors. These drones can be controlled simply through a laptop or a smart phone. The real-time data collected by drones allows farmers to take corrective or proactive measures to ensure optimal yield and risk management. These same technologies and approaches can be applied to mining operations.

Greenhouse farming is another area where IOT is likely to have a large impact. Greenhouse farming is all about controlling the environmental parameters through an automated control system to derive higher yields. Manual intervention is often expensive, time consuming, inaccurate and leads to energy loss. An IOT embedded smart greenhouse not only controls these environmental factors, but provides real-time monitoring.

Nevertheless, connectivity will be vital to making the best of IOT in agriculture and mining. Unfortunately, IOT intensive precision farming applications take place at food production facilities (farms, aquaculture) located in rural areas, where broadband coverage is still far too low; for example, only four percent of the rural European population had access to 4G connectivity in 2015, compared to twenty-five percent in towns and cities.52 Mining operations are probably located in even worse areas, where connectivity may be non-existent. As has been a theme so far, policy cannot be looked at in isolation. Improving IOT agri-farming and mining is less about agricultural or mining policies and more about telecommunications policies. It is this coherent policy making that has been, and will continue to be, the theme of this book.

52 European Commission. (2015). Why we need a Digital Single Market? Published on 12 March 2015.

Part III Data Integrity, Control and Tokenization

[Framework layers: Disruptive Applications; Data Processing and AI; Data Integrity, Control and Tokenization; Data Capture and Distribution; Data Connectivity: Telecommunications and Internet of Things]

5 Data Integrity, Control and Tokenization

Data is the lifeblood of the digital ecosystem and, more broadly, the economies of the future. Whilst there is increasing focus on data protection (e.g. the EU General Data Protection Regulation 'GDPR'), the broader role of data in enabling the development and growth of a convergent ecosystem has only recently been brought into sharp focus. Artificial intelligence, virtual reality, augmented reality, as well as many applications used in the digital environment, require vast amounts of data inputs (sometimes diverse sets of data that may be geographically dispersed) and vast computing resources to process them. When I talk about data, I'm referring to any form of information: text, speech, video, structured and unstructured data. The value of such data is generated by a number of factors, including: ease of access, ease of use, provenance of the data, its accuracy, its ability to be distributed to a wider stakeholder base and their ability to disseminate and aggregate the data as they see fit.

This section details the key developments in terms of data integrity, control and tokenization, looking specifically at e-commerce, digital identity, security, encryption and cryptography, digital signatures, Blockchains, smart contracts, token economics and data privacy, as these are the key technological developments needed to deliver the data integrity, trust and security that customers, policy makers and regulators demand.

5.1 Data Value and Tradability

Creating an ecosystem for data collection, distribution, usage and adequate value creation requires an ecosystem for the data itself. Just as public utilities such as power and telecommunications have significant positive externalities as more people use them, data could be seen in a similar context. In an environment where Artificial Intelligence (AI) can have profound impacts on society, it can be socially disadvantageous to have data monoliths that control access to such data and possibly use such data to create less desirable outcomes for society.


This should however not mean that data is freely given away. Creating a data ecosystem requires a framework in which individuals and organisations are provided both incentives and safeguards to provide access to data they may own or control. Governments are, or should be, increasingly looking at understanding the data value chain and facilitating data exchange platforms that enable the exchange of data between data providers and data acquirers. The data value chain must also be fully cognisant of the need to protect the privacy of personal data.

Today, an incredibly diverse set of data is being generated, and most if not all of it is being used by entities without the actual data subject being aware of its value. Some of the data is being created by individuals, and a lot of it is being created by firms, who, like individuals, cannot determine its value. As we move into the digitally disruptive era, more and more data will be generated through IOT, which itself may have value that makes it socially and economically good to share more widely. Without incentives for such data owners/controllers, the data may be kept in silos or hoarded, thereby reducing its true economic value.

Digital monopolies, including Facebook, Google and Amazon, get data from users for free. Every like, search and purchase feeds the learning system to further improve the algorithms, in turn bringing more customers and engagement. Digital monopolies are searching for more and more data to feed their algorithms. Facebook buying WhatsApp and Instagram, Google with self-driving cars and Google Home, and Amazon with Alexa Echos and Dots are all attempts to gather more and more data. Whilst monetisation of personal data from each individual may not amount to much, and may be too small for anyone to care about, given the economies of scale and data aggregation ability of these digital giants, they can extract considerable value from such data.

One way to solve this asymmetry of value is the creation of a marketplace for buying and selling data. A data exchange would create a business model for sharing data. Individuals can sell their data instead of giving it away for free; organisations can monetise it instead of letting it sit idle in databases; and machines can buy and sell data automatically to increase their utility. Individuals may be able to choose which of their particular data is sold, rented or leased for a time bound period. Given the non-rival nature of data (i.e. multiple uses of the same data do not deplete its intrinsic value, unlike physical goods), companies and researchers should be incentivised to contribute and tap into such data exchanges, with great potential for value creation. These exchanges would also address information asymmetries that particularly affect small and medium enterprises, which are often unaware of the potential of the data they hold or could have access to. Data exchanges could stimulate market competition more broadly in the data ecosystem by providing companies with the means to challenge multinationals' power in data markets. As this data commons grows with more datasets, it will attract more data buyers, creating powerful network effects. A platform based data exchange can also enforce data quality standards, ownership and usage rules, as well as paying sellers to rent or sell their data.


As data becomes more valuable (and capable of being monetised), advances in data processing techniques and other 'lean' and 'augmented' data learning techniques will accelerate. We may not need, for example, a whole fleet of autonomous cars on the road to generate data about how they will act; a few cars, plus sophisticated mathematics, will be sufficient (akin to what are sometimes called digital twins, discussed in subsequent chapters). The European Commission has recently published plans for an initiative on 'Industrial Data Spaces'1—modelled after the 'German Industrial Data Space'—as a platform seeking to deliver secure data exchange in business ecosystems on the basis of standards and by using collaborative governance models.2

Access to data will be covered in the next chapter in further detail. Within this chapter, we are focusing on data integrity, security, distribution and secured access. The mechanisms for guaranteeing data integrity and secure data distribution are only now beginning to be addressed. This is where Blockchains, and distributed ledgers more generally, come into play. Decentralisation technologies may be an important component in the creation of a true global public utility for data. The transparency, usage provenance, tamper-evidence and censorship resistance features of Blockchain technology can provide for such a global public utility. These will be discussed later in a little more detail.

5.2 Data/Cyber Security Risks

Digital security risk has traditionally been approached as a technical problem, but the changing nature and scale of digital security risk is driving countries to re-evaluate their strategies and policies in this area. In recent years, many governments and stakeholders have emphasised the importance of considering digital security risk from an economic and social perspective. The digital ecosystem is premised on the interconnection of various devices and networks, and supported by individuals having the confidence and trust to participate in the system. These individuals will be asked to trust the system with their financial transactions, their privacy and their very own identities. For them to do so, data security must be firmly front of mind for all ecosystem participants. Figure 5.1 illustrates the common security threats faced by consumers.

To understand the growth in Blockchains and cryptocurrencies, a basic understanding of security is useful. How data can be secured is a vast and complex field that has been covered by many academics, security firms and experts. Whilst I do not intend to cover this vast area here, it would be helpful to understand some of the common

1 European Commission. (2017). Legislative priorities for 2018–2019. Published on 14 December 2017.
2 Industrial Data Space Association. See http://www.industrialdataspace.org/en/


Fig. 5.1 Data privacy and security. Common threats include: POS intrusions; errors; insider misuse; denial of service attacks; crimeware; physical theft/loss; cyberespionage; card skimmers; and attacks on web applications

security vulnerabilities that allow unscrupulous individuals, or even nations, to gain unlawful access to data, and how advances in cryptography (on which Blockchain is built) and other techniques may alleviate these vulnerabilities.

A word of caution: no matter how good any particular technology is, its efficacy is limited if it is not effectively adopted and implemented by management teams and correctly used by employees, or by consumers who want out of the box solutions. So whilst robust systems and firewalls may provide comfort, if the softer aspects of human behaviour are not addressed, the security vulnerabilities remain. In fact, the situation may be worsened, because management is led to believe the purchase of expensive security systems has provided a certain level of security, when the reality is that this security infrastructure could be broken by the unintentional behaviour of an employee clicking an email link without much thought.

A common and unfortunate trend is the widening gap between the time (minutes or even seconds) it takes an attacker to compromise a system and exfiltrate data, and the time (weeks or months) it takes an organisation to discover a breach. Worse still, typically it is a security blogger who sounds the alarm, not the organisation's own security measures. Security threats are nevertheless only going to increase, driven by the increasing use of third party supply chain partners which you cannot control, the proliferation of IOT devices which may be difficult to secure, the use of public cloud computing over which you have less control, the use of mobility and mobile devices creating additional points of security vulnerability, as well as increasing levels of digitisation within firms, exposing additional data sets to potential security vulnerabilities.

Recent examples include security incidents which have interrupted essential utility services such as electricity distribution (e.g. the 2015 attack against the Ukraine power grid), destroyed physical industrial facilities (e.g. the 2014 attack against a German steel mill), interrupted the broadcasting of an international television channel for several hours (e.g. the 2015 attack against TV5 Monde), interrupted access to over 60 major online services (e.g. the 2016 'Dyn attack'), undermined large firms' reputation (e.g. Yahoo in 2016, Target Store in 2013) and functioning (e.g. Saudi Aramco


in 2012) and exposed the privacy of millions through numerous personal data breaches. Attackers have also affected the functioning of governments (e.g. the USA Office of Personnel Management in 2015) and stolen millions from financial institutions, including central banks (e.g. Bangladesh Bank in 2016).

Most large organisations impose security policies and systems to safeguard their organisation. The challenge is choosing a security framework that is both holistic and detailed, but which does not stifle the actual running of the business. A number of frameworks currently exist, including:

• NIST: the National Institute for Standards and Technology (NIST) Cybersecurity Framework3 and NIST Special Publication 800 series;4
• CIS: the Center for Internet Security (CIS) Controls for Effective Cyber Defense;5
• ISO: the International Organisation for Standardization (ISO) standards;6
• CoBIT: the Control Objectives for Information Technology (CoBIT) standards;7
• DHS: the Department of Homeland Security (DHS) cybersecurity evaluation toolkits;8
• HITRUST: the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF);9
• FFIEC: the Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool.10

Aside from the above, a number of other standards organisations (e.g. SANS) have published guidelines for cyber security and created accreditation processes to further assess risk management strategies. Given the near-certainty that a breach will occur even with the best cyber security policies and practices in place, post-breach loss management becomes important. In addition to the increase in cyber security standards, governments and industries are promoting, and in some cases mandating, cyber security insurance in order to mitigate the financial risks to companies and consumers. As a result, it is important for organisations to familiarise themselves with the cyber insurance market and decide how to integrate such policies into their overall Enterprise Risk Management (ERM) plans.

3 https://www.nist.gov/cyberframework
4 https://www.nist.gov/itl/nist-special-publication-800-series-general-information
5 https://www.cisecurity.org/controls
6 https://www.iso.org/home.html
7 http://www.isaca.org/Knowledge-Center/COBIT/Pages/Overview.aspx
8 https://www.us-cert.gov/forms/csetiso
9 https://hitrustalliance.net
10 https://www.ffiec.gov/cyberassessmenttool.htm

5.2.1 Protocol Vulnerabilities

Many of the attacks used today take advantage of inherent vulnerabilities designed into the Transmission Control Protocol (TCP)/Internet Protocol (IP) protocol suite. TCP/IP is a suite of protocols that can be used to connect dissimilar brands of computers and network devices. The largest TCP/IP network is the Internet. Protocols are nothing more than a set of formal rules or standards that are used as a basis for communication. Before network devices are able to exchange data, the devices must agree on the rules (protocol) that will govern a communication session. The TCP/IP suite has become widely adopted because it is an open protocol standard that can be implemented on any platform regardless of the manufacturer. In addition, it is independent of any physical network hardware. Most attacks use the functioning of TCP/IP to defeat the protocol.

The TCP/IP protocol suite can be mapped onto the wider OSI reference model. Forgive me if you believe I am teaching you to suck eggs, but a baseline understanding is vital to understand where security vulnerabilities emanate and how new cryptography techniques are seeking to work around these vulnerabilities. The OSI reference model is a seven layer model that was developed by the International Organisation for Standardization (ISO) in 1978. The OSI model is a framework for international standards. The OSI architecture is split into seven layers:

1. The physical layer: addresses the physical link and is concerned with the signal voltage, bit rate and duration (and the equivalent for wireless communication);
2. The data link layer: is concerned with the reliable transmission of data across a physical link; in other words, getting a signal from one end of a wire to the other end. It handles flow control and error correction;
3. The network layer: handles the routing of data and ensures that data is forwarded to the right destination;
4. The transport layer: provides end-to-end control and constructs the packets into which the data is placed to be transmitted or 'transported' across the logical circuit;
5. The session layer: handles the session set-up with another network node. It handles the initial handshake and negotiates the flow of information and termination of connections between nodes;
6. The presentation layer: handles the conversion of data from the session layer, so that it can be 'presented' to the application layer in a format that the application layer can understand;
7. The application layer: is the end-user interface. This includes interfaces such as browsers.

The TCP/IP protocol suite is a condensed version of the OSI reference model, consisting of the following four layers: (i) Application Layer (which combines the Application, Presentation and Session layers of the OSI model); (ii) Transport Layer; (iii) Internet Layer (which corresponds to the Network layer of the OSI model); and


(iv) Network Access Layer (which combines the Data Link and Physical layers of the OSI model).

The IP portion of TCP/IP is the connectionless network layer protocol. It is sometimes called an 'unreliable' protocol, meaning that IP does not establish an end-to-end connection before transmitting datagrams11 and contains no error detection process. IP operates across the network and data link layers of the OSI model and relies on the TCP protocol to ensure that the data reaches its destination correctly.

The TCP portion of TCP/IP comes into operation once a packet is delivered to the correct Internet address. In contrast to IP, which is a connectionless protocol, TCP is connection-oriented. It establishes a logical end-to-end connection between two communicating nodes or devices. TCP operates at the transport layer of the OSI model and provides a virtual circuit service between end-user applications, with reliable data transfer, which does not exist in the datagram-oriented IP. Software packages that follow the TCP standard run on each machine, establish a connection to each other and manage the communication exchanges. TCP provides the flow control, error detection and sequencing of the data; looks for responses; and takes the appropriate action to replace missing data blocks. The end-to-end connection is established through the exchange of control information; this exchange is called a three-way handshake.

Another important TCP/IP protocol is the User Datagram Protocol (UDP). Like TCP, UDP operates at the transport layer. The major difference between TCP and UDP is that UDP is a connectionless datagram protocol. UDP gives applications direct access to a datagram delivery service, like the service IP provides. This allows applications to exchange data with a minimum of protocol overhead. The UDP protocol is best suited to applications that transmit small amounts of data, where the process of creating connections and ensuring delivery may be greater than the work of simply retransmitting the data (such as many IOT devices). Another situation where UDP would be appropriate is when an application provides its own method of error checking and ensuring delivery of packets. Many security threats are carried out over the IP network, as well as by hijacking the ports a device uses to receive data packets.
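To make the TCP/UDP distinction concrete, here is a minimal sketch using Python's standard socket library (the host name and ports are illustrative placeholders, not taken from any particular system). The connect() call is what triggers TCP's three-way handshake before any application data flows, whilst UDP simply emits a datagram with no handshake at all.

import socket

# TCP: connection-oriented. connect() performs the three-way handshake
# (SYN, SYN-ACK, ACK); the kernel then provides sequencing,
# acknowledgement and retransmission of data.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
print(tcp.recv(1024))  # the response arrives reliably and in order
tcp.close()

# UDP: connectionless. sendto() just emits a single datagram; there is
# no handshake, no delivery guarantee and no ordering, so any
# reliability must be built into the application itself.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("example.com", 9))  # port 9 (discard); no reply expected
udp.close()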

5.2.2 Network and System Threats

Networks and systems face many types of threats, including viruses, worms, trojan horses, trap doors, spoofs, masquerades, replays, password cracking, social engineering, scanning, sniffing, war dialling, denial-of-service attacks and other protocol-based attacks. Some of these are briefly explained below:

11 The datagram is the packet format defined by IP.


• A virus is a parasitic program or fragment of code that cannot function independently. It is called a virus because, like its biological counterpart, it requires a 'host' to function. A virus is usually spread by executing an infected program or by sending an infected file to someone else. In general, virus scanning programs rely on recognising the 'signature' of known viruses, turning to a database of known virus signatures that they compare against scanning results.
• A worm is a self-contained and independent program that is usually designed to propagate or spawn itself on infected systems and to seek other systems via available networks.
• A trojan horse is a program or code fragment that hides inside a program and performs a disguised function. This can be accomplished by modifying the existing program or by simply replacing the existing program with a new one. The trojan horse program functions much the same way as the legitimate program, but usually it also performs some other function, such as recording sensitive information or providing a trap door.
• A trap door, or back door, is an undocumented way of gaining access to a system that is built into the system by its designer(s). It can also be a program that has been altered to allow someone to gain privileged access to a system or process. There have been numerous stories of vendors utilising trap doors in disputes with customers. The current security debate between the USA and China's Huawei centres primarily on the possibility of trap doors being inserted by Huawei, giving the Chinese government access to sensitive data traversing the networks delivered by Huawei.
• A logic bomb is a program or subsection of a program designed with malevolent intent. It is referred to as a logic bomb because the program is triggered when certain logical conditions are met. This type of attack is almost always perpetrated by an insider with privileged access to the network. The perpetrator could be a programmer or a vendor that supplies software.
• A port scanner is a program that listens to well-known port numbers to detect services running on a system that can be exploited to break into the system. Most intrusion detection software monitors for port scanning, which can trace the scan back to its origination point. However, some programs create a half-open connection that does not get logged and cannot therefore be easily detected.
• Spoofs cover a broad category of threats. In general terms, a spoof entails falsifying one's identity or masquerading as some other individual or entity to gain access to a system or network, or to gain information for some other unauthorised purpose. There are many different kinds of spoofs, including:
– IP Address Spoofing: takes advantage of systems and networks that rely on the IP address of the connecting system or device for authentication. Packet filtering routers are sometimes used to protect an internal network from an external un-trusted network. These routers only allow specified IP addresses to pass from the external network to the internal network. If a hacker is able to determine an IP address that is permitted access through the router, he or she can spoof the address on the external network to gain access to the internal network. The hacker in effect masquerades as someone else.


– Sequence Number Spoofing: TCP/IP network connections use sequence numbers. The sequence numbers are part of each transmission and are exchanged with each transaction. The sequence number is based upon each computer's internal clock, and the number is predictable because it is based on a set algorithm. By monitoring a network connection, a hacker can record the exchange of sequence numbers and predict the next set. With this information, a hacker can insert himself into the network connection and effectively take over the connection or insert misinformation. The best defence against sequence number spoofing is to encrypt the connection link.
– Session Hijacking: is similar to sequence number spoofing. In this process, a hacker takes over a connection session, usually between a client user and a server. This is generally done by gaining access to a router or some other network device acting as a gateway between the legitimate user and the server, and utilising IP spoofing. Since session hijacking usually requires the hacker to gain privileged access to a network device, the best defence is to properly secure all devices on the network, something that may not be possible in an IOT environment, where most devices are rather rudimentary.
– Domain Name Service (DNS) poisoning: DNS is a hierarchical name service used with TCP/IP hosts that is distributed and replicated on servers across the Internet. It is used on the Internet and on intranets for translating IP addresses into host names. Two common attacks are the man in the middle (MIM) attack and DNS poisoning. Redirects, a less common attack, rely on the manipulation of the domain name registry itself to redirect a URL. In a MIM attack, a hacker inserts himself or herself between a client program and a server on a network. By doing so, the hacker can intercept information entered by the client, such as credit card numbers, passwords and account information. The MIM attack, which is also sometimes called Web spoofing, is usually achieved by DNS poisoning or hyperlink spoofing.

Whilst all the above present worrying areas that need to be safeguarded, what we are seeing today are unscrupulous individuals, or nations, engaging in Advanced Persistent Threats (APTs) that may seek to utilise all of the above vulnerabilities. Telecom operators find themselves at the forefront of this battle, with operators such as Deutsche Telekom registering close to 1 million hacker attacks a day in 2014.12 Whilst organisations may deploy systems to monitor communication to their network ports, websites and other entry points, attackers have started to use stealthier communication that is less likely to be picked up by an organisation's monitoring systems. Such traffic is usually obfuscated or hidden through techniques that include:

12 https://www.euractiv.com/section/digital/news/one-million-cyber-attacks-a-day-on-deutsche-telekom-network/


• Encryption: with SSL, SSH (Secure Shell) or some other custom application. Proprietary encryption is also commonly used. For example, BitTorrent is known for its use of proprietary encryption and is a favourite attack tool, both for infection and for ongoing command and control;
• Circumvention: via proxies, remote desktop access tools (such as LogMeIn!, RDP and GoToMyPC), or by tunnelling applications within other (allowed) applications or protocols;
• Port evasion: using network anonymizers or port hopping to tunnel over open ports. For example, botnets are notorious for sending command and control instructions over IRC (Internet Relay Chat) on non-standard ports;
• Fast Flux (or Dynamic DNS): to proxy through multiple infected hosts, re-route traffic and make it extremely difficult for forensic teams to figure out where the traffic is really going.

Recent attacks are unfortunately taking advantage of the resiliency built into the Internet itself. A botnet can have multiple control servers distributed all over the world, with multiple fallback options. Bots can also potentially leverage other infected bots as communication channels, providing them with a near infinite number of communication paths to adapt to changing access options or update their code as needed.

Botnets are a key component of targeted, sophisticated and ongoing attacks. These types of botnets are very different from their larger brothers. Instead of attempting to infect large numbers of machines to launch malicious large-scale attacks, these smaller botnets aim to compromise specific high-value systems that can be used to further penetrate and intrude into the target network. In these cases, an infected machine can be used to gain access to protected systems and to establish a backdoor into the network in case any part of the intrusion is discovered. These types of threats are almost always undetectable by antivirus software. They represent one of the most dangerous threats to the enterprise, because they specifically target the organisation's most valuable information, such as research and development, intellectual property, strategic planning, financial data and customer information.

Unfortunately, it is also true that in order to maximise their accessibility and use, many applications are designed from the outset to circumvent traditional port-based firewalls by dynamically adjusting how they communicate. Many new business applications also use these same techniques to facilitate ease of operation while minimising disruptions for customers, partners and the organisation's own security and operations departments. For example, RPC (remote procedure calls) and SharePoint use port hopping because it is critical to how the protocol or application (respectively) functions, rather than as a means to evade detection or enhance accessibility.

5.3 Creating Trust in the Data Ecosystem

IOT devices present a variety of potential security risks that could be exploited. Although these risks exist with traditional computers and computer networks, they are heightened in the IOT environment. A lack of embedded security could enable intruders to access and misuse information collected and transmitted to or from the IOT device. For example, new smart televisions enable consumers to surf the Internet, make purchases and share photos. Any security vulnerabilities in these televisions could put the information stored on or transmitted through the television at risk. If smart televisions or other devices store sensitive financial account information, passwords and other types of information, unauthorised persons could exploit vulnerabilities to facilitate identity theft or fraud. Smart televisions have the processing power to potentially build in security protection; many other IOT devices simply do not.

As more connected devices are installed, the number of vulnerabilities is increasing exponentially. Security vulnerabilities in a particular device may facilitate attacks on another connected network or connected systems. A compromised IOT device could be used to launch a denial of service attack.13 Denial of service attacks are more effective the more devices the attacker has under his or her control.

Securing connected IOT devices is challenging, given that companies entering the IOT market generally may not have experience in dealing with security issues. The other factor behind these risks is that although some IOT devices are highly sophisticated, many others may be inexpensive and essentially disposable. In those cases, if a vulnerability were discovered after manufacture, it may be difficult or impossible to update the software or apply a patch.14 And if an update is available, many consumers may never hear about it. Companies, particularly those developing low-end devices, may lack economic incentives to provide ongoing support or software security updates at all, leaving consumers with unsupported or vulnerable devices shortly after purchase.15 Policy makers need to consider whether certification regimes are appropriate for IOT devices, and give consideration to changing their product liability regimes to place a higher degree of accountability

13 O'Brien, D. (2014). The internet of things: new threats emerge in a connected world. SYMANTEC (Jan. 21, 2014), describing a worm attacking IOT devices that connects them to a botnet for use in denial of service attacks.
14 See Article 29 Data Protection Working Party, Opinion 8/2014 on Recent Developments on the Internet of Things 9 (Sept. 16, 2014). For example, most of the sensors currently present on the market are not capable of establishing an encrypted link for communications, since the computing requirements will have an impact on a device limited by low-powered batteries.
15 Schneier, B. (2014). The internet of things is wildly insecure - and often unpatchable. WIRED (Jan. 6, 2014): the problem with this process is that no one entity has any incentive, expertise, or even ability to patch the software once it's shipped. The chip manufacturer is busy shipping the next version of the chip, and the [original device manufacturer] is busy upgrading its product to work with this next chip. Maintaining the older chips and products just isn't a priority.


on the device manufacturers, as well as other players involved in the ecosystem. A fine balance is needed. Placing very high liability risks on product manufacturers throughout the product's lifecycle, when such manufacturers have no control over how the product may be used or which other networks and devices it may be connected to, will only slow down innovation and entrepreneurship.

In a world of IOT, where devices could cost less than a dollar, it will be impossible to apply security at the device layer. Security must therefore be applied at the perimeter. Telecommunications operators may be uniquely placed to provide security at the perimeter. There are already established security practices within enterprises for securing their infrastructure, which could easily be extended to IOT infrastructure. Software and applications are available to harden systems, and monitoring systems are readily available. However, organisations need to go back to basics: segregating the network, limiting access to certain ports, enforcing authenticated access, using different virtual machines to limit potential exposure to the network, and using clusters for different critical functions. The concept of 'security by design' needs to be considered, accepting that IOT device manufacturers will not have adequate security built into their devices.

Another important need is for incident response teams, tools and communication processes to respond to what many accept are likely to be security incidents against even the most secure networks. To be prepared for mounting cyber threats, leaders and even boards need to be adequately educated, and resources provided to the security teams. However, the security team cannot be expected to deal with these issues in isolation. It is an organisation-wide effort, especially as many of the security incidents that have plagued organisations have been enabled by employees breaching security protocols rather than by vulnerabilities in the infrastructure itself. The security team needs to work with the rest of the organisation and be integrally involved in the launch of new products/services which could expose the organisation to new threats.

In addition to risks to security, privacy risks flowing from IOT are also clearly of concern. Some of these risks involve the direct collection of sensitive personal information, such as precise geolocation, financial account numbers or health information (risks already presented by traditional Internet and mobile commerce); others arise from the collection of personal information, habits, locations and physical conditions (of objects and/or people) over time. Even where direct personal data is not collected, ancillary data or metadata may be enough to infer personal information. Researchers are beginning to show that existing smart phone sensors can be used to infer a user's mood; stress levels; personality type; bipolar disorder; demographics (e.g., gender, marital status, job status, age); smoking habits; overall well-being; progression of Parkinson's disease; sleep patterns; happiness; levels of exercise; and types of physical activity or movement.16 Although a consumer may today use a

16 Peppet, S. (2015). Regulating the internet of things: first steps towards managing discrimination, privacy, security & consent. 93 TEX. L. REV. 85, 115–16.


Fig. 5.2 Creating trust with data: (1) some form of identity and authentication system (is someone impersonating you? identity as a service, e.g. Facebook); (2) customers being clear about what digital footprint organisations hold about them; (3) a grown-up relationship with customers (when customers realise the value of their data, how will organisations establish a credible transaction with them for usage of that valuable data?)

fitness tracker solely for wellness-related purposes, the data gathered by the device could be used in the future to price health or life insurance (lower risk premiums for regular exercisers) or to infer the user's suitability for credit or employment. The risks go beyond privacy and customer consent when customers are unaware of how personal and non-personal data is and will be used, by whom and for what purposes. By intercepting and analysing unencrypted data transmitted from a smart meter device, researchers in Germany were able to determine what television show an individual was watching.17 Security vulnerabilities in camera-equipped devices have also raised the spectre of spying in the home. These examples are just a small measure of what is to come.

In a digital ecosystem where access to personal and non-personal data can drive economic and societal benefits, an ecosystem of trust needs to be established, for consumers to share their information and for organisations to use such information in a transparent, ethical manner. Figure 5.2 illustrates the need for trust in the wider digital ecosystem.

Confidentiality, integrity and availability, also known as the 'CIA' triad, is a model designed to guide policies for information security within an organisation. The elements of the triad are considered the three most crucial components of security. Figure 5.3 details how these elements are linked to advances in encryption, hash functions, digital IDs and Blockchains. Understanding this triad and the underlying technologies is important to understand how and why Blockchain technology may be useful.

17 Carluccio, D., Brinkhaus, S. (2011). Presentation: smart hacking for privacy. 28th Chaos Communication Congress, Berlin, December 2011. Available at https://www.youtube.com/watch?v=YYe4SwQn2GE&feature=youtu.be


Fig. 5.3 The CIA triad: confidentiality includes data security and a set of rules (or methods) that limits access to information (encryption falls within this domain); integrity is the assurance that the information is trustworthy and accurate (hash functions and cryptography fall within this domain); availability is a guarantee of reliable access to the information by authorised people (digital IDs, signatures and Blockchains fall within this domain)

5.4 Data Confidentiality: Cryptography and Encryption

Encryption is a foundational technology for data security. Encryption is used to ensure that sensitive data and communication are not accessed by bad actors. However, encryption can also be used by bad actors to shield communications from law enforcement. In the last few years, encryption has become increasingly salient, as the private sector has invested in differentiation on the basis of user-friendly encryption to secure increasing amounts of personal sensitive data (e.g. what Apple has been attempting to do). On the other hand, some policy makers increasingly insist on weaker encryption to enable law enforcement officials to intercept communications18 (witness Apple's battles with USA law enforcement officials). Where the right balance lies is a fundamental question for society (where it actually has a choice), and this varies by country.

Cryptography is the science concerned with the study of secret communication, whilst encryption is a process or algorithm (known as a cipher) used to make information hidden or secret. A cryptosystem has (at least) five ingredients: Plaintext, Secret Key, Ciphertext, Encryption Algorithm and Decryption Algorithm. Remember, security usually depends on the secrecy of the key, not the secrecy of the algorithms.

There are essentially two types of encryption approaches: symmetric (private/secret) key and asymmetric (public) key encryption. (i) Symmetric algorithms (Secret Key Algorithms) use a single key for both encryption and decryption, and include DES, 3DES, CAST-128, BLOWFISH, IDEA, AES and RC6. (ii) Asymmetric algorithms (Public Key Algorithms) use a public key and a private

18 World Economic Forum. (2018). Cyber resilience playbook for public-private collaboration.


key pair, which are related to each other; examples include DH (Diffie-Hellman) and RSA, and these underpin protocols such as SSL and SSH. In both cases, the keys are generated by random number generators. The cryptographic keys must be established between the sender and the receiver, either manually or using trusted third party key management.

The types and strength of cryptosystems can be classified along three dimensions: (i) the type of operations used for transforming plaintext into ciphertext: either binary arithmetic (shifts, XORs, ANDs, etc., which is typical for conventional or symmetric encryption) or integer arithmetic, which is typical for public key or asymmetric encryption; (ii) the number of keys used: symmetric or conventional systems use a single key, whereas asymmetric or public key systems use two keys, one to encrypt and one to decrypt; and (iii) how the plaintext is processed: one bit at a time; a string of any length; or a block of bits. Encryption advances are being made on all three dimensions in an attempt to make the encryption process even more resilient.

An encryption scheme is generally considered computationally secure if the cost of breaking it (via brute force) exceeds the value of the encrypted information, or the time required to break it exceeds the useful lifetime of the encrypted information. In the case of the German Enigma encryption machine, for example, the machine's settings were changed daily, based on secret key lists distributed in advance. This meant that the code needed to be solved within the day.19

5.4.1 Symmetric Key Encryption

When most people think of encryption, it is symmetric key cryptosystems that they think of. Symmetric key, also referred to as private key or secret key, is based on a single key and algorithm being shared between the parties who are exchanging encrypted information. The same key both encrypts and decrypts messages. The strength of the scheme is largely dependent on the size of the key and on keeping it secret. Generally, the larger the key, the more secure the scheme. In addition, symmetric key encryption is relatively fast. The strength of the encryption algorithm also depends on how many rounds the plaintext block goes through in being transformed into the ciphertext block; the more rounds, the greater the security.

The main weakness of the system is that the key or algorithm has to be shared: you cannot share the key information over an unsecured network without compromising the key. As a result, private key cryptosystems are not well suited to spontaneous communication over open and unsecured networks. In addition, symmetric key encryption provides no process for authentication (ensuring the person claiming to be the recipient is the rightful recipient) and no proof against repudiation (preventing the recipient from denying that a message was sent or received, or that a file was accessed or altered, when in fact it was). This ability is particularly important in the context of e-commerce or creating a data commons platform, for instance.

19 The military Enigma had 158,962,555,217,826,360,000 different settings.


There were two main standardised symmetric systems up to 2005: DES and AES.

• Data Encryption Standard (DES): DES is a symmetric block cipher (shared secret key), with a key length of 56 bits and 16 rounds (shifts, XORs). Published as the USA Federal Information Processing Standards (FIPS) 46 standard in 1977, DES was officially withdrawn in 2005. The federal government originally developed DES encryption to provide cryptographic security for all government communications. Although its short key length of 56 bits, criticised from the beginning, makes it too insecure for most current applications, it was highly influential in the advancement of modern cryptography.
• Advanced Encryption Standard (AES): Published as the FIPS 197 standard in 2001, AES is a more mathematically efficient and elegant cryptographic algorithm, but its main strength rests in the option of various key lengths. AES allows you to choose a 128-bit, 192-bit or 256-bit key, making it exponentially stronger than the 56-bit key of DES. AES also offers vast speed improvements over DES.
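As a minimal illustration of the shared-key property, the sketch below uses the Fernet recipe from Python's third-party cryptography library (an assumed choice for illustration; Fernet uses AES under the hood, and any symmetric implementation would demonstrate the same point). One secret key both encrypts and decrypts, which is precisely why the key itself must somehow be exchanged securely.

from cryptography.fernet import Fernet

# One shared secret key both encrypts and decrypts -- the defining
# property (and the main weakness) of symmetric schemes: the key must
# reach the other party over some secure channel.
key = Fernet.generate_key()   # random key; Fernet uses AES-128 plus an HMAC
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet at dawn")
assert cipher.decrypt(ciphertext) == b"meet at dawn"

# Anyone without the key cannot decrypt; decrypting with a different
# key raises an InvalidToken exception rather than returning plaintext.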

5.4.2 Asymmetric Key Encryption/Public Key Cryptosystems

Asymmetric key encryption was invented between 1974 and 1978. There are two keys: private and public. Encryption is done with the public key, whilst decryption is done with the private key. In the context of digital signatures, signing is done with the private key, whilst verification is done with the public key. Whilst asymmetric key encryption is much more secure than symmetric key encryption, it requires much more computation and is much slower (approximately 1000 times slower), resulting in low data throughput. It is therefore commonly combined with conventional cryptography, e.g. to encrypt session keys, to balance the need for security with throughput.

With the aid of public key cryptography, it is possible to establish secure communications with any stranger when using a compatible software or hardware device. For example, if 'A' wishes to communicate in a secure manner with 'B', a stranger with whom 'A' has never communicated before, 'A' can give 'B' their public key. 'B' can encrypt its outgoing transmissions to 'A' with 'A's public key. 'A' can then decrypt the transmissions using their private key when they receive the encrypted message. Only 'A's private key can decrypt a message encrypted with 'A's public key. If 'B' transmits to 'A' his public key, then 'A' can transmit secure encrypted data back to 'B' that only 'B' can decrypt. It does not matter that they exchanged public keys on an unsecured network: knowing an individual's public key tells you nothing about his or her private key, and only an individual's private key can decrypt a message encrypted with his or her public key. The security breaks down if either of the parties' private keys is compromised. Unlike symmetric key cryptosystems, public key allows for secure spontaneous communication over an open network.
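The 'A' and 'B' exchange above can be sketched in a few lines with the same (assumed) Python cryptography library, here using RSA with OAEP padding. It is a toy illustration of the principle, not a complete secure-messaging implementation.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# 'A' generates a key pair and publishes the public half; the public key
# reveals nothing useful about the private key.
a_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
a_public = a_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 'B' encrypts with A's public key -- safe even if that key was obtained
# over an open, unsecured network.
ciphertext = a_public.encrypt(b"hello A", oaep)

# Only A's private key can recover the message.
assert a_private.decrypt(ciphertext, oaep) == b"hello A"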


Three public key algorithms are in wide use today, Diffie-Hellman, RSA and the Digital Signature Algorithm (DSA), with advances being made in a fourth, Elliptic Curve Cryptography (ECC).

• Diffie-Hellman: The Diffie-Hellman algorithm was developed by Whitfield Diffie and Martin Hellman at Stanford University. It was the first usable public key algorithm. Diffie-Hellman is based on the difficulty of computing discrete logarithms. A toy worked example follows this list.
• Rivest, Shamir, Adelman (RSA): The RSA public key algorithm was developed by Ron Rivest, Adi Shamir and Len Adelman at MIT. RSA multiplies large prime numbers together to generate keys. Its strength lies in the fact that it is extremely difficult to factor the product of large prime numbers. This algorithm is the one most often associated with public key encryption. The RSA algorithm also provides digital signature capabilities. RSA keys are used in SSL to set up sessions and with privacy-enhanced mail (PEM) and pretty good privacy (PGP).20
• Digital Signature Algorithm: DSA was developed as part of the Digital Signature Standard. Unlike the Diffie-Hellman and RSA algorithms, DSA is not used for encryption but for digital signatures.
• Elliptic Curve Cryptography: Another up and coming development in cryptography is elliptic-curve cryptography (ECC). ECC's strength comes from the fact that it is computationally very difficult to solve the elliptic curve discrete logarithm problem. The appeal of ECC algorithms is that they hold the possibility of offering security comparable to the RSA algorithms using smaller keys. Smaller keys mean that less computation is required. Less time and CPU translates into less cost associated with using these algorithms. As a result, interest in these algorithms is growing.

It is important to remember that as computing power increases and becomes less expensive, cryptographic key sizes will have to increase to ensure the same levels of security.
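To see why Diffie-Hellman works over an open network, the toy sketch below runs the exchange in plain Python with deliberately tiny numbers (real deployments use 2048-bit moduli or elliptic curves; the values here are purely illustrative). An eavesdropper sees p, g and the two exchanged values, but recovering the private exponents from them is the discrete logarithm problem on which the scheme rests.

# Toy Diffie-Hellman key exchange with tiny illustrative numbers.
p, g = 23, 5              # public parameters: prime modulus and generator

a = 6                     # A's private value (never transmitted)
b = 15                    # B's private value (never transmitted)

A = pow(g, a, p)          # A sends g^a mod p over the open network
B = pow(g, b, p)          # B sends g^b mod p over the open network

shared_a = pow(B, a, p)   # A computes (g^b)^a mod p
shared_b = pow(A, b, p)   # B computes (g^a)^b mod p
assert shared_a == shared_b  # both now hold the same secret (here: 2)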

5.5 Data Integrity: Hash Functions

To attain a high level of confidence in the integrity of a message or data, a process must be put in place to prevent or detect alteration during transit. One technique employed is called a hash function, which helps identify if a message or file has been altered in any way during transit. A hash function takes a message of any length and computes a product value of fixed length. The product is referred to as a 'hash value'. The length of the original message does not alter the length of the hash value.

20 Pretty Good Privacy (PGP) is an encryption program that provides cryptographic privacy and authentication for data communication.


Using the actual message or file, a hash function computes a hash value, which is a cryptographic checksum of the message. This checksum can be thought of as a fingerprint for that message. The hash value can be used to determine if the message or file has been altered since the value was originally computed. Using e-mail as an example, the hash value for a message is computed at both the sending and receiving ends. If the message is modified in any way during transit, the hash value computed at the receiving end will not match the value computed at the sending end, and you will know that the email has been compromised during transit.

A number of hashing techniques have been developed over time; some of the important ones include:

• MD4 was developed by Ron Rivest. MD4 is a one-way hash function that takes a message of variable length and produces a 128-bit hash value. MD4 has however been proven to have weaknesses.
• MD5 was also created by Ron Rivest, as an improvement on MD4. Like MD4, MD5 creates a unique 128-bit message digest value derived from the contents of a message or file.
• The Secure Hash Algorithms (SHA) are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a USA Federal Information Processing Standard. SHA-0 is the original version of the 160-bit hash function, published in 1993 under the name 'SHA'. It was withdrawn shortly after publication due to an undisclosed 'significant flaw' and replaced by the slightly revised version SHA-1, a one-way hash algorithm used to create digital signatures. SHA-1 is similar to the MD4 and MD5 algorithms but slightly slower, although it is reported to be more secure. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010. SHA-2 includes two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. SHA-3 is a hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, but its internal structure differs significantly from the rest of the SHA family. SHA-3 was released by NIST in August 2015 as a subset of the broader cryptographic primitive family 'Keccak'.21 SHA-3 uses a concept called the sponge construction and can be faster than SHA-1 and SHA-2, particularly in hardware.
• RIPEMD is a hash function that was developed through the European Community's project RIPE.

Blockchain technology incorporates hashing as a key component of its operation. Each Blockchain protocol can decide which hashing algorithm to utilise. For instance, Bitcoin uses the SHA-256 hashing algorithm, whilst Ethereum uses Keccak-256, the algorithm on which SHA-3 is based.
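The fingerprint property is easy to demonstrate with Python's standard hashlib module; the message below is an invented example. Whatever the input length, SHA-256 produces a fixed 256-bit digest, and changing even one character of the input produces a completely different digest, which is exactly how alteration in transit is detected.

import hashlib

# A hash function maps input of any length to a fixed-length digest.
message = b"Please transfer 100 to account A"
digest = hashlib.sha256(message).hexdigest()
print(digest)   # 64 hex characters = 256 bits, regardless of input size

# Alter one character and the digest changes entirely, so comparing the
# values computed at the sending and receiving ends exposes tampering.
tampered = b"Please transfer 900 to account A"
assert hashlib.sha256(tampered).hexdigest() != digest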

21 Keccak was designed by Guido Bertoni, Joan Daemen, Michaël Peeters and Gilles Van Assche.
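The fixed-length, tamper-evident behaviour described above is easy to demonstrate with Python's standard hashlib module. A minimal sketch (SHA-256 is shown because it is the algorithm Bitcoin uses; the messages are hypothetical):

import hashlib

message = b"Pay 100 GBP to account 12345"
tampered = b"Pay 900 GBP to account 12345"

# The digest length is fixed (32 bytes for SHA-256) regardless of input size.
print(hashlib.sha256(message).hexdigest())
print(hashlib.sha256(b"a much, much longer message " * 1000).hexdigest())

# A one-character change produces a completely different hash value,
# which is how the receiving end detects alteration in transit.
print(hashlib.sha256(tampered).hexdigest())
print(hashlib.sha256(message).hexdigest() == hashlib.sha256(tampered).hexdigest())  # False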

5.6 Data Availability and Access: Digital Signatures

Availability is a wide ranging area: it includes the functioning of the information systems used, the security controls used to protect the data and the communication channels used to access it, which I covered earlier. Here, when I say availability, I am actually referring to access controls, and in particular authentication.

To have a high level of confidence and trust in the integrity of information received over a network, the transacting parties need to be able to authenticate each other's identity. While confidentiality is ensured with the use of public key cryptography, there is no authentication of the parties' identities. To ensure secure business transactions on unsecured networks like the Internet, both parties need to be able to authenticate each other's identities. Authentication in a digital setting is a process whereby the receiver of a message can be confident of the identity of the sender.

The take up of digital products and services can be significantly undermined where legal mechanisms for digital authentication are not established. The lack of secure authentication has been a major obstacle to achieving widespread use of e-commerce in many countries, for example. That has impeded their ability to extract considerable value from a range of e-transactions, including cutting bureaucratic inefficiencies and delivering fiscal savings. In Haiti and the Philippines, for instance, the cost per transaction of some social assistance programs fell by around 50% once payments had been digitalised.22

Two general challenges apply to electronic transactions. First, in countries where rules for electronic transactions (e-transactions) and laws for electronic signatures (e-signatures) are not in place, commerce is slowed down significantly, or significantly undermined in the case of international transactions. Second, parties need to find ways to ensure that the people signing documents are who they say they are, without necessarily seeing them in person, and that the transaction document in question has not been tampered with, copied or otherwise changed; things more or less impossible in international transactions without expensive middlemen (i.e. banks).

At present, no universal system of standards, technologies or regulations exists for e-transactions. Among international institutions, the United Nations Commission on International Trade Law (UNCITRAL) has taken steps to increase the uniformity of countries' legal rules governing e-transactions, e-signatures and digital authentication. These include: the UNCITRAL Model Law on Electronic Commerce (MLEC) (1996); the UNCITRAL Model Law on Electronic Signatures (MLES) (2001); the United Nations Convention on the Use of Electronic Communications in International Contracts (ECC) (2005); and the UNCITRAL Model Law on Electronic Transferable Records (MLETR) (2017).

22 Zimmerman, J., Bohling, K., Parker, S. (2014). Electronic G2P payments: evidence from four lower-income countries (English). CGAP focus note; no. 93. Washington, DC; World Bank Group.


To date, over 70 states have enacted the MLEC, while more than 30 have turned to the MLES as the basis for their national legislation, and 18 states have signed (and nine have ratified) the ECC.23 UNCITRAL model laws, though not legally binding, are designed to guide countries in drafting legislation, while the ECC, as a treaty, is 'hard law' that allows for less variation on formal adoption. Not surprisingly, enactment of national and sub-national laws along the lines suggested by UNCITRAL is uneven, even among countries that have committed to the model law; several states, for instance, have enacted the MLES without referring to its Article 12 on cross-border recognition of e-signatures. A number of free trade agreements (FTAs) include provisions on e-transactions, e-signatures and digital authentication, which may well spur growth in countries enacting relevant laws in this area.

One process used to authenticate the identity of an individual or entity involves some form of digital signature. It is worth clarifying the differences between e-signatures, digital signatures and digital authentication:

• An electronic signature (e-signature) is any electronic symbol, process or sound that is associated with a record or contract where there is an intention to sign the document by the party involved. It signals intent, including acceptance, as to the content of an electronic record. Practically speaking, the technologies used for e-signatures include email addresses, enterprise IDs, personal ID numbers (PINs), biometric identification, social IDs, scanned copies of handwritten signatures and clickable 'I accept' boxes on websites. Electronic signatures are however vulnerable to tampering.
• A digital signature, digital certificate or advanced e-signature is characterised by a unique feature in digital form, like a fingerprint, that is embedded in a document. The signer is required to have a digital certificate so that he or she can be linked to the document. A digital certificate is used to validate the document and ascertain its authenticity and integrity. It uses cryptography to scramble signed information into an unreadable format and decodes it again for the recipient. When a digital signature is applied to a document, the digital certificate is bound to the data being signed into one unique fingerprint. Certification Authorities (CAs) often provide certification services for verifying the signer's identity. The EU distinguishes between digital signatures and what it calls 'qualified e-signatures' (or qualified digital signatures). While both rely on encryption and CAs to identify the signer, qualified e-signatures also require the signer to use a qualified signature creation device (QSCD), such as a smart card, token or cloud-based trust service. The QSCD verifies the digital identity and can only be given to users once they have passed a Know-Your-Customer (KYC) process.

23 States party to the ECC are Cameroon, Congo, the Dominican Republic, Fiji, Honduras, Montenegro, the Russian Federation, Singapore and Sri Lanka. For the updated treaty status, see "United Nations Convention on the Use of Electronic Communications in International Contracts".


• Digital authentication refers to the techniques used to identify individuals, confirm a person's authority or prerogative, or offer assurance on the integrity of information. 'Authentication' can mean different things in different national legal contexts, with the added challenge of doing it remotely over networks. Digital authentication can rely on a varied set of factors, such as those concerning knowledge (e.g. passwords, answers to a pre-selected security question), ownership (e.g. possession of a one-time password) or inherence (e.g. biometric information). Depending on the level of security desired, a digital authentication system could be single, double or multi-factor.

To sign a message, senders usually append their digital signature to the end of a message and encrypt it using the recipient's public key. Recipients decrypt the message using their own private key and verify the sender's identity and the message integrity by decrypting the sender's digital signature using the sender's public key. For example, suppose 'A' has a pair of keys: a private key and a public key. 'A' sends a message to 'B' that includes both a plaintext message and a version of the plaintext message that has been encrypted using 'A's private key. The encrypted version of 'A's text message is 'A's digital signature. 'B' receives the message from 'A' and decrypts it using 'A's public key. 'B' then compares the decrypted message to the plaintext message. If they are identical, then 'B' has verified that the message has not been altered and that it came from 'A'. 'B' can authenticate that the message came from 'A' because 'B' decrypted it with 'A's public key, so it could only have been encrypted with 'A's private key, to which only 'A' has access.

The strengths of digital signatures are that they are almost impossible to counterfeit and they are easily verified. However, if 'A' and 'B' are strangers who have never communicated with each other before, and 'B' received 'A's public key but had no means to verify who 'A' was other than 'A's assertion that it was who it claimed to be, then the digital signature is useless for authentication. It will still verify that a message has arrived unaltered from the sender, but it cannot be used to authenticate the identity of the sender. In cases where the parties have no prior knowledge of one another, a trusted third party is required to authenticate the identity of the transacting parties.

A digital certificate issued by a Certification Authority (CA) utilising a hierarchical PKI can be used to authenticate a sender's identity for spontaneous, first-time contacts. A digital certificate issued by a trusted/known third party (the CA) binds an individual or entity to its public key. The digital certificate is digitally signed by the CA with the CA's private key. This provides independent confirmation that an individual or entity is in fact who it claims to be. The CA issues digital certificates that vouch for the identities of those to whom the certificates were issued. If 'A's public key is presented as part of a digital certificate signed by a known CA, 'B' can have a high level of confidence that 'A' is who and what it claims to be. The CA and the CA's public key must be widely known for the digital certificate to be of practical value. The CA's public key must be widely known so that there is no need to authenticate the CA's digital signature: you are relying on the CA's digital signature to authenticate the certificate owner's identity and to bind that identity to their public key.

In its simplest form, a digital certificate would include the name and address of the individual or entity, the certificate expiration date, a serial number for the certificate and the individual's or entity's public key. A digital certificate could contain other information as well. Depending on the type of certificate, it could include information on access privileges, the geographic location of the owner, or the age of the owner. When fully implemented, digital certificates will most likely take several forms.

A digital signature used in concert with a digital certificate potentially possesses greater legal authority than a handwritten signature. The inability to forge a digital signature, the fact that the digital signature can verify that the document has not been altered since it was signed, and the certificate verifying the identity of the signer make a digitally signed document irrefutable. The whole digital certification process requires a certification authority that is well trusted, well known and capable of issuing digital certificates, with an initial user verification process (KYC) that is robust and non-repudiable. Typically a certification authority will be a government agency, or a large telecommunications organisation that has the infrastructure, skills and resources required to perform the initial KYC procedures and the subsequent management of digital certificates. In a digital world, certification authorities are likely to become the credit agencies of the future, rating everyone as a good or bad 'risk'.
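The exchange between 'A' and 'B' can be reproduced in a few lines of Python. The sketch below assumes the third-party pyca/cryptography library is installed; note that, unlike the simplified description above, modern practice signs a hash of the message using a padding scheme such as RSA-PSS rather than encrypting the whole message with the private key.

# Sketch of digital signing and verification using the third-party
# pyca/cryptography library (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# 'A' generates a key pair; the public half can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I, A, agree to pay B 100 GBP."

# 'A' signs: in practice a hash of the message is signed with PSS padding.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# 'B' verifies with A's public key; any alteration raises InvalidSignature.
try:
    public_key.verify(
        signature,
        message,  # try b"I, A, agree to pay B 900 GBP." to see verification fail
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: message is authentic and unaltered")
except InvalidSignature:
    print("Signature invalid: message altered or not signed by A")

As the chapter notes, this only authenticates 'A' if 'B' already trusts that the public key really belongs to 'A'; binding keys to identities is precisely the role of the certificate authority.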

5.7 National Digital Identities

There are few things more central to a functioning society and economy than identity. Without a way to identify each other and our possessions, we would be back to the days when village elders were called upon to help identify individuals. Printed documents such as passports, national ID cards and driving licences offer proof of a person's identity. Similarly, online electronic information can be linked to an individual or another entity to offer proof of identity. A digital identity is today used by computer systems to identify an individual, a corporation or a machine. However, according to the World Bank ID4D database, over 1 billion people in the developing world lack any form of officially recognised identification, whether paper or electronic.24 The remaining 6 billion plus people have some form of identification, but over half cannot use it effectively in today's digital ecosystems. This identity gap is a serious obstacle to participation in political, economic and social life. Without a secure way to assert and verify their identity, a person may be unable to open a bank account, vote in an election, access education or healthcare, receive a pension payment, or file official petitions in court. Furthermore, poor identification systems mean that states will have difficulty collecting taxes, targeting social programs and ensuring security. Achieving inclusive development therefore requires a sustained effort to address the world's identity gap, as reflected in the new Sustainable Development Goals (SDGs).25

Digital ID has the potential to unlock new opportunities for the 3 billion plus individuals who have some form of high assurance ID but limited ability to use it in the digital world. Moving from purely physical IDs to digital ID programs, and creating digital infrastructure and applications that use digital ID for authentication, could enable these users to take advantage of the efficiency and inclusion benefits that digital interactions offer.26 In many countries, access to banking services may not be available, and large swathes of the population could miss out on the digital transformation dividend.

Unlike a paper based ID such as most driving licences and passports, a digital ID can be verified remotely over digital channels, often at a lower cost, if it has the following attributes:

• Is verified to a high degree of assurance. High assurance digital IDs meet both government and private sector institutions' standards for initial registration and subsequent acceptance for a multitude of important civic and economic uses. This attribute does not rely on any particular underlying technology: a range of credentials can be used to achieve unique high assurance authentication and verification, including biometrics, passwords, QR codes and smart devices with identity information embedded in them;
• Is unique. With a unique digital ID, an individual has only one identity within a scheme and every scheme identity corresponds to only one individual. This is not characteristic of most social media identities today, for example;
• Is established with individual consent. Consent means that individuals knowingly register for their digital ID, with control over what personal data will be captured and how it will be used. This element is becoming ever more important as privacy concerns grow and cyber-crime rises.

The technology needed for digital ID is ready and more affordable than ever, making it possible for emerging economies to leapfrog paper based approaches to identification.27 Biometric technology for registration and authentication is becoming more accurate and less expensive.28 For example, iris-based authentication technologies can give false acceptance rates as low as 0.2% and false rejection rates of less than 0.01%.29

24 World Bank Global ID4D Dataset, 2018.
25 GSMA. (2016). Digital identity: towards shared principles for public and private sector cooperation. A joint World Bank Group, GSMA, Secure Identity Alliance Discussion Paper, July 2016.
26 McKinsey. (2019). Digital identification: a key to inclusive growth.
27 Bujoreanu, L., Mittal, A., Noor, W. (2018). Demystifying technologies for digital identification. World Bank.
28 World Bank. (2017). Technology landscape for digital identification, identification for development.
29 World Bank. (2017). Technology landscape for digital identification, identification for development.


The average selling price of a fingerprint sensor found in a mobile phone fell by 30% in 2017 alone.30 Bar codes on cards, which once stored only numerical data, can now store signature, fingerprint or facial data.31 A 2019 McKinsey report estimates that digital IDs can cut customer registration and on-boarding costs by up to 90%, saving up to US$1.6 trillion globally.32 It further suggests that extending full digital ID coverage and usage could unlock economic value equivalent to between 3 and 13% of GDP in 2030 in a number of countries.33

Perhaps the most important regulation dealing with identity in the EU is EU Regulation 910/2014 on electronic identification and the associated set of standards for electronic identification and trust services for electronic transactions in the European Single Market, better known as the electronic IDentification, Authentication and trust Services regulation (eIDAS). The eIDAS regulation was born out of the Electronic Signatures Directive of 1999, which it superseded. Unfortunately, the Directive fell short of its ambitions, partly because, as a directive and not a regulation, it left discretion over implementation into local law in the hands of Member States, leading to a fractured, non-interoperable set of standards. As a binding regulation, eIDAS is mandatory for Member States and so will be applied uniformly. The eIDAS package includes:

• eID: a way for businesses and consumers to prove their identity electronically;
• eTimestamp: electronic proof that a set of data existed at a specific time;
• eSignature: expression in an electronic format of a person's agreement to the content of a document. eIDAS recognises three levels of e-signatures: simple, advanced and qualified;
• eSeal: guarantees both the origin and the integrity of a document;
• Qualified Web Authentication Certificate: ensures websites are trustworthy and reliable;
• Electronic Registered Delivery Service: protects against the risk of loss, theft, damage or alteration when sending documentation;
• Legal recognition of electronic documents: assurance that an electronic document cannot be rejected by a court for the reason that it is electronic.

Numerous new national digital ID programs (including card and/or mobile-based schemes) have been launched or initiated in other countries. Examples include new schemes in India, Algeria, Cameroon, Jordan, Italy, Senegal and Thailand, major announcements in the Netherlands, Bulgaria, Norway, Liberia, Poland, Jamaica and Sri Lanka, and a pilot scheme in Myanmar.

30 Burt, C. (2017). Fingerprint cards reports cost cutting and changing focus after tough 2017. BiometricUpdate.com, 09 February 2018.
31 Burt, C. (2017). Fingerprint cards reports cost cutting and changing focus after tough 2017. BiometricUpdate.com, 09 February 2018.
32 In India, the use of Aadhaar for KYC verification reportedly reduced costs for financial institutions from approximately $5 to approximately $0.70 per customer. Banks reduced their KYC costs by eliminating manual processing of paper documentation and the need for in-person verification of the account holder's identity, reducing the costs of bank accounts and expanding opportunities for financial access.
33 McKinsey. (2019). Digital identification: a key to inclusive growth. Brazil has the potential for 13% improvements to GDP from digital ID, whilst Nigeria, Ethiopia, India, China, the USA and the UK have the potential for 7%, 6%, 6%, 4%, 4% and 3% respectively.


Most of these programs now include biometrics, the majority in the form of fingerprints. Schemes such as the Gov.UK Verify initiative were also introduced in 2016,34 and Australia announced the first phase of its digital identity program, which was launched in 2017. Whilst an increasing number of countries are experimenting with digital IDs, a 2016 study of 48 national identity programs found that very few could be used in a wide variety of sectors.35 Remember that it is not digital identity itself, but the myriad of applications it enables citizens to access, that is important.

Digital ID systems take one of three forms: centralised, federated or decentralised. All three have both advantages and disadvantages for advanced digital IDs. Hybrid models are also possible, for example a centralised basic digital ID with federated add-on services:

• In a centralised system, a single provider, typically a government agency, is integrated into all use cases and must generate adoption and use and bear all costs. Examples include the national advanced digital ID programs in Estonia36 and India's Aadhaar card. Benefits include streamlined service delivery and high data aggregation capabilities, with tools like distributed storage helping avoid data consolidation. Such a setup does however concentrate risk and liability, placing a significant burden of trust on the government agency;
• In a federated system, ownership of the system is shared among multiple standalone systems that share common standards. Examples include SecureKey Concierge in Canada, which is led by financial institutions, and GOV.UK Verify, a basic digital ID launched by the UK public sector (although adoption has been poor). A federated structure distributes cost and dilutes the potential for abuse, but also requires coordinated decision making, introducing complexity that may dis-incentivise institutions from participating as ID providers;
• Decentralised models operate with no institutional owners and hinge on distributed ledgers, for example through Blockchain and other technologies, to establish and manage identities. Such models remain in the early stages of development. Although it is not an ID system, Solid, launched by Tim Berners-Lee in September 2018, provides an example.37

34 In the United Kingdom, HM Revenue and Customs' Connect computer draws on information from a wide range of government and corporate sources, as well as individual digital footprints, to create a profile of each taxpayer's total income. Such analytical capability could even be used to assess the behavioural impact of new tax and spending policies.
35 Review of national identity programs, International Telecommunication Union, May 2016.
36 Estonia uses platform technology to connect Uber drivers directly with the tax office, adding income from rides directly to their tax return. Estonia stands out in its use of digital platforms for delivering government services.
37 Solid is a new open-source framework seeking to put data back in the hands of individuals rather than data monoliths.


In many cases, the lines between basic and advanced digital IDs may blur, because broader digital ecosystems can be built on top of a basic digital ID that provides the underlying ability to authenticate over digital channels. Increasingly, in more developed countries, we are likely to see digital IDs progress from what are today fairly basic digital IDs to more advanced digital IDs. These are likely to be more sophisticated and to include within the digital ID itself a range of individual preferences for the types of data which can be shared and under what circumstances, which the individual can specify and change over time.

It may now be possible (given increasing smart phone penetration and advances in cryptography) to build new identity frameworks based on the concept of decentralised identities, potentially including an interesting subset of decentralised identity known as Self Sovereign Identity (SSI). These frameworks put the user at the centre: the user 'creates' their own identity, generally by creating his or her own unique identifier (or a number of them) and then attaching identity information to that identifier. By associating verifiable credentials from recognised authorities (or even data from social media accounts or e-commerce transactions), users can in effect create the digital equivalents of physical world credentials like passports, national IDs and driving licences. By setting up a system in which the user controls not just the identity but also the data associated with it, it may be possible to create self sovereign identities.

While Blockchain is not required for decentralised identity, it can be a powerful solution for different aspects of the decentralised identity framework. It provides infrastructure for managing data in a decentralised but trustworthy way. This can help mitigate the use of trusted third parties or provide censorship resistance in certain circumstances. Some of the benefits of using Blockchain based solutions for digital ID include:

• Blockchain addresses can be used as digital IDs: these are unique, generated by the users themselves and already leverage public/private key cryptography;
• Blockchains could be used as digital ID registries: providing information on who is related to specific IDs and how to access information about them;
• Notarisation: a Blockchain could 'notarise' credentials through the use of hash functions. This doesn't mean storing the credentials on the Blockchain, which may run afoul of regulations like the GDPR. Instead, it acts as a timestamp and electronic seal. This both provides proof of when the credential was created and 'seals' that credential by making any tampering of the credential evident to outside observers. For example, a university might send the hash of a diploma to the Blockchain at the time of graduation. This provides the student with both a timestamp of when the diploma was issued and a way to prove at any time in the future that the diploma being presented is the one that was registered at that time (a minimal sketch of this pattern appears at the end of this section);


• Facilitate smart contract execution:38 for example, a user can agree to share certain information with a social media platform, but only for a limited amount of time. This consent can be recorded as a transaction on the Blockchain along with its expiry date. The social media company would then have to delete the information at the expiry date and put proof of that deletion on the Blockchain.

A collection of enterprise providers, start-ups and nonprofits, including Microsoft, Evernym, Tierion, Uport and the Sovrin Foundation, are working to create an identity infrastructure in which users are no longer tied to dominant platforms or centralised intermediaries, such as credit bureaus. The Decentralised Identity Foundation, the ID2020 public-private partnership and the World Wide Web Consortium are shepherding the creation of open standards to make this vision a reality. The State of Illinois has launched a pilot project to issue digital birth certificates using a Blockchain based approach.39 In another example, Civic offers Blockchain based identity verification services. Users who provide personal information, validators who verify it and large service providers who connect their user profiles to the system all receive compensation in the form of Civic's CVC tokens. These tokens can be used to pay for validating information.
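As promised above, here is a minimal sketch of the notarisation pattern in Python: the diploma itself stays off-chain, and only its hash and a timestamp are anchored. The 'ledger' below is a plain Python list standing in for a Blockchain, and all names and data are hypothetical.

# Sketch of credential notarisation: only the hash (plus a timestamp) is
# anchored 'on-chain'; the credential itself stays off-chain. The ledger
# list is a stand-in for a real Blockchain; all names are hypothetical.
import hashlib
import time

ledger = []  # stand-in for an append-only Blockchain

def notarise(credential: bytes) -> str:
    """Anchor the credential's fingerprint (not the credential) on the ledger."""
    digest = hashlib.sha256(credential).hexdigest()
    ledger.append({"digest": digest, "timestamp": time.time()})
    return digest

def verify(credential: bytes) -> bool:
    """Later, anyone can check a presented credential against the anchor."""
    digest = hashlib.sha256(credential).hexdigest()
    return any(entry["digest"] == digest for entry in ledger)

diploma = b"University of X certifies that Jane Doe graduated with First Class Honours"
notarise(diploma)

print(verify(diploma))                    # True: matches the anchored fingerprint
print(verify(diploma + b" in Physics"))   # False: any tampering is evident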

Having explained some of the benefits and uses of Blockchains in the digital ID context, it would be appropriate to explain what Blockchain is and how it works more broadly.

5.8 Blockchain

Blockchain has received much attention in the past 2–3 years as a significant enabling technology. It was originally used in the context of Bitcoin but soon came to prominence for its potential for broader use. Many have claimed it is an over-hyped technology, very much a technology looking for a problem rather than a solution to real world problems. I suspect it may have been over-hyped by Blockchain evangelists who saw in it the ability to do away with centralised authorities in favour of a more egalitarian, decentralised model. However, there are many cases where it might be useful, and some use cases are beginning to emerge. Whether it has the profound impact that many had hoped for is, however, far from clear. It is still a very nascent technology and, like the TCP/IP protocol before it, is likely to take time to be adopted more broadly and for relevant use cases and infrastructure to be widely deployed.

38 Smart contracts are programmed to generate instructions for downstream processes (such as payment instructions or moving collateral) if reference conditions are met. Like passive data, they become immutable once accepted onto the ledger.
39 Douglas, T. (2017). Illinois announces key partnership in Birth Registry Blockchain Pilot. GovTech, 08 September 2017.


Blockchain, at its core, is a form of distributed ledger which its advocates claim has the promise to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Today, they claim, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities; this centralised trust model adds delays and friction costs (commissions and fees) to transactions and places control in the hands of centralised authorities. Blockchain provides an alternative trust model and removes the need for central authorities in arbitrating transactions. Blockchains, it is claimed, give power back to the people and may remove the need for centralised regulation. As digital economy expert Arun Sundararajan of New York University observes, if you look back at history, every time there was a big expansion in the world's economic activity, it was generally induced by the creation of a new form of trust.40 Blockchain's primary value is its ability to deploy cryptography to reach consensus across parties in the ledger: no single entity can amend past data entries or approve new additions to the ledger.

Whilst I don't want to go into the intricacies of how Blockchains actually operate, I think some background is useful to understand broadly what makes the technology different and how it might operate in the real world. Figure 5.4 details the various steps involved in completing a transaction on a Blockchain.

A new block is verified on the Bitcoin Blockchain roughly every 10 min. Each block has a fixed size, so transactions often must wait until the next block. The requisite confidence level (and thus delay) to accept a block as valid depends on the risk profile of the activity. Someone with little at stake (a buyer accepting Bitcoin for the purchase of a coffee) will prioritise speed over small, incremental robustness, while one engaged in a large, important transaction (paying for a car) will be willing to wait for more confirmations before accepting payment. Anyone can decide the level of assurance they want by tolerating additional delay. Users can however pay higher transaction fees to incentivise miners to include their transaction in a block they construct. By 2017, Bitcoin transaction fees were routinely several dollars per transaction and sometimes much more, making small value micropayments impractical.41

Aside from payments, QR codes are used to represent physical items on the Blockchain in the real world. These QR codes are printed onto a physical item in a manner that ensures they cannot be tampered with. As that physical item moves through the supply chain, the QR code gets scanned by systems and its status gets updated on the Blockchain.

Trust in transactions is not brokered by intermediaries but embodied algorithmically in the transaction process itself. The algorithmic consensus process is the trust agent.

40 Will crowd based capitalism replace managerial capitalism? http://reinvent.net/innovator/arunsundararajan
41 Marshall, A. (2017). Bitcoin scaling problem explained. CoinTelegraph, 02 March 2017.

Fig. 5.4 The workings of Blockchain. The figure's flow, from start to end, can be reconstructed as the following sequence of steps:

1. Some form of software installed on a computer or mobile, commonly called a wallet, allows transactions to be submitted to the Blockchain.
2. A transaction initiated between two holders of such software wallets is submitted (broadcast) to the Blockchain for processing.
3. The transaction is placed in a pool of unconfirmed transactions, usually several subdivided local pools.
4. Nodes (miners) select transactions from these pools and form them into a 'block'. Any combination of transactions can be chosen, and multiple miners can select the same transactions to be included in their block. If a Bitcoin owner wants to speed up their transaction, they can choose to offer a higher mining reward.
5. The node confirms that all the transactions in its block are eligible to be executed according to the Blockchain's history.
6. The node uses its computing power to find the correct 'nonce' and signature. Each block poses a different mathematical problem, so every miner works on a problem unique to the block it has formed.
7. The node that first finds an eligible signature for its block broadcasts the block and its signature to all the other nodes on the Blockchain.
8. Other nodes verify the signature's legitimacy and, where valid, the block is added to the Blockchain and distributed to all other nodes on the network. Other miners accept the block and save it to their transaction data as long as all transactions inside the block can be executed according to the Blockchain's history.
9. After a new block is added to the Blockchain, the nodes start over again by forming a new block of transactions from the unconfirmed transaction pools.

A node is a computer that runs Blockchain software and helps transmit information across the Blockchain network. Mining nodes have the ability to add transactions to a Blockchain; full nodes hold and distribute copies of the entire Blockchain ledger, helping validate the history of the Blockchain. There may be variants of these, including super nodes and light nodes.
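The 'find the correct nonce' step above is, in essence, a brute-force search for a hash that meets a difficulty target. Below is a minimal sketch in Python; the leading-zeros rule and the sample block data are illustrative simplifications of Bitcoin's real difficulty arithmetic, which uses a numeric target rather than a zero-prefix count.

# Minimal proof-of-work sketch: search for a nonce such that the block's
# SHA-256 hash starts with a given number of zero hex digits. Real Bitcoin
# difficulty is vastly higher and expressed as a numeric target.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # an 'eligible signature' for the block
        nonce += 1

nonce, digest = mine("prev_hash|tx1|tx2|tx3")  # hypothetical block contents
print(f"nonce={nonce} hash={digest}")
# Verification is cheap: a single hash reproduces and checks the result,
# which is why other nodes can validate a block far faster than it was mined.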


Fig. 5.5 Blockchain—decentralised trust. The figure shows three key characteristics: transactions/tokens (a transaction or token is issued in a decentralised way through the protocol); a peer to peer cryptographic protocol (which allows parties to transact and exchange online in a secure, pseudonymous and global way); and a register of transactions (held in multiple nodes).

The Blockchain promises to create an Internet of Value by taking aspects of the digital economy that are traditionally centralised and replacing them with distributed trust. Information held on a Blockchain exists as a shared and continually reconciled database. The Blockchain database isn't stored in any single location, meaning the records it keeps are truly public (at least in some cases) and easily verifiable. No centralised version of this information exists for a hacker to corrupt. Hosted by potentially thousands of computers simultaneously, its data is accessible to anyone on the Internet. Bitcoin, for instance, is managed by its network and not by any one central authority. Decentralisation means the network operates on a user-to-user (or peer-to-peer) basis. Figure 5.5 illustrates the three key characteristics of Blockchains.

Aside from the obvious applications in finance, Blockchain has the potential to be used in a number of other settings. For instance, IOT platforms tend to have a centralised model in which a broker or hub controls interactions between devices, an arrangement that can be expensive and impractical. Blockchain can alter that by decentralising secured and trusted data exchanges and record keeping on IOT platforms, serving as a general ledger that maintains a trusted record of all messages exchanged between IOT devices. However, this remains theoretical, as current Blockchain systems simply cannot handle the magnitude and scale of transactions that would be generated.


Worldwide spending on Blockchain solutions was forecast to be nearly US$2.9 billion in 2019, before surging to US$12.4 billion in 2022.42 Gartner predicts value added from Blockchains will grow to more than US$176 billion by 2025 and exceed US$3.1 trillion by 2030.43 According to a 2018 Constellation Research survey, 67% of USA companies are evaluating or implementing Blockchain technology, with a quarter already having projects underway or completed.44 According to the survey, 57% of respondents investing in Blockchain technology agreed, or strongly agreed, that their organisation should adopt Blockchain technology to remain competitive. And of those who declared their Blockchain investments, 68% are spending more than US$1 million, with 27% spending more than US$10 million on Blockchain activity. A recent World Economic Forum report stated that over 40 central banks are researching distributed ledger technology for a variety of use cases.45 Elsewhere in the public sector, there are over 200 Blockchain initiatives spanning 45 countries.46

Currently, finance appears to offer the strongest use cases for the technology, in particular international transactions, where Blockchain potentially cuts out the middleman. Whilst Blockchain can be effective in solving a number of pressing and difficult problems, it should not be over-hyped. Many of the use cases being proposed for Blockchain appear to be hype; there are often alternative (mostly existing) solutions that could be deployed which would be cheaper, faster and more effective. The utopia held by some, of current versions of Blockchain leading to a decentralised world, is also a little way off. As will be described later, the nature of decentralised Blockchains has some very fundamental shortcomings. In a permission-less Blockchain,47 as the network grows, the requirements for storage, bandwidth and computing power placed on participating nodes also increase. At some point it becomes feasible only for the few nodes that can afford the resources to process blocks, leading to the risk of centralisation.

Before I get into Blockchain in further detail, it may be worth highlighting fundamentally why it was initially created. In the context of Bitcoin, a mechanism was needed to create trust without reliance on a central authority: trust in terms of the motives of the various actors. It needed to ensure that all the users of the system could trust the state of the system, in particular the state of financial transactions, and provide an audit trail of the transaction history which is irrefutable.

42 IDC. (2019). Worldwide blockchain spending guide. See https://www.idc.com/getdoc.jsp?containerId=IDC_P37345
43 Gartner. (2017). Forecast: blockchain business value, worldwide 2017–2030.
44 Sato, C., Wang, R. (2018). Constellation research 2018 digital transformation study.
45 World Economic Forum. (2019). Central banks and distributed ledger technology: how are central banks exploring blockchain today?
46 OECD. (2018). Blockchain and its use in the public sector. Observatory of Public Sector Innovation.
47 In permission-less blockchains, participants can join the network by downloading a software application like a wallet, and do not need permission or approval from another entity.


It needed to enable the transfer of assets (Bitcoins) between different participants. In essence, Blockchain in the context of Bitcoin needed to provide: trust, traceability and tradability.

It is also worth stating that Blockchains are ledgers, not databases, and are not meant to act as databases. Because a chain encapsulates all the historic transactions, it can quickly become unwieldy and slow to operate. Most Blockchains are combined with off-chain databases, for they must be lightweight, with limited on-chain storage, so that 'anyone' can download a full history of the Blockchain. If Blockchains become too large, fewer people will be able to participate in the network, thus reducing the decentralisation of the network and its overall security. A public Blockchain such as Bitcoin broadcasts all transactions across the network and is totally transparent. Every full node on the network maintains a copy of the entire transaction history all the way back to the so-called 'genesis block', now over 100 gigabytes in size. We need to differentiate between two separate concepts here: decentralised and distributed data storage, like IPFS48 and Swarm.49 Each Blockchain implementation uses different data storage for 'off-chain' data, and the balance between 'on-chain' and 'off-chain' data depends on the use case. Just like the design of the Internet and the Internet protocol suites, it is expected that Blockchains remain as light as possible to ensure speed and reliability, whilst it is the 'off-chain' storage that holds the majority of the data.

Back to the original utopian view: unfortunately, the original Blockchain as used within Bitcoin hasn't really delivered fully on its promise:

• Trust: the aggregation of miners into a few large operators out of China makes it possible for them to collude and manipulate transactions;
• Traceability: there have been forks,50 as they are called in the space, wherein some participants disagree on the validity of some historic transactions and agree to replicate the system status without the 'anomaly'; moreover, users can remain anonymous, thereby making traceability unachievable;

48 The Inter Planetary File System (IPFS) offers blockchain based distributed cloud storage technology. Instead of storing files in a particular location, accessible through a URL address, IPFS stores multiple copies of files, in pieces, across many hard drives throughout the network. It is designed to use the Filecoin token to incentivise users to contribute storage space. The token provides the intermediation by establishing incentives on both sides.
49 Swarm is a censorship resistant, permissionless, decentralised storage and communication infrastructure.
50 There are two different versions of forks in cryptocurrencies: a hard fork and a soft fork. When a hard fork takes place, there can be a split in a blockchain if there isn't consensus. The hard fork will cause all older versions of the Blockchain to be invalid. However, people can still choose to run the older software if they do not agree with the hard fork decision, i.e. where the hard fork didn't achieve consensus amongst the community and miners. This can lead to the creation of a new cryptocurrency, as happened when Ethereum hard forked and Ethereum Classic was created. Hard forks can also be used to upgrade the software on a blockchain; if the upgrade has the consensus of the community and the miners, there will be no split in the blockchain.


• Tradability: the sheer effort required to solve mathematical problems and reach consensus means that only a limited number of transactions can be completed in a given time period, thereby reducing tradability; e.g. it could not be used to replicate credit card transactions, as the sheer volume would overwhelm the Blockchain system.51

It would however be wrong to assume all Blockchain implementations suffer from these issues. There are in fact two forms of Blockchain: permission-less and permissioned. Bitcoin is a permission-less system (also called a public Blockchain), in that a user does not need permission to join and use the Blockchain. In a permissioned Blockchain (sometimes called a private Blockchain), some form of permission is required for a user to be allowed use of the system.52 The problems highlighted above relate primarily to permission-less Blockchains: any user (good or bad intentioned) can utilise the system, a potentially exponential number of users can be added, and with them comes an explosion in the number of transactions that need to be processed.

In a public Blockchain, you place your trust in the consensus of a large number of entities, which in effect vote on the state transitions of your system (i.e. transactions). You hope the good entities will out-vote the bad entities. The system in this sense is trust-less.53 The Blockchain protocols rely on the assumption that the system consists of many small actors that make decisions independently. However, if anyone can take part, the system is vulnerable to what are called 'Sybil attacks', in which an attacker creates many apparently independent voters who are actually under his sole control. If creating and maintaining a voter is free, anyone can win any vote they choose simply by creating enough Sybil voters. It is for this reason that Blockchain protocols require 'miners' to expend significant effort before they can in effect cast their vote; one form of effort is called 'proof of work' (POW). The high cost of adding a block to a chain is not an unfortunate side-effect; it is essential to maintaining the correctness of the Blockchain. Permission-less systems defend against Sybil attacks by requiring a vote to be accompanied by proof of the expenditure of some resource.

51 The time required to process a block of transactions is slow. For example, Bitcoin block times are 10 min, while Ethereum block times are around 14 s. These times are even longer during peak moments. Compare that to the nearly instantaneous confirmations you get when using services like Square or Visa.
52 A permission-based framework requires rules to approve/reject authorised participants, including perhaps minimum capital requirements, conduct of business rules and risk management processes. Permission-less systems, in contrast, allow anyone to participate as long as they install the software with the requisite protocols.
53 Techniques using multiple voters to maintain the state of a system in the presence of unreliable and malign voters were first published in The Byzantine Generals Problem by Lamport et al. in 1982. Alas, Byzantine Fault Tolerance (BFT) requires a central authority to authorise entities to take part. In the Blockchain jargon, it is permissioned.


To vote in a Proof-of-Work (POW) Blockchain, such as Bitcoin or Ethereum, requires computing very many otherwise useless hashes. The idea is that the good voters will spend more, and compute more useless hashes, than the bad voters.54

Blockchains cannot be described as efficient. Blockchain's utopian vision of a decentralised world is not without substantial costs. Decentralisation has three main costs: waste of resources, scalability problems and network externality inefficiencies. A Blockchain can have at most two of the following three attributes: (i) correctness; (ii) decentralisation; and (iii) cost-efficiency. Permission-less Blockchains choose correctness and decentralisation at the expense of efficiency: inefficiency in terms of the wasted resources spent by miners solving rather worthless cryptographic problems (some 100 terawatt hours in 2018), as well as in terms of the number of transactions that can be processed. Whilst Visa processed some 3526 transactions per second and MasterCard processed 2061 transactions per second, Bitcoin processed only 3.30 transactions per second.55 Permissioned Blockchains, on the other hand, choose correctness and efficiency at the expense of decentralisation: ledgers managed by a single entity or a selected number of entities forgo the desired feature of decentralisation (examples include IBM's Hyperledger Fabric, JP Morgan's Quorum based on Ethereum, and R3 Corda).

In a traditional centralised ledger, the participant is incentivised to report honestly because he does not wish to jeopardise his future profits. Where the central ledger is removed, how do you incentivise distributed participants to validate the blocks? The distributed participants must be rewarded in some form if they are to expend resources verifying any new transaction. A user of a Blockchain wanting to perform a transaction, store a file, record a review, whatever it may be, needs to persuade a node to include their transaction in a block. Nodes are usually coin-operated, meaning you need to pay them to do so. In Bitcoin, the successful mining node that solves this useless hash and adds the transaction block to the Bitcoin Blockchain is rewarded with 12.5 Bitcoins (reducing over time) roughly every 10 min, worth around US$100,000 at the time of writing, along with the additional transaction fees. Since mid-2016 the reward has been 12.5 Bitcoin; it will however be cut in half automatically again around 2020. Nodes cannot be rewarded in 'fiat currency', because that would need some central mechanism for paying them (going against the decentralisation theme). So the reward has to come in the form of coins generated by the system itself (Bitcoin).

54 Note: the mining difficulty is adjusted automatically on the Bitcoin network every 2 weeks based on the block production rate. When more miners join the network to mine Bitcoin, the total hashing power increases, and it can therefore be assumed that the network altogether will find eligible signatures faster, meaning blocks will be added to the blockchain faster. The level of difficulty is therefore adjusted to ensure the total hashing power on the network produces on average 1 block per 10 min, given it has been estimated that a blockchain needs approximately 10 min to propagate the latest block(s) to all nodes globally and for the system to remain synchronised.
55 Securities and Markets Stakeholder Group. (2018). Own initiative report on initial coin offerings and crypto-assets. 19 October 2018, ESMA 22-106-1338.


The Blockchain system needs names for the parties to these transactions. There is no central authority handing out names, so the parties need to name themselves. They typically do so by generating a public-private key pair and using the public key as the name. In practice this is implemented in wallet software, which generates and stores one or more key pairs for use in transactions. The public half of the pair is a pseudonym. The security of the system depends upon the user and the software keeping the private key secret. This can be difficult, as Nicholas Weaver's computer security group at Berkeley discovered when their wallet was compromised and their Bitcoins were stolen.56

Given the heavy utilisation of computational resources involved in mining (buying hardware, power, network bandwidth, staff time), POW is considered to be costly, wasteful and inefficient. Having thousands of miners working on just one solution each time is an excessive use of resources, especially as the only block that has any value thereafter is the one that is solved. With each new block being mined, therefore, there is a significant amount of useless by-product. What's more, mining is a costly endeavour: some estimates put the electricity consumption costs of the entire Bitcoin mining operation in the region of US$500 million per year. In fact, one study has equated the entire power consumption of Bitcoin mining to Ireland's average electricity consumption. And that's just for Bitcoin; a whole range of new cryptocurrencies are emerging that utilise some form of POW algorithm.

Given the volatility of many cryptocurrencies such as Bitcoin, there are few merchants that accept payment in Bitcoin or other cryptocurrencies today. Nodes only see value in being paid in cryptocurrencies if they can sell their cryptocurrency rewards for the 'fiat currency' they need to pay their electricity bills. Cryptocurrency exchanges are being established to enable this conversion. But the question one must ask is who will be the buyers on the other side of an exchange, converting their fiat currency into cryptocurrencies. It can only really be speculators betting that the 'price' of the cryptocurrency will increase. The whole Bitcoin system is premised on the belief that the cryptocurrencies used to reward nodes will increase in value over time.

Setting up a POW mining operation is a risky business. It requires massive resources in computing and energy, with rewards being granted every 10 min across the node community. A small node will barely make enough rewards over a given period to justify the expense. The only way POW nodes can assure a return on their investment is by pooling their resources with other nodes, contributing their mining power and receiving the corresponding fraction of the rewards earned by the pool. These pools have strong economies of scale, so successful cryptocurrencies end up with a majority of their mining power in 3–4 pools. So you end up with a permission-less Blockchain that is actually not decentralised, but centralised around a few large pools of centrally controlled nodes.

56 Weaver, N. (2018). Risks of cryptocurrencies. Communications of the ACM, June 2018, Vol. 61, No. 6, pages 20–24. https://doi.org/10.1145/3208095
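A sketch of this self-naming step, assuming the third-party pyca/cryptography library is available. Deriving the pseudonym by hashing the public key conveys the simplified idea only: real Bitcoin addresses additionally apply RIPEMD-160 hashing and Base58Check encoding on top of it.

# Sketch of wallet self-naming: generate a key pair and derive a pseudonym
# by hashing the public key. Real Bitcoin addresses additionally apply
# RIPEMD-160 and Base58Check; this shows the simplified idea only.
# Uses the third-party pyca/cryptography library (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# The private key must stay secret; leaking or losing it loses the funds.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.X962,
    format=serialization.PublicFormat.CompressedPoint,
)

# The hash of the public key serves as the party's pseudonymous name.
address = hashlib.sha256(public_key_bytes).hexdigest()[:40]
print("pseudonymous address:", address)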


Permission-less POW Blockchains must therefore trust the humans controlling the dominant mining pools not to collude, as well as the core developers of the protocol itself, who provide the ongoing governance function for the Blockchain and carry out updates of the code. You must place your trust in the developers of wallet software and of exchanges not to write bugs. You must trust the users and owners of the smart contracts built on the Blockchain to keep their secret key secret and not lose it. Finally, you must trust the speculators to keep providing the funds needed to keep the 'price' of the coin (Bitcoin) going up; otherwise the nodes will simply lose interest and the whole system will fall over. Therefore, where you started out to build a trust-less, decentralised system, you end up with:

• An untrusted system: a trust-less system that trusts a lot of people you have every reason not to trust;57
• A centralised system: a decentralised system that is centralised around a few large mining pools based out of China, which you have no way of knowing aren't colluding;
• A mutable system: an immutable system that either has bugs you cannot fix, or is not immutable (forks);
• Questionable security: a system whose security depends on it being expensive to run and which is thus dependent upon a continuing inflow of funds from speculators.

These are issues that the Blockchain community has started to recognise, and we are likely to see a shift away from the current POW consensus process towards less onerous, energy hungry mechanisms, such as Proof of Stake (POS) and others. That journey has just started, and it may be a few years before these are developed and trusted. It is for these reasons that the major focus of Blockchains for practical use to date has been permission-based systems, like IBM's Hyperledger and R3's Corda, where accessibility can be controlled by design and participants can opt in to the desired level of disclosure and shared access.58

57 As Nick Szabo tweeted, Bitcoin is the most secure financial network on the planet, but its centralised peripheral companies are among the most insecure. It is estimated that 10% of the ether raised in ICOs has already been stolen, mostly from wallets. A particular point of vulnerability at the edge of Blockchain networks lies in exchanges that trade cryptocurrencies for dollars or other government backed currencies. In 2014, the most prominent Bitcoin exchange, Mt. Gox, collapsed after hackers were able to steal a significant amount of currency, then worth about US$400 million, in a series of thefts. Another major exchange, Bitfinex, was hacked in 2016, losing cryptocurrency valued around US$70 million at the time.
58 A May 2016 Goldman Sachs report suggests that strong account and payment information on a Blockchain could improve data quality and reduce compliance costs, saving US$3 to US$5 billion annually in AML expenses. See: Schneider, J., Blostein, A., Lee, B., Kent, S., Groer, I., Beardsley, E. (2016). Profiles in innovation: Blockchain—putting theory into practice. Goldman Sachs, 24 May 2016.


submitted to the ledger. This is not to say that permission-less systems are dead; there are still many applications of permission-less Blockchains, primarily in the field of various cryptocurrencies (albeit using alternatives to POW, like the POS detailed later, though these face significant regulatory hurdles). When you delve deeper and get past the recent hype around Blockchain, you come to the conclusion that whilst permission-less Blockchains are likely to come under increasing pressure, both regulatory and through their own design, there remains potential for permissioned Blockchains.

There are an increasing number of applications emerging within a range of industries that are using Blockchain. Some of these include:

• In the pharmaceutical sector: Farmatrust is developing Blockchain based digital solutions and services which provide accountability and transparency for the entire supply chain, from the drug manufacturers to the pharmacies distributed across the country.59
• In agriculture: Blockchain has been used to manage and authenticate the harvesting of resources to ensure sustainable practices. The Instituto BVRio has developed an online trading platform it has termed a 'Responsible Timber Exchange' to increase efficiency and transparency and to reduce fraud and corruption in timber trading.60
• In fishing: Provenance, a UK-based start-up, worked with the International Pole and Line Association (IPLA) to pilot a public Blockchain tuna-tracing system from Indonesia to consumers in the UK.61 Ventures such as FishCoin are developing a utility token tradable for mobile phone top-up minutes in an attempt to incentivise fisheries to provide information on their catch. The data captured is then transferred down the chain of custody until it reaches consumers. Such data could also be invaluable for governments seeking to better manage global fish stocks.62
• In food supply: Carrefour supermarkets have recently introduced an application where customers can scan products to receive information on a product's source and production processes.63
• In ecology: Plastic Bank has created a social enterprise that issues a financial reward in the form of a cryptographic token in exchange for depositing collected ocean recyclables such as plastic containers, cans or bottles.64 Tokens can be exchanged for goods including food and water, for instance. RecycleToCoin is another Blockchain application in development that will enable people to return their used plastic containers in exchange for a token from automated machines in Europe and around the world.65 Gainforest is using Blockchain to incentivise farmers in the Amazon to preserve the rainforest in return for internationally crowd funded financial rewards. Remote sensing using satellites verifies the preservation of a patch of forest, which then triggers a smart contract using Blockchain technology to transfer payment.66

59. https://www.farmatrust.com/
60. BV Rio, Responsible Timber Exchange. See: https://bvrio.com/madeira/analise/analise/plataforma.do
61. Provenance, From Shore to Plate: Tracking Tuna on the Blockchain, 15 July 2016. See: https://www.provenance.org/tracking-tunaon-the-blockchain
62. Fishcoin: A Blockchain Based Data Ecosystem for the Global Seafood Industry, February 2018.
63. https://www.essentialretail.com/news/carrefour-and-nestle-blockchain
64. IBM, The Plastic Bank. See: https://ibm.com/case-studies/plastic-bank


another Blockchain application in development that will enable people to return their used plastic containers in exchange for a token from automated machines in Europe and around the world.65 Gainforest is using Blockchain to incentivise farmers in the Amazon to preserve the rainforest in return for internationally crowd-funded financial rewards. Remote sensing using satellites verifies the preservation of a patch of forest, which then triggers a smart contract using Blockchain technology to transfer payment.66
• In carbon trading: Ben & Jerry's is piloting a Blockchain platform to assign a carbon credit to each tub of ice cream sold, allowing consumers to offset their carbon footprint.67 Poseidon is another company that has developed a platform enabling consumers to offset their carbon.68 Payment goes directly to one of Ecosphere+'s forest conservation projects. The retailer's POS system shows the carbon impact of a product (in kg) and adds the price of the required carbon offset to the customer's bill.69
• In retail: shoe designer Greats released its Beastmode 2.0 Royale Chukkah collection in 2016 with a Blockchain-enabled smart tag that lets customers confirm the authenticity of their trainers with their smart phone. Customs officials used the same technology to seize US$50 million of counterfeit shoes entering the USA in 2014.70
• In energy distribution: ConsenSys is partnering on Transactive Grid, working with the distributed energy firm LO3. A prototype project currently up and running uses Ethereum smart contracts to automate the monitoring and redistribution of micro-grid energy. This so-called 'intelligent grid' is also an early example of emerging IOT use cases.71
• In finance: FinTech start-ups TransferWise and Remitly72 are disrupting the global payments (remittances) market, estimated at over US$600 billion in 2016.73 The global remittance industry takes out US$40 billion annually in fees.74 Such fees typically stand at around 2–7% of the total transaction value, depending on the volume of the corridor, and foreign exchange fees represent

65. Clark, A. (2017). Blockchain based recycling initiative to benefit third sector. Charity Digital News, 7 November 2017.
66. See: https://www.technologyrecord.com/Article/gainforest-microsoft-and-the-un-join-onsustainability-project-83119
67. Cuff, M. (2018). Ben and Jerry's scoop blockchain pilot to serve up carbon-offset ice-cream. BusinessGreen, 30 May 2018.
68. Carbon on Blockchain, Poseidon Foundation. See: https://poseidon.eco/carbon.html
69. Ecosphere+: https://ecosphere.plus
70. https://www.fastcompany.com/3060459/how-sneaker-designers-are-busting-knockoffs-withbitcoin-tech
71. https://www.coindesk.com/ethereum-used-first-paid-energy-trade-using-blockchain-technology
72. Vijaya, R. (2016). Mitigating the effects of de-risking in emerging markets to preserve remittance flows. IFC.
73. Ratha, D. (2016). Migration and Remittances Factbook 2016, Third Edition. World Bank Group.
74. Pitchbook. (2016). Can blockchain live up to the hype? Press Release, October 4.


20% of the total cost.75 Bank wire transfers can take fees of 10–15%. These new FinTech start-ups are developing permission-based Blockchain systems to reduce these commissions and provide a faster payment system. A May 2016 Goldman Sachs report suggests that strong account and payment information on a Blockchain could improve data quality and reduce compliance costs, saving US$3 to US$5 billion annually in AML expenses.76
• In compliance: start-up Polycoin has an AML/KYC solution that involves analysing transactions.77 Those transactions identified as suspicious are forwarded to compliance officers. Another start-up, Tradle,78 is developing an application called Trust in Motion (TiM). Characterised as an 'Instagram for KYC', TiM allows customers to take a snapshot of key documents (passport, utility bill, etc.). Once verified by the bank, this data is cryptographically stored on the Blockchain.
• In trading: numerous stock and commodities exchanges are prototyping Blockchain applications for the services they offer, including the ASX (Australian Securities Exchange), the Deutsche Börse (Frankfurt's stock exchange) and the JPX (Japan Exchange Group).79 Nasdaq has launched Linq, a platform for private market trading.80 More recently, Nasdaq announced the development of a trial Blockchain project for proxy voting on the Estonian stock market.81
• In government: Honduras was the first government to announce Blockchain-based land registry projects, in 2015,82 although the current status of that project is unclear. Recently, the Republic of Georgia cemented a deal with the Bitfury Group to develop a Blockchain system for property titles.83 Reportedly, Hernando de Soto, the high-profile economist and property rights advocate, will

75. Natarajan, H., Krause, K., Gradstein, H. (2017). Distributed Ledger Technology (DLT) and Blockchain. FinTech note no. 1. Washington, D.C.: World Bank Group.
76. Schneider, J., Blostein, A., Lee, B., Kent, S., Groer, I., Beardsley, E. (2016). Profiles in innovation: Blockchain—putting theory into practice. Goldman Sachs, 24 May 2016.
77. https://www.financemagnates.com/cryptocurrency/innovation/polycoin-launches-blockchainbased-compliance-aml-and-kyc-tools
78. https://www.newsbtc.com/2015/08/24/tradle-integrating-blockchain-technology-with-kycrequirements
79. Rizzo, P. (2016). 10 stock and commodities exchanges investigating blockchain tech. See: https://www.coindesk.com/10-stock-exchanges-blockchain
80. French, J. (2018). Nasdaq exec: exchange is 'all-in' on using blockchain technology. See: https://www.thestreet.com/investing/nasdaq-all-in-on-blockchain-technology-14551134
81. Rizzo, P. (2016). Nasdaq to launch blockchain voting trial for Estonian stock market. See: https://www.coindesk.com/nasdaq-shareholder-voting-estonia-blockchain
82. Shin, L. (2017). The first government to secure land titles on the bitcoin blockchain expands project. See: https://www.forbes.com/sites/laurashin/2017/02/07/the-first-government-to-secureland-titles-on-the-bitcoin-blockchain-expands-project
83. Shin, L. (2016). Republic of Georgia to pilot land titling on blockchain with economist Hernando de Soto, BitFury. See: https://www.forbes.com/sites/laurashin/2016/04/21/republic-of-georgia-topilot-land-titling-on-blockchain-with-economist-hernando-de-soto-bitfury/#3e96f70844da


be advising on the project. Sweden also announced it was experimenting with a Blockchain application for property titles.84 Governments are also looking at using Blockchain for managing asset registries, with initiatives or trials being conducted by Sweden,85 the UK,86 India,87 Georgia,88 Ghana89 and Russia.90
• In digital identities: governments are starting to use Blockchain-based identity services, the foremost being Estonia, with its e-Identity ID card.91 The country's Blockchain-enabled platform, known as X-Road, is used to provide integrated services to citizens across multiple programmes through their digital IDs. Estonia also uses the platform to connect Uber drivers directly with the tax office, adding income from rides directly to their tax returns. Trials are also ongoing or in development in Switzerland,92 Finland,93 India,94 Japan,95 the USA96 and at UNICEF.97 The UN is also trialling the provision of refugee identities using Blockchains, in a collaboration with Accenture and Microsoft.98

84. Anand, S. (2018). A pioneer in real estate blockchain emerges in Europe. See: https://www.wsj.com/articles/a-pioneer-in-real-estate-blockchain-emerges-in-europe-1520337601
85. Zuckerman, M. (2018). Swedish government land registry soon to conduct first blockchain property transaction. See: https://cointelegraph.com/news/swedish-government-land-registrysoon-to-conduct-first-blockchain-property-transaction
86. De, N. (2018). UK land registry begins new phase of blockchain research project. See: https://www.coindesk.com/tag/hm-land-registry
87. Browne, R. (2017). An Indian state wants to use blockchain to fight land ownership fraud. See: https://www.cnbc.com/2017/10/10/this-indian-state-wants-to-use-blockchain-to-fight-land-ownership-fraud.html
88. Shin, L. (2017). The first government to secure land titles on the bitcoin blockchain expands project. See: https://www.forbes.com/sites/laurashin/2017/02/07/the-first-government-to-secureland-titles-on-the-bitcoin-blockchain-expands-project/#167b5fa04dcd
89. Aitken, R. (2016). Bitland's African blockchain initiative putting land on the ledger. See: https://www.forbes.com/sites/rogeraitken/2016/04/05/bitlands-african-blockchain-initiative-putting-landon-the-ledger/#405d82507537
90. De, N. (2017). Russia's government to test blockchain land registry system. See: https://www.coindesk.com/russias-government-test-blockchain-land-registry-system
91. https://e-estonia.com/solutions/e-identity/id-card
92. Kohlhaas, P. (2017). Zug ID: exploring the first publicly verified blockchain identity. See: https://medium.com/uport/zug-id-exploring-the-first-publicly-verified-blockchain-identity-38bd0ee3702
93. Suberg, W. (2017). Finland solves refugee identity with blockchain debit cards. See: https://cointelegraph.com/news/finland-solves-refugee-identity-with-blockchain-debit-cards
94. https://www.wisekey.com/press/wisekey-and-the-government-of-andhra-pradesh-indiaannounce-collaboration-to-bring-security-to-block-chain-and-fintech-projects/
95. Rueter, T. (2017). Digital ID in Japan to be powered by blockchain. See: https://www.secureidnews.com/news-item/digital-id-japan-powered-blockchain
96. Douglas, T. (2017). Illinois announces key partnership in birth registry blockchain pilot. See: http://www.govtech.com/data/Illinois-Announces-Key-Partnership-in-Birth-Registry-BlockchainPilot.html
97. Vota, W. (2018). Five examples of United Nations agencies using blockchain technology. See: https://www.ictworks.org/united-nations-agencies-using-blockchain-technology/#.Wuhijn8h1hE
98. Roberts, J. (2017). Microsoft and Accenture unveil global ID systems for refugees. See: http://fortune.com/2017/06/19/id2020-blockchain-microsoft


• In cryptocurrencies: some governments are going even further, with trials of payments and cryptocurrencies in Russia,99 Venezuela100 and the United Arab Emirates,101 and with experiments in Japan,102 Singapore103 and Australia104 extending to banking and trading services. Beyond finance, interest has been shown in energy markets in the UK,105 the USA106 and Chile.107
• In healthcare: countries such as Estonia,108 Sweden109 and the UK,110 as well as several US states,111 have shown interest in the use of Blockchains to certify patient medical records. Estonia's X-Road interconnects hospitals, clinics and other organisations. It implements a unified Electronic Health Record that supplies medical practitioners with information about patients' health while protecting their privacy. For example, the 'e-prescription' system allows doctors to create prescriptions and make them immediately available to pharmacies. Patients can then collect their medicines directly from the pharmacy without having to visit the doctor for a hard copy of the prescription. The Estonian

99. Milano, A. (2018). Regional government in Russia to test blockchain payments. See: https://www.coindesk.com/regional-government-russia-test-blockchain-payments
100. https://www.aljazeera.com/news/2018/02/venezuela-petro-cryptocurrency-180219065112440.html
101. Buntinx, J. (2017). UAE and Saudi Arabia collaborate on central bank digital currency research. See: https://themerkle.com/uae-and-saudi-arabia-collaborate-on-new-central-bank-digital-currency-research
102. Rueter, T. (2017). Digital ID in Japan to be powered by blockchain. See: https://www.secureidnews.com/news-item/digital-id-japan-powered-blockchain
103. Basu, M. (2016). Singapore government builds blockchain system to protect banks. See: https://govinsider.asia/smart-gov/singapore-government-builds-blockchain-system-to-protect-banks
104. Mathew, S., Irrera, A. (2017). Australia's ASX selects blockchain to cut costs. See: https://www.reuters.com/article/us-asx-blockchain/australias-asx-selects-blockchain-to-cut-costsidUSKBN1E037R
105. https://uk.reuters.com/article/us-centrica-blockchain/uk-energy-suppliercentrica-to-trialblockchain-technology-idUKKBN1I117G
106. Sundararajan, S. (2018). US government lab looks to blockchain for P2P energy. See: https://www.coindesk.com/us-government-lab-blockcypher-tap-blockchain-for-p2p-energy-transactions
107. Floyd, D. (2018). Chile is using Ethereum's blockchain to track energy data. See: https://www.coindesk.com/chile-to-use-ethereums-blockchain-to-track-energy-data
108. Allison, I. (2016). Guardtime secures over a million Estonian healthcare records on the blockchain. See: https://www.ibtimes.co.uk/guardtime-secures-over-million-estonian-healthcarerecords-blockchain-1547367
109. Marenco, M. (2018). A Nordic way to blockchain in healthcare. See: http://www.himss.eu/himss-blog/nordic-way-blockchain-healthcare
110. Miller, R. (2018). UK's first trial of blockchain in healthcare. See: https://www.telegraph.co.uk/business/business-reporter/blockchain-trial-in-healthcare
111. Miliard, M. (2018). Massachusetts General Hospital piloting blockchain projects with Korean start-up. See: https://www.healthcareitnews.com/news/massachusetts-general-hospital-pilotingblockchain-projects-korean-start-up


government claims that the use of X-Road saves Estonians over 800 years of working time every year.112
Several much-hyped examples of revolutionary Blockchain opportunities have, however, failed to meet expectations. Mycelia uses the Blockchain to create a peer-to-peer music distribution system. Founded by the UK singer-songwriter Imogen Heap, Mycelia was meant to enable musicians to sell songs directly to audiences, as well as license samples to producers and allocate royalties to songwriters and musicians, with all of these functions automated by smart contracts. Yet Imogen Heap's first Blockchain-based release on Ethereum generated total sales of US$133.113

5.9 Alternative Implementations to POW Blockchains

5.9.1 Lightning Network

The Lightning Network (LN) sits on top of the Bitcoin network with the goal of moving most transactions off the Bitcoin Blockchain and thus, presumably, making them cheaper and faster. The key ideas behind Lightning were proposed by Joseph Poon and Thaddeus Dryja in a 2015 white paper, but it has taken 3 years to translate the proposal into fully working code. Today, three different companies (San Francisco start-ups Blockstream and Lightning Labs, and Paris start-up ACINQ) are working on parallel implementations of the Lightning technology stack. The trio released version 1.0 of the Lightning specification in December 2017, and the companies are now racing to get their software ready for use by the general public. The Lightning Network also needs to solve routing, and many people have pointed out the difficulty of routing in such a system. The sender of a payment needs to know not merely a complete path of nodes to the intended recipient, but also the liquidity of each of the channels linking them. The system can work only if all users have channels to one or more of a small number of large 'payment hubs', each of which has channels to all the other payment hubs with large endowments in each channel. Lightning fees are per-hop, so shorter payment chains will require smaller fees than longer ones. This means hubs with many connections will have an advantage over hubs with fewer connections, and over time the Lightning Network is likely to become more and more concentrated. Unfortunately, this sounds a lot like the current banking system, and many do not see real value.
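
To make the per-hop fee point concrete, here is a minimal sketch of how routing fees accumulate along a path. It is written in Python rather than any real Lightning implementation; the fee structure (a base fee plus a proportional per-millionth rate at each hop) mirrors how Lightning nodes advertise channel fees, but the parameter values and the simplified forward calculation are illustrative assumptions.

```python
# Illustrative sketch: how per-hop fees accumulate along a Lightning route.
# Fee values are invented; real nodes advertise a base fee (msat) plus a
# proportional fee per million (ppm) for each channel they forward over.

def route_fee(amount_msat: int, hops: list) -> int:
    """Total routing fee for sending `amount_msat` across `hops`.

    Each hop charges: base_fee_msat + amount * fee_rate_ppm / 1,000,000.
    (Real routing computes fees backwards from the recipient; this
    simplified version ignores that detail.)
    """
    total_fee = 0
    for hop in hops:
        total_fee += hop["base_fee_msat"] + amount_msat * hop["fee_rate_ppm"] // 1_000_000
    return total_fee

hub_route = [  # one well-connected hub between sender and recipient
    {"base_fee_msat": 1000, "fee_rate_ppm": 100},
]
long_route = [  # four smaller intermediaries
    {"base_fee_msat": 1000, "fee_rate_ppm": 200},
    {"base_fee_msat": 1000, "fee_rate_ppm": 150},
    {"base_fee_msat": 1000, "fee_rate_ppm": 250},
    {"base_fee_msat": 1000, "fee_rate_ppm": 100},
]

amount = 1_000_000_000  # 0.01 BTC expressed in millisatoshis
print(route_fee(amount, hub_route))   # 101000 msat
print(route_fee(amount, long_route))  # 704000 msat
```

Under these made-up numbers, the single-hub path costs roughly a seventh of the four-hop path, which is precisely the economic pressure towards concentration around large hubs described above.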

112. https://e-estonia.com/solutions/interoperability-services/x-road
113. Gerard, D. (2017). Imogen Heap: 'Tiny Human'. Total sales: $133.20. https://davidgerard.co.uk/

5.9.2 IOTA

IOTA is based on the idea of doing away with a linear chain of blocks of transactions, in which each block validates every preceding block and thus every preceding transaction. Instead, each IOTA transaction validates two randomly selected preceding transactions, thereby constructing a Directed Acyclic Graph114 of transactions. By doing so, it eliminates the need for miners, who would each have to store a copy of the entire chain. The protocol was designed specifically for use within the IOT, on connected devices which have limited storage and compute resources. However, some observers note that for IOTA to work, IOT devices need to stay in sync with what is called the 'tangle', something most IOT devices would not have the capability to do.115
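
The tangle structure is easy to sketch. The toy code below is an illustration of the idea rather than IOTA's actual tip-selection algorithm (which weights choices by cumulative approvals): each new transaction approves up to two earlier, as-yet-unapproved transactions, so the ledger grows as a DAG rather than a chain. The batching is an assumption made so that several tips can coexist.

```python
# Toy sketch of a tangle: transactions approve earlier transactions directly,
# with no blocks and no miners. Not IOTA's real tip-selection logic.
import random

class Tangle:
    def __init__(self):
        self.approves = {"genesis": []}   # tx_id -> transactions it approves
        self.tips = {"genesis"}           # transactions nobody has approved yet

    def add_batch(self, tx_ids: list) -> None:
        # Transactions arriving in the same period all see the same tip set,
        # which is what lets several tips (and hence a DAG) coexist.
        snapshot = sorted(self.tips)
        approved = set()
        for tx_id in tx_ids:
            chosen = random.sample(snapshot, k=min(2, len(snapshot)))
            self.approves[tx_id] = chosen
            approved.update(chosen)
        self.tips = (self.tips - approved) | set(tx_ids)

tangle = Tangle()
tangle.add_batch(["tx0", "tx1", "tx2"])
tangle.add_batch(["tx3", "tx4"])
print(tangle.approves)  # each tx lists up to two earlier txs it validates
print(tangle.tips)      # the unapproved frontier new arrivals select from
```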

5.9.3 Ethereum

Unlike LN and IOTA, Ethereum has been live since mid-2015 and is used quite extensively by the Initial Coin Offering (ICO) community and many other Blockchain-based projects. Ethereum aims to become a decentralised super-computer wherein anyone, anywhere can rent out some computational power and create decentralised applications (Dapps) which run on top of the Ethereum platform. Smart contracts are how things get done in the Ethereum ecosystem: when someone wants to get a particular task done in Ethereum, they initiate a smart contract with one or more people. To create a healthy ecosystem on top of Ethereum, it is absolutely essential that the Dapps built on top of it can interact with one another seamlessly. This is where ERC20 comes in. ERC20 is a technical standard that defines a common set of rules and functions for Ethereum-based tokens and the smart contracts that manage them. This helps ensure that applications and tokens built on the Ethereum platform are interoperable.
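
Real ERC20 tokens are Solidity contracts exposing a fixed set of functions (balanceOf, transfer, approve, allowance, transferFrom). The Python sketch below mirrors the semantics of those core functions to show what the standard actually standardises; it is an illustrative model, not executable contract code.

```python
# Python sketch of the core ERC20 semantics: balances, direct transfers,
# and delegated transfers capped by an approved allowance.

class ERC20Like:
    def __init__(self, total_supply: int, issuer: str):
        self.balances = {issuer: total_supply}
        self.allowances = {}  # (owner, spender) -> remaining allowance

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount

    def approve(self, owner: str, spender: str, amount: int) -> None:
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender: str, owner: str, recipient: str, amount: int) -> None:
        # A third party may spend on the owner's behalf, up to the approval.
        if self.allowances.get((owner, spender), 0) < amount:
            raise ValueError("allowance exceeded")
        self.allowances[(owner, spender)] -= amount
        self.transfer(owner, recipient, amount)

token = ERC20Like(total_supply=1_000_000, issuer="alice")
token.approve("alice", "exchange", 500)
token.transfer_from("exchange", "alice", "bob", 200)
print(token.balance_of("bob"))  # 200
```

Because every compliant token answers the same calls, a wallet or exchange written against this interface can handle any ERC20 token without custom integration work, which is the interoperability point made above.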

114. A Directed Acyclic Graph is a graph data structure that has vertices and edges. (A vertex is a point on the graph, and an edge is the path from one vertex to another.) DAGs guarantee that there is no way to start at any vertex and follow a sequence of edges that eventually loops back to that vertex again (i.e. no loops). This allows us to have a sequence of nodes (or vertices) in topological order.
115. The tangle, the underlying data structure that IOTA is built on, is in effect a 'blockless' Blockchain. Rather than transactions created by users being incorporated into blocks by miners, users function as both the miners and the creators of transactions.


The Ethereum network, however, suffers from capacity issues, as was found when the popularity of CryptoKitties exploded in December 2017 and the network struggled to keep pace. Ethereum's governance players have therefore discussed a proposed fix: changing the basis for consensus from Ethereum's Proof of Work (POW) to Proof of Stake (POS). POS works similarly to POW except that, instead of computers validating transactions and receiving rewards proportional to their relative computing power, POS uses token holders. Those who hold tokens can 'stake' them (staking means temporarily placing the tokens in a locked smart contract until staking is over) and, in exchange, confirm transactions and receive rewards based on the relative number of tokens held. With POS, given that you must stake your coins, any malicious behaviour results in the loss of all staked coins; so if you bought 51% of all tokens and attacked the network, you would immediately lose your substantial investment. However, there seem to be a number of problems with POS. The design of these networks is such that early users get vastly better rewards than later users for the same effort. The individuals who are defining the POS mechanism, and defining what 'malicious behaviour' consists of, are all early adopters with large potential stakes they can use to dominate the POS governance process. They will thus garner most of the POS reward, which will reinforce their dominance of the process.
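
A stake-weighted draw makes that compounding dynamic visible. The sketch below is a generic illustration, not the mechanism of any particular POS protocol: validators are selected with probability proportional to stake, rewards are restaked, and the largest early holder's share of rewards drifts upwards over time. All names and figures are invented.

```python
# Illustrative sketch of stake-weighted validator selection: the bigger the
# stake, the more often a holder wins the right to confirm transactions, and
# restaked rewards compound that advantage.
import random

stakes = {"early_adopter": 600_000, "fund": 250_000, "small_holders": 150_000}

def pick_validator(current_stakes: dict) -> str:
    holders = list(current_stakes)
    weights = [current_stakes[h] for h in holders]
    return random.choices(holders, weights=weights, k=1)[0]  # stake-weighted draw

def simulate(rounds: int, reward: int = 100) -> dict:
    earned = {h: 0 for h in stakes}
    for _ in range(rounds):
        winner = pick_validator(stakes)
        earned[winner] += reward
        stakes[winner] += reward  # rewards are restaked, compounding the lead
    return earned

print(simulate(10_000))  # the holder who started with 60% collects ~60%+ of rewards
```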

5.9.4 Cross-Chain Technologies

As decentralised Blockchains are developed, there is a risk that they will not be able to interact with each other at scale. By enabling ledgers to interact with one another through a communication protocol layer, improvements in security, speed and the cost of transactions can be attained. There are multiple approaches to obtaining interoperability. Technology to facilitate interactions between different Blockchains, known as 'cross-chain' technology, is being developed, allowing for interoperability by offering a platform that lets various Blockchains interact with one another and, in essence, function as a single chain. One of the simplest forms is a relayer: a utility that checks for transactions in one chain and 'relays' that information to another. BTC Relay, for instance, allows Ethereum smart contracts to verify a Bitcoin transaction without any intermediary. This enables Ethereum distributed apps and smart contracts to accept Bitcoin payments. Real Blockchain interoperability is not possible today; most systems instead use what are called 'hops' to move from one Blockchain to another. A variety of projects, including Cosmos, Ripple's Interledger Protocol and Polkadot, hope to deliver cross-chain interoperability so that users need not worry about which coin or which Blockchain powers the application they interact with. Nevertheless, many users will be concerned that, as decentralised ledgers start interacting with each other, privacy will be at risk. Therefore, protocols are now being developed that permit data to be shared seamlessly across both centralised and decentralised databases whilst safeguarding privacy.
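
The check a relayer such as BTC Relay makes possible is, at its core, a Merkle inclusion proof against a relayed block header. The sketch below is a simplified illustration, not BTC Relay's actual code (Bitcoin itself uses double SHA-256 and particular byte orderings, omitted here): given the Merkle root from the source chain's header, a transaction hash plus a short proof path is enough to verify inclusion without holding the whole foreign chain.

```python
# Simplified sketch of the relayer check: verify that a transaction hash is
# included in a block, given only that block's Merkle root and a proof path.
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def verify_inclusion(tx_hash: bytes, proof: list, merkle_root: bytes) -> bool:
    """`proof` is a list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    node = tx_hash
    for sibling, is_left in proof:
        node = h(sibling, node) if is_left else h(node, sibling)
    return node == merkle_root

# Build a tiny 4-leaf tree to exercise the check.
leaves = [hashlib.sha256(f"tx{i}".encode()).digest() for i in range(4)]
l01, l23 = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(l01, l23)

# Proof that tx2 is in the block: sibling tx3 (on the right), then l01 (left).
proof_for_tx2 = [(leaves[3], False), (l01, True)]
print(verify_inclusion(leaves[2], proof_for_tx2, root))  # True
```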

116. A Zero-Knowledge Proof (ZKP) is when a prover convinces a verifier that they have some secret knowledge, without revealing the knowledge directly. In other words, a program can have secret inputs and the prover reveals nothing to the verifier. ZKPs provide fundamental primitives that can be used to build privacy-preserving mechanisms. ZKPs can be used to guarantee that transactions are valid despite the fact that information about the sender, the recipient and other transaction details remains hidden. Examples include: https://z.cash/


Innovations in cryptography such as Zero-Knowledge Proofs (ZKP),116 differential privacy, Fully Homomorphic Encryption (FHE)117 and Secure Multi-party Computation (SMPC)118 may enable data to remain private and secure while still moving through public networks. I suspect you would need a PhD in mathematics or cryptography to really make sense of these innovations, something I do not have, and I will therefore swiftly move on.

5.10 Regulating Blockchain

There has been a lot of discussion within many regulatory authorities around newly emerging Blockchain-enabled financial systems, which they believe are trying to circumvent existing regulatory rules. Regulatory authorities, quite rightly under those circumstances, have looked at some implementations of Blockchain and token economics with a degree of suspicion. Attempting to regulate a permission-less system where there is no controlling legal entity is a complicated task. Consequently, regulation so far has targeted cryptocurrency business applications such as exchanges and wallet providers.119 I am not advocating more regulation per se. Innovation has almost always outpaced regulation; Blockchain development and implementation are happening at a pace well beyond the capacity of regulators to respond, and trying to regulate such a space heavily is likely to do more long-term damage than good. Many regulators and central banks are coming to the conclusion, which I wrote about a couple of years back, that Blockchain and associated technologies can actually be tools for regulatory oversight. They can fundamentally improve the transparency of the financial system, embed compliance and real-time reporting, protect the end-to-end integrity of the system and nurture further levels of trust within and outside of the system. Whilst good regulation is critical, you cannot have good regulation when you do not know the lay of the land. At the same time, you cannot decline to regulate just because you do not have the full facts. The path forward must balance the priorities of regulators to uphold the law and preserve the integrity of capital markets, whilst allowing innovation to flourish. We need to move from strict regulation to good governance: just because you are decentralised does not mean you need to be disorganised.120 Some regulatory authorities are themselves becoming a full node on the Blockchain with

117. A fully homomorphic encryption system enables computations to be performed on encrypted data without needing to first decrypt the data.
118. SMPC works by splitting the data into several smaller parts, each of which is masked using cryptographic techniques. Each small, encrypted piece of data is sent to a separate independent server, so that each server only contains a small part of the data. An individual or organisation looking to discover the 'secret' (i.e. uncover the data) will need to aggregate the encoded data.
119. Massari, J. et al. (2018). The fragmented regulatory landscape for digital tokens. Harvard Law School Forum on Corporate Governance and Financial Regulation.
120. Tapscott, T. (2018). 2018 Blockchain regulation roundtable: addressing the regulatory challenges of disruptive innovation. Blockchain Research Institute.


multi-signatory rights to view all the transactions on the Blockchain. As the regulatory space matures, we may see regulators granted broad rights, which may include broader governance capabilities, enabling them not only to have oversight of all activities but also to ensure the network is run in a manner that is not contrary to agreed governing principles. The regulatory authority in this context moves away from being a policing force trying to police a problem it does not really understand. It would instead work collaboratively with the industry to develop regulatory governance mechanisms that can provide the levels of oversight and control the regulator requires to fulfil its statutory duties, whilst allowing this nascent technology and its business models to grow and flourish. This is already happening, with some central banks mandating rights to be a node in many cryptocurrency implementations.
Much has also been written about the impact cryptocurrencies can have on the broader global financial system, with some predicting or hoping that central banks' role will become irrelevant with the emergence of decentralised, technology (or rather protocol) driven, trust-creating networks. The evidence to date is that government-controlled central banks are unlikely to give up their central role in determining monetary policy or their ability to manage interest rates, inflation or their currencies. Major established systems are typically more resilient than they appear. I believe it is fair to say that decentralised Blockchain-based applications must complement the role of central banks, and indeed other regulatory agencies, rather than trying to bypass them. This is not to say that Blockchain-based models will not be a disruptive force: the tokenization model, in which value resides in the network rather than its controlling operator, offers the potential for disruption, transferring power from the large and few to the small and many.
Participants on Blockchains need to trust a decentralised model in which either there is no central control or they have effective shared control. At the same time, governments need to trust that their citizens will be protected, taxes will be paid, and criminal activity will be effectively monitored and policed. Whilst Blockchains may have inherent mechanisms of trust built in, these need to be complemented with mechanisms of governance and law. The role of regulatory authorities (in the financial space at least) pivots around three pillars:
• Pillar I: to protect consumers (sometimes from themselves, in the case of investing in high-risk investments) or ensure universal access to services for all consumers;
• Pillar II: to protect the competitive process and prevent any abuse that may result from market power or dominance (however these may be defined);
• Pillar III: to protect the integrity and security of the system itself, i.e. ensure it is fully trustworthy.
On the first pillar, consumer protection: Blockchain-based systems are helping expand financial inclusion to previously unbanked populations (especially in international transactions), one of the key mandates of regulators. Whilst many


regulatory authorities have expressed concerns around cryptoassets, most appear to be reaching at least an interim conclusion that the existing regulatory systems are capable of regulating such activities. Sometimes the regulatory solution is to expand the licensing regime to capture these new entities or activities. In some countries, a licensing framework approach for Blockchain applications has also started to emerge, in particular for cryptocurrency application environments. For example, the State of New York offers a 'BitLicense', which allows businesses to conduct virtual currency activities on Distributed Ledger Technology/Blockchain infrastructure. The traditional ethos of high-quality regulation being 'technology neutral' means that most of the existing regulatory tools should be capable of regulating these services, whatever technology may be used to deliver them. The broader question of managing the cross-border activity which Blockchain-enabled systems make possible is not a new one. The Internet has vexed many regulatory authorities for many years, without definitive solutions to date. Trying to find solutions for the regulation of cross-border activities conducted through Blockchains may be as futile as ever, although there are some interesting ideas emerging.
On the second pillar, competition: it is too early for regulatory authorities to have concerns in the Blockchain and tokenization space. It is still very much a fragmented space, without the emergence of dominant players. However, one area where regulators need to pay attention is the interoperability of different Blockchain-based systems. This becomes important, as will be discussed later.
On the third pillar, system integrity: it is fair to say that regulatory authorities have started from a perspective that Blockchains (read cryptocurrencies and, in particular, Bitcoin) have inherent regulatory problems: significant volatility, reduced transparency and use by unscrupulous people (tax evasion,121 money laundering, terrorist financing and other Dark Web activities). This is a misconception. Blockchains have five basic principles underlying the technology that actually make the system more transparent, robust and resilient:
1. Distributed database: given that each party on a Blockchain has access to the entire database and its complete history, every party can verify the records of its transaction partners directly, without an intermediary. There is no single point of failure, improving system resiliency.122
2. Peer-to-peer transmission: given that each node stores and forwards information to all other nodes, there is a single shared version of the truth.

121. Some countries consider them digital money (subject to related regulation), while others treat cryptocurrencies as digital products or commodities (subject to VAT). Regulators in the USA apply different criteria to Bitcoin, for instance: some consider it money (FinCEN, SEC), while others consider it a commodity (CFTC) or even property (Internal Revenue Service).
122. Most DLT or Blockchain technologies use Public-Private Key Infrastructure (PKI) to secure rights to view and add to blocks; the risk of system security is therefore somewhat transferred to the PKI operator and the individuals that need to guard their respective 'keys'.


3. Transparency: every transaction is visible to anyone with appropriate access to the system. Each node or user on a Blockchain has a unique address that identifies it, and every transaction is associated with such an address. Users can choose to remain pseudo-anonymous or provide proof of their identity to others. Blockchain transactions are not tied directly to your identity, so they may appear more private: anyone in the world can create a new wallet anonymously and transact using it. However, it is not quite that simple, and the appearance of total anonymity is misleading. It is true that a person can preserve his or her privacy as long as the pseudonym is not linked to the individual. However, on a Blockchain platform like Ethereum, users interact with smart contracts that handle more than just simple value transfers. All the details of these smart contracts are public on the Ethereum Blockchain, including senders and recipients, the transaction data itself, the code executed and the state stored inside the contract. Contrary to popular belief, then, there is more transparency with Blockchains (which does, however, raise difficult questions for organisations that commit critical business data to a Blockchain via smart contracts, where hackers and competitors can view the information).
4. Irreversibility of records: once a transaction is entered in the database and the accounts are updated, the records cannot be altered, because they are linked through hash functions to every transaction record that came before them; the data is immutable. Blockchains thus support more robust, automated and accurate record keeping, improving the audit and oversight capabilities of regulatory authorities. A minimal sketch of this hash-chaining mechanism follows this list.
5. Verifiable data and reconciliation: Blockchains provide access to real-time verifiable data, potentially reconciling, through smart contracts, end-to-end processes that would otherwise be difficult or impossible to reconcile. Having all the transaction information in a shared register in almost real time will allow regulatory authorities to monitor financial activity without having to wait for the required reports from the various financial institutions.
Furthermore, where Blockchains are combined with trustworthy multi-signature and cryptography functions, the security of the entire system and its data can be enhanced. Using the technology, data can be hashed onto the Blockchain and only accessed with signature approval from a predefined group of people; something that would enable the open, transparent nature of Blockchain technology to be reconciled with data protection regulations such as GDPR. One aspect of GDPR that many commentators state may not gel well with Blockchain technology is the 'right to be forgotten'. I suspect this is a red herring: GDPR was never intended to let unscrupulous individuals use its provisions to wipe clear illegal transactions or activities, and the 'right to be forgotten' is not an unconstrained right. Technology solutions could enable the right to be forgotten to be implemented for all practical purposes, in that relevant data about the person is no longer publicly visible, even if such data is not deleted from the Blockchain.
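
As promised above, here is a minimal sketch of the hash chaining behind principle 4. The block format is invented for illustration, but the mechanism, each record committing to the hash of its predecessor, is the one real Blockchains rely on: tampering with any historical entry invalidates every hash after it.

```python
# Minimal sketch of hash-chained records: altering any past entry breaks
# the chain of hashes and is immediately detectable on verification.
import hashlib, json

def make_block(data: str, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps([data, prev_hash]).encode()).hexdigest()
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps([block["data"], block["prev_hash"]]).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
for payment in ("A pays B 5", "B pays C 2"):
    chain.append(make_block(payment, chain[-1]["hash"]))

print(verify(chain))               # True
chain[1]["data"] = "A pays B 500"  # tamper with history
print(verify(chain))               # False: the stored hash no longer matches
```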


When you examine the inherent principles underlying Blockchain technology, you come to realise that Blockchain technology (especially in the context of smart contracts) can support regulatory authorities in systematically adhering to the third regulatory pillar: making the end-to-end system more trustworthy. It can more effectively and efficiently bridge the gap of trust between those they regulate and the public. The real issue in implementing this solution is that many permissioned Blockchains are global in nature and may have multiple consortia, whereas a regulatory authority's powers are local in nature. One solution being promoted is that regulatory authorities from different geographies have a presence in each of the global consortia, with access rights to the information needed to perform their supervision activities in relation to the entities that fall under their jurisdiction. Information extracted by each regulatory authority from every consortium is then shared in a private 'regulatory authority' network, thus combining all the information needed for monitoring global transactions.
For a regulator-friendly system to work, there needs to be interoperability between the different Blockchains on which the regulatory authority may need to be a node, as well as interoperability with existing systems. A regulator having to maintain nodes on multiple Blockchains, each potentially with different protocols and reporting standards, would find its life difficult. There is already some collaboration in this space. In September 2017, R3 announced that it had developed a prototype system together with the UK Financial Conduct Authority, the Royal Bank of Scotland Group Plc and another global bank, built using R3's Blockchain platform Corda, which enables banks to generate automated delivery receipts for the regulator each time a mortgage is booked. There are other projects, like Polkadot and Atomic Swap, with a similar aim.
A common set of standards and formats needs to be agreed for the precise regulatory information to be stored in the ledger, so that regulators can easily extract the needed data. The International Organization for Standardization (ISO) is developing technology-agnostic standards for Blockchain and distributed ledger technologies. ISO has established a technical committee (ISO/TC 307) that is currently working to develop 11 ISO standards, with 42 participating entities and 12 observing members. This could have a significant impact on the standardisation of Blockchain and distributed ledger technologies, one that might be comparable to the effect of the W3C on the World Wide Web. An alternative, and perhaps more effective, approach than standardising technology is to standardise data formats. Since network participants will often need to implement data processing capability on their own specialised software platforms, it might be more versatile to standardise on data exchange formats and trusted sources of data (oracles), but leave the choice of technology to the network participants.

5.11 Smart Contracts

The digital nature of the ledger enables smart contracts, meaning that Blockchain transactions can be tied to computational logic and, in essence, be programmed. Users can therefore set up algorithms and rules that automatically


trigger transactions between nodes. Smart contracts can be programmed to generate instructions for downstream processes, such as payment instructions, if and when reference conditions are met. Like passive data, they become immutable once accepted onto the ledger. Counterparties need to establish obligations and settlement instructions (put assets under the custody of the smart contract(s) and establish conditions for execution). Upon an event trigger (e.g. a transaction initiated or information received), the contract is executed based on its terms. Movement of value based on conditions being met, which could be 'on-net' or 'off-net' (e.g. issuance of settlement instructions and reconciliation), then occurs and the distributed ledger is updated.
Unlike a normal contract, which is drafted by a lawyer, signed by the partaking parties and enforced by law, a smart contract sets out a relationship in cryptographic code. In simple terms, smart contracts are self-executing, written into code and built as complex 'if-then' statements (meaning that they will only be fulfilled if the established conditions are met). Smart contracts tend to be written in a JavaScript-like language and embedded in Ethereum's Blockchain. They are today mainly used to implement Initial Coin Offerings, games such as CryptoKitties and gambling parlours, but they have the potential to support a wide range of activities, replacing many of today's manual contract execution processes.
Most smart contracts will need to interface with the outside world. Those outside sources are called 'oracles'.123 Some oracles are just traditional data feeds designed with interfaces that allow smart contracts to process them in an automated way. Oracles can also be humans who feed information into a smart contract, especially in the context of arbitration. Here, each of the two parties holds a private key and a third key is given to an expert human arbitrator; the smart contract requires two of the three keys in order to execute. If the parties agree that the contract has been fully performed, they provide their keys and the smart contract executes. If there is a dispute, the parties turn to the human arbitrator, who either provides a key along with that of the party seeking to enforce the contract or refuses it, thereby preventing completion of the transaction (see the sketch below).

123. Smart contracts use what are called oracles. An oracle is a party which relays information between smart contracts and external data sources. It essentially acts as a data carrier between smart contracts on the Blockchain and external data sources off the Blockchain. So one approach to keeping information 'private' is simply to use oracles to fetch private information from an external data source.
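
The sketch below illustrates that two-of-three arbitration pattern as a plain 'if-then' program. It is a conceptual model, not Solidity: a real implementation would verify cryptographic signatures rather than compare key names, and would hold actual funds in escrow. All names are invented.

```python
# Conceptual sketch of 2-of-3 escrow arbitration: funds are released only
# when at least two of the three key holders (buyer, seller, arbitrator)
# approve. Signature checks are reduced to string comparison for clarity.

class EscrowContract:
    def __init__(self, keys: set, amount: int):
        self.keys = keys            # the three authorised key holders
        self.amount = amount        # value locked in the contract
        self.approvals = set()
        self.released = False

    def approve(self, key: str) -> None:
        if key not in self.keys:
            raise PermissionError("unknown key")
        self.approvals.add(key)

    def release(self) -> int:
        # The 'if-then' core of the contract: pay out only on 2-of-3 approval.
        if len(self.approvals) >= 2 and not self.released:
            self.released = True
            return self.amount
        raise RuntimeError("not enough approvals")

escrow = EscrowContract({"buyer", "seller", "arbitrator"}, amount=1000)
escrow.approve("seller")
escrow.approve("arbitrator")   # dispute resolved in the seller's favour
print(escrow.release())        # 1000
```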


Some believe that smart contracts may be the most transformative Blockchain application. A firm could signal via the Blockchain that a particular good has been received, or a product could have GPS functionality that automatically logs a location update which, in turn, triggers a payment, eliminating inefficiencies and unnecessary intermediary costs and reducing the potential for fraud or errors. The implications go beyond these immediate transactions. Firms are built on contracts, from incorporation to buyer-supplier relationships to employee relations. If contracts are automated, what will happen to traditional firm structures, processes and intermediaries like lawyers and accountants? And what about managers? Taken to the extreme, smart contracts have been deemed potentially revolutionary across a range of industries, with some observers going as far as saying that they will replace traditional contracts and lawyers. Those who say smart contracts will render lawyers useless usually believe the entire legal agreement is expressed in its code, essentially meaning that code is law and, as such, lawyers are unlikely to be required. That assumption remains open to question. Most current implementations of smart contracts are aimed at complementing lawyers rather than replacing them. Virtually every economist who has studied the issue argues that, in practice, complete contracts are simply not possible. The world is a complicated place, the future is largely unknowable and we humans have limited intelligence. These and other factors combine to make it prohibitively difficult, and most likely impossible, to write a complete contract: one that truly does away with the need for lawyers.
Agreements incorporating smart contracts will not be immune to disputes and legal challenges. Parties will disagree about the terms governing their performance and about how the smart contract was intended to operate. There could also be bugs in the smart contract itself, creating complications or misaligned incentives. The risk of mistakes becomes magnified as smart contracts increasingly interact with outside data provided by trusted oracles, but also require humans to perform their terms.124 Parties aggrieved at the results of a smart contract will still need to resort to litigation in order to reverse the position of the parties. Rather than seeking to have alleged promissory obligations fulfilled, complaining parties will seek to undo or reverse completed transactions. Claims of breach will transform into claims of restitution. This will affect legal standards such as causes of action and burdens of proof, with unpredictable consequences.
The problem with smart contracts is that they are publicly accessible and anything stored within them is open for anyone to view. While this provides openness and transparency, it also makes smart contracts very attractive targets for hackers. The first big smart contract was The Decentralized Autonomous Organisation ('The DAO'), which operated without a corporate hierarchy, was organised around the concept of a town hall and had the potential to give voice to all investors. The DAO was designed as a series of contracts that would raise funds for Ethereum-based projects and disperse them based on the votes of members. An initial token offering was conducted, exchanging ethers for The DAO tokens that would allow stakeholders to vote on proposals, including ones to grant funding to a particular project. That token offering raised more than US$150 million worth of ether at then-current prices, distributing over 1 billion The DAO tokens. In June 2016, however, news broke that a flaw in The DAO's smart contract had been exploited, allowing the removal of more than 3 million ethers (worth around US$55 million at the time). Subsequent exploitations allowed more funds to be removed, which ultimately triggered a 'white hat' effort by token holders to secure the remaining funds. The loot

124. OpenLaw, 2018.


was restored by a 'hard fork', the Blockchain's version of mutability. Since then it has become the norm for smart contract authors to make their contracts 'upgradeable' so that bugs can be fixed; 'upgradeable' is another way of saying 'immutable in name only'. In the case of The DAO, the difference between a legitimate transaction and theft came down to intent, which is something that computers cannot determine under the terms of a smart contract. These are the sort of things that courts tease out all the time.
The DAO raised many novel legal questions. Most importantly, did the exploit break laws, and if so, which ones? For example, did the person or group that exploited The DAO commit theft or fraud, or violate cyber security laws? Was The DAO a partnership? Did token holders have fiduciary responsibilities to each other? Do The DAO programmers or curators have any liability for the exploitation of The DAO tokens? Which courts around the globe have jurisdiction over each of these issues?
Given these inherent legal issues, software vendors have explicitly avoided accepting liability for the damages caused by vulnerability exploitation. As software has become embedded more deeply into processes integral to an individual, a business or even a nation-state, the potential damages associated with exploiting software vulnerabilities have also grown. One way to conceptualise how liability could be distributed is based on risk. In thinking about assigning responsibility for securing vulnerabilities, two main questions emerge. The first is: who is liable or otherwise responsible for securing software? At least four broad liability regimes exist, ranging from no liability to holding vendors liable for their software:
• No liability, code closed-sourced: this is the current norm, where counterparties to software vendors may negotiate some level of accountability by exception. In this regime, if damages arise as a consequence of a vulnerability being exploited, the vendor is not held responsible;
• No liability, code open-sourced: in exchange for being released from liability, vendors could be required to open-source the underlying code. In theory, users and implementers of software would be more empowered to address vulnerabilities on their own. In this regime, if a vendor has open-sourced the code, the vendor is not responsible for the consequences of the software being exploited;
• User or implementer liable: users and implementers could be held liable for damages arising from software being exploited. In practice, such a regime would create heightened incentives for users to contract for secure software. Within this context, there is also the possibility of differential liability as between enterprises with dedicated security teams and consumers (with more responsibility being attached to entities 'that should know better');
• Vendor liable: vendors could be held liable for damages arising from software being exploited. For example, if a vendor did not issue a patch for a known software vulnerability, the vendor would be held liable if damages arose as a consequence (thereby heightening incentives for vendors to design and maintain secure software).


The second question is how liability should shift when software reaches its end of life or vendors go out of business. Given that most software vendors cannot afford to support software ad infinitum, there are justifiable concerns around attaching liability to software in perpetuity. To address those concerns, some commentators have proposed a sliding scale of liability, such that liability shifts as software enters 'end of life'. For example, software may begin as a vendor's responsibility, but as it reaches 'end of life', liability may be transferred to users and implementers. In such a regime, vendors would be held responsible for designing and maintaining secure software, and there would be commercial incentives for users and implementers to upgrade to newer versions of software when available. The whole debate around liability is also becoming heightened in the context of AI and who is liable for developing products that incorporate AI. This is discussed in a subsequent chapter.

5.12 Token Economics, Cryptocurrencies and Initial Coin Offerings (ICO)

Blockchains create the potential to tokenize (monetise) literally everything. I have already talked about tokens in the context of cryptocurrencies and Bitcoin specifically, but tokens can be used in a range of applications. Tokenization allows for fractional ownership of different asset categories, which can lower barriers to investment, improve liquidity and facilitate tradability. Through tokenization, Blockchains enable fractional ownership of these asset categories, whereby any owner can represent his property with a property address and a wallet address on the Blockchain platform and issue a number of security tokens against that specific property; a minimal sketch follows. Tokens themselves are simply a type of value instrument. The rules under which these instruments are generated, distributed and managed are decided by community members through agreed governance rules or by the organisation issuing the tokens. Cryptocurrency networks are, in essence, technologies of governance. The ability of tokens to enable speedy, secure and low-friction value transfer will underpin the highly automated IOT economy and provide a mechanism for its governance.
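
As flagged above, here is a minimal sketch of fractional ownership through tokenization. All names, valuations and token counts are invented for illustration; a real implementation would sit on a Blockchain with the legal wrapper of a security token.

```python
# Illustrative sketch of fractional ownership: one property is represented
# by a fixed number of tokens, so investors can hold and trade fractions.

class TokenizedAsset:
    def __init__(self, asset_id: str, valuation: float, total_tokens: int, issuer: str):
        self.asset_id = asset_id
        self.valuation = valuation
        self.total_tokens = total_tokens
        self.holdings = {issuer: total_tokens}  # issuer starts with all tokens

    def token_price(self) -> float:
        return self.valuation / self.total_tokens

    def transfer(self, seller: str, buyer: str, tokens: int) -> None:
        if self.holdings.get(seller, 0) < tokens:
            raise ValueError("seller does not hold enough tokens")
        self.holdings[seller] -= tokens
        self.holdings[buyer] = self.holdings.get(buyer, 0) + tokens

flat = TokenizedAsset("12 Example Street", valuation=500_000.0,
                      total_tokens=10_000, issuer="owner")
flat.transfer("owner", "investor_a", 250)                # a 2.5% share
print(flat.token_price())                                # 50.0 per token
print(flat.holdings["investor_a"] * flat.token_price())  # 12500.0 invested
```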

5.12.1 Token Classification

Although there is no established classification of tokens, the categorisation introduced by the Swiss Financial Market Supervisory Authority ('FINMA'), based on the economic functions of the tokens, has proved to be widely accepted, although it is being modified over time as necessary:125

125. Securities and Markets Stakeholder Group (2018). Own Initiative Report on Initial Coin Offerings and Crypto-Assets. 19 October 2018, ESMA 22-106-1338.


• Utility Tokens are intended to provide access to a specific application or service but are not accepted as a means of payment for other applications. The value of the service or product depends on the perception of the investor, making the holder comparable to a purchaser of a product or service. These are comparable to a voucher and to crowd funding by coupon. They allow prefunding of a future business without diluting ownership;
• Asset Tokens (or Security Tokens) represent assets such as a debt or equity claim against the issuer, as they promise the owner a share in the future profits or capital flows of the underlying company (sale of the tokens, dividends, interest). They resemble bonds, financial instruments or derivatives. Security tokens are expected to grow into the largest token market because of the benefits of fractional ownership and increased liquidity. These types of tokens fall under existing securities laws and markets;
• Payment Tokens represent a means of payment for goods and services and are true virtual currencies, with the most prominent example being Bitcoin. Such a token is used as a form of payment and store of value, designed to bolster an independent, peer-to-peer monetary system, although some payment tokens are used not as payment instruments but as speculative investments. In addition, since no central bank intervenes to smooth extreme price fluctuations, payment token volatility is much higher than that of fiat currencies. This has been the case with Bitcoin, which has proved too volatile to be used as a payment token: for example, Bitcoin rose from around US$5950 to above US$19,700 within one year, and its depreciation was equally rapid. It is highly unlikely that retailers would accept Bitcoin as a means of payment when it is as volatile as this;
• Stable Coin Tokens, which are a subset of payment tokens, take cognisance of the fact that a currency should act as a medium of monetary exchange and a store of monetary value, and that such value should remain relatively stable over a longer time horizon. Users will refrain from adopting a currency if they are not sure of its purchasing power tomorrow. The reason fiat currencies remain stable is that there are reserves that back them and central banks with the authority and power to stabilise the currency. Stable coins aim to replicate this stability either by holding reserves such as a basket of fiat currencies, precious metals or commodities, or by pegging the stable coin to something more stable, with algorithms performing the equivalent function of a central bank for fiat currencies (a reserve-backed variant is sketched after this list).
Tokens combining several features of these different token categories are referred to as Hybrid Tokens.
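
As flagged in the stable coin bullet above, here is a highly idealised sketch of the reserve-backed variant: tokens are minted only against deposited reserves and burned on redemption, so the issuer can always honour the peg. Real designs must also handle custody, audits, fees and runs, none of which appear here, and algorithmic designs replace the reserve with supply-adjusting rules.

```python
# Idealised sketch of a reserve-backed stable coin: one unit of fiat reserve
# stands behind every token, so the peg can always be honoured on redemption.

class ReserveBackedStablecoin:
    def __init__(self, peg: float = 1.0):
        self.peg = peg        # target price in fiat, e.g. US$1
        self.reserve = 0.0    # fiat held by the issuer
        self.supply = 0.0     # tokens in circulation

    def mint(self, fiat_deposited: float) -> float:
        self.reserve += fiat_deposited
        minted = fiat_deposited / self.peg
        self.supply += minted
        return minted

    def redeem(self, tokens: float) -> float:
        if tokens > self.supply:
            raise ValueError("cannot redeem more than circulating supply")
        self.supply -= tokens
        paid_out = tokens * self.peg
        self.reserve -= paid_out
        return paid_out

    def fully_backed(self) -> bool:
        return self.reserve >= self.supply * self.peg

coin = ReserveBackedStablecoin()
coin.mint(1_000_000.0)   # issue 1M tokens against US$1M of reserves
coin.redeem(250_000.0)   # holders cash out; the reserve shrinks in step
print(coin.supply, coin.reserve, coin.fully_backed())  # 750000.0 750000.0 True
```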

5.12.2 Regulating Cryptoassets, ICOs and Cryptocurrencies

Globally, governments and regulators have been slow off the mark when it comes to cryptocurrencies. Most have sat on the fence, believing this was a fad that would soon disappear. For the majority of the period, the key cryptocurrency has been


Bitcoin, which has been on a roller-coaster ride, with volatility changing by the hour. Under these circumstances, most governments thought Bitcoin would effectively self-destruct and that they did not need to spend considerable time or effort trying to understand its effects, whether it needed to be regulated and, if so, how.126 That has recently changed, as more and more cryptocurrencies have emerged and their role has extended. One of the key moments came when tokens began to be used as a means of raising funds, in so-called Initial Coin Offerings (ICOs). ICOs can be thought of as an evolution of crowd funding, in which future users finance the development of solutions and in turn become stakeholders in the system. An ICO, like an IPO in mainstream investment, is a means of fundraising, used primarily by firms looking to create a new coin, app or service, whereby interested investors buy into the offering, either with fiat currency or with pre-existing digital tokens. In exchange for their support, investors receive a new cryptocurrency token specific to the ICO, which could be classified as a security or utility token. ICOs have become an increasingly popular method for start-ups and other companies to raise capital. Investors participate in the fundraising by transferring fiat currencies or cryptocurrencies, such as Bitcoin or Ether, to the issuer in exchange for these new digital tokens. Nevertheless, many investors had to take a hair-cut on their investments in ICOs and brought the issue to the attention of regulators. This opened a Pandora's box for all things crypto. A report by the ESMA Securities and Markets Stakeholder Group, published in 2018, claims that 78% of listed coins/tokens were scams, whereas only 7% were deemed to be successful.127 The report also claimed that nearly 60% of these were issued in the USA and Switzerland in 2017, but by 2018 this had dwindled to 14%, with the majority (60%) being issued from the Cayman Islands and the Virgin Islands due to regulatory concerns. The ICO Governance Foundation (IGF), a global self-regulatory initiative for token offerors based on a Swiss foundation, is developing a standard registration form for ICOs, a registration database for investors and certification of custodial organisations that handle funds contributed to ICOs, in an attempt to provide some reassurance to investors and regulators.

5.12.3 The Debate Between Utility and Securities Tokens

The key question which arises when looking at cryptocurrencies is whether these tokens are utility tokens, security tokens, payment tokens or something else. Each carries very different policy and regulatory concerns.

126 Liechtenstein is one of the few countries to have passed the Blockchain Act, which provides a comprehensive legal framework for digital assets. The Act will come into force on 1 January 2020.
127 ESMA. (2018). Advice to ESMA: own initiative report on initial coin offerings and cryptoassets. ESMA22-106-1338, published 19 October 2018.


The popularity of selling tokens via ICOs as a means of start-up fundraising has exploded in the last few years. Figures show approximately US$31.7 billion had been raised through some 1790 ICOs at the time of writing this book, dwarfing the amounts raised for Blockchain projects via traditional venture capital.128 The majority of ICOs are within the finance, trading, investment and payment sectors (over 37%).
The purchase of tokens issued in connection with an ICO can be qualified as a purchase of commodities, a purchase of rights or a purchase of securities, which may ultimately subject ICOs and their 'Whitepapers' to prospectus or other disclosure requirements. Regulations may not necessarily be limited to those of the jurisdiction governing the ICO: when an ICO is marketed to investors residing or domiciled in another jurisdiction, the financial regulatory rules of that jurisdiction may equally apply. A governing law clause does not dispense the ICO issuer from compliance with such financial regulation.
The difficulties of applying the existing regulatory regime in the token space can be seen clearly when it comes to the use of cryptoassets. There is a range of (conflicting) opinions from regulators on cryptoassets, from outright scepticism and bans in some countries, to more cautious investor warnings from others,129 while yet other countries have introduced regimes to attract more crypto activity.130 In the USA, the Securities and Exchange Commission (SEC) has confirmed that investment and hybrid investment/utility tokens pass the so-called 'Howey test', meaning those tokens are 'investment contracts' and therefore subject to USA securities laws. EU law relies on the transferability of units in the secondary market, rather than on the investment character of the instrument, to determine whether they constitute a security instrument/token.
From a regulatory perspective, attempting to regulate a permission-less system like Bitcoin, where there is no controlling legal entity, is a complicated task. Consequently, regulation so far has targeted cryptocurrency business applications such as exchanges and wallet providers.131 Regulators tend to focus on cryptocurrency exchanges, because they provide a nexus between the cryptocurrency market and the traditional financial sector. In contrast, for permissioned systems where access is conditional and the participants are pre-screened, the existing regulatory framework should be able to provide sufficient oversight, since the actors are already subject to regulatory obligations.

128 Figures as of 8 December 2019 from: https://www.coinschedule.com/stats.html
129 Many EU jurisdictions have issued warnings on the use of cryptocurrency but continue to apply existing legal principles.
130 Gibraltar and Malta are examples of countries advocating cryptocurrency. See Gibraltar Finance. (2018). "Token Regulation: Proposals for the Regulation of Token Sales, Secondary Token Market Platforms, and Investment Services Relating to Tokens." Parliament of Malta, Virtual Financial Assets Act 2018.
131 Massari, J., Nazareth, A., Zweihorn, Z. (2018). The fragmented regulatory landscape for digital tokens. Harvard Law School Forum on Corporate Governance. See: https://corpgov.law.harvard.edu/2018/03/26/the-fragmented-regulatory-landscape-for-digital-tokens/


Regulatory authorities have not been shy about enforcing regulations related to cryptoassets. One crypto exchange was fined US$110 million for failure to detect suspicious transactions and file suspicious activity reports.132
A key moment that brought regulatory authorities to the forefront has been Facebook's Libra project, which effectively proposes to issue stablecoins as an alternative to fiat currency (with the ability to trade in and out of Libra into fiat currencies). When an organisation with two and a half billion users proposes an alternative to fiat currencies, it was no surprise that governments, central banks and regulators took notice. Policy makers are concerned about the risks that such stablecoins pose: unsurprisingly, they want to prevent stablecoins from influencing open market operations, affecting monetary bases, weakening capital controls, draining deposits and increasing systemic risk.
Facebook had aimed to launch Libra in the first half of 2020. Knowing people would not trust it to steer the cryptocurrency wholly on its own, Facebook recruited the founding members of the Libra Association, a not-for-profit consortium based in Switzerland that would oversee the development of the token. Members of the association included, amongst others: payment providers (MasterCard, PayPal, Visa); technology companies and platforms (Booking Holdings, eBay, Facebook, Spotify, Uber); telecommunications providers (Iliad, Vodafone); a number of venture capital firms; and non-profit and multilateral organisations and academic institutions. In total, 28 founding members were listed.
However, USA lawmakers formally called on Facebook to cease all development of its Libra cryptocurrency in a letter sent to executives. Democrats from the USA House of Representatives wrote an open letter to Facebook calling for a moratorium on all Libra development, while the Financial Services Committee and affiliated subcommittees held hearings to determine how it would operate and what safeguards would be implemented to protect user privacy. Many of the founding consortium members have since exited following this strong resistance from the USA and other governments. President Donald Trump signalled that cryptocurrencies such as Bitcoin and Facebook's proposed Libra digital coin will face the full force of USA regulators. In a post on Twitter, his preferred medium, he said: "If Facebook and other companies want to become a bank, they must seek a new banking charter and become subject to all banking regulations, just like other banks, both national and international". The USA Congress is considering introducing a bill to stop the launch of Libra until regulatory questions about the project and its governance can be resolved.133 It is also conceivable that Facebook's Libra has been blocked because the USA wanted the cryptocurrency tied to the US dollar, rather than to a basket of currencies as proposed.

132 Source: U.S. Treasury Financial Crimes Enforcement Network (FinCEN), FinCEN Fines BTC-e Virtual Currency Exchange $110 Million for Facilitating Ransomware, Dark Net Drug Sales (July 27, 2017).
133 Discussion Draft, 116th Congress, 1st Session (18 October 2019). See: https://financialservices.house.gov/uploadedfiles/bills-116pih-ssa.pdf


The Libra project is meeting with resistance in the UK as well. The Bank of England said that the Libra project would need to grant the Bank access for the purpose of monitoring payment chain information. China, in the meantime, believes Libra is doomed to fail because it lacks international regulatory or political support.134 The vice chairman of the China International Economic Exchange Center predicts that China's central bank will instead be the first to issue a Central Bank Digital Currency, a centrally controlled and managed currency.
Much of the resistance, whilst coming from governments (central banks fearing a loss of control, and the fear that Facebook will get even more access to consumer data, although the Libra prospectus made it clear that Facebook would not have access to the data generated by these transactions), is also being driven by incumbent banks, who have a lot to lose if Libra becomes successful as a payment platform and significantly reduces the commissions that banks and other players in the value chain earn on every transaction.
As predicted, advanced economies are starting to coordinate on cryptocurrency regulation, as they have on other measures to prevent money laundering and tax evasion. But that leaves out a lot of disgruntled players, including Cuba, Iran, Libya, North Korea, Somalia, Syria, Venezuela and Russia, all of whom are labouring under USA financial sanctions. Their governments will not necessarily care about global externalities if they encourage cryptocurrencies that might have value in their home markets. In response to Iran's actions, USA lawmakers have introduced at least two bills explicitly aiming to curb the development of a state-backed 'Digital Rial'. Bill S.3758 (sec 306), sponsored by Senate Republicans in December 2018, aims to "impose sanctions with respect to Iranian financial institutions and the development and use of Iranian digital currency. . .".135 The bill has been read twice and was referred to the Committee on Banking, Housing, and Urban Affairs. It requests the USA Secretary of the Treasury to report to Congress on Iran's efforts and their potential implications, alongside other measures. It further quotes the Civil Defense Organisation of Iran as having stated that "cryptocurrencies can help bypass certain sanctions through untraceable banking operations."
In another example, Venezuela has allegedly issued the 'petro' digital currency on the NEM Blockchain platform in February 2018, in part to serve as a mechanism to attract government financing amid rapidly deteriorating domestic economic conditions and a plummeting Bolivar. It is supposedly backed by the state's oil and mineral reserves. The 'petro' may also support sanctions circumvention.

134 LaVere, M. (2019). Chinese Official Slams Libra, Says Central Bank will Issue Digital Currency First. See: https://www.cryptoglobe.com/latest/2019/10/chinese-official-slams-libra-says-central-bank-will-issue-digital-currency-first
135 S.3758—Blocking Iranian Illicit Finance Act. 115th Congress (2017–2018). See: https://www.congress.gov/bill/115th-congress/senate-bill/3758/text


Within the USA, the Commodities Futures Trading Commission (CFTC) has designated certain cryptoassets as commodities. Crypto futures, swaps, options and other derivative contracts are subject to the same regulatory protocols as physical assets in this class. These regulations are focused on ensuring orderly markets and protecting against market manipulation. Exchanges will need to continue to enhance their surveillance for manipulation and fraud, and act accordingly if malfeasance is detected.136
The SEC has concluded that certain cryptoassets, issued as part of ICOs, are securities under the Securities Act of 1933 and the Securities Exchange Act of 1934, which means they must be registered with the SEC; it has also made clear that Bitcoin is not a security.137 The Financial Crimes Enforcement Network (FinCEN) of the USA Treasury considers crypto exchanges to be money service businesses, which means they are subject to existing banking regulations such as AML, Know Your Customer (KYC) and various financial reporting requirements.138 Organisations that trade crypto futures will be required to conduct business through a registered futures commission merchant (FCM) or introducing broker (IB), which are regulated by the CFTC and the National Futures Association (NFA). Furthermore, organisations wanting to offer futures trading will themselves be required to register with the CFTC and NFA as an FCM or IB. The New York State Department of Financial Services (NYDFS) requires any entity operating in the crypto business in the state of New York and/or with New York residents to apply for a BitLicense. Other states require crypto businesses to operate under money transmitter laws. Organisations that provide crypto custody services, perform exchange services or issue crypto (virtual currency, money transmitter and exchange services) are subject to state money transmitter obligations, many of which require compliance with FinCEN's KYC and AML rules. The NYDFS BitLicense builds significantly on top of those requirements and includes, for example, significant cyber security requirements. The USA Internal Revenue Service (IRS) has also issued guidance that some cryptoassets are to be treated as property and are subject to tax upon sale or exchange.
In Europe, some cryptoassets qualify as transferable securities, and as such the legal framework for the regulation and supervision of financial instruments applies, including: the Prospectus Directive, the Transparency Directive, MiFID II, the Market Abuse Directive, the Short Selling Regulation, the Central Securities Depositories Regulation and the Settlement Finality Directive. Utility tokens are generally not covered by financial regulation. The key test is whether they are transferable,

136 Foxley, W. (2019). CFTC Chairman Confirms Ether Cryptocurrency Is a Commodity. See: https://www.coindesk.com/cftc-chairman-confirms-ether-cryptocurrency-is-a-commodity
137 United States Securities and Exchange Commission. Cipher Technologies Bitcoin Fund. Letter dated 1 October 2019. See: https://www.sec.gov/Archives/edgar/data/1776589/999999999719007180/filename1.pdf
138 Financial Crimes Enforcement Network. (2014). Administrative ruling on the application of FinCEN's regulations to a virtual currency trading platform (October 27, 2014).


with the fundamental concern being that they have the potential to become investment objects. Payment tokens are not currently covered by MiFID II, the Prospectus Regulation or the Market Abuse Regulation. Asset tokens generally come under the purview of financial regulation, especially where they give rights to a financial entitlement or an entitlement in kind and are transferable. All cryptoassets and related activities are generally subject to Anti-Money-Laundering ('AML') provisions.
China has restricted its banks from using cryptocurrencies, banned ICOs and restricted cryptocurrency exchanges. South Korea has also banned ICOs. Algeria, Bolivia, Morocco, Nepal, Pakistan and Vietnam ban all cryptocurrency activities; Qatar and Bahrain bar domestic cryptocurrency activities; and Bangladesh, Colombia, Iran, Lithuania, Lesotho and Thailand ban financial institutions from facilitating transactions involving cryptocurrencies.139 Egypt has banned the use of cryptocurrencies to conduct commerce, Taiwan has prohibited its banks from accepting or transacting cryptocurrencies, Indonesia has prohibited the use of cryptocurrencies for payment and Vietnam does not allow cryptocurrencies to be used as a legal means of payment.140 In India, a government panel has recommended banning cryptocurrencies, a recommendation that has been challenged in the courts.
In the middle of the spectrum, some governments are seeking to balance encouraging financial innovation with managing the risks posed by cryptocurrencies, whilst providing greater clarity surrounding their emergence. These governments stop short of banning cryptocurrencies but are not actively seeking to become cryptocurrency hubs. Most major advanced economies, including the United States, Euro zone countries and the United Kingdom, have adopted this type of approach.
At the extreme end are countries that are embracing crypto developments. Switzerland is seeking to create a cryptocurrency industry, or 'Crypto Valley', located in the canton of Zug. The jurisdiction has tried to attract cryptocurrency companies and exchanges through the early adoption of regulations designed to provide regulatory certainty. Companies that created and promote Ethereum, the second largest cryptocurrency by value, are located in Zug, and as many as 200 to 300 cryptocurrency entities have opened there in recent years.141 Singapore is also striving to become a cryptocurrency hub, with analysts describing its regulators as well-informed and transparent, compared to the regulatory uncertainties in other jurisdictions.142

139 The Law Library of Congress. (2018). Regulation of cryptocurrency around the world. Global Legal Research Center, Law Library of Congress, June 2018. See: https://www.loc.gov/law/help/cryptocurrency/cryptocurrency-world-survey.pdf
140 The Law Library of Congress. (2018). Regulation of cryptocurrency around the world. Global Legal Research Center, Law Library of Congress, June 2018. See: https://www.loc.gov/law/help/cryptocurrency/cryptocurrency-world-survey.pdf
141 Irrera, A., Neghaisi, B. (2018). Switzerland seeks to regain cryptocurrency crown. Reuters, July 19, 2018.
142 Yang, J. (2018). Singapore is the crypto sandbox that Asia needs. TechCrunch, September 22, 2018.


However, the global picture is one of scepticism, especially in regard to stablecoins. On 13 October 2019, following Facebook's Libra announcement, the Financial Stability Board (FSB) wrote to the G20 Finance Ministers and Central Bank Governors highlighting the regulatory challenges that the introduction of a Global Stable Coin (GSC) would pose.143 According to the FSB, GSCs would raise new challenges for financial stability, consumer and investor protection, data privacy and protection, financial integrity (including AML/CFT and KYC compliance), mitigation of tax evasion, fair competition and anti-trust policy, as well as market integrity. On 17 October 2019, the Bank for International Settlements (BIS) referred to the challenges highlighted by the FSB and concluded that no GSC project should be authorised unless the challenges and risks outlined were adequately addressed.144 Both France145 and Germany146 have expressed views that GSC will not be allowed in their countries. I suspect this reaction is directed against the Facebook Libra proposals. Stablecoins that are relatively small in scale, whose purpose is well defined and which are designed to solve real problems are likely to be allowed over time. The question really will be where that threshold lies.
At the moment, however, the global response towards stablecoins is for governments to develop their own Central Bank Digital Currencies. According to a January 2019 report by the BIS, at least 40 central banks around the world are currently, or soon will be, researching and experimenting with Central Bank Digital Currency (CBDC).147 In many of these CBDC pilots, the central bank issues digital tokens on a distributed ledger that represent, and are redeemable for, central bank reserves in the domestic currency held in a separate account with the central bank. The agents in the system use the CBDC to make interbank transfers that are validated and settled on the distributed ledger. The central banks typically use a 'permissioned' Blockchain network, such as R3's Corda, the Linux Foundation's Hyperledger Fabric, J.P. Morgan's Quorum, or some form of an Ethereum Blockchain network.148
China has been proactively exploring its application of CBDC. A senior official at the Chinese central bank has stated that China's digital currency will be similar to Libra in that it will be usable across major payment platforms,

143 Financial Stability Board. (2019). To G20 Finance Ministers and Central Bank Governors. See: https://www.fsb.org/wp-content/uploads/P131019.pdf
144 Bank for International Settlements. (2019). Investigating the impact of global stablecoins. Report of the G7 Working Group on Stablecoins. See: https://www.bis.org/cpmi/publ/d187.pdf
145 Rattaggi, M., Longchamp, Y. (2019). International regulators tackle the issues of stable coins and digital money. See: https://www.seba.swiss/research/International-regulators-tackle-the-issues-of-stable-coins-and-digital-money/#1
146 https://www.bis.org/cpmi/publ/d187.pdf
147 Bank for International Settlements. (2019). Proceeding with caution: a survey on central bank digital currency. BIS Papers No 101, January 2019. See: https://www.bis.org/publ/bppdf/bispap101.pdf
148 World Economic Forum. (2019). Central Banks and distributed ledger technology: how are Central Banks Exploring Blockchain Today? World Economic Forum Whitepaper, March 2019.


including WeChat and Alipay, which both dominate the mobile payment market in China.
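A minimal sketch of the permissioned model these pilots use may help: tokens are issued by the central bank, only vetted participants may hold or transfer them, and transfers settle on a shared ledger. The class, method names and rules below are illustrative assumptions invented for the sketch, not the API of Corda, Hyperledger Fabric or Quorum.

```python
# Minimal sketch of a permissioned CBDC ledger: only whitelisted banks may
# hold or transfer tokens, mirroring the interbank pilots described above.
# All names and rules are illustrative assumptions, not any real platform.

class PermissionedCBDCLedger:
    def __init__(self, central_bank: str):
        self.central_bank = central_bank
        self.participants = {central_bank}   # pre-screened members only
        self.balances = {central_bank: 0}

    def admit(self, bank: str) -> None:
        """Central bank vets and admits a participant (the 'permissioning')."""
        self.participants.add(bank)
        self.balances.setdefault(bank, 0)

    def issue(self, bank: str, amount: int) -> None:
        """Central bank issues tokens redeemable for reserves."""
        assert bank in self.participants, "not an admitted participant"
        self.balances[bank] += amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Interbank transfer, validated only among admitted participants."""
        assert {sender, receiver} <= self.participants, "unknown participant"
        assert self.balances[sender] >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount

ledger = PermissionedCBDCLedger("Central Bank")
ledger.admit("Bank A"); ledger.admit("Bank B")
ledger.issue("Bank A", 1_000)
ledger.transfer("Bank A", "Bank B", 250)
print(ledger.balances)  # {'Central Bank': 0, 'Bank A': 750, 'Bank B': 250}
```

Because every actor in such a system is identified and pre-screened, existing regulatory obligations attach to them directly, in contrast to the permission-less systems discussed earlier.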

5.13 Privacy and Data Protection

In a world where data is becoming the new oil, the incentives to steal, pilfer, illegally sell or otherwise abuse data increase dramatically. It therefore becomes natural for individuals to guard against access to their personal data. Artificial Intelligence (AI) is increasing the ability to derive intimate information from individuals' public or aggregated data. This means that freely shared information of seeming innocence, such as where you ate lunch or what you bought at the grocery store, can lead to insights of a deeply sensitive nature. With enough data about you and the population at large, firms, governments and other institutions with access to AI will one day make guesses about you: what you like and whom you like. On the other hand, significant economic and social value can accrue when data is used more intelligently and combined. It is therefore vital that a balance is found between these two extremes.
Data integrity, control, security and protection are pivotal to the growth of the digital economy. Recognising this, many countries have sought to introduce data protection laws. The most stringent of these was the General Data Protection Regulation (GDPR)149 within the EU, which built upon the previous data protection directive. The GDPR was designed with the data economy in mind and was primarily a response to the power of large data processors such as Google having access to, and processing, the personal details of European residents whilst potentially being outside the jurisdiction of the EU. The extraterritorial provisions were largely to curtail such possibilities. GDPR is an example of a principles-based regulation. Figure 5.6 illustrates typical data protection principles, all of which are incorporated within the GDPR. Many other countries, including the USA and China, have taken a less aggressive approach to data protection. Whilst Europe's practice of data minimisation and high data privacy standards can be seen as a disadvantage against the likes of China, where personal data flows more freely, in the long run digital 'prosperity' will inevitably have to go hand in hand with citizens' well-being.
GDPR applies to both 'controllers' and 'processors'. A controller determines the purposes and means of processing 'personal data', whilst a processor is responsible for processing personal data on behalf of a controller. Both have legal obligations under GDPR. 'Personal data' refers to any information relating to an identifiable person who can be directly or indirectly identified, in particular by reference to an identifier, including a name, identification number, location data or online identifier,

149 https://ec.europa.eu/info/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules/eu-data-protection-rules_en

Fig. 5.6 General data protection principles. The figure groups typical obligations under three headings (accountability to individuals; care of personal data; collection and use of personal data):
• Access Obligation: enable the Data Subject's right to access the Personal Data that is held by the company, how it is used and by whom (including third parties);
• Openness Obligation: implement the necessary policies and procedures in order to meet obligations under the law and regulations;
• Correction/Removal Obligation: allow the Data Subject to (i) correct an error or omission in their Personal Data, or (ii) remove some or all of their Personal Data;
• Breach Notification Obligation: ensure processes are in place to notify the Data Subject and the relevant authority if a breach occurs that is "likely to cause serious damage" to the data or privacy of an individual;
• Trans-Border Flow Obligation: transfers allowed subject to adequacy tests;
• Protection Obligation: protect Personal Data in the company's possession or under its control by making arrangements and technical safeguards to prevent unauthorised access, collection, use, disclosure, copying, modification or disposal;
• Accuracy Obligation: ensure Personal Data is accurate and complete, and remains so;
• Retention Obligation: keep Personal Data for no longer than needed for legitimate purposes;
• Purpose Notification Obligation: the Data Subject is notified about the purpose for which their Personal Data will be processed;
• Purpose Limitation Obligation: Personal Data is only processed/used for legitimate business purposes or otherwise where consent has been obtained;
• Consent Obligation: the Data Subject agrees to the company processing their Personal Data, or a legitimate purpose exists.


reflecting changes in technology and the way organisations collect information about people.
GDPR's principles of 'purpose specification' and 'limitation' require that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes. The purposes of the processing must therefore be determined prior to the collection of personal data by the data controller, who must also inform the data subject. When data controllers decide to process data in the digital space, they must ensure that personal data is not illegally processed for further purposes by themselves or by other stakeholders along the value chain. This will be a real challenge as we enter an interconnected digital world where value will be created by sharing data across ecosystem players.
GDPR sets out a number of other important principles, amongst which are the following (a toy sketch of the consent bookkeeping some of these imply is given below):
• Transparency: privacy notices for data sharing must be easy to access and to understand, explaining how data are processed, what the individual's rights are and how they can be enforced;
• Consent: valid consent must be explicit for the data collected, and the purpose of the data collection must be stated. Data controllers must be able to collect 'consent' from end users (opt-in) and consent must be capable of being withdrawn;
• Erasure: data subjects have the right for their personal data to be deleted (the right to be forgotten);
• De-identification by default: personal data is de-identified, sanitised or deleted as soon as there is no valid legal basis for processing/storage;
• Data minimisation by default: reducing the collection of personal data and its processing;
• Use of encryption by default: for personal data in transit and at rest.
GDPR grants data subjects the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them (Article 22 GDPR). Profiling is defined in Article 4(4) as 'any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements'. As is clear, its scope is broad. Where organisations contemplate using automated decision making, GDPR also requires them to undertake a data protection impact assessment and assess the risks involved.
GDPR certainly goes much further than many other data protection regimes; some would argue it goes too far and potentially impedes the ability of innovative data-driven businesses to thrive within the EU, to the benefit of less onerous regions. Nevertheless, the sense of control GDPR gives us over our data footprint is rather misleading. Whilst we have control over what information we share with digital platforms and mobile applications, and have the ability to consent to the personal information that firms collect and use, much of the valuable data about us is inferred beyond our control and without our consent.
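To make the consent and purpose-limitation principles concrete, the toy sketch below shows one way a controller's systems might record opt-in consent per purpose and honour withdrawal immediately. It is an illustration of the bookkeeping GDPR implies, not a compliance implementation; all names are invented for the example.

```python
# Illustrative consent register: processing is allowed only for purposes the
# data subject has explicitly opted into, and withdrawal takes effect
# immediately (GDPR-style consent and purpose limitation). A real system
# would also log timestamps, record lawful bases other than consent, and
# propagate withdrawals to downstream processors in the value chain.

class ConsentRegister:
    def __init__(self):
        self._consents = {}   # subject_id -> set of consented purposes

    def grant(self, subject_id: str, purpose: str) -> None:
        self._consents.setdefault(subject_id, set()).add(purpose)

    def withdraw(self, subject_id: str, purpose: str) -> None:
        self._consents.get(subject_id, set()).discard(purpose)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        return purpose in self._consents.get(subject_id, set())

register = ConsentRegister()
register.grant("user-42", "marketing-email")
assert register.may_process("user-42", "marketing-email")
assert not register.may_process("user-42", "profiling")      # never consented
register.withdraw("user-42", "marketing-email")
assert not register.may_process("user-42", "marketing-email")  # withdrawal honoured
```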

Fig. 5.7 Creating trust in data usage. The figure makes three points: (1) to reap the benefits of the digital ecosystem, organisations and people need to trust the ecosystem; (2) asymmetries of power exist between the citizen and those controlling information about them. For example, users of health/personal IoT devices like Fitbit may have allowed the equipment/service provider to take data in order to give users a summary of their lifestyle behaviours; an insurance company could then acquire access to this data and modify health insurance premiums based on those lifestyle behaviours, something the device wearers may not have consented to, which could lead to discrimination and denial of services to the detriment of users; (3) this requires treating people with respect (clear data ownership rights, rights around the use of derivative data, and customers' rights to consent and to withdraw such consent) and calls for a need to be transparent with customers (even when using dumb devices).

Digital firms can make observations of our behaviour through metadata: data that provides context to our actions, such as where we were located when we ticked the 'do not consent' box; the mobile phone we used; the network we were connected to; which other devices we may have used to access the same website; how long we spent reading certain articles; which links we clicked on; the pace and accuracy of our keystrokes when typing; or how we navigated through a page or app, including touch screens. These patterns potentially help a firm build a psychological profile of us: our wealth, our intelligence level, our interest in certain subjects, what we like and don't like, and so on. This psychological profile is then fed into big datasets and analysed for similarities and differences with other profiles using AI, to try and second-guess what interests us, what we might be looking to buy and the type of brands we will most likely be associated with, and in turn to push adverts to us; and this is the less sinister version.
IOT and smart applications heighten the risks to personal privacy, and to the confidentiality and integrity of organisational data, even further. Some IOT consumer applications, particularly those related to health and wellness, collect sensitive personal data that consumers might not wish to share, or data that you may not think of as directly being personal data, yet which might be used to infer your behaviours and profile you. Consumers may have no idea what kind of information is being acquired about them and why. Figure 5.7 illustrates the need for trust and transparency.
An important consideration for digital transformation to take place, and for participants to be given the mechanisms to trust the various actors in the value chain, is the building of trusting relationships between data owners, data users and the various intermediaries. In a world where it will become increasingly difficult to establish data ownership, focus must shift to data control,


reflecting the right of a person or organisation to control and grant access rights to personal and non-personal data, and how this information can be shared with others (enhanced consent). It also reflects the fact that digital data can, and in most cases will, be shared and processed.
A common understanding of ownership rights to data produced by various connected devices will be required to unlock the full potential of IOT. Who has what rights to the data from a sensor manufactured by one company, forming part of a solution deployed by another, in a setting owned by a third party, will have to be clarified. For example, who has the rights to data generated by a medical device implanted in a patient's body? The manufacturer of the device? The health-care provider that implanted the device and is managing the patient's care? Or the patient themselves? While ownership is important, appropriate licence rights may provide an entity with everything it needs to conduct its analysis or monetisation activity, whilst allowing the 'owner' to retain rights over how such data is collected, processed and shared.
Often, collected data may need to be used for purposes other than its original intent, without express consent from the generator/owner of the data. Such use poses risks to the development of trust between the data subject and the whole digital ecosystem. The issue is really what data is being used. If it is metadata, i.e. data that is not personally identifiable or not provided by the data subject per se, but generated as a consequence of the data subject's use of a service, then what control do users have? Where such derived data are used to make decisions that affect the individual, albeit in another context, it is likely that they need to be monitored and managed carefully.
This brings us to another area that needs to be managed carefully: gaining consent from a data subject for the collection and use of their personal information and, importantly, allowing such consent to be withdrawn or amended over time by the data subject (as required by GDPR). An important corollary to this is what happens to the data that has been shared with other parties or partners in the value chain: how will modifications to consent be communicated to, and given effect by, all the actors in that value chain? These flows of data also raise an interesting question about liability across the value chain, especially when the value derived by each party in that chain may not correspond with the liability they are asked to take on; different participants will have different thresholds for accepting their share of liability, as their levels of economic benefit in the processing chain differ. The legal framework today handles single accountability scenarios well, but we are fast moving away from that world. Figure 5.8 illustrates the need for establishing liability across the value chain.

5.13.1 Anonymization Techniques

A common way to improve privacy is to de-aggregate data and anonymize it. De-aggregation breaks up a logical group of data into isolated pieces which, in themselves, may not make much sense or help create any value. Policy makers and


Fig. 5.8 Liability across the value chain. The figure makes four points: (1) clear liability must be established across the value chain for customer data and derivative data usage; (2) the analysis and control actions change hands multiple times in a typical IoT ecosystem, e.g. a wearable device might capture heartbeat and a few other indicators, then transmit them to a processor of the data, which uses algorithms from another provider to calculate health risks and the need for emergency response, and sends the results to a healthcare provider, who in turn uses a different service for patient monitoring, and so on; (3) so, if something goes wrong in that value chain, who is accountable?; (4) even if it is clear, the different participants might have different thresholds for accepting their share of liability, as their levels of economic benefit in the IoT processing chain are different.

regulatory authorities need to consider setting out clear guidelines that industry can follow to capitalise on the use of personal data, but with appropriate safeguards in place, including recommended anonymization techniques. A number of techniques are emerging to enable the anonymization of data, including the following (minimal sketches of two of them are given after this list):
• Differential privacy: adding noise to the process (to the inputs, to the calculations themselves or to the outputs). For example, census data is often anonymized with noise to protect the privacy of individual respondents; in the United States, differential privacy will be used for the 2020 Federal Census.150
• Federated analysis: conducting analysis on disparate datasets separately and then sharing back the insights from this analysis across the datasets.151 The technique has been widely used by large technology companies.152 In March 2019, TensorFlow (a widely used open-source library for machine learning) published TensorFlow Federated,153 an open-source framework that allows machine learning to be performed on federated datasets.
• Homomorphic encryption: encrypts the data so that analysis can be performed on it without the information itself ever being readable.154 The first fully homomorphic encryption (FHE) system was proposed in 2009 by Craig Gentry.155
• Zero-knowledge proofs: allow one party to prove some specific information to another without sharing anything other than the intended information.156 As with federated analysis, the technique is being used in conjunction with other emerging technologies.
• Secure multiparty computation (SMC): as with homomorphic encryption and zero-knowledge proofs, this technique allows individual privacy to be maintained when sharing information with untrusted third parties. SMC relies on 'secret sharing',157 where sensitive data from each contributor is distributed across every other contributor as encrypted 'shares'.158 The first live implementation of SMC was in 2008, when it was used to determine sugar beet market prices in Denmark without revealing individual farmers' economic positions.159

150 https://www.census.gov/newsroom/blogs/random-samplings/2019/02/census_bureau_adopts.html
151 This concept is explored in greater detail, in the context of health-related data, in a White Paper by the World Economic Forum titled Federated data systems: balancing innovation and trust in the use of sensitive data.
152 Yang, T., Andrew, G., Eichner, H., Sun, H., Li, W., Kong, N., Ramage, D., Beaufays, F. Applied federated learning: improving Google keyboard query suggestions. See: https://www.groundai.com/project/applied-federated-learning-improving-google-keyboard-query-suggestions/1
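Two of these techniques are simple enough to sketch. The first adds calibrated Laplace noise to a count, the core mechanism of differential privacy; the second splits a sensitive value into additive 'shares' so that no single holder learns anything, the secret-sharing idea underlying SMC. The epsilon value and modulus below are illustrative choices for the sketch, not recommendations.

```python
# Minimal sketches of two anonymization techniques discussed above.
# Values such as epsilon and the modulus are illustrative assumptions.
import random

# 1. Differential privacy: release a count with Laplace noise calibrated to
#    the query's sensitivity (1 for a counting query) and privacy budget eps.
def dp_count(true_count: int, eps: float = 0.5) -> float:
    # The difference of two exponentials with rate eps is Laplace(0, 1/eps).
    noise = random.expovariate(eps) - random.expovariate(eps)
    return true_count + noise

print(dp_count(1000))   # e.g. 998.3 -- close to 1000, yet individually deniable

# 2. Additive secret sharing (the basis of SMC): split a salary into three
#    random shares; fewer than all three shares reveal nothing, yet shares
#    from many contributors can be summed to compute an aggregate privately.
MOD = 2**61 - 1  # arithmetic done modulo a large prime

def share(secret: int, n: int = 3):
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

s = share(55_000)
assert reconstruct(s) == 55_000
```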

5.13.2 Reliability and Accuracy Standards

A number of critical services, such as healthcare monitoring and weather monitoring, are already transitioning to IOT-type devices. This impacts people's lives and livelihoods, so there are expectations of an extremely high level of reliability and accuracy of the data coming from IOT devices, as the central processing algorithms and control functions rely entirely on the incoming data. In making IOT devices for all such applications affordable, and thereby increasing their adoption, we run the risk of compromising the reliability and accuracy of their data capture, data transport and local computation capabilities. Policy makers and regulatory bodies must consider appropriate standards on how to balance affordability and quality, as well as the management of multiple hand-offs for data along the value chain.

153 Ingerman, A., Ostrowski, K. (2019). Introducing TensorFlow Federated. See: https://medium.com/tensorflow/introducing-tensorflow-federated-a4147aa20041
154 Branscombe, M. (2019). Is homomorphic encryption ready to deliver confidential cloud computing to enterprises? See: https://www.techrepublic.com/article/is-homomorphic-encryption-ready-to-deliver-confidential-cloud-computing-to-enterprises
155 https://crypto.stanford.edu/
156 Goldwasser, S., Micali, S., Rackoff, C. (1985). The knowledge complexity of interactive proof-systems. See: https://dl.acm.org/citation.cfm?id=22178
157 Two-party computation is a special case of SMC where only two entities are involved in the data sharing (as opposed to multiparty computation, where a number of entities can be involved).
158 Shamir, A., Rivest, R., Adleman, L. (1979). Mental Poker. Massachusetts Institute of Technology.
159 https://ercim-news.ercim.eu/en73/special/trading-sugar-beet-quotas-secure-multiparty-computation-in-practice


5.13.3 Data Mobility

Data portability can empower individuals, leading to better functioning, more competitive markets and ultimately to growth and consumer welfare benefits. Data portability can be seen through a data protection lens, and can also be considered a means to achieve the free flow of data in a world where a few data monoliths are attempting to control data. At a consumer's request, their data can be seamlessly shared by a business with the consumer's chosen third party.
Personal data mobility, however, goes beyond data portability. Data portability typically refers to consumers being able to request access to, and themselves move, data from one business to another. Data mobility encompasses the ability for that data to be moved or shared directly between a business and a third party at the customer's request. Data portability and mobility are expected to increase competition between providers of digital goods and services. However, they pose policy and implementation challenges, including concerns regarding data security and implementation costs, as well as practical issues: when a request is made to port a photograph of two friends from one social network service to another, for instance, how can the second individual's privacy rights be respected?

5.13.4 Open Data

Much of the value of IOT, smart cities and applications of AI is realised when public sector agencies release key sets of governmental data to the public for any use, or reuse, in an easily accessible manner. The value can be significantly increased when the data is discoverable, actionable and available in standard formats for machine readability. Much of the value of digital transformation can be destroyed if market participants are not provided access to such data. Recent McKinsey research found that open data could unlock some US$3 trillion in annual value.160 Research by Deloitte found that the release of open data by Transport for London is generating annual economic benefits and savings of up to £130 million a year.161
At the same time, evidence suggests that large data holdings are at the heart of some platform markets being dominated by single players, and of that dominance becoming entrenched in a way that lessens the potential for competition for the market. In these circumstances, if other solutions do not work, data openness could be the necessary tool to create the potential for new companies to enter the market and challenge an otherwise entrenched firm. Policy makers will need to carefully define

160 McKinsey. (2013). Open data: Unlocking innovation and performance with liquid information.
161 Deloitte. (2017). Assessing the value of TfL's open data and digital partnerships. July 2017. See: http://content.tfl.gov.uk/deloitte-report-tfl-open-data.pdf


the circumstances in which personal data can be shared and the format in which it can be shared. Where the risks of data monopolies are unacceptably large, policy may need to focus on creating data exchanges and trusts, accessible by all participants and not just the few.

Part IV Data Processing and AI


6 Data Processing and AI

Today, Artificial Intelligence ('AI') is at the heart of many applications. Video recognition technology can lip-read and respond to speech; character recognition systems can decipher written script; photo software identifies people in pictures; near real-time translation between multiple languages is at your fingertips; self-driving cars and applications for medical diagnosis could soon become common; and AI is behind the rise of the robots at manufacturing and distribution facilities. However, together with its potential come many important questions that will help determine not only its future success, but also how it impacts society.
Given that AI works by learning from training data, the nature of that training data determines how the AI will respond in the real world. Where discrimination or bias is built into the training, the AI will discriminate against certain individuals in the real world. Given that AI only works with vast amounts of data feeding its ability to learn and adapt, those firms that have access to data will become key players in the AI economy, potentially at the expense of others and ultimately of consumers. In generating the data required to drive these AI algorithms, more intrusive means of acquiring personal data and associated metadata could become more prominent, striking at the heart of personal privacy. As AI advances, its use cases will mushroom: some applications will deliver much needed solutions in healthcare, but others could be used in military applications and by unscrupulous individuals and nations. This chapter examines the advances in AI, exploring its present-day uses, its future capabilities, its impact on economies and society and the associated policy considerations that are raised.

6.1 The Rise of Artificial Intelligence

AI is a very general investigation of the nature of intelligence and the principles and mechanisms required for understanding or replicating it. One of the most significant papers on machine intelligence, 'Computing Machinery and Intelligence', was


written by the British mathematician Alan Turing over 50 years ago. It stands up well to the test of time, and the 'Turing approach/test' remains universal. Turing, seen as a father of AI, proposed an imitation game to determine whether a machine was capable of intelligence. The imitation game originally included two phases.
Phase 1: the interrogator, a man and a woman are each placed in separate rooms. The interrogator's objective is to work out who is the man and who is the woman by questioning them. The man should attempt to deceive the interrogator into believing that he is the woman, while the woman has to convince the interrogator that she is the woman.
Phase 2: the man is replaced by a computer programmed to deceive the interrogator as the man did. It would even be programmed to make mistakes and provide fuzzy answers in the way a human would. If the computer can fool the interrogator as often as the man did, we may say this computer has passed the intelligent behaviour test. Much of the AI seen today would most probably pass the Turing test.
Much of the contemporary excitement around AI flows from the promise of a particular set of techniques known collectively as Machine Learning ('ML'). ML refers to the capacity of a system to improve its performance at a task over time; often this task involves recognising patterns in datasets. Another AI technique increasingly seen as being behind recent advances is Deep Learning, which leverages many-layered structures to extract features from enormous datasets.
AI today is more properly classified as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, or only Internet searches, or only driving a car). However, the long-term goal of many researchers is to create 'General AI' (or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, General AI would outperform humans at nearly every cognitive task.
In the words of the late Stephen Hawking, "AI could be the biggest event in the history of our civilisation. Or the worst. We just don't know". As the world stands at the cusp of this transformative technology, much is at stake. Deployed wisely, AI holds the promise of addressing some of the world's most intractable challenges, from climate change and poverty to disease eradication. Used in bad faith, it can lead the world on a downward spiral of totalitarianism and war, endangering, according to Hawking, the very survival of humankind itself. Much like electricity or the steam engine, AI is classified as a general purpose technology that can profoundly change all aspects of life. It is difficult to imagine any segment of society that will not be transformed by AI in years to come.
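As a minimal illustration of a system 'improving its performance at a task' by 'recognising patterns in datasets', the sketch below trains a simple classifier on scikit-learn's bundled handwritten-digit images and measures how well it generalises to images it has never seen; the accuracy printed will vary slightly from run to run, and the choice of model here is illustrative rather than canonical.

```python
# Minimal machine-learning example: learn to recognise handwritten digits
# from pixel patterns, then test on images the model has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 1,797 labelled 8x8 images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # a simple, non-deep learner
model.fit(X_train, y_train)                 # 'learning' = fitting to examples

print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
# Typically around 96%; performance improves with more data, the hallmark of ML.
```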

6.1.1 Why the Sudden Excitement?

As just stated, the field of AI dates back many years, at least to the 1950s, when John McCarthy and others coined the term one summer at Dartmouth


College.1 Whilst AI has been here before, with a history abounding in booms and busts, extravagant promises and frustrating disappointments, it appears we are now at an inflexion point: AI is finally starting to deliver real-life business benefits. The ingredients for a breakthrough are in place: (i) computing power has grown significantly; (ii) algorithms have become more sophisticated; and, perhaps most important of all, (iii) the world is generating vast quantities of the fuel that powers AI, namely the data that allows the algorithms to be developed with greater accuracy.
If the terminology, constituent techniques, and hopes and fears around AI are not new, what exactly is? First, as stated above, a vast increase in computational power and access to training data has led to practical breakthroughs in ML, a singularly important branch of AI. These breakthroughs underpin recent success stories, from diagnosing precancerous moles to driving a vehicle. Secondly, policy makers are finally paying close attention. In 2016, the USA House Energy and Commerce Committee held a hearing on Advanced Robotics (robots with AI) and the Senate Joint Economic Committee held the 'first ever hearing focused solely on AI'.2 That same year, the Obama White House held several workshops on AI and published three official reports detailing its findings.3
Emphasising the importance of AI to the future of the USA economy and national security, on 11 February 2019 President Trump issued an Executive Order (EO 13859)4 directing federal agencies to take a variety of steps designed to ensure that the nation maintains its leadership position in AI. Among its objectives, the EO aims to "Ensure that technical standards. . . reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities." The EO also stated that the United States must drive the development of appropriate technical standards in order to enable the creation of new AI-related industries and the adoption of AI by today's industries. In early 2020, the USA White House then released its ten principles for government agencies to adhere to when proposing new AI regulations for the private sector, which was open for consultation as I write this book.5
To some extent, the USA has been a little slow in developing policies towards AI. Many other countries have been hard at work developing AI policies for their respective countries. Some are more advanced and ambitious than others. Some

1 http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
2 Sen. Ted Cruz. (2016). Sen. Cruz Chairs First Congressional Hearing on Artificial Intelligence (Nov. 30, 2016); The Transformative Impact of Robots and Automation: Hearing Before the J. Econ. Comm., 114th Cong. (2016).
3 Calo, R. (2017). Artificial intelligence policy: a primer and roadmap. University of California, Davis. Vol 51. 399.
4 Presidential Executive Order No. 13859. (2019). Maintaining American leadership in artificial intelligence. (11 February 2019). See: https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/
5 https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf


Table 6.1 Global AI policies

Country      | Release date   | Strategy document
Australia    | May 2018       | Australian Technology and Science Growth Plan
Canada       | March 2017     | Pan-Canadian Artificial Intelligence Strategy
Singapore    | May 2017       | AI Singapore
Denmark      | January 2018   | Strategy for Denmark's Digital Growth
Taiwan       | January 2018   | Taiwan AI Action Plan
France       | March 2018     | France's Strategy for AI
EU           | April 2018     | Communication Artificial Intelligence for Europe
EU           | January 2020   | White paper on AI
UK           | April 2018     | Industrial Strategy: Artificial Intelligence Sector Deal
South Korea  | May 2018       | Artificial Intelligence R&D Strategy
Japan        | March 2017     | Artificial Intelligence Technology Strategy
China        | July 2017      | A Next Generation Artificial Intelligence Development Plan
UAE          | October 2017   | UAE Strategy for Artificial Intelligence
Finland      | December 2017  | Finland's Age of Artificial Intelligence
Italy        | March 2018     | Artificial Intelligence at the Service of Citizens
Sweden       | May 2018       | National Approach for Artificial Intelligence
India        | June 2018      | National Strategy for Artificial Intelligence: #AIforAll
Mexico       | June 2018      | Towards an AI Strategy in Mexico: Harnessing the AI Revolution
Germany      | July 2018      | Key points of the Federal Government for an AI Strategy

Source: Dutton, T., Barron, B., Boskovic, G. (2018). Building an AI world: report on national and regional AI strategies. CIFAR, 2018. See: https://www.cifar.ca/docs/default-source/ai-society/buildinganaiworld_eng.pdf

provide an overarching policy, with specific sub-sets of AI-related policies yet to be developed, as in India. Table 6.1 details some of these policies; however, this is only a snapshot at this point in time and may well need to be significantly expanded by the time this book is published, given the pace of change within the AI space.
Developing an AI policy serves multiple purposes. First, it demonstrates the importance that a government places on AI technologies and thus encourages their further development. Second, it details the wider ecosystem that the nation may need to develop or enhance, including government R&D funding, changes to education policies, and changes to data availability, access and protection. Third, and importantly, it serves to build trust in AI technologies across the multiple domains in which they may be used, given their inherent potential for adverse consequences on society. Increasing trust in AI technologies is a key element in accelerating their adoption for economic growth and for future innovations that can benefit society.
The EU approach appears to be to regulate AI on the assumption that this will drive investment, when the opposite may well be true. I would hazard a guess that the EU approach is more to do with breaking up USA firms operating within the EU and


reducing their dominance, i.e. it is a political consideration rather than an economic one.
Among the characteristics of trustworthy AI technologies are accuracy, reliability, resiliency, objectivity, security, explainability, safety and accountability. AI standards and related tools, along with AI risk management strategies, can help to address these concerns and spur innovation. It would be wrong, however, to assume that there are no standards relating to AI; in fact the opposite is true. There are too many standards that need to be considered, relating to the multiple aspects that AI touches upon. The joint subcommittee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), ISO/IEC JTC 1/SC 42 Artificial Intelligence ('SC42'), was established in October 2017 to develop AI standards that can be applied across applications and industries. SC42 is chartered to work on Information Technology standards, with current work items focused on topics such as updated AI terminology, an interoperable framework for AI systems, the AI lifecycle, big data, AI trustworthiness, use cases and computational approaches. A number of standards have been published or are under development by SC42 so far, as detailed in Table 6.2.

6 WIPO. (2019). WIPO technology trends 2019: artificial intelligence. Geneva: World Intellectual Property Organization.


Table 6.2 AI and related standards

Area       | Standard                 | Description
AI         | ISO/IEC 20546:2019       | Information technology—Big data—Overview and vocabulary
AI         | ISO/IEC CD TR 20547-1    | Information technology—Big data reference architecture—Part 1: Framework and application process
AI         | ISO/IEC TR 20547-2:2018  | Information technology—Big data reference architecture—Part 2: Use cases and derived requirements
AI         | ISO/IEC FDIS 20547-3     | Information technology—Big data reference architecture—Part 3: Reference architecture
AI         | ISO/IEC TR 20547-4       | Information technology—Big data reference architecture—Part 4: Security and privacy fabric
AI         | ISO/IEC TR 20547-5:2018  | Information technology—Big data reference architecture—Part 5: Standards roadmap
AI         | ISO/IEC CD 22989         | Artificial intelligence—Concepts and terminology
AI         | ISO/IEC CD 23053         | Framework for Artificial Intelligence (AI) systems using Machine Learning (ML)
AI         | ISO/IEC NP 23894         | Information technology—Artificial intelligence—Risk management
AI         | ISO/IEC NP TR 24027      | Information technology—Artificial intelligence (AI)—Bias in AI systems and AI aided decision making
AI         | ISO/IEC NP PDTR 24028    | Information technology—Artificial intelligence (AI)—Overview of trustworthiness in artificial intelligence
AI         | ISO/IEC NP CD TR 24029-1 | Artificial intelligence (AI)—Assessment of the robustness of neural networks—Part 1: Overview
AI         | ISO/IEC NP TR 24030      | Information technology—Artificial intelligence (AI)—Use cases
AI         | ISO/IEC AWI TR 24368     | Information technology—Artificial intelligence—Overview of ethical and societal concerns
AI         | ISO/IEC AWI TR 24372     | Information technology—Artificial intelligence (AI)—Overview of computational approaches for AI systems
AI         | ISO/IEC AWI 24668        | Information technology—Artificial intelligence—Process management framework for big data analytics
AI         | ISO/IEC AWI 38507        | Information technology—Governance of IT—Governance implications of the use of artificial intelligence by organisations
IOT        | ISO/IEC 30141:2018       | Internet of Things (IOT)—Reference architecture
IOT        | ISO/IEC 21823-1:2019     | Internet of Things (IOT)—Interoperability for IOT systems—Part 1: Framework
IOT        | ISO/IEC NP 30166         | Internet of Things (IOT)—Industrial IOT
IOT        | ISO/IEC NP 30165         | Internet of Things (IOT)—Real-time IOT framework
IOT        | ISO/IEC WD 30162         | Internet of Things (IOT)—Compatibility requirements and model for devices within industrial IOT systems
IOT        | ISO/IEC NP 30149         | Internet of Things (IOT)—Trustworthiness framework
Automation | ISO TC 184               | Automation systems and integration
Robotics   | ISO TC 299               | Robotics

Companies at the digital frontier, such as Google and Baidu, are betting vast amounts of money on AI. McKinsey estimates that between US$20 billion and US$30 billion was spent on AI in 2016; venture capitalists invested US$4 billion to US$5 billion and private equity firms invested US$1 billion to US$3 billion, with an additional US$1 billion of investment coming from grants and seed funding.7

Amazon has achieved impressive results from its US$775 million acquisition of Kiva, a robotics company that automates picking and packing. 'Click to ship' cycle times, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50% and operating costs fell an estimated 20%.8

Netflix has also achieved impressive results from the algorithm it uses to personalise recommendations to its 158 million subscribers worldwide (as of December 2019). Helping customers quickly find desirable content is critical: customers tend to give up if it takes longer than 90 seconds to find a movie or TV show they want to watch. Through better search results, Netflix estimates that it is avoiding cancelled subscriptions that would otherwise reduce its revenue by US$1 billion annually.9

Investments in AI have been made across all sectors, but FinTech and HealthTech have seen particularly strong growth. The UK counts at least five AI unicorns (private tech companies with a valuation above US$1 billion): Darktrace (US$1.7 billion valuation), Benevolent AI (US$2.1 billion), Improbable (US$2 billion), Graphcore (US$1.7 billion) and Blue Prism (US$1.3 billion). From mid-2015 onwards, AI deals started to outpace the wider tech economy.
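
To make the Netflix recommendation example above concrete, the sketch below shows item-based collaborative filtering at its simplest: scoring a title a user has not yet watched by the similarity of its rating pattern to titles the user has already rated. The tiny rating matrix and the scoring function are illustrative assumptions; a production recommendation system is far more sophisticated.

    import numpy as np

    # Toy user-item rating matrix (rows: users, columns: titles); 0 = unrated.
    # Purely illustrative data, not drawn from any real service.
    R = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def cosine_sim(a, b):
        """Cosine similarity between two rating vectors."""
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return a @ b / denom if denom else 0.0

    def score_item(user, item):
        """Predict a rating as a similarity-weighted average of the user's other ratings."""
        rated = [j for j in range(R.shape[1]) if R[user, j] > 0 and j != item]
        sims = np.array([cosine_sim(R[:, item], R[:, j]) for j in rated])
        if sims.sum() == 0:
            return 0.0
        return float(sims @ R[user, rated] / sims.sum())

    user = 0
    unseen = [j for j in range(R.shape[1]) if R[user, j] == 0]
    print(sorted(unseen, key=lambda j: -score_item(user, j)))  # titles ranked for user 0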

6.2 Impact of AI on Industry

The benefits of AI span a multitude of sectors. Some of the potential benefits and recent advances in key industries are described below.

7 McKinsey. (2017). Artificial intelligence: the next digital frontier?
8 Kim, E. (2016). Amazon's $775 million deal for robotics company Kiva is starting to look really smart. Business Insider, 15 June 2016. See: https://www.businessinsider.com/kiva-robots-save-money-for-amazon-2016-6
9 McAlone, N. (2016). Why Netflix thinks its personalized recommendation engine is worth $1 billion per year. Business Insider, 14 June 2016. See: https://www.businessinsider.com/netflix-recommendation-engine-worth-1-billion-per-year-2016-6

6.2.1 AI in Health and Medicine

AI in healthcare is assisting with the research and prevention of diseases, as well as the diagnosis and treatment of patients. Microsoft's Project Premonition 'aims to detect pathogens before they cause outbreaks, by turning mosquitoes into devices that collect data from animals in the environment'. Microsoft is developing drones that autonomously find mosquito hotspots, deploying robots to collect the mosquitoes and using 'cloud-scale genomics and machine learning algorithms to search for pathogens'.10 Intel's Collaborative Cancer Cloud is designed to help researchers discover new biomarkers associated with cancer diagnoses and progression.11

In addition to assisting medical research, AI is increasingly used in applications for the practice of medicine, whether that is helping doctors find the right location to operate during surgical procedures or scanning images for early disease detection.12 AI's ability to classify and identify images allows it to recognise patterns more quickly and accurately than humans. This has been particularly true in the diagnosis of certain types of cancer.13

A team led by pathologist Andrew Beck developed the C-Path system to automatically diagnose breast cancer and predict survival rates by examining images of tissues, just as human pathologists do. Since the 1920s, human pathologists have been trained to look at the same small set of cancer cell features. The C-Path team, in contrast, had its software look at images with a fresh eye. Without any pre-programmed notions about which features were associated with cancer tissue, the C-Path system found new patterns that turned out to be good predictors of survival rates; pathologists had not been trained to look at them previously.14

AI is also having a positive impact on the diagnosis of skin cancer. Melanoma, a cancer that forms in the melanocytes (the skin cells that produce melanin), is not easily identifiable by untrained eyes. The extent to which a doctor can confidently recognise melanoma depends on experience and training. AI can now diagnose skin cancer more accurately than experts. A recent study published in the Annals of Oncology showed AI was able to diagnose cancer more accurately than 58 skin experts: the human doctors got 87% of diagnoses correct, while their computer counterpart achieved a 95% detection rate.15
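
As a rough illustration of how such image-based diagnosis is built, the sketch below trains a small convolutional neural network to classify dermoscopic images as benign or malignant. The architecture, image size and directory layout are illustrative assumptions, not the model from the cited study.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Minimal binary image classifier (benign vs malignant); illustrative only.
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),       # dermoscopic image, RGB
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # probability of melanoma
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Assumes images sorted into data/train/benign and data/train/malignant.
    train = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=(128, 128), label_mode="binary")
    model.fit(train, epochs=10)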

10 https://www.microsoft.com/en-us/research/project/project-premonition
11 Rao, N. (2017). Intel recommends public policy principles for artificial intelligence. Intel White Paper, 18 October 2017. See: https://blogs.intel.com/policy/2017/10/18/naveen-rao-announces-intel-ai-public-policy/#gs.rp5bd6
12 https://www.microsoft.com/en-us/research/project/medical-image-analysis/
13 Perry, P. (2016). How artificial intelligence will revolutionize healthcare. Big Think. See: https://bigthink.com/philip-perry/how-artificial-intelligence-will-revolutionize-healthcare
14 Rimm, D. (2011). C-Path: a Watson-like visit to the pathology lab. Science Translational Medicine, Vol. 3, Issue 108.
15 Haenssle, H., et al. (2018). Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology, Vol. 29, Issue 8, pp. 1836–1842.


When doctors are trying to decipher how much a patient's brain has been damaged by trauma, they use a coma scale. After performing a series of tests, the doctors give the patient a score, which reflects the patient's prognosis and may play a part in decisions regarding the use and possible withdrawal of life support machines. In a Chinese trial, an AI system trained on brain scans came up with its own score, which was very different from that given by the doctors. One patient was given a score of seven out of 23 by doctors, but after the technology analysed his brain scans, the AI gave him 20. A score of seven indicates such a low likelihood of recovery that the patient's next of kin would be given the option of withdrawing life support. True to the AI's prediction, the patient eventually woke up. The AI got nearly 90% of cases right by tracing brain activity invisible to the human eye, such as small changes in blood flow to the brain. The system is now an integral part of the hospital's daily processes and has helped give the correct diagnosis in more than 300 cases.16

One need not be in a state-of-the-art facility or hospital to receive this type of care. Mobile phones are increasingly being used for bio-analytical science, including digital microscopy, cytometry, immunoassay tests, colorimetric detection and healthcare monitoring. The mobile phone can be considered one of the most important devices for the development of next generation point-of-care diagnostics platforms, enabling mobile healthcare delivery and personalised medicine.17 With advancements in mobile diagnostics, millions more people may be able to monitor and diagnose health related problems. Moreover, with increased connectivity through social media, AI can leverage big data in ways that encourage the uptake of preventive measures. For example, some applications are using AI to estimate, in real time, problematic areas or establishments that may cause food borne illnesses.

Chinese healthcare ecosystem platform Ping An Good Doctor is used by more than 89 million customers and has reached strategic cooperation agreements with nearly 50 hospitals across China. Since 2018, the Chinese State Council and the National Health Commission have issued policies allowing for the development of Internet hospitals. The 'Hospital Cloud' system of Ping An Good Doctor will be connected to the systems of cooperative hospitals to form a three-in-one Internet hospital management platform, featuring an online diagnosis platform, a prescription sharing platform and a health management platform.

Mindmaze uses AI to optimise rehabilitation for stroke patients. Ginger.io uses it to recommend the best time to take medication based on each patient's metabolism and other factors. A start-up called Turbine uses AI to design personalised cancer treatment regimens.

16 Song, M., et al. (2018). Prognostication of chronic disorders of consciousness using brain functional networks and clinical characteristics. eLife Sciences. See: https://elifesciences.org/articles/36173
17 Vashist, S., Mudanyali, O., Schneider, E., Zengerle, R., Ozcan, A. (2014). Cellphone-based devices for bioanalytical sciences. Journal of Analytical and Bioanalytical Chemistry, 406(14), pp. 3263–3277.


Such tailored treatments could reduce health expenditures by between 5 and 9%, add between 0.2 and 1.3 years to average life expectancy and increase productivity by US$200 per person per year. Globally, the economic impact of such advances could range from US$2 trillion to US$10 trillion.18

6.2.2 AI in Financial Services

Until the last few years, there had been no major innovation in the sector since the introduction of the first electronic credit card in 1958, ATMs in the 1970s and SWIFT in 1977. From peer-to-peer lending to mobile wallets and crowd funding, innovators are now redefining old verticals or creating entirely new ones. AI is being used to assess the creditworthiness of clients, back-test trading models, analyse the market impact of trades, interact with customers through chatbots, detect fraud and support regulatory reporting.19

The finance industry is driven by data. With vast amounts of historic data on customer credit, the sector is turning to AI to find patterns in that data and develop new, more accurate credit scoring models, not only for those with a credit history, but also for those who have none but for whom one can be inferred using AI. This would allow individuals who would have been denied credit in the past to be granted it. Deloitte recently published a survey showing that so-called 'frontrunner' financial services firms that use AI in their business processes saw company-wide revenue growth of 19%.20 However, whilst the sector has huge amounts of data, much of it is 'dirty': the data needs to be cleaned, and effort exerted to ensure that it does not introduce bias into decision making processes. So whilst the financial sector is one of the front runners in using AI, it is still at an early stage, given these data challenges.

AI is also playing a role in detecting fraudulent transactions and stopping them. According to the FCA, UK banks spend £5 billion a year combating financial crime.21 Action Fraud reports that between 2015 and 2016 there was a 66% increase in the number of reported cases of payments related fraud in the UK.22 Many financial services companies are exploring AI-based fraud prevention alternatives.
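
To illustrate the kind of pattern-based fraud screening described above, the sketch below uses an isolation forest, a common anomaly detection technique, to flag transactions that look unlike a customer's normal behaviour. The features, values and thresholds are illustrative assumptions rather than any bank's actual model.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Illustrative transaction features: [amount, hour of day, distance from home (km)].
    normal = np.column_stack([
        rng.normal(40, 15, 1000),   # everyday purchase amounts
        rng.normal(14, 4, 1000),    # daytime activity
        rng.normal(5, 3, 1000),     # close to home
    ])
    suspicious = np.array([[2400, 3, 800], [1800, 4, 950]])  # large, late, far away

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    print(model.predict(suspicious))   # -1 flags an anomaly for human review
    print(model.predict(normal[:5]))   # mostly 1 (treated as normal)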

18 McKinsey. (2016). The age of analytics: competing in a data-driven world.
19 Financial Stability Board. (2017). Artificial intelligence and machine learning in financial services: market developments and financial stability implications. See: https://www.fsb.org/wp-content/uploads/P011117.pdf
20 Gokhale, N., Gajjaria, A., Kaye, R., Kuder, D. (2019). AI leaders in financial services: common traits of frontrunners in the artificial intelligence race. See: https://www2.deloitte.com/us/en/insights/industry/financial-services/artificial-intelligence-ai-financial-services-frontrunners.html
21 Arnold, M. (2018). HSBC brings in AI to help spot money laundering. Financial Times, 8 April 2018. See: https://www.ft.com/content/b9d7daa6-3983-11e8-8b98-2f31af407cc8
22 Walker, P. (2018). NatWest uses machine learning to fight fraud. FSTech, 4 March 2018. See: https://www.fstech.co.uk/fst/NAtWest_Machine_Learning_Fraud.php


MasterCard acquired Brighterion in 2017 to incorporate its AI technology for fraud prevention.23 NatWest has adopted an AI solution called Corporate Fraud Insights, from Vocalink Analytics, to detect and prevent redirection fraud.24 In fighting money laundering, HSBC is using AI software from Quantexa, a UK based start-up.25

AI is driving most of the trades that take place on a number of exchanges. AI algorithms are being trained to perform automatic trading based on defined hedging rules, and approximately 9% of all hedge funds use AI to build large statistical models. In 2016, Aidyia launched an AI hedge fund that makes all of its stock trades. Sentient Investment Technologies uses a distributed AI system as part of its trading and investment platform. Fukoku Mutual Life Insurance uses IBM's Watson Explorer AI to calculate pay-outs. Feedzai uses AI to detect fraudulent transactions. UK start-up Leverton applies AI to automatically identify, extract and manage data from corporate documents such as rental leases. In October 2017, exchange traded funds (ETFs) were launched that use AI algorithms to choose long-term stock holdings.26

In 2016, Ziyitong launched an AI platform to help recover an estimated RMB150 billion in delinquent loans.27 The platform recovers delinquent loans for approximately 600 debt collection agencies and over 200 lenders (including the Postal Savings Bank of China and Alibaba). It is built around a dialogue robot which utilises information about borrowers and their friends' networks to determine the phrasing with the highest likelihood of pressuring the borrower to repay the loan; it will also call the borrower's friends to encourage repayment. Ziyitong claims a recovery rate of 41% for large clients and loans that are delinquent up to 1 week, twice that of traditional debt collection methods.

Another major change agent in the financial sector will be the use of chatbots. The finance sector employs significant numbers of personnel in back office and customer care roles, many performing repetitive tasks which could easily be taken over by AI and, in particular, chatbots, which are available around the clock. Chatbots are AI based programs that simulate human conversation through messaging platforms, processing incoming data and applying AI to continually improve the customer service experience they offer.

23 Bary, E. (2018). Visa and MasterCard earnings: more than just payments at play. MarketWatch, 25 July 2018. See: https://www.marketwatch.com/story/visa-and-mastercard-earnings-more-than-just-payments-at-play-2018-07-23
24 Walker, P. (2018). NatWest uses machine learning to fight fraud. FSTech, 4 March 2018. See: https://www.fstech.co.uk/fst/NAtWest_Machine_Learning_Fraud.php
25 Arnold, M. (2018). HSBC brings in AI to help spot money laundering. Financial Times, 8 April 2018. See: https://www.ft.com/content/b9d7daa6-3983-11e8-8b98-2f31af407cc8
26 Buchanan, B. (2019). Testimony in hearing: 'ending perspectives on artificial intelligence: where we are and the next frontier in financial services'. 26 June 2019. See: https://financialservices.house.gov/uploadedfiles/hhrg-116-ba00-wstate-buchananphdb-20190626.pdf
27 Weinland, D. (2018). China's debt collectors focus in on $200 billion P2P debt pile. Financial Times, 5 June 2018. See: https://www.ft.com/content/a219cd72-67a3-11e8-8cf3-0c230fa67aec


In the early stages of adoption, chatbots are generally confined to relatively simple customer enquiries and hand complex queries over to a human assistant. However, as they become more capable, a chatbot will be able to take on ever more complex queries, replacing more and more customer service agent tasks. Chatbots can also be used in combination with other tools to further enhance the experience. Examples include a smart queue feature, which prioritises customers based on their customer lifetime value potential, and emotion sensing, which enables a chatbot to identify panic words or sentences implying that the customer is unhappy and would benefit from a smooth transfer to a human agent.

The Bank of America has launched its AI chatbot, called Erica, available through voice or message chat on the bank's mobile app. JP Morgan has invested in COiN, an AI technology that reviews documents and extracts data in far less time than a human. COiN can review approximately 12,000 documents in a matter of seconds, whereas a human would spend more than 360,000 hours of work on the same documents.28

Another AI application that is increasingly being used is facial recognition. Banks and telecommunications firms today ask applicants to take a selfie as part of account opening and KYC procedures. JetBlue Airways in the USA is also using facial recognition as part of its onboarding process. Other banks are starting to use facial recognition technology for loan applications, where these systems serve as high tech lie detectors, analysing micro expressions on the face to reveal the applicant's emotional and psychological state. Retailers are using such technology to identify repeat customers, or to compare shoppers against a database of known shoplifters. Where retailers can undertake facial recognition and match shoppers with their social media profiles and buying history, it opens up a host of targeted advertisement and up-sell opportunities. How GDPR works with facial recognition technology is, however, unclear, given that the data has already been processed before explicit consent can be gained.

The technology underpinning FinTech is also fuelling a spin-off field known as RegTech, which aims to make compliance and regulatory activities easier, faster and more efficient. RegTech utilises Big Data and AI and will have an increasingly large role to play in compliance. The finance sector is subject to literally hundreds of compliance requirements imposed by each of the countries in which it operates. AI has the potential to gather data and develop compliance reporting in near real-time.
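
As a toy illustration of the 'panic word' hand-off described above, the sketch below routes messages either to canned bot answers or to a human agent. Real deployments use trained sentiment and intent models; the phrase list and responses here are invented for illustration.

    # Toy illustration of a chatbot's escalation rule; real systems use trained
    # sentiment and intent models rather than a fixed word list.
    PANIC_PHRASES = {"complaint", "fraud", "lost my card", "speak to a human", "unacceptable"}

    def route_message(message: str) -> str:
        """Answer simple queries; escalate unhappy or complex customers to an agent."""
        text = message.lower()
        if any(phrase in text for phrase in PANIC_PHRASES):
            return "ESCALATE: transfer to human agent with full chat transcript"
        if "balance" in text:
            return "BOT: Your current balance is available under Accounts > Overview."
        return "BOT: Could you tell me a little more about what you need?"

    print(route_message("What is my balance?"))
    print(route_message("This is unacceptable, I want to speak to a human"))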

6.2.3 AI in Public Sector

The public sector offers significant scope for the use of AI. A number of governments are looking to AI to improve decision making: identifying tax evasion patterns, sifting through health and social service data to prioritise cases for child welfare, supporting improved public safety and reducing prisoner reoffending, to name a few.

28 Yadav, Y., Brummer, C. (2019). Fintech and the innovation trilemma. 107 Georgetown Law Journal 235 (2019). See: https://scholarship.law.vanderbilt.edu/faculty-publications/1084


The public sector in general, however, is often seen as a laggard in adopting new technologies. The sheer scale of operations means that getting it wrong is likely to have an adverse impact on major sections of society, visible in the public domain due to transparency requirements. The fallout could envelop politicians and make re-election much harder. The public sector also needs to be more sensitive than most to the possibilities of bias and inequality when it comes to AI. It is for these reasons that the public sector has, as of now, not seen major adoption of AI. However, with its promise to reduce costs, drive better and faster decisions and offer personalised services, it is a technology that cannot be avoided for long.

Another barrier to adoption of AI within the public sector is that the required data is often neither accessible nor discoverable. Much of the data will reside within different government agencies, which may be reluctant to share it, or it will require significant manipulation before it can be of use.

There are, however, many potential use cases. One of the obvious applications is the use of chatbots to answer questions that residents or citizens may have about the services being delivered by a government agency or about their own records. In Los Angeles, a chatbot answers business related questions for citizens. In Mississippi, people can use Amazon Alexa to plug into government information about things like taxes and vehicle registration.

UK police forces have started to use facial recognition systems in the last 4 years. South Wales Police has trialled the technology to scan faces in crowds at major sports events and check them against a watch list drawn from the 12.5 million faces on the police national database. In Malaysia, police are using facial recognition software built by Chinese start-up Yitu Technology on their body cameras to scan and identify faces. In Singapore, the government is aiming to install cameras on more than 100,000 lamp posts, which it claims can be used to perform crowd analytics and support anti-terrorism operations. The Chinese government intends to be able to identify anyone, anytime and anywhere in China within 3 seconds, and is spending billions of US dollars on video surveillance.

Many government agencies, including the UK HMRC, are using AI to identify possible tax evasion or fraudulent benefit claims by combining and looking for patterns in data from multiple sources, including social media data. Other public sector applications of AI include those typically associated with smart cities: smart transportation, smart parking and smart utility delivery, as well as better law enforcement. Using predictive crime analytics, AI has helped to deploy law enforcement resources more efficiently.29 AI has been combined with drone footage to combat wildlife poaching and illegal logging in Uganda and Malaysia.30

29 Rieland, R. (2018). Artificial intelligence is now used to predict crime. But is it biased? 5 March 2018. See: https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/
30 Snow, J. (2016). Rangers use artificial intelligence to fight poachers. National Geographic, 12 June 2016. See: https://www.nationalgeographic.com/news/2016/06/paws-artificial-intelligence-fights-poaching-ranger-patrols-wildlife-conservation/


AI is also helping to identify key people in the social networks of Los Angeles, California's homeless youth population, to help mitigate the spread of HIV.31 Many applications of AI may well be internal to the public sector agency, in an effort to improve efficiencies and reduce costs: chatbots could be used by internal IT helpdesks to answer traditional high volume calls, such as recovering passwords.
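
To sketch how the watchlist matching described in this section typically works, the toy example below compares a probe face embedding against stored watchlist embeddings by cosine similarity. Real systems derive the embeddings from a trained face recognition network; here the random vectors, names and threshold are all illustrative stand-ins.

    import numpy as np

    # Illustrative sketch: real systems derive 128- to 512-dimensional embeddings
    # from a face-recognition network; here random vectors stand in for them.
    rng = np.random.default_rng(1)
    watchlist = {name: rng.normal(size=128) for name in ["subject_a", "subject_b"]}

    def best_match(probe, threshold=0.6):
        """Return the closest watchlist identity by cosine similarity, if above threshold."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        name, score = max(((n, cos(probe, v)) for n, v in watchlist.items()),
                          key=lambda item: item[1])
        return (name, score) if score >= threshold else (None, score)

    probe = watchlist["subject_a"] + rng.normal(scale=0.1, size=128)  # noisy sighting
    print(best_match(probe))  # matches subject_a with high similarity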

6.2.4 AI in Retail

In the age of the individual, today's discerning shopper wants a targeted, personalised retail experience that matches their buying habits. Popular technology companies such as Amazon, Netflix, Spotify and Facebook, as well as traditional retailers, are now using AI to tailor consumer advertisements and customer experiences. Nielsen's Artificial Intelligence Marketing Cloud, it claims, enables it to 'respond instantly to real-time changes in consumer behaviour, resulting in more relevant content and advertising, higher levels of customer engagement and improved ROI'.32

Germany's Cognitec makes systems that offer targeted advertisement: the system scans the faces of people passing by, matches them against known profiles and displays appropriately targeted advertisements. The global facial recognition market is estimated to be worth some US$9 billion by 2022.33 Google, Facebook, Apple, Microsoft, Alibaba and Tencent all have huge databases, which they are all trying to monetise through better advertisement. However, there are significant privacy and 'big brother' concerns around the use of such technology. San Francisco recently became the first major American city to ban the police and other public authorities from using facial recognition technology; others are likely to follow.

AI can replace employees in the workplace through automation or, in many cases, complement them. In Ocado's (UK online supermarket) distribution centres, robots steer thousands of product-filled bins over a maze of conveyor belts and deliver them to human packers just in time to fill shopping bags. Other robots whisk the bags to delivery vans, whose drivers are guided to customers' homes by an AI application that picks the best route based on traffic conditions and weather.

Global fashion retailer Zara is already renowned for developing and shipping new products within 2 weeks. It is now using digital tools to respond even faster to consumer preferences and reduce supply chain costs, attaching reusable radio frequency identification (RFID) tags to every item of clothing in more than 700 of its 2000 plus stores.

31 Clay, J. (2017). USC researcher, and AI, give homeless youth a helping hand with HIV education. USC News, 14 July 2017. See: https://news.usc.edu/124831/usc-researcher-and-ai-give-homeless-youth-a-helping-hand-with-hiv-education/
32 Nielsen launches artificial intelligence technology. (4 April 2017). See: https://www.nielsen.com/us/en/press-releases/2017/nielsen-launches-artificial-intelligence-technology/
33 www.MarketResearchFuture.com


Ten staff members can now update a store's inventory in a couple of hours by waving small handheld computers at racks of clothing, work that used to take 40 employees more than 5 hours.

The German online retailer Otto uses an AI application that is 90% accurate in forecasting what the company will sell over the next 30 days. The forecasts are so reliable that Otto now builds inventory in anticipation of the orders AI has forecast, enabling the retailer to speed deliveries to customers and reduce returns. Otto is confident enough in the technology to let it order 200,000 items a month from vendors with no human intervention.34

Amazon has built a retail outlet in Seattle that allows shoppers to take food off the shelves and walk directly out of the store without stopping at a checkout kiosk to pay. The store, called Amazon Go, relies on computer vision to track shoppers after they swipe into the store and associate them with products taken from shelves. When shoppers leave, Amazon debits their accounts for the cost of the items in their bag and emails them a receipt. In the future, the shopping experience is likely to be completed by delivery drones. Augmented reality, discussed subsequently, will allow brick and mortar retailers to take the customer experience to a new level, blending digital and physical worlds.
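
A toy version of the kind of 30-day demand forecast Otto relies on can be built by regressing each day's sales on the previous week's sales and rolling the model forward. The synthetic sales series and window length below are illustrative assumptions, not Otto's actual system.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Illustrative daily sales series with weekly seasonality plus noise.
    rng = np.random.default_rng(2)
    days = np.arange(365)
    sales = 100 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, 365)

    # Supervised framing: predict today's sales from the previous 7 days.
    window = 7
    X = np.array([sales[i:i + window] for i in range(len(sales) - window)])
    y = sales[window:]
    model = LinearRegression().fit(X, y)

    # Roll the model forward 30 days, feeding each prediction back in.
    history = list(sales[-window:])
    for _ in range(30):
        history.append(model.predict([history[-window:]])[0])
    print([round(v, 1) for v in history[window:]])  # 30-day forecast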

6.2.5 AI in Agriculture

As some say, the world doesn't have a food shortage, just a food distribution problem. Agriculture must become more efficient and sustainable if it is to provide enough food for a growing world population. Digital information about weather, soil conditions and crop health is already helping modern farmers optimise their harvest yields. Now, with IOT and AI, further intelligent digital tools are being created with the objective of conserving resources, safeguarding harvests and protecting the environment.

Just as the agricultural sector was an early industrial user of GPS and IOT, it is an early adopter of AI, finding numerous applications for the technology. AI can help determine how crop protection and seed products can be optimally applied to individual zones of the farm at the correct time, and has the potential to help farmers budget better for every grain of seed and millilitre of crop protection agent. In the future, digital technologies will help avoid potential harvest losses, increase yields globally and go easy on the environment as well as the farmer's pocket. The technology may well also help reduce the rising and alarming suicide rates amongst the farming community.

34 The Economist. (2017). How Germany's Otto uses artificial intelligence. See: https://www.economist.com/business/2017/04/12/how-germanys-otto-uses-artificial-intelligence

6.2.6 AI in Manufacturing/Logistics

Manufacturers have been digitising their plants and factories with distributed and supervisory control systems and advanced process controls for many years. The next step in the journey is the adoption of AI to improve analytics within decision support solutions. AI's potential in manufacturing is vast given the sheer scope of possible applications: from real-time maintenance of equipment, to virtual design that will allow for new, improved and customised products, to creating smart supply chains and entirely new business models.

Studies show that unplanned downtime costs manufacturers an estimated US$50 billion annually, with asset failure causing 42% of this unplanned downtime.35 Predictive maintenance is therefore fundamental for manufacturers, who have much to gain from being able to predict the next failure of a part, machine or system. AI algorithms, drawing on historic and IOT enabled information, can help formulate predictions about possible asset malfunction, drastically reduce unplanned downtime and extend the useful life of equipment.

AI combined with IOT is being used to collect data about the use and performance of products in the field. Using data from sensors, drones and other hardware, AI can help grid operators avoid decommissioning assets before their useful lives have ended, while simultaneously enabling them to perform more frequent remote inspections and maintenance to keep assets working well. National Grid in the UK is collaborating with DeepMind, an AI start-up bought by Google in 2014, to predict supply and demand variations based on weather related variables and smart meters as exogenous inputs. The goal is to cut national energy use by 10% and maximise the use of renewable power.

Siemens Corporate Technology announced in 2017 that it had developed a two armed robot that can manufacture products without having to be programmed. The robot's arms automatically work together, dividing tasks as needed in the same way humans use their own arms; the robot itself decides which task each arm should perform.36 A digital twin, discussed in subsequent sections, which pairs the virtual and physical worlds, will allow analysis of data and monitoring of systems to head off problems before they even occur.
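
As a minimal sketch of predictive maintenance, the example below trains a classifier on labelled sensor snapshots to estimate the probability that a machine is heading for failure. The sensor features, values and synthetic data are illustrative assumptions, not any manufacturer's actual model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative sensor snapshots: [vibration (mm/s), temperature (C), pressure (bar)].
    rng = np.random.default_rng(3)
    healthy = rng.normal([2.0, 60, 5.0], [0.5, 5, 0.3], size=(500, 3))
    failing = rng.normal([6.0, 85, 3.5], [1.0, 6, 0.5], size=(60, 3))
    X = np.vstack([healthy, failing])
    y = np.array([0] * 500 + [1] * 60)   # 1 = failed soon after the snapshot

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))
    print("failure risk:", model.predict_proba([[5.5, 82, 3.8]])[0][1])  # flag for maintenance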

35 https://partners.wsj.com/emerson/unlocking-performance/how-manufacturers-can-achieve-top-quartile-performance
36 https://new.siemens.com/global/en/company/stories/research-technologies/artificial-intelligence/prototype-robot-solves-problems-without-programming.html

6.2.7 AI in Education and Training

AI in education has the ability to transform and individualise the student experience. AI combined with Virtual Reality (VR) and Augmented Reality (AR) will transform and improve online and remote education through virtual classrooms. Online tutoring companies are already using AI to analyse, review and tailor individual learning experiences based on the techniques to which each student seems most responsive.37

Adaptive learning has been a growing trend, with some 40 companies, such as Knewton and DreamBox Learning, already marketing adaptive learning systems to schools in North America, Europe and Asia.38 Adaptive learning attempts to address the limitations of conventional classroom teaching by capturing information about what each student knows and crafting custom lesson plans based on the individual's knowledge and progress. At Arizona State University, which has used adaptive learning, student pass rates in maths improved from 66 to 75%.39

A European Union project called iTalk2Learn is currently developing an open-source intelligent tutoring platform to help primary school students learn mathematics. A combination of ML, computer vision and natural language processing enables the platform to interact with and respond to a student's speech throughout a tutoring session.

AI is also likely to mark tests and examination papers, while at the same time offering recommendations for how to close the gaps in learning for each individual student. As AI automates these tasks, it will free up more time for teachers to spend with each student.
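
One widely used way for adaptive learning systems to track what a student knows is Bayesian Knowledge Tracing, which updates an estimate of skill mastery after every answer. The sketch below shows the standard update rule; the parameter values are illustrative, and commercial systems such as those named above use their own, more elaborate models.

    # Minimal Bayesian Knowledge Tracing update; parameter values are illustrative.
    P_GUESS = 0.2    # correct answer despite not knowing the skill
    P_SLIP = 0.1     # wrong answer despite knowing the skill
    P_LEARN = 0.15   # chance of learning the skill between attempts

    def update_mastery(p_known: float, correct: bool) -> float:
        """Posterior probability the student knows the skill after one answer."""
        if correct:
            evidence = p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS
            posterior = p_known * (1 - P_SLIP) / evidence
        else:
            evidence = p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS)
            posterior = p_known * P_SLIP / evidence
        return posterior + (1 - posterior) * P_LEARN   # learning between opportunities

    p = 0.3                      # prior mastery estimate
    for answer in [True, True, False, True]:
        p = update_mastery(p, answer)
        print(round(p, 3))       # a rising estimate drives easier/harder item selection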

6.3 Impact of AI on Economies

Several investment banks and consultancies have attempted to forecast the economic value of applying AI to existing activities, by industry and geography.

37 Devlin, H. (2016). Could online tutors and artificial intelligence be the future of teaching? The Guardian, 26 December 2016. See: https://www.theguardian.com/technology/2016/dec/26/could-online-tutors-and-artificial-intelligence-be-the-future-of-teaching
38 Waters, J. (2014). Adaptive learning: are we there yet? The Journal, 4 May 2014. See: https://thejournal.com/articles/2014/05/14/adaptive-learning-are-we-there-yet.aspx
39 IBL News. (2019). Arizona State University develops the first adaptive-learning degree in science. 26 August 2019. See: https://iblnews.org/asu-transforms-undergraduate-science-education-developing-the-first-adaptive-learning-degree/


Bank of America Merrill Lynch reported in 2015 that it expected robotics and AI to produce, within 10 years, an annual 'impact value' of between US$14 trillion and US$33 trillion, potentially including, at the upper end of this range, US$8 to US$9 trillion in cost reductions across manufacturing and healthcare, US$9 trillion in employment cost savings and some US$1.9 trillion in efficiency gains from autonomous cars and drones.40

Using different methodologies, the McKinsey Global Institute in 2013 arrived at a range of US$10 trillion to US$25 trillion per year by 2025 for robotics, AI and data-intensive activities like IOT.41 In another study, published in 2017, McKinsey suggested that automation could raise productivity growth worldwide by 0.8–1.4% per year,42 with the potential economic impact of autonomous cars and trucks alone in the range of US$200 billion to US$1.9 trillion per year by 2025.43 A 2016 report by the Analysis Group, funded by Facebook, estimated the 'reasonable range' of the economic impact of AI until 2026 at between US$1.49 trillion and US$2.95 trillion.44

A report by Accenture in 2016, based on economic modelling by Frontier Economics, estimated that the widespread use of AI in society and business had the potential to double the annual economic growth rate by 2035 in the dozen developed economies studied. The report forecast, for example, that in the case of the USA, absorption of AI in the economy would increase the rate of growth in gross value added (GVA)—a close approximation to gross domestic product (GDP)—from a baseline of 2.6% to 4.6% in 2035, equivalent to an additional US$8.3 trillion of GVA per year. Accenture claims that AI could boost productivity by up to 40% by 2035.45

In 2017, PwC forecast that GDP worldwide could be as much as 14% higher in 2030 because of AI technologies, which it valued as potentially contributing some US$15.7 trillion to the global economy. The majority of the gains would, in its assessment, come from retail, financial services and healthcare, in the form of greater productivity, enhanced products and higher demand.46
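
As a back-of-the-envelope check on what the Accenture figures imply, the snippet below compounds the baseline 2.6% growth rate against the AI-boosted 4.6% from 2016 to 2035 (the start year is an assumption). It shows the mechanics of the claim rather than reproducing Accenture's model: a two percentage point uplift compounds to roughly 44% more output by 2035.

    # Back-of-the-envelope compounding of Accenture's US growth figures:
    # baseline 2.6% versus AI-boosted 4.6% annual GVA growth, 2016 to 2035.
    years = 2035 - 2016
    baseline = 1.026 ** years
    with_ai = 1.046 ** years
    print(f"baseline output index in 2035: {baseline:.2f}")   # ~1.63x today's GVA
    print(f"AI-boosted output index:       {with_ai:.2f}")    # ~2.35x today's GVA
    print(f"uplift from AI: {with_ai / baseline - 1:.0%}")    # ~44% more output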

40 Ma, B., Nahal, S. and Tran, F. (2015). Robot revolution—global robot & AI primer. Bank of America Merrill Lynch, 16 December 2015.
41 McKinsey. (2013). Disruptive technologies: advances that will transform life, business and the global economy.
42 McKinsey. (2017). A future that works: automation, employment, and productivity.
43 McKinsey. (2013). Disruptive technologies: advances that will transform life, business and the global economy.
44 Chen, N., Christensen, L., Gallagher, K., Mate, R. and Rafert, G. (2016). Global economic impacts associated with artificial intelligence. Analysis Group. See: https://www.analysisgroup.com/uploadedfiles/content/insights/publishing/ag_full_report_economic_impact_of_ai.pdf
45 Purdy, M., Daugherty, P. (2016). Why artificial intelligence is the future of growth. Accenture. See: https://www.accenture.com/t20170524t055435__w__/ca-en/_acnmedia/pdf-52/accenture-why-ai-is-the-future-of-growth.pdf
46 PwC. (2017). Sizing the prize: what's the real value of AI for your business and how can you capitalise? See: https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html


IDC estimated in 2016 that the global adoption of cognitive systems and AI across a wide range of sectors would drive worldwide business revenues from €6.4 billion in 2016 to more than €37.8 billion in 2020.47

Whilst the numbers differ, what is clear is that AI is expected to have a large enough impact to show up in the GDP numbers. It could be argued that the true economic contribution of AI could be even higher, given that traditional productivity measures fail to capture its full value: GDP only measures monetary transactions, so if a service is free, it is invisible as output. Going further, if an AI-infused free service replaces a paid-for service, GDP may actually be seen to go down, whilst productivity will in reality have increased.

With the scope for AI to deliver significant economic benefits, the European Commission (EC) issued a communication on 'AI for Europe' in April 2018, announcing among other things its aim to increase investment in AI research and innovation by at least €20 billion from then until the end of 2020. To support this effort, the EC was to increase its investment dedicated to AI to €1.5 billion for the period 2018–2020 under the Horizon 2020 research and innovation programme. The EU Member States also signed a Declaration of Cooperation on Artificial Intelligence in 2018, in which they agreed to work together on the most important issues raised by AI, from ensuring Europe's competitiveness in the research and deployment of AI to dealing with social, economic, ethical and legal questions.

In February 2020, the European Commission released its White Paper on Artificial Intelligence, which called for further investment, as well as many policy proposals to guarantee the ethical use of AI aligned with European values. The White Paper aims for Europe to become the most attractive, secure and dynamic data-agile economy in the world. Whilst this is a worthy goal, much of the White Paper appears to fall short on how this can be enabled, focusing instead on how AI can be regulated and what new obligations could be adopted to address the risks caused by AI, starting with preventive obligations at the design and development stages, before AI-based products or services are made available on the market. It also proposed a framework to ensure effective liability mechanisms when damage occurs. Under the proposals, so-called 'high-risk' AI (that is, AI that potentially interferes with people's rights) would have to be tested and certified before it reaches the market.

6.4 Impact of AI on Society

AI enabled services have the potential to deliver the right knowledge and information at an individual's fingertips, reducing time wasted in traffic, delivering new forms of entertainment and providing convenience by matching products and services that are better equipped to meet our needs. Location, language and other obstacles may cease to be barriers to trade or general consumption.

47 International Data Corporation (IDC). 'Worldwide semiannual cognitive/artificial intelligence systems spending guide', October 2016.


The significance of AI's positive impact is unfortunately mirrored by its likely destabilising effects on many aspects of economic and social life as we know it.

• Loss of control: Individuals worry about losing control of their personal information and feel increasingly vulnerable to online abuse. Online platforms such as Facebook have come to play a quasi-public role, essentially regulating what individuals read, see, hear or say, while harvesting data to refine their understanding of people's behaviour and preferences, potentially undermining the democratic process itself. 'Deepfake' videos, photos and voices (AI-generated fake images, videos and audio) are becoming more common and more convincing, making it difficult to know what is true and what is fake online. These techniques are being used today to blackmail individuals; in the future they could be used to shape public opinion. The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood.
• Loss of employment: AI, together with IOT and robotics, will affect employees, as the jobs that many people perform today may become automated and be performed by machines.
• Loss of security: The platform economy ('gig', 'sharing' or 'on-demand' economies), though small today, is growing quickly across many sectors. The adoption of such a model lowers the transaction costs of businesses accessing a larger pool of potential workers and suppliers, with workers increasingly engaged as independent contractors. This has benefits for some workers (greater flexibility, additional income and access to work), but at the same time these jobs rely mostly on non-standard work arrangements that may limit access to regular jobs, offer less promising employment careers and reduce access to social protection, as employees effectively become self employed. Platforms also exhibit winner-takes-all economics, which may weaken the competitive process and, in the long run, harm consumers. Platforms are discussed in more detail in a subsequent chapter.

The transformative nature of AI requires policy makers to balance its potential upside whilst being vigilant against adverse social impacts. Policy makers need to identify potential threats from AI and consider whether action is required to tackle them. It is, however, important that policy makers do not seek to control the evolution of the technology, but steer its direction. Focus should shift to nurturing a favourable regulatory environment and ensuring the necessary inputs are available, including access to training data, underlying technologies and the skills to both develop and consume these digital services.

6.4.1 AI's Impact on Employment

The orthodox view is that AI will lead to a 'jobs apocalypse' whereby mass unemployment becomes the norm.


Numerous reports and books have sounded the alarm, predicting massive job losses.48 The historian Yuval Noah Harari goes even further in his book Homo Deus, arguing that AI will render a vast swathe of humanity 'useless'.49

There is no doubt that the widespread usage of AI and robotics will have a profound impact on employment, and the changes are already evident: jobs involving significant amounts of repetition have already been affected by technology and remain at high risk of automation. The Bank of England, in an informal exercise, put potential job losses at around half the British workforce.50 A study by PwC estimated that some 30% of British jobs are vulnerable to automation from AI and robotics by the early 2030s; the comparable estimates for the USA and Germany are 38% and 35% respectively, while Japan is somewhat lower at 21%.51

However, forecasts of mass unemployment arising from AI are open to criticism. Jobs are comprised of tasks, which themselves vary in the degree to which they can be automated. For example, much of a lawyer's job can be done by software, such as finding the right precedent and constructing arguments.52 But among the valuable activities that lawyers fulfil is easing the anxiety of clients; the work is about empathy, not just legal interpretation. In the USA, since legal software was introduced in 2000, the number of law clerks and paralegals has grown, and at a faster rate than the overall workforce, rather than declining.53 The predictions of doom driven by AI may not actually materialise.

Using a task-based analysis may be more appropriate. According to a 2016 McKinsey study of more than 2000 work activities across 800 jobs, around half of the activities in the global economy could be automated with current technology. About 60% of occupations have at least a third of tasks that could be automated, but fewer than 5% of jobs can be entirely automated, the report noted.

48 Avent, R. (2016). The wealth of humans: work, power, and status in the twenty-first century. New York, NY: St Martin's Press; Ford, M. (2015). Rise of the robots: technology and the threat of a jobless future. New York, NY: Basic Books; Brynjolfsson, E. and McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company; Kaplan, J. (2015). Humans need not apply: a guide to wealth and work in the age of artificial intelligence. New Haven, CT: Yale University Press.
49 Harari, Y. (2016). Homo Deus: a brief history of tomorrow. London: Harvill Secker.
50 Haldane, A. (2015). Labour's share: speech given at Trades Union Congress, Bank of England, London, 12 November 2015.
51 PwC. (2017). Up to 30% of existing UK jobs could be impacted by automation by early 2030s, but this should be offset by job gains elsewhere in economy. PwC Blog, 24 March 2017. See: https://pwc.blogs.com/press_room/2017/03/up-to-30-of-existing-uk-jobs-could-be-impacted-by-automation-by-early-2030s-but-this-should-be-offse.html
52 Susskind, R. and Susskind, D. (2015). The future of the professions: how technology will transform the work of human experts. OUP Oxford.
53 Bessen, J. (2016). The automation paradox. The Atlantic, 19 January 2016. See: https://www.theatlantic.com/business/archive/2016/01/automation-paradox/424437/


Moreover, an OECD task-based study, also published in 2016, found that on average just 9% of jobs in mostly advanced economies are automatable.54

Whilst computers are getting better at tasks like determining people's emotional states by observing their facial expressions and vocal patterns, they remain far from being able to emotionally connect with people. Computers can make predictions and diagnose human health conditions, but humans will still be required to relate to and console other humans. In addition, many manual jobs that do not need computers to accurately predict or find patterns in large data sets are unlikely to be affected by AI. Even robots, which many believe could replace manual work, are likely to lack the dexterity required, and certainly not at a cost achievable by humans. Cooks, gardeners, carpenters and home health carers are not about to be replaced by AI or machines in the near term.

Recent OECD findings suggest that so far, while leading to restructuring and reallocation, the introduction of Information Communication Technologies (ICT) has not led to greater unemployment over time. If adopted successfully, i.e., if combined with organisational changes and good managerial practices,55 ICTs can contribute to increased productivity, which progressively translates into lower prices and/or new products, higher final demand and higher employment, thus compensating for the initial job displacement. There is indeed evidence that the introduction of ICT has thus far not produced an increase in technological unemployment.56

Economic history shows that automation may actually create new jobs around new processes, and these new jobs still require people. 'Technology eliminates jobs, not work,' noted a USA government report on automation published back in 1966.57 Over the past 15 years, automation has created around four times as many jobs in the UK as it has destroyed, according to a report published by Deloitte in 2015.58

Another reason why work will not disappear entirely is that there are many jobs in which human engagement is indispensable. Education, healthcare and sales are areas in which empathy and social skills are critical. People will adapt their skills to where they have a comparative advantage over software code and computer chips.

54 Arntz, M., Gregory, T. and Zierahn, U. (2016). The risk of automation for jobs in OECD countries: a comparative analysis. OECD Social, Employment and Migration Working Papers, No. 189, Paris.
55 OECD. (2004). The economic impacts of ICT—measurement, evidence and implications. OECD.
56 OECD. (2015). ICTs and jobs: complements or substitutes? The effects of ICT investment on labour market demand by skills and by industry in selected countries. OECD Digital Economy Working Papers, No. 259. OECD.
57 Bowen, H. et al. (1966). Report of the national commission on technology, automation, and economic progress: volume I, February 1966. United States Government. See: https://files.eric.ed.gov/fulltext/ED023803.pdf
58 Deloitte. (2015). From brawn to brains: the impact of technology on jobs in the UK. London: Deloitte LLP. See: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/Growth/deloitte-uk-insights-from-brawns-to-brain.pdf


For example, if AI systems can detect the propensity for certain diseases, a large share of the population will be concerned about taking steps to lower their chances of suffering from the ticking time-bomb inside their bodies. This will mean changes in lifestyle, diet, exercise and stress levels, and the demand for 'health coaches' will rise, just as people have turned to therapists, nutritionists, executive coaches and personal trainers over the past decade; jobs that largely did not exist a couple of decades ago.

6.4.2 AI Likely to Put Downward Pressure on Wages

While not all jobs will be displaced by AI or machines, a more worrying development is that, as some jobs are automated, competition for the remaining jobs will increase. Such an increase in labour supply will put downward pressure on wages in those remaining jobs. At the same time, when the returns on capital are greater than the returns on labour, firms will invest money in AI and machines to perform tasks rather than hire staff. The only way people can remain competitive will be to work for less. As Jason Furman, chairman of the USA Council of Economic Advisers under President Obama, put it in 2016: 'The traditional argument that we do not need to worry about the robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages'.59

Some argue that over time, wages tend to readjust upwards as the inventions that shake the labour market make their way through the economy and public policy works to redistribute the gains. But this process can be extremely protracted. British workers, for instance, did not see substantial real wage gains during the early industrial revolution until the 1840s, some 60 years after the labour upheaval began.60

A more global problem that might also play out in the AI economy can be seen in a more recent economic trend: offshoring, where jobs have been displaced not by robots but by lower cost workers in developing countries, resulting in lay-offs and downward pressure on wages in many advanced economies.61 This trend may, however, have come to an end as Robotic Process Automation (RPA) starts to make inroads into organisations, reducing the incentives for offshoring. Offshoring relied on exports (goods and labour) from countries with low cost labour to richer countries. However, as the cost of manufacturing has declined through automation, a process of 'in-shoring' or 're-shoring' has been seen, whereby work previously sent abroad comes back to home countries. More domestic manufacturing means fewer jobs in export industries in poor countries and less global trade.

59 Furman, J. (2016). Is this time different? The opportunities and challenges of artificial intelligence. Remarks at 'AI now: the social and economic implications of artificial intelligence technologies in the near term'. New York University, 7 July 2016. See: https://obamawhitehouse.archives.gov/sites/default/files/page/files/20160707_cea_ai_furman.pdf
60 Mokyr, J., Vickers, C., Ziebarth, N. (2015). The history of technological anxiety and the future of economic growth: is this time different? Journal of Economic Perspectives, 29(3), pp. 31–50.
61 Bramucci, A. (2016). Offshoring, employment and wages. Working Paper, No. 71/2016. Berlin: Institute for International Political Economy.


As more work stays local, the use of migrant workers will decline. If a host country's reliance on migrant labour is reduced through automation, one effect may be that remittances from overseas workers from poorer countries to their home countries fall, deepening those poorer countries' economic difficulties. AI could 'erode the comparative advantage of much of the developing world,' warns David Autor, an economist at MIT.62 An alternative scenario is that if emerging market jobs are hit hard by AI, this may spur more migration to advanced countries, putting even more downward pressure on unskilled wages in advanced economies, fostering even more anti-migration sentiment and raising political tensions further. President Trump's anti-globalisation rhetoric may well be a well thought out economic plan to sustain American wages in an era of AI rather than just plain old racism; well, that is a positive spin on it anyway.

One way for policy makers to look at this problem is to view unemployment as a market failure, where the benefits of increasing employment (reduced crime, greater consumer spending, stronger communities) are seen as positive externalities extending to people throughout society, and unemployment as a negative externality, rather than looking narrowly at the employees themselves. If unemployment creates negative externalities, then economists would suggest that governments should reward employment instead of taxing it; a principle which has been used by the UK government through the working tax credit system, which rewards people who work. This sounds great, but governments usually tax people because they need the money to fund social programs and infrastructure investment. With automation, as labour is replaced with AI and machines, government tax receipts are likely to fall. Governments may then not be in a position to reward workers and will need to explore mechanisms to widen the tax net, possibly taxing machines or AI algorithms. South Korea has an indirect tax on firms that over-automate. Bill Gates hypothesises a form of tax for highly automated businesses, whereby if a robot displaces a worker with a certain salary, the company would have to pay a tax that would make up for the lost payroll and income tax.

As a society, we need to start discussing whether we should move away from the current market mechanism and the profit motive, towards sharing the burdens and benefits of these digital developments much more equally; not quite communism, but you get the drift. Many have urged Nordic style policies that provide social benefits to individuals regardless of work status, in return for labour market liberalisation. This, it is argued, gives businesses freedom to manoeuvre and makes the economy more agile. The model is termed 'flexicurity' (flexible security).63

Another, more ambitious, policy is a universal basic income.

62 The Economist. (2016). The return of the machinery question. The Economist, 25 June 2016. See: https://www.economist.com/sites/default/files/ai_mailout.pdf
63 Colin, N., Palier, B. (2015). The next safety net: social policy for a digital age. Foreign Affairs, 16 June 2015. See: https://www.foreignaffairs.com/articles/2015-06-16/next-safety-net


Although the idea predates AI, it has acquired momentum with the rise of wealthy digital businesses and in light of the potential productivity boom that AI might unleash. Small trials have been taking place in Finland and Canada since 2017. However, critics argue that the provision of a universal basic income removes incentives to work, not to mention that in practice these schemes may not be affordable for most governments.64

The idea of a universal basic income actually dates back to the 1960s. In 1968, more than 1200 economists signed a letter in support of a form of basic income addressed to the USA Congress.65 President Nixon tried throughout his first term in office to enact it into law. In a 1969 speech he proposed a Family Assistance Plan that had many features of a basic income program, but it faced a large and diverse group of opponents.66 Many people feared their tax dollars would go to people who could work but chose not to. The same argument still holds true in almost all countries.

Even if governments could afford these schemes and society as a whole accepted their benefits, shunning scepticism about what many call 'freeloaders', basic income programs may not be the answer, for many other reasons. As Voltaire said: 'work saves a man from three great evils—boredom, vice and need'. A guaranteed universal income takes care of need, but not the remaining two. People work not just because that's how they get their money, but also because work is one of the principal ways they get many other important things: self-worth, a sense of community, social interaction, structure to their lives and dignity, to name a few. AI-driven unemployment will not only have negative economic consequences, but also a significant adverse social impact.

Another proposal, advanced by Jerry Kaplan, an AI expert at Stanford University and a Silicon Valley entrepreneur, is for governments to offer generous tax breaks to companies based on the breadth of their share ownership, with the aim of encouraging the widest participation.67 The idea is that more people should benefit from the fruits of AI than just the geeks and firms who build it. Since most people lack the capital to invest, he contends that governments should let people choose where they invest their national pension contributions while they are still young, so they can share in AI's spoils. An interesting concept; however, it still does not address the need for work as a social construct rather than just an economic necessity.

64 The Economist. (2016). Basically flawed: proponents of a basic income underestimate how disruptive it would be. The Economist, 4 June 2016. See: https://www.economist.com/leaders/2016/06/04/basically-flawed
65 Sreenivasan, J. (2009). Poverty and the governments in America: a historical encyclopedia. ABC-CLIO, 2009. See: https://www.abc-clio.com/ABC-CLIOCorporate/product.aspx?pc=A1679C
66 Lampman, R. (1969). Nixon's family assistance plan. Institute for Research on Poverty. University of Wisconsin. See: https://www.irp.wisc.edu/publications/dps/pdfs/dp5769.pdf
67 Kaplan, J. (2015). Humans need not apply. Yale University Press.

6.4.3 The Need to Rethink Education

As AI spreads into more specific areas, it will require knowledge and skill in data science. Creating an AI system will also require specific functional skills and experience to set up the models, to tweak them over time and to design them in a manner that supports human decision making. Functional specialists, not just programmers, will have to lead the way. There are calls for policies to encourage more targeted education, particularly in science, technology, engineering and maths. This is one of the main recommendations of a White House report on AI issued in the final weeks of the Obama administration.68

There are also calls for changes to be made to education to encourage entrepreneurship. Management researchers Jeffrey Dyer and Hal Gregersen interviewed 500 prominent innovators and found that a disproportionate number of them went to Montessori schools, where 'they learned to follow their curiosity'. Montessori schools have produced alumni such as the founders of Google (Larry Page and Sergey Brin), Amazon (Jeff Bezos) and Wikipedia (Jimmy Wales). In a world driven by increasing innovation, entrepreneurship will need to be nurtured.

6.4.4 The Risks of Further Inequality

It is clear that digital disruption has not positively touched everyone to date, and that may only get worse over time. The lack of reliable and affordable connectivity infrastructure remains a critical challenge. Broadband Internet is still in its infancy in sub-Saharan Africa, where Internet users represented only 22% of the population in 2016, compared to 44% worldwide. Globally, some 4 billion people do not have access to decent broadband. Even where people have access to broadband, many lack digital identities, limiting their ability to access many online services. The longer those countries and their citizens remain excluded from the online world, the greater their missed development opportunities from AI. Policy makers in these countries need policies that promote broadband access, facilitate the provision of digital identities, and build the education and skills that allow all people to use AI-driven services. Many of the education opportunities will only be available to those with high-speed broadband. In the absence of universal broadband access, inequalities are only likely to widen over time.

68 Executive Office of the President. National Science and Technology Council. (2016). Preparing for the future of artificial intelligence, automation, and the economy. Washington: Office of Science and Technology Policy.

6.4.5 Social Bias and Misuse

AI has the potential to help us make better, more rational decisions based on data rather than instinct or gut feel. However, AI can amplify existing biases where those human biases are built into the data. There have been examples of AI used in parole hearings that has discriminated against African-Americans, and of facial recognition software with higher false-positive rates for many non-white faces. These biases are often a function of imperfect training data used to train the AI. It is not that AI itself is biased, but rather that AI makes visible the biases in society, or that imperfect data has introduced such bias.69

A study by Shai Danziger and colleagues showed that Israeli judges were more likely to grant parole at the start of the day and after food breaks, and more likely to recommend continued imprisonment right before the judges took a break, when they were presumably tired or had low blood sugar. Economists Ozkan Eren and Naci Mocan found that in one USA state, judges who were graduates of a prominent regional university handed down significantly harsher sentences immediately after their alma mater experienced an unexpected loss in a football game, and that these sentences were systematically borne by black defendants. These inherent human biases would be built into the data used to train AI algorithms, thereby perpetuating them.

The functioning of the human cognitive system and the way decisions are made have been studied extensively by many scholars. Many are now converging on the idea that the brain has two separate, yet intertwined systems. Judgment and justification are two separate cognitive processes. The judging process is powered by System 1 and happens almost instantaneously. It is then justified in rational and plausible language supplied by System 2 processes. System 2 may simply be providing justification for the biases of System 1. System 1, as Kahneman70 has labelled it, often takes shortcuts instead of reasoning something through thoroughly. It also contains a surprisingly large set of biases. There are 99 chapters in Rolf Dobelli's book on the subject, 'The Art of Thinking Clearly', and 192 entries (as of December 2019) in Wikipedia's list of cognitive biases.

Accurate algorithms are of little use if the source of the bias lies not only in the composition of the data sample or of the developer ecosystem, but is a by-product of the way people think more broadly—the structure of the society we live in. Take the example of Google. Its sentiment analysis attaches a neutral value to words such as 'straight' but a negative value to 'homosexual', because it draws from the environment in which those words are placed, and it seems more likely that

69 Take for example the recent case where an image classification algorithm on Google classified images of African-American individuals as gorillas. Google apologised for the incident—see: BBC News (1 July 2015).
70 Kahneman, D. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.


negative connotations are attached to minorities in online chats.71 Another 2015 study showed that Google displayed advertisements for high-paying jobs to men at significantly higher rates than to women, most likely not due to explicit discrimination on anyone's part, but because the algorithm had learned from past experience that men were more likely to click on such advertisements.72 In another example, within 24 hours of being deployed, the Microsoft Twitter bot was turned into a racist, sexist and genocidal chatbot by the volume of such phrases being 'fed' to it by other Twitter users. Microsoft had to deactivate the bot immediately. When it was accidentally activated a few weeks later, it once again began making inappropriate tweets.73

In the most concerning of cases, AI could actually disempower people. This was demonstrated by an algorithm used to predict recidivism rates, which incorrectly scored black defendants in the USA along all metrics, such as the likelihood of reoffending or of committing violent acts.74 These estimates were considered as evidence in sentencing recommendations, and because of systemic race and gender biases against classes of individuals, those being sentenced were unfairly and systematically sanctioned.

Apart from historic biases built into the AI system, providing only partial training data sets can also limit the efficacy of AI tools. A speech recognition algorithm that has only learned from a particular accent might struggle to recognise that language in other accents. In another case, an algorithm designed to identify tanks was given training data where images of tanks were taken on cloudy days, while photos without tanks were taken on sunny days, so the algorithm instead learned to distinguish between sunny and cloudy days.75

AI can also be put to use in ways that endanger human rights such as privacy or free expression.

71 Merriman, C. (2017). Google's AI is already associating ethnic minorities with negative sentiment. The Inquirer. See: https://www.theinquirer.net/inquirer/news/3019938/googles-ai-is-already-associating-ethnic-minorities-with-negative-sentiment
72 The Google example was discovered by researchers conducting testing of Google's services, see: Datta, A., Datta, A., Procaccia, A., Zick, Y. (2015). Influence in classification via cooperative game theory. International joint conferences on artificial intelligence, pp. 511–517 and Yampolskiy, R., Spellchecker, M. (2016). Artificial intelligence safety and cybersecurity: a timeline of AI failures. See: https://arxiv.org/ftp/arxiv/papers/1610/1610.07997.pdf
73 Kosoff, M. (2016). Microsoft's racist millennial twitter bot strikes again. Vanity Fair, 30 March 2016. See: https://www.vanityfair.com/news/2016/03/microsofts-racist-millennial-twitter-bot-went-haywire-again
74 White defendants were estimated to be less likely to reoffend and be of lower risk of committing future crimes (of both a nonviolent and violent nature). See: Larson, J., Mattu, S., Kirchner, L., Angwin, J. (2016). How we analysed the COMPAS recidivism algorithm. Pro Publica. See: https://www.propublica.org/article/how-we-analysed-the-compas-recidivism-algorithm
75 A wide range of AI applications are trained on a relatively small number of freely available data sets, many of which are listed here: https://gengo.ai/datasets/the-50-best-free-datasets-for-machine-learning/


For instance, China's social credit system uses widespread surveillance, facial recognition and analysis of large datasets to identify and reward or punish citizens for behaviour that the government approves of or deems to depart from communist (or party) values.

Even when AI is used with the best of intentions, outsourcing moral decision making can have unintended consequences. Social media platforms face significant pressure to remove posts containing hate speech, terrorist material or child pornography, or violating copyright protections. To police billions of posts, they have turned to AI to identify material for take-down. Yet these machines often misfire and flag perfectly legal expression for removal, and the same technology can be used by illiberal regimes to find and eliminate dissent.

For all their capabilities, AI tools are still imperfect. They make errors in judgment, act unpredictably and can be tricked. As we take humans out of the loop of some decision making, we are replacing them with agents that possess neither common sense nor a conscience. Humans have good old common sense, something lacking in computers. Despite decades of research, we still don't know very much about how we acquire our common sense, and it has so far proved impossible to instil common sense in a computer. It may therefore be appropriate for a person to check a computer's decisions to make sure they make sense. Uber's surge pricing algorithms, for instance, could lead to unethical outcomes if they started hiking up prices in the event of a natural disaster or terrorist incident, when demand could increase exponentially. In these circumstances, the ability for humans to override the algorithm would be important for the longer-term success of Uber.

Even when they work well, sophisticated AI systems have become harder to understand, not just for end users, but even for the people who designed them in the first place, especially when utilising Deep Learning.76 The black-box nature of advanced AI can create difficulties in processes where we value predictability or need to understand the underlying reasons for an action. If one cannot interrogate an AI as to why it made a certain determination, it will be difficult to build AI into critical processes.

Ultimately, as more data is used to influence decisions and as algorithms are increasingly utilised to shape, guide or make these decisions, policy makers need to be vigilant in requiring transparency, accountability, equity and universality from these applications. As AI becomes embedded in decision-making processes, certain types of safeguards and principles are being widely discussed as means for ensuring responsible adoption and use of AI:

• Transparency: in terms of when algorithmic processing or AI-enabled decision making is being used;
• Access to underlying data: providing access to public and privately held data to help regulatory authorities exercise oversight over AI, including access to training data as well as output data from the AI algorithms;

76 Martin, M. (2018). Building AI systems that work is still hard. Tech Crunch, 01 August 2018. See: https://techcrunch.com/2018/01/01/building-ai-systems-that-work-is-still-hard/


• Predictability and explainability: many AI algorithms are beyond human comprehension, and some AI vendors will not reveal how their programs work in order to protect intellectual property. In both cases, when AI produces a decision, its end users won't know how it arrived there; its functioning is a 'black box'. Requirements for the provision of a clear, comprehensive explanation of how the deployed AI actually works would be welcome. While AI may sometimes function as a black box, work is also being done on AI systems that can explain their own decisions;77
• Understandability: a separate but related goal is mandating that AI algorithms and their outputs are understandable, giving both regulatory authorities and users confidence in their use. While AI should always be controllable, it is not always understandable. The problem for end users and policy makers is that without understanding how the AI program arrives at certain decisions, it could well be introducing biases that are prejudicial to certain segments of the population, or reinforcing existing or historic prejudices, as was discussed earlier;
• Robust testing: mandating pilots and robust testing wherein the AI is deployed in a sandbox environment;
• Human review: giving consumers the right to redress and human review when a decision is made based on AI, including the ability to correct input data.

However, these measures are not free. They entail significant effort and cost on the part of companies developing AI, regulators and users. While these measures all make up part of a useful toolbox, policy makers need to carefully consider when to require them by law and when self-regulatory frameworks or codes of conduct may suffice.

AI will be what we as a society make of it, with its ultimate impact determined by the governance frameworks we adopt. Whilst countries are starting to develop some general rules and/or frameworks for governing AI, these need to be principles-based rather than too prescriptive, given the early stage of AI development. In many cases it may be appropriate to combine national AI policies with local rules that adapt to the needs of a specific city, for instance.

A number of organisations have started developing design principles for the ethical development of an AI ecosystem. This is still work in progress; however, these principles provide a good starting point for policy makers. Table 6.3 details some design principles promoted by three separate bodies.

6.4.6 AI May Become a Threat to Humanity

Many commentators have expressed fear that AI may well become a threat to humanity. The more extreme point to fears popularised by Hollywood films such as The Terminator or The Matrix.

77 DARPA (the Defense Advanced Research Projects Agency of the United States), for instance, is sponsoring work on self-explaining AI.


Table 6.3 AI design principles (Source: World Economic Forum. (2018). How to prevent discriminatory outcomes in machine learning. Global Future Council on Human Rights 2016–2018)

Safety, security, accuracy and verifiability
• Asilomar principles on safe, ethical and beneficial use of AI: Safety—AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
• FATML principles for accountable algorithms: Accuracy—Identify, log and articulate sources of AI error and uncertainty throughout the algorithm and its data sources so that expected and worst-case implications can be understood and can inform mitigation procedures.
• IEEE principles on ethically aligned design: Human Benefit (Safety)—AI must be verifiably safe and secure throughout its operational lifetime.

Transparency, explainability and auditability
• Asilomar: Failure Transparency—If systems cause harm, it should be possible to ascertain why. Judicial Transparency—If systems are involved in key judicial decision-making, an explanation that is auditable by a competent human authority should be made available.
• FATML: Explainability—Ensure that algorithmic decisions, as well as any data driving those decisions, can be explained to end users and other stakeholders in non-technical terms. Auditability—Enable interested third parties to probe, understand and review the behaviour of the algorithm through disclosure of information that enables monitoring, checking or criticism, including through the provision of detailed documentation, technically suitable APIs and permissive terms of use.
• IEEE: Transparency/Traceability—It must be possible to discover how and why a system made a particular decision or acted in a certain way and, if a system causes harm, to discover the root cause.

Responsibility
• Asilomar: Responsibility—Designers and builders of AI systems are stakeholders in the moral implications of their use, misuse and actions.
• FATML: Responsibility—Make available externally visible avenues of redress for adverse individual or societal effects and designate an internal role for the person who is responsible for the timely remedy of such issues.
• IEEE: Responsibility—Designers and developers of systems should remain aware of and take into account the diversity of existing relevant cultural norms; manufacturers must be able to provide programmatic-level accountability proving why a system operates in certain ways.

Fairness and values alignment
• Asilomar: Shared Benefit—AI technologies should benefit and empower as many people as possible. Shared Prosperity—The economic prosperity created by AI should be shared broadly, to benefit all of humanity. Non-Subversion—The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, social and civic processes.
• FATML: Fairness—Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics.
• IEEE: Embedding Values into AI—Identify the norms and elicit the values of a specific community affected by a particular AI and ensure the norms and values included in AI are compatible with the relevant community. Human Benefit (Human Rights)—Design and operate AI in a way that respects human rights, freedoms, human dignity and cultural diversity.

Privacy
• Asilomar: Personal Privacy—People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data. Liberty and Privacy—The use of personal data by AI must not unreasonably curtail people's real or perceived liberty.
• IEEE: Personal Data and Individual Access Control—People must be able to define, access and manage their personal data as curators of their unique identity.

In the face of these risks, different catch-phrases are now being used to describe what a broad approach to responsible AI might look like, including 'human-centred AI', 'AI for good' or 'AI for humanity'. When considering how AI might become a risk, experts think two scenarios are most likely:

• AI harm with intent: the AI is programmed to do something devastating, like autonomous weapons built with AI systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would


be designed to be extremely difficult to simply 'turn off', so humans could plausibly lose control of such a situation.
• AI harm without intent: the AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal, which can happen whenever we fail to fully align the AI's goals with ours. If you ask an obedient AI-driven autonomous car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for.

Stephen Hawking, Elon Musk,78 Steve Wozniak, Bill Gates and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, and have been joined by many leading AI researchers.79 However, there is considerable disagreement over the dangers presented to the human race by AI. At the Tech Crunch conference in 2017, Google's head of AI was quick to dismiss Elon Musk's concerns that AI could present an existential threat to humans or cause a third world war. Nevertheless, given the ability of AI, it is inevitable that various militaries across the world will be looking into AI as a weapon, which other nations will inevitably need to follow, if nothing else than to defend themselves. The world managed to control the development and proliferation of nuclear weapons, but trying to do so with AI will be next to impossible.

AI systems are already set to play an expanded role in USA military strategy and operations in the coming years as the USA DoD puts into practice its vision of a 'Third Offset' strategy,80 in which humans and machines work closely together to achieve military objectives. Paired with developments in robotics, we can now imagine a wholly different kind of warfare, with swarms of drones and autonomous ground vehicles taking the place of boots on the ground. In reaction, a broad international discussion led by NGOs has been advocating the banning of some autonomous or robot weapons. They worry both about how such weapons could change the face of conventional warfare, and about the risk of loss of control, whether through hacking or autonomous agents gone rogue. A UN group of experts has already been convened to produce recommendations.81

Outright bans of autonomous weapons are unlikely to work. AI arms control presents the same difficulties as cyber arms control. The basic technology is relatively accessible and inherently dual-use in nature.

78 Elon Musk poured $1B into OpenAI for more transparent AI research. Elon Musk believes that if AI wakes up, we can regulate it. This assumption is mistaken: as soon as AI reaches a human level of intelligence, it will exceed that level and will not be able to be regulated.
79 A 2017 open letter by 116 AI researchers makes the same argument. See: Futureoflife.org
80 Brundage, M., et al. (2018). The malicious use of artificial intelligence: forecasting, preventing and mitigation. See: https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf
81 Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, 23 October 2018. Ref: CCW/GGE.1/2018/3.


Nuclear or chemical weapons require extensive physical infrastructure to develop and store (and can thus be inspected), while the autonomy and programming of a weapons system is a function of its code, which can be hidden and altered. The key policy questions will be around how to ensure military-grade AI does not get into the hands of those who seek to harm humanity, whether rogue nations or individuals. Governments need to take a lead in developing policies that nurture AI, whilst carefully instigating regulations that ensure it is safe to use and operates in a manner aligned with social norms. Governments must encourage and actively participate in dialogue about the role of AI and what standards are required to ensure there is not a race to the bottom in safety, security and ethics. Government must resist throttling development, even in areas such as autonomous weapons, as such restrictions are unlikely to work and may well isolate the country in the face of more advanced nations with a more laissez-faire approach to such matters.

6.4.7 Cyber Security Threats Likely to Increase

AI is helping organisations to monitor, detect and mitigate cyber security threats, including through spam filters, malicious file detection and malicious website scanning.82 The cybercrime industry is worth nearly one trillion Euros,83 and today many AI tools are becoming available online as open source tools that can be paired with inexpensive malware kits to make sophisticated automated cyber-attack tools.84 Take the case of Stuxnet, an attack originally targeted against Iran's nuclear programme, whose code has shown up in cyber-attacks across the world.

The market for AI-driven cyber security is growing. Every cyber security vendor now prominently advertises their AI capabilities.85 Alphabet recently released Chronicle, a 'cyber security intelligence platform' that throws massive amounts of storage, processing power and advanced analytics at cyber security data to accelerate the search and discovery of needles in a rapidly growing haystack.86

Many organisations currently adopt security systems called Endpoint Detection and Response (EDR) platforms to counter more advanced cyber threats.

82 Tully, P. (2018). Using defensive AI to strip cyberattackers of their advantage. Venturebeat.com, 6 March 2018. See: https://venturebeat.com/2018/03/06/using-defensive-ai-to-strip-cyberattackers-of-their-advantage/
83 Lewis, J. (2018). Economic impact of cybercrime: no slowing down. McAfee.
84 Google now makes it possible to automate building your own AI system. See Knight, W. (2018). Google's self-training AI turns coders into machine-learning masters. Technology Review, 17 January 2018. See: https://www.technologyreview.com/s/609996/googles-self-training-ai-turns-coders-into-machine-learning-masters/
85 E.g. DarkTrace specialises in applying AI to cyber security defence.
86 Oltsik, J. (2018). Artificial intelligence and cybersecurity: the real deal. 25 January 2018. See: https://www.csoonline.com/article/3250850/artificial-intelligence-and-cybersecurity-the-real-deal.html


The EDR market represents a US$500 million industry. These tools are built upon a combination of heuristic and AI algorithms to provide capabilities such as next-generation anti-virus (NGAV) and behavioural analytics. Though these systems are fairly effective against typical human-authored malware, research has already shown that AI systems may be able to learn to evade them. In 2018, IBM demonstrated an AI-powered malware toolkit that could bypass traditional detections and launch highly sophisticated attacks.87

One of the more recent cyber security phenomena to emerge is attackers breaking into a system to exploit its infrastructure, in terms of computing power, to mine cryptocurrencies such as Bitcoin. Some of the most damaging cyber-attacks will be against AI systems themselves, executing adversarial attacks or attempting to alter algorithms. An attack against training data can also be used to alter how an AI functions.88

6.4.8 Mental Health

There's no denying that the Internet has been the most influential force in society in the past 100 years. Never before have we had such unprecedented access to news, knowledge and entertainment from cultures all around the world. At the same time, this wealth of access has led to something called 'digital information overload', where our minds simply aren't able to handle that kind of constant influx of information. When we subject ourselves to information overload, our brains over time adapt to it, learning to expect that kind of constant stimulation. That's why, when people step away from the Internet, life feels so slow and boring. This phenomenon even has its own label: 'novelty addiction'. Information overload isn't just about the amount of data that we have to process, but also the variety of data that we have to sort through. Because the brain grows accustomed to over-stimulation, we start looking for things that are 'new'. According to psychologist Lucy Jo Palladino: 'When overload is chronic, you live in a state of unresolved stress and anxiety, unable to meet ongoing demands to process more information'. In fact, studies on Internet-related depression date back to at least 2010, when Leeds University found a potential link between Internet use and depression. For a lot of people, interacting with Facebook can actually result in a negative mood shift: as they see the highlights of other people's lives, they may feel envious, and too much envy can lead to depression.

87 Townsend, K. (2018). IBM describes AI-powered malware that can hide inside benign applications. Security Week, 13 August 2018. See: https://www.securityweek.com/ibm-describes-ai-powered-malware-can-hide-inside-benign-applications
88 Many of these adversarial attacks are inherently hard to defend against. Some of the first such adversarial attacks we are seeing use audio as a vector, e.g. to hack Amazon's Alexa or Siri. See: Carlini, N., Wagner, D. (2017). Adversarial examples are not easily detected: bypassing ten detection methods. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017.


Digital dementia is another phenomenon described by psychologists as a possible consequence of overusing digital technology, which could result in the deterioration or breakdown of cognitive abilities.89 Overuse of digital technology may also have an impact on personal autonomy, depending on the degree of AI-driven digital assistance that is increasingly relied upon to complete even trivial tasks. This can be seen in young adults who increasingly rely on spell checkers and Google search as an alternative to learning ancient skills and memorising important facts. Google Maps is another case in point, where people are increasingly losing their natural ability to navigate. As a consequence of the growing reliance on digital assistance, basic human capabilities could be lost.

Modern technology is also affecting our sleep. The artificial light from TV and computer screens affects melatonin production and throws off circadian rhythms, preventing deep, restorative sleep and causing an increase in stress and depressive symptoms. It is unlikely policy makers or regulators can do much in this area; however, better education and awareness may well be the answer.

6.4.9 Political Manipulation and Fake News

Public social media profiles are already reasonably predictive of personality details,90 and may be usable to predict psychological conditions like depression.91 Sophisticated AI systems might allow groups to target precisely the right message at precisely the right time for maximum persuasive potential. Such a technology is sinister when applied to voting intention, but pernicious when applied to recruitment for terrorist acts, for example. Even without advanced techniques, 'digital gerrymandering'92 or other forms of advertising might shape elections in ways that undermine the democratic process, as was seen in the USA presidential elections.

The more entrenched position of authoritarian regimes offers additional mechanisms for control through AI that are unlikely to be as easily available in democracies.93 AI systems enable fine-grained surveillance at a more efficient scale.

89 Gwinn, J. (2013). Overuse of technology can lead to digital dementia. 12 November 2013. See: https://www.alzheimers.net/overuse-of-technology-can-lead-to-digital-dementia/
90 Quercia, D., Kosinski, M., Stillwell, D., Crowcroft, J. (2011). Our Twitter profiles, our selves: predicting personality with Twitter. And Kosinski, M., Stillwell, D., Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the USA.
91 Choudhury, M., Gamon, M., Counts, S., Horvitz, E. (2013). Predicting depression via social media. Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media.
92 Choudhury, M., Gamon, M., Counts, S., Horvitz, E. (2013). Predicting depression via social media. Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media.
93 Brundage, M., et al. (2018). The malicious use of artificial intelligence: forecasting, preventing and mitigation. See: https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf


While existing systems are able to gather data on most citizens, efficiently using that data is too costly for many authoritarian regimes. AI systems both improve the ability to prioritise attention and reduce the cost of monitoring individuals. The information ecosystem itself enables political manipulation and control by filtering the content available to users. While such tools could be used to help filter out malicious content or fake news, they could also be used by media platforms to manipulate public opinion. In authoritarian regimes this could be done by the state, whereas in democracies the same technical tools still exist; they simply reside in the hands of dominant corporations. Even without resorting to outright censorship, media platforms could still manipulate public opinion by 'de-ranking' or promoting certain content. For example, Alphabet Executive Chairman Eric Schmidt recently stated that Google would de-rank content produced by Russia Today and Sputnik.94 In 2014, Facebook manipulated the newsfeeds of over half a million users in order to alter the emotional content of users' posts, albeit modestly.95

6.4.10 The Risk of Creating Data Monopolies That Have Immense Power

The greater access to data a firm has, the better positioned it is to solve difficult problems with AI. Machine Learning practitioners have essentially three options for securing sufficient data: (i) they can build the databases themselves, (ii) they can buy the data, or (iii) they can use 'low friction' alternatives such as content in the public domain. The last option carries the risk of introducing bias within the AI algorithms, as discussed earlier. The first two are avenues largely available to big firms or institutions such as Facebook, Apple, Amazon, Netflix, Google or militaries. The reality that a handful of large digital entities possess orders of magnitude more data than anyone else leads to a crucial policy question around data parity. Smaller firms will have trouble entering and competing in the marketplace. The concept of a data commons, as advanced by Outlier Ventures, is becoming more and more relevant.96

Even where organisations might be set up to share data and have a culture of sharing and collaboration, privacy laws and data protection legislation have had a chilling effect on data sharing. Even when sharing is encouraged internally, technically possible with open APIs and legislation is favourable, current data infrastructure makes it difficult to monetise data. Today there is no business model for data sharing.

94 https://bhr.stern.nyu.edu/blogs/2017/11/24/google-announces-it-will-de-rank-russia-today-and-sputnik
95 Goel, V. (2014). Facebook tinkers with users' emotions in news feed experiment, stirring outcry. The New York Times. See: https://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html
96 https://outlierventures.io/research/data-marketplaces-value-capture-in-web-3-0


I am not suggesting that data protection needs to be relaxed; on the contrary, a critical factor for the sustainable development of a digital society is a robust and effective framework for the protection of privacy, so users can continue to have confidence and trust in digital applications and services. What I am suggesting is that governments pay attention to and facilitate the creation of data repositories (appropriately anonymised) that are available to all firms, large and small. This may require data held by dominant digital players being made accessible to others. The following are examples of open data repositories that are essentially provided for training and evaluating AI models:

• CIFAR-10 dataset: part of the Canadian Institute for Advanced Research, a collection of images commonly used to train machine learning and computer vision algorithms;
• COCO: a large-scale object detection, segmentation and captioning dataset;
• Data.gov: maintains a catalogue of over 250,000 open-sourced government datasets in topics such as agriculture, climate, consumer, ecosystems, education, energy, finance, health, local government, manufacturing, maritime, ocean, public safety and science;
• ImageNet project: a large visual database designed for use in visual object recognition software research. More than 14 million images have been hand-annotated by the project to indicate what objects are pictured, and in at least 1 million of the images bounding boxes are also provided;
• MNIST dataset: handwritten digits with a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST;
• Google AI: periodically releases data of interest to a wide range of computer science disciplines for researchers to use;
• Kaggle: provides an extensive collection of thousands of datasets from a wide variety of different sources and domains;
• OpenML: a data set repository that links data to algorithms to teach machines to learn better;
• Pascal VOC datasets: provides standardised image data sets for object class recognition, a common set of tools for accessing the data sets and annotations, and enables evaluation and comparison of different methods;
• AWS Registry of Open Data: provides an extensive collection of thousands of datasets from a wide variety of different sources and domains;
• UC Irvine Machine Learning Repository: maintains and hosts a diverse collection of now over 400 datasets ranging from the 1980s to 2019.

(A short code sketch at the end of this subsection shows how one of these datasets can be loaded.)

The EU has been giving thought to how to create such a data commons and convince industrial players to share their data with start-ups and other enterprises, as well as with government. The first concrete policy deliverable for this data strategy is a framework for common European data spaces, to be published by the end of 2020. These spaces would be designed to encourage companies to pool data. In 2021, the EC will roll out its new Data Act, which will remove barriers and introduce rules for


business-to-business and business-to-government data sharing, and potentially give people enhanced portability rights.
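As promised after the repository list above, the following is a minimal sketch of loading one of those datasets (MNIST) via the Keras datasets API; this is my illustration, assuming TensorFlow/Keras is installed, not code from the book.

from tensorflow import keras

# Load the MNIST handwritten-digit dataset described in the list above:
# 60,000 training examples and 10,000 test examples.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

print(x_train.shape)  # (60000, 28, 28): 28x28 grayscale digit images
print(y_train[:10])   # the first ten labels (digits 0-9)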

6.4.11 Policy Responses to Date

Many governments are realising the adverse consequences of AI and are now starting to examine how these might be minimised through legislation, regulation or multi-party collaboration. As part of the 2018 G7 process, Canada and France announced that they will create a multi-stakeholder International Panel on Artificial Intelligence that can become a global point of reference for understanding and sharing research results on AI issues and methodologies, as well as convening international AI initiatives.97

The USA, it would seem, wants to go it alone and define the rules of the road when it comes to AI. The USA has so far rejected working with other G7 nations on a project known as the Global Partnership on AI, which seeks to establish shared principles and regulations. Instead, the USA has released a set of 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector. The approach by the USA can be described as light touch. The ten principles primarily relate to:

• Enhancing public trust: by promoting reliable, robust and trustworthy AI;
• Public engagement: encouraging public participation and engagement in the rule-making process;
• Evidence-based policy making: using scientific integrity;
• Risk-assessed use: being clear on which risks are acceptable and which are not;
• Impact assessment: seeking to balance the societal impacts of all proposed regulations;
• Flexible approach: being adaptable to changes in AI and its applications;
• Demonstrating fairness and non-discrimination: ensuring AI systems do not discriminate illegally;
• Disclosure and transparency: on when and how AI is used in decision making;
• Safety and security at its core: all data used by AI systems should be safe and secure;
• Consistent and predictable regulation: across different government agencies.

In practice, federal agencies will now be required to submit a memorandum to the White House Office of Science and Technology Policy (OSTP) to explain how any proposed AI-related regulation satisfies these principles. Though the office doesn't have the authority to enforce regulations, the procedure could still provide the necessary pressure and coordination to uphold a certain standard.

97 https://pm.gc.ca/en/news/backgrounders/2018/12/06/mandate-international-panel-artificial-intelligence


The Institute of Electrical and Electronic Engineers' Global Initiative on Ethics of Autonomous and Intelligent Systems was launched in April 2016 to incorporate ethical aspects of human well-being that may not automatically be considered in the current design of digital systems.98 In addition, the World Economic Forum's Centre for the Fourth Industrial Revolution AI and Machine Learning Portfolio has begun work on three artificial intelligence governance projects.99

In 2018, Microsoft published six principles for developing and deploying facial recognition technology that it now follows to address concerns regarding the misuse of such technology.100 Amazon has taken it one step further, recently unveiling 'Our Positions' by Michael Punke, VP, Global Public Policy, AWS, which sets out its views on topics including climate change, immigration and tax policy.101

The European Commission released its White Paper on AI in January 2020, which favours a risk-based approach with sector- and application-specific risk assessments and requirements. The EC is proposing that a set of binding requirements would apply to developers and users of high-risk AI (irrespective of whether they are based in the EU or not—like the GDPR). Together with the White Paper, the EC released a series of accompanying documents, including a 'European strategy for data' ('Data Strategy') and a 'Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics' ('Report on Safety and Liability').

The papers suggest that to ensure high-risk AI meets the mandatory requirements, a 'prior conformity assessment' should be carried out. This could include procedures for testing, inspection or certification, and checks on algorithms and on the data sets used during development. Additional ongoing monitoring may also be mandated, where appropriate. The conformity assessments would be carried out by notified bodies, identified by each member state.

According to the White Paper, the key issue for any future legislation will be to determine the scope of its application. The assumption is that any legislation would apply to products and services relying on AI. Furthermore, the EC identifies 'data' and 'algorithms' as the main elements that compose AI, but also stresses that the definition of AI needs to be sufficiently flexible to provide legal certainty while allowing the legislation to keep up with technical progress. The EC appears to favour a context-specific, risk-based approach instead of a GDPR-style 'one size fits all' approach.

98 https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
99 https://www.weforum.org/communities/artificial-intelligence-and-machine-learning
100 Microsoft. (2018). Six principles for developing and deploying facial recognition technology. See: https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/
101 https://www.aboutamazon.com/our-company/our-positions


An AI product or service will be considered 'high-risk' when two cumulative criteria are fulfilled: (i) Critical sector: the AI product or service is employed in a sector where significant risks can be expected to occur; and (ii) Critical use: the AI product or service is used in such a manner that significant risks are likely to arise, e.g. where the use of AI produces legal effects or leads to a risk of injury, death or significant material or immaterial damage. For AI which is not high-risk, the EC has set out the option of introducing a voluntary labelling scheme.

The papers also propose a review of how product and software risks and liability may need to change. The EC proposes to adapt and clarify existing rules in the case of stand-alone software placed on the market as it is, or downloaded into a product after its placement on the market. The EC recognised, both in the White Paper and in the Report on Safety and Liability, that current product safety legislation already supports an extended concept of safety, protecting against all kinds of risks arising from a product according to its use. However, provisions explicitly covering the new risks presented by emerging digital technologies like AI, the IoT and robotics are likely to be introduced to provide more legal certainty.

6.5 Understanding How AI Works

Having described its applications and potential, it is worth describing the workings of AI and demystifying it a little. Early on, the AI community split into two camps. One pursued so-called rule-based AI, while the other built statistical pattern recognition systems. The former sought to build AI the way adults learn a second language; the latter tried to build AI in much the same way as children learn their first language. Knowledge in a rule-based expert system is represented by IF-THEN production rules. Figure 6.1 illustrates the differences.

Whilst the former made some progress early on, much faster than the latter, over time it has become clear that the latter is more formidable. It is impossible to program all the possible rules, and it is even more of a challenge when rules are not set in stone and, in the real world, divergence from these rules may be acceptable to society in certain circumstances. A rule-based computer might know what a cat is from a picture of a cat, but change one hair and suddenly the computer tells you it's not a cat. Humans are very good at grouping similar things under a single name, but computers, operating in a world of ones and zeroes, are not.

Fig. 6.1 Rules-based versus statistical AI technologies

[Figure: rules-based AI takes rules and data as inputs and produces answers; statistical pattern recognition takes data and answers as inputs and produces rules]
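To make the contrast concrete, here is a minimal illustrative sketch (mine, not the author's code; the animal attributes and the toy weight data are invented) of an IF-THEN production rule alongside a crude statistical learner that derives its rule from data and answers:

# A hypothetical, minimal contrast between the two camps described above.

# Rule-based AI: hand-written IF-THEN production rules (brittle by design).
def is_cat_rule_based(animal):
    # Every rule must be anticipated in advance by a human programmer.
    if animal.get("whiskers") and animal.get("says") == "meow":
        return True
    return False

# Statistical pattern recognition: a rule is inferred from (data, answers).
def train_threshold(examples):
    # examples: list of (weight_kg, is_cat) pairs; learn the weight cut-off
    # that best separates cats from non-cats in the training data.
    best_threshold, best_accuracy = 0.0, 0.0
    for threshold in [w for w, _ in examples]:
        correct = sum((w <= threshold) == label for w, label in examples)
        accuracy = correct / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

data = [(4.0, True), (5.5, True), (30.0, False), (25.0, False)]
print(train_threshold(data))  # a learned rule: "cat if weight <= 5.5"

The rule-based classifier fails the moment an example falls outside its hand-written rules, while the statistical learner simply re-fits when given new examples.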


It's problems like these that gave birth to the field of Machine Learning ('ML'). While ML and AI are often used interchangeably, ML is more accurately understood as one method of achieving AI. An ML system is trained rather than explicitly programmed. It is presented with many examples relevant to a task, and it finds statistical structure in these examples that eventually allows the system to come up with rules for automating the task. ML uses statistical techniques to give computers the ability to 'learn' by creating new mathematical algorithms. For instance, if you wished to automate the task of tagging your holiday pictures, you could present an ML system with many examples of pictures already tagged by humans, and the system would learn statistical rules for associating specific pictures with specific tags. ML exhibits comparatively little mathematical theory; ML algorithms aren't usually creative in finding these transformations, they're merely searching through a predefined set of operations, called a hypothesis space.

In its early days, a core technique, now a staple, emerged: classification. This technique allowed computers to be taught to distinguish between objects with some inherent variation, such as the hairs on a cat. It works much as you might teach a child: you show an AI pictures of different cats and it starts to learn the similarities while discarding the differences. You do the same with pictures of dogs, and eventually the AI has a pretty good idea of what cats and dogs are. Classification is an example of supervised learning, which means a human has to be there to teach the AI what's what. Unsupervised learning is the opposite, where no human interaction is required for the learning process. Clustering, the most fundamental and important technique for unsupervised learning, doesn't demand any prior knowledge of what a cat or dog might look like. Clustering means grouping objects based on patterns in data. It might not sound complicated, but for a computer it means understanding the distinction between a cat and a dog without ever having been taught what either is. All you need is data, and lots of it.

• Supervised (classified) learning: consists of learning to map input data to known targets (also called annotations), given a set of examples (often annotated by humans). Generally, almost all applications of Deep Learning in the spotlight these days belong in this category, such as optical character recognition, speech recognition, image classification and language translation (although Deep Learning can also use unsupervised learning). A simple ML algorithm called 'naive Bayes' separates legitimate e-mail from spam e-mail today using such a technique;
• Unsupervised (clustered) learning: this branch of ML consists of finding interesting transformations of the input data without the help of any targets. Unsupervised learning is the bread and butter of data analytics, and it's often a necessary step in better understanding a dataset before attempting to solve a supervised learning problem. Dimensionality reduction and clustering are well-known categories of unsupervised learning;
• Reinforcement learning: long overlooked, this branch of ML recently started to get a lot of attention after Google DeepMind successfully applied it to learning to


play Atari games (and later, learning to play Go). This technique uses an iterative process in which the algorithm is adjusted according to reward signals received from its environment, so that its outputs improve over time.
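To ground these categories, the following is a toy sketch (assuming scikit-learn is installed; the word-count features and labels are invented) of the supervised naive Bayes spam filter mentioned above, alongside unsupervised k-means clustering of the same data without any labels:

import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans

# Supervised learning: inputs paired with known targets (spam or not).
# Toy features: [count of the word "free", count of the word "meeting"].
X_train = np.array([[5, 0], [4, 1], [0, 3], [1, 4]])
y_train = np.array([1, 1, 0, 0])  # 1 = spam, 0 = legitimate

clf = MultinomialNB()
clf.fit(X_train, y_train)          # learn statistical structure from examples
print(clf.predict([[6, 0]]))       # -> [1]: classified as spam

# Unsupervised learning: no targets, just grouping by patterns in the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(X_train))  # two clusters found without any labels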

6.5.1 Artificial Neural Networks

Some claim that a neural network aims to replicate the model of reasoning used by the human brain. The brain consists of a densely interconnected set of nerve cells, or basic information processing units, called neurons. The human brain incorporates nearly 10 billion neurons and 60 trillion connections, or 'synapses', between them. Our brain can be considered a highly complex, non-linear and parallel information processing system. Information is stored and processed in a neural network simultaneously throughout the whole network, rather than at specific locations. Learning is a fundamental and essential characteristic of biological neural networks, and the ease with which they can learn led to attempts to emulate a biological neural network in a computer.

An Artificial Neural Network (ANN) consists of a number of very simple processors, also called neurons, which are analogous to the biological neurons in the brain. The neurons are connected by weighted links passing signals from one neuron to another. The output signal is transmitted through the neuron's outgoing connection, which splits into a number of branches that transmit the same signal. The outgoing branches terminate at the incoming connections of other neurons in the network. Knowledge in neural networks is stored as synaptic weights between neurons. It is embedded in the entire network and cannot be broken into individual pieces, and any change of a synaptic weight may lead to unpredictable results. A neural network is, in this sense, a black box for its user.
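A single artificial neuron as described above can be sketched in a few lines of Python (an illustrative toy with made-up weights, not a biological model): it sums its weighted inputs and passes the result through an activation function.

import numpy as np

def sigmoid(x):
    # Activation function: squashes the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    # An artificial neuron: the weighted sum of incoming signals plus a bias,
    # passed through an activation; the output is then sent on to other neurons.
    return sigmoid(np.dot(inputs, weights) + bias)

signals = np.array([0.5, 0.9, -0.3])   # incoming signals from other neurons
weights = np.array([0.8, -0.2, 0.4])   # 'synaptic' weights on each link
print(neuron(signals, weights, bias=0.1))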

6.5.2 Deep Learning

Deep Learning uses multiple layers of artificial neural networks to simulate human brain activity. However, Deep Learning models are not models of the brain; there is no evidence that the brain implements anything like the learning mechanisms used in modern Deep Learning models. Deep Learning has enabled a rise in the technology known as computer vision, where machines skilled at image recognition, comparison and pattern identification 'see' with similar acuity to human eyes, and then connect what they see based on previously examined training data. Another form of AI technology, Natural Language Processing (NLP), does exactly as the name suggests: it interprets and interacts with real-time dialogue. The goal of NLP, which is often combined with speech recognition technologies, is to interact with individuals through dialogue, either reacting to prompts or providing real-time translation between languages. Deep Learning is now used by many

222

6

Data Processing and AI

technology companies including Google, Microsoft, Facebook, IBM, Baidu, Apple, Adobe, Netflix and NVIDIA. Computer vision and NLP are not the only subsets of AI technologies that are driving important advancements in the field, but these two often underpin other applications of AI. For example, robotics combines computer vision, NLP and other technologies to train robots to interact with the world around them in generalisable and predictable ways, facilitate manipulation of objects in interactive environments and interact with people. The deep in Deep Learning isn’t a reference to any kind of deeper understanding achieved by the approach; rather, it represents the idea of successive layers of representations. The number of layers is called the depth of the model. Deep Learning models work through an iterative process whereby the output is observed and checked to see how it corresponds to the output expected, with the inputs consecutively changed until the output does resemble what was expected. This is the job of the loss function of the network, also called the objective function. The loss function takes the predictions of the network and the true target (what you wanted the network to output) and computes a distance score, capturing how well the network has done at achieving its goal. The fundamental process in Deep Learning is to use this score as a feedback signal to adjust the value of the weights a little, in a direction that will lower the loss score for the current example. This adjustment is the job of the optimiser, which implements what’s called the Back Propagation Algorithm: the central algorithm in Deep Learning, as illustrated in Fig. 6.2. Initially, the weights of the network are assigned random values, so the network merely implements a series of random transformations. Naturally, its output is far from what it should ideally be and the loss score is accordingly very high. But with every iteration, the weights are adjusted a little in the correct direction, with the corresponding decrease in the loss score. This is the training loop, which, repeated a sufficient number of times, yields weight values that minimise the loss function. In the early days, doing Deep Learning required significant software coding expertise. Today, basic Python scripting skills suffice to do advanced Deep Learning research. This has been driven most notably by the development of Theano and then TensorFlow, two symbolic tensor manipulation frameworks for Python, greatly simplifying the implementation of new models and by the rise of user friendly libraries such as Keras.102 Choosing the right objective function for the right problem is extremely important. The neural network will take any shortcut it can, to minimise the loss; so if the objective doesn’t fully correlate with success for the task at hand, the neural network will end up doing things you may not have wanted. Imagine an AI trained with the chosen objective function: ‘maximising the average well being of all humans alive’.

102 Keras is a deep-learning framework for Python. Keras was initially developed for researchers, with the aim of enabling fast experimentation. Keras is distributed under the permissive MIT license, which means it can be freely used in commercial projects. Keras has well over 200,000 users, including users at Google, Netflix, Uber, CERN, Yelp and Square.

Fig. 6.2 The workings of multiple layer neural networks: input X passes through successive weighted layers of data transformation to produce predictions (Y); the loss function compares the predictions with the target (Z) to compute a loss score, which the optimiser uses as a feedback signal to update the weights (Z-Y)

To make its job easier, the AI might choose to kill all humans except a few healthy ones and focus on them, because average well-being isn't affected by how many humans are left. Neural networks are in this sense ruthless in lowering their loss function, so choosing the objective function is vital. The fears of an AI Armageddon are driven by these inherent properties. For all the comparisons with the human brain, however, today's neural networks have fewer neurons than the nervous system of even relatively primitive vertebrate animals like frogs. Ideas that AI based on neural networks will any time soon replicate the intelligence levels of humans are rather far-fetched.
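As a concrete illustration of the training loop described above (predictions scored by a loss function, with an optimiser adjusting the weights via back propagation), here is a minimal sketch using the Keras library bundled with TensorFlow; the random toy dataset, layer sizes and training settings are illustrative assumptions, not a recipe from the text.

```python
# A minimal Keras training loop: forward pass, loss score, weight update.
# The toy dataset and model are illustrative only.
import numpy as np
from tensorflow import keras

# Toy data: 100 samples with 4 input features and a binary target
x = np.random.rand(100, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")  # a stand-in 'true target'

# Successive layers of representations: the 'depth' of the model
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# The loss function scores predictions against targets; the optimiser uses
# that score as a feedback signal to adjust the weights (back propagation)
model.compile(optimizer="sgd", loss="binary_crossentropy")

# Each epoch repeats the loop: predict, compute loss, adjust weights a little
model.fit(x, y, epochs=10, verbose=0)
print(model.predict(x[:3], verbose=0))  # predictions after training
```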

6.5.3 Limitations of AI

One critical limitation of AI is that, as a data driven approach, it fundamentally relies on the quality of the underlying data. There are no guarantees that a computer leveraging AI algorithms can detect a pattern never previously encountered, or even scenarios that are only slightly different; as uncertainty grows, these tools become less useful. For example, a highly publicised ML algorithm was able to identify objects (e.g. dog, cat, truck) in images with less than 16% accuracy in an environment containing some 22,000 object categories. When that world is collapsed into 1000 categories of objects, other algorithms can achieve up to 70% accuracy. In both of these studies, 10 million labelled images were required to 'train' the algorithms. In comparison, humans need far fewer examples to 'learn' from and have the ability to distinguish and accurately name far more than 22,000 individual objects. This is the key reason why AI is data hungry: in the absence of massive training data, AI algorithms would be next to useless.

When people compare AI with the workings of the human brain, they tend to forget that the sensory and motor skills we use in everyday life require enormous computation and sophistication. Over millions of years, evolution has endowed humans with billions of neurons devoted to the subtleties of recognising a friend's face, distinguishing different types of sounds and using fine motor control. In contrast, the abstract reasoning that we associate with higher thought, like arithmetic or logic, is a relatively recent skill, developed over only a few thousand years. It often requires simpler software and less computer power to mimic or even exceed human capabilities on these types of tasks. It is the latter that AI is good at; the former is a long way off. Whilst AI may be useful in pattern recognition and reasoning, it is not very helpful when it comes to common sense or intuition. Computers are good at generating answers, not posing interesting new questions; as Voltaire is claimed to have said: 'judge a man by his questions, not his answers'. From this perspective, current AI is far from the level of intelligence needed to be comparable with human intelligence. As Hans Moravec has observed: 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers and difficult or impossible to give them the skills of a one year old when it comes to perception and mobility'.103

Whilst much progress has been made with AI, real progress in the area of General AI remains painfully slow. Within a recent AI competition, DeepMind104 entered AlphaZero, a new algorithm based on reinforcement learning. Each machine was given 1 min of thinking time, during which AlphaZero searched 80,000 positions per second for chess and 40,000 per second for Shogi, whilst others such as Stockfish (for chess) searched 70 million per second and Elmo (for Shogi) searched 35 million per second. In effect, AlphaZero expended 1000x fewer resources to arrive at a better solution than its opponents, thanks to its use of a deep neural network to tell it where to search. However, for each game DeepMind trained a different instance of AlphaZero, i.e. each instance was an example of an application of narrow AI. DeepMind's biggest achievement was the creation of an algorithm that can beat anyone at Go, Chess and Shogi while using less than 1% of the resources that others use (through a reinforcement learning system based on Deep Learning). This has substantial implications for running AI on smartphones; nevertheless, it still requires a huge amount of data. Google's algorithms played millions of games of Go to learn, far more than a human would require to become world class at Go. That kind of computation requires a massive amount of computing power; the computer time alone would be too costly for most users to consider. By one estimate, the training time for AlphaGo cost US$35 million.

Many believe that with continuous advances in computing, these costs will dramatically fall. It is this continuous exponential explosion in computing that many believe will fuel General AI in the next few decades. I suspect, however, that this may be a rather optimistic outlook. The laws of physics suggest General AI is unlikely to be seen anytime soon: 10 nm is currently the cutting edge geometry for semiconductors, and beyond around 5 nm the laws of physics start to misbehave. This means that doubling the number of transistors in the same area of silicon every 18 months will no longer be possible using the transistors we know. It is this doubling that has underpinned the exponential improvement in computer capability and AI over the last 40 years. The pace of improvement of computer capability will slow to the point where General AI may become unachievable, unless of course we move to a world of quantum computing.

103 Moravec, H. (1990). Mind children: The future of robot and human intelligence. Harvard University Press.

104 DeepMind is the UK based AI company that Google bought at vast expense in 2014 and which has gone on to be one of the leaders in its field.

Part V Disruptive Data Applications

[Framework diagram: Applications and Use Cases; Data Processing and AI; Data Integrity, Control and Tokenization; Data Capture and Distribution; Data Connectivity: Telecommunications and Internet of Things]

7 Disruptive Data Applications

The combination of ubiquitous high speed connectivity, a plethora of devices capturing vast amounts of data, the secure distribution of such data, Artificial Intelligence (AI) deployed to make sense of data across the ecosystem and the automation of many business processes is leading to significant innovation in the application space. New business models are being created, old ones are being tweaked and reworked and previously un-served customers are being delivered products and services, some that they didn't even know they wanted. Many of these new business models seek to leverage two-sided markets, as platform businesses such as Uber, Amazon and Facebook are doing very successfully. Some combine these technologies to create new delivery models, as autonomous vehicles and drones are attempting to do, whilst others use digital technologies to build digital twins that model the real world and gather insights from a safe distance, more cost effectively than building real world products. Many are combining technological advances to build virtual reality or augmented reality which, whilst today used mainly for entertainment, have the potential to transform the healthcare, industrial and retail markets. As advances in nanotechnology and quantum computing become mainstream, further innovative applications will be introduced that we may not have even dreamed of today. Some businesses are attempting to simplify what is becoming an increasingly complicated world through digital assistants, although their real purpose may be to gather as much data as possible, to reinforce their data monopolies, drive their AI algorithms and become even bigger, dominating global markets. This section examines these disruptive applications, describing their potential as well as the policy and regulatory questions they raise.



7.1 Digital Assistants

In the beginning, there was simplicity; life was hard but simple. The technology consisted of what today seems rather rudimentary: simple hand tools, steam power and then electric power. That journey evolved over a period of some hundred years to today, where we have unbelievable levels of complexity, where innovation is happening at breakneck speed and where trying to understand this new revolution is next to impossible. The next phase of the evolution will most likely see a movement back to some form of simplicity: not in the individual technologies per se, but in how we as a conscious society see that technology at work. Our brains and minds, although vast in their computational abilities (the brain can store over 100 terabytes of information), cannot fathom how to use and integrate the sheer complexity and chaos seen today, with the plethora of technologies, information and content at our fingertips. In order to bring some form of control to this chaos, we will see a level of intelligence that pervades these separate technologies and systems. We are seeing this today in the form of intelligent agents, such as Google's Assistant, Amazon's Alexa, Apple's Siri and Microsoft's Cortana.1

Digital assistants can be seen everywhere now; they more than doubled their points of presence between 2018 and 2019, although ubiquitous presence does not necessarily mean a great user experience. Outside of asking trivial questions, setting timers, playing music and turning lights on and off, digital assistants remain a novelty. They still lack the intuitiveness and the ability to perform in noisy environments that have the potential to make them really useful. While the noise environment problem is a relatively simple fix, intuitiveness is a real AI problem, and here making progress remains challenging, although not insurmountable given the passage of time and the pace of AI progress.

The leaders in this field remain Google and Amazon, with Baidu also making significant progress. Siri was the first digital assistant to hit the mainstream, but Apple's focus on privacy and the way in which it has implemented Siri locally has led it to fall behind. Siri has three fundamental problems (from an AI perspective, that is, not from a personal privacy perspective):

• First, differential privacy: the system that Apple uses to ensure that user data in its cloud is unintelligible to anyone but the user himself. It works by inserting random noise into the data as it is uploaded to the cloud, such that it makes no sense when viewed in isolation; however, when put together with thousands of other pieces of data, the random pieces cancel each other out. This gives Apple the ability to understand broader trends in usage, but not what individual users are doing. Apple has used this to promote its security and privacy, and as a reason to choose iOS over Android. However, it limits the quality of the algorithms that Apple can train, as the dataset is compromised (a toy illustration of the idea follows below);

• Second, fragmentation: Apple has fragmented the Siri platform. By implementing the agent on the device rather than in the cloud, Apple has ensured that the user experience of Siri varies from one device to another. This also limits the ability to use Siri to control one device from another;

• Third, secrecy: Apple is a closed and secretive organisation, which does not work well when it comes to the AI community. Siri is driven by AI, and the AI community works on openness and collaboration, which is at odds with the way that Apple does things. For example, Google's DeepMind published its method for creating AlphaGo. Apple has opened up a little and has begun sharing and publishing some of its methodologies for Siri, however that may not be enough if Apple wishes to benefit from the learnings of others in the AI community.

As customers start to recognise the value of their own data and the amount of data being freely given away to these digital assistant operators, we may well see a change in mindsets and the need for these operators to revisit their business models. That process may take considerable time, during which these companies continue to gather vast amounts of personal data without users having a clue how that information is actually being harvested and used. Policy makers will need to take notice. Simply pushing basic customer consent is an easy answer, but one that falls short of what is required. Many of the concepts discussed within the AI section in the previous chapter will be equally relevant when it comes to digital assistants and how data is collected, used and potentially shared with other participants along the value chain and, where mandated, with competitors. Mandating some form of wholesale access to data and enhancing data portability will not be easy, however. The level of disaggregation of customer consent will increase and may become increasingly difficult to fathom. This can only get exponentially more difficult to monitor and control as data gets shared with other third parties across the ecosystem. Perhaps some form of Blockchain application, where the provenance of data can be proven, will be the ultimate answer. However, where data is exploding at an unbelievable scale, current Blockchain solutions simply cannot cope. Perhaps in time, alternative consensus mechanisms will be developed that are light yet secure enough to enable data provenance. Until then, customers will have little real control of their data; not just their direct personal data, but also their meta data.

1 In August 2017, Microsoft and Amazon entered into a partnership where Alexa and Cortana would offer access to each other's services. The idea was that users would have another easy conduit from which to access Alexa, while Cortana would be provided with a badly needed escape from the PC, where it has been stuck since the collapse of Windows Phone. Microsoft has better AI, but Amazon has the assistant, the devices, market acceptance and the smart home. If Microsoft throws its weight behind Alexa on the PC, it has a better chance of keeping Google at bay.
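To make the differential privacy idea described in the first bullet above more tangible, here is a toy 'randomised response' sketch in Python: noise makes any single user's answer deniable, yet cancels out in aggregate. The mechanism and parameters are a generic textbook illustration, not Apple's actual implementation.

```python
# Toy randomised response: individual answers are noisy and deniable,
# but the population-level statistic is still recoverable.
import random

def randomised_response(true_value: bool) -> bool:
    # With probability 0.5 report the truth; otherwise report a coin flip
    if random.random() < 0.5:
        return true_value
    return random.random() < 0.5

# 10,000 users, 30% of whom truly have the attribute being measured
truth = [random.random() < 0.3 for _ in range(10_000)]
reports = [randomised_response(t) for t in truth]

# P(report True) = 0.5 * true_rate + 0.5 * 0.5, so invert the formula
observed = sum(reports) / len(reports)
estimated_true_rate = (observed - 0.25) / 0.5
print(round(estimated_true_rate, 3))  # close to 0.30 in aggregate
```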


7.2 Virtual and Augmented Reality

Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) are changing the way in which people perceive the digital world. VR is the use of computer technology to create a simulated environment. Unlike traditional user interfaces, VR places the user inside an experience: instead of viewing a screen in front of them, users are immersed and able to interact with 3D worlds. AR is the blending of interactive digital elements, like dazzling visual overlays or other sensory projections, into real world environments. MR combines the best of both virtual reality and augmented reality.

Many organisations, primarily in entertainment and gaming, are today utilising VR, AR and MR. Lego's AR Digital Box in-store kiosk, for example, allows customers to 'see' finished products by holding the product box close to a screen, and IKEA's App allows shoppers to place digital furniture and other products from the catalogue as overlays within their rooms at home. Augmented reality is allowing brick and mortar retailers to take their showroom experiences to another level, creating unique experiences that blend digital and physical shopping. The virtual layer can provide a platform that allows improved communication, deeper engagement and better personalisation. Virtual 'try-on' experiences are being used by brands using AR to raise customer engagement. In 2014, L'Oreal released its Makeup Genius App, which allowed shoppers to virtually try on different shades of blush and mascara before making a purchase decision. Once the makeup is 'applied' on the face through the smart phone camera, the L'Oreal facial recognition system follows face movement and angles, showing what the makeup would look like from different perspectives. By early 2016, Makeup Genius had been downloaded more than 20 million times. Nike likewise is using AR and image recognition to connect print advertising with its online store: upon pointing a smart phone at a Nike advertisement in 'Runner's World', a user can jump to the shopping cart on Nike's website. Similarly, British online fashion retailer ASOS uses mobile technology to make its magazine advertisements instantly shoppable.

According to recent estimates by Goldman Sachs, VR and AR are expected to grow into a US$95 billion market by 2025. The strongest demand for these technologies currently comes from industries in the creative economy, specifically gaming, live events, video entertainment and retail; over time, however, many VR and AR applications will find their way into a number of sectors, including healthcare and education.

Unfortunately, today the use cases are limited. For all the hype surrounding AR and VR, progress and adoption have been slow. VR is the easiest of the three to create, but even it remains plagued by weight, comfort, user experience, cabling and nausea issues that have limited its uptake. AR and MR are even more technically challenging, which has resulted in a less than ideal user experience despite billions of dollars of investment on an annual basis. Progress in VR and AR has remained very much an enterprise story, where user experience and clunky hardware are less important. Imagine workers in factories or manufacturing firms being able to see the relevant specifications or installation process for any maintenance part they are looking at, simply by using VR or AR. This has the potential to save a lot of time wasted in finding the right manuals or looking up information on parts.

In the consumer market, users pay for the experience, meaning that if it is poor, they simply will not buy the product. Apple has admitted that real AR is years away. Progress is unlikely to be seen in the consumer market until head units are available that are no more intrusive to wear than a regular pair of glasses, and a full field view of the virtual world superimposed upon the real world becomes a reality. Once this happens, a vibrant ecosystem of developers will need to be nurtured to ensure that the experience offered is both broad and deep. These developments will surely come; they are just not here today. Imagine walking down the street and, when you look at a restaurant with your augmented reality glasses, reviews or the food and drinks menu pop up in real time. With 5G and video compression technologies becoming a reality, it will be possible to move the VR computation to a remote unit (hosted in an edge computer at the operator's tower) that can stream the visuals wirelessly to fairly light eyewear. Today's units are clunky because they need to contain the computing resources.

From a policy perspective, a number of issues will be raised by the increasing use of VR, AR and MR. Much of the value of AR or MR comes from its ability to contextualise information by overlaying text, images and other artefacts onto physical objects. This has the potential of infringing on a copyright owner's exclusive rights to reproduction and alteration. A more immediate problem, as seen with the uptake of Pokémon Go, was people trespassing onto other people's property.

7.3 Digital Twins

A digital twin represents a real world entity or system in digital form: it is effectively a replica, described by data, of physical assets, processes and systems, which helps organisations understand, predict and optimise their performance. A computer program takes real world data about a physical object or system as inputs and produces as outputs predictions or simulations of how that physical object or system will be affected by those inputs. It combines design and engineering details with operating data and analytics about anything from a single part, to multiple interconnected systems, to an entire manufacturing plant. Digital twins attempt to mirror reality and, it is hoped, can detect problems that would otherwise remain imperceptible, in areas such as aeronautics, automobiles, production systems and healthcare. As an exact digital replica of something in the physical world, digital twins are made possible thanks to IOT sensors that gather data from the physical world and Artificial Intelligence (AI) which allows this sensory data to be reconstructed. While the concept of a digital twin has been around for many years, it is the advances in IOT technology and AI that have made it affordable and accessible to many more businesses.
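As a highly simplified sketch of this input-to-prediction loop, the Python class below models an imaginary pump: a crude physics model is stepped forward, continually corrected with real sensor readings, and then queried for a maintenance prediction. The model, coefficients and readings are all illustrative assumptions.

```python
# A toy digital twin: a simple physics model of a pump, corrected by
# real-world sensor readings and queried for predictions. Illustrative only.
from dataclasses import dataclass

@dataclass
class PumpTwin:
    temperature: float = 20.0  # modelled temperature, degrees C
    wear: float = 0.0          # accumulated wear estimate, 0..1

    def simulate_step(self, load: float) -> None:
        # Crude physics: temperature rises with load and decays towards
        # ambient; wear accumulates when the pump runs hot
        self.temperature += 0.1 * load - 0.1 * (self.temperature - 20.0)
        self.wear += 0.001 * max(self.temperature - 60.0, 0.0)

    def ingest_sensor(self, measured_temp: float) -> None:
        # Correct model drift with the latest IOT reading (simple blending)
        self.temperature = 0.8 * self.temperature + 0.2 * measured_temp

    def needs_maintenance(self) -> bool:
        # The twin's prediction about the physical asset
        return self.wear > 0.5

twin = PumpTwin()
for load, sensor_temp in [(80, 62.0), (90, 71.5), (95, 80.2)]:
    twin.simulate_step(load)         # advance the model
    twin.ingest_sensor(sensor_temp)  # reconcile with reality
print(twin.temperature, twin.needs_maintenance())
```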


They are built by specialist data scientists or mathematicians, who research the physics that underlies the physical object or system being mimicked and use that data to develop mathematical models that simulate the real world in the digital space. The concept originated at NASA during the early years of space exploration, to keep track of complicated machinery. Using digital mirroring technology, NASA could replicate real world systems and equipment. NASA used pairing technology, the precursor to digital twin technology, from the earliest days of space exploration to solve the problem of operating, maintaining and repairing systems when you aren't physically near them. This was precisely how engineers and astronauts on Earth determined how to rescue the Apollo 13 mission. Today, digital twins are used at NASA to explore next-generation vehicles and aircraft.

As IOT becomes more mainstream, digital twins will receive inputs from a greater number of sensors in the real world. These digital twins will be able to help optimise IOT deployments for maximum efficiency, as well as help figure out where things should go or how they will operate before they are physically deployed. The British government is creating a digital twin of its entire national infrastructure to prepare for future challenges. Gartner has identified digital twins as a top technology trend, with IDC forecasting that thirty percent of Global 2000 companies will use the technology by 2020.2 Deloitte forecasts the global market for digital twin technologies will reach US$16 billion by 2023. IDC suggests that by 2020, sixty percent of discrete manufacturers will use digital twins of connected products with analytics to track performance and usage for better product and service quality.3 Chevron expects to save millions of dollars in maintenance costs from the digital twin technology it will have deployed on equipment by 2024 in oil fields and refineries. Digital twin technology has even been deployed to refine Formula 1 car racing: in a sport where every second counts, a simulation can help the driver and the car team know what adjustments can improve performance. Boeing has been able to achieve up to a forty percent improvement in first-time quality of the parts and systems it uses to manufacture commercial and military airplanes by using the digital twin asset development model.4

Siemens Healthineers is developing intelligent algorithms that generate digital models of organs based on vast amounts of data. Cardiologists tested the use of these algorithms in cardiac resynchronisation in a research project at the University of Heidelberg. Cardiac resynchronisation therapy is a treatment option for patients suffering from chronic congestive heart failure. It involves an advanced pacemaker that resynchronises the beating heart using two electrodes, one implanted on the right ventricle, the other on the left ventricle. The Heidelberg cardiologists created a digital twin of the patient's heart, virtually implanted the electrodes and virtually generated electrical pulses to test whether corrective surgery was likely to be successful in the patient.

Clearly the benefits of digital twins can be substantial; however, that assumes the digital twin is a truly accurate representation of the physical world. When a digital twin gets this wrong, especially when we are talking of planes, autonomous vehicles or medical grade technology, the consequences can be deadly. Policy makers may need to instigate regulations to guarantee the accuracy of digital twins in high-risk sectors, through the setting of standards, greater transparency in algorithms and testing data, and perhaps by requiring manufacturers to demonstrate robustness through predefined tests.

In 2018, the UK Centre for Digital Built Britain (CDBB) published the Gemini Principles to begin enabling alignment on the approach to developing the UK infrastructure twin. Enshrined in the values driving the digital twin initiative is the notion that all digital twins must have clear purpose, must be trustworthy and must function effectively. The nine Gemini principles are:

• Public good: used to deliver genuine public benefit in perpetuity;
• Value creation: enable value creation and performance improvements;
• Insight: provide determinable insight into the built environment;
• Security: enable security and be secure itself;
• Openness: as open as possible;
• Quality: built on data of an appropriate quality;
• Federation: based on a standard connected environment;
• Curation: clear ownership, governance and regulation;
• Evolution: able to adapt as technology and society evolve.

2 IDC. (2017). IDC FutureScape: Worldwide IOT 2018 predictions. See: https://www.idc.com/research/viewtoc.jsp?containerId=US43161517

3 IDC. (2018). PlanScape: Digital Twins for Products, Assets, and Ecosystems. See: https://www.idc.com/getdoc.jsp?containerId=US43134418

4 https://www.aviationtoday.com/2018/09/14/boeing-ceo-talks-digital-twin-era-aviation/

7.4 Platforms

Platforms are intermediaries: they connect two or more distinct groups, for instance buyers and sellers, or content providers and individuals, but the platform owner does not actually own the assets or services being transacted. They are merely technology enabled services that create value primarily by enabling direct interactions between two or more customer or participant groups, with a number of wrap-around services provided to make the transaction easier, safer or more enjoyable. Digital platforms can also unbundle resources that have traditionally been tightly clustered together and which are difficult to consume individually. iTunes, which you may not think of as a platform, used unbundling as its business model, turning albums into individual songs that could be purchased.

The significance of platforms in the economy can be observed when we compare their valuations to those of traditional firms. In 1990, the top three automakers in the USA had among them nominal revenues of approximately US$250 billion and a market capitalisation of US$36 billion, employing over one million employees. In 2014, the top three companies in Silicon Valley had nominal revenues of US$247 billion, but a market capitalisation of over US$1 trillion and only 137,000 employees. Seven of the top twelve largest companies by market capitalisation (Alibaba, Alphabet, Amazon, Apple, Facebook, Microsoft and Tencent) are all effectively platform or ecosystem players.


7.4.1 Size Matters

The latest generation of platforms are what Andrew Ng, the AI scholar, calls 'O2O', short for Online to Offline; examples include Uber, Lyft, Airbnb, GrubHub and Deliveroo. In all of these use cases, customers just want to arrange transactions as efficiently as possible with no unpleasant surprises. To do this, the platform needs to ensure it has a lot of potential participants on both sides of the transaction. The success of many platforms is a product of sheer scale. These platforms enjoy operating leverage from efficient process automation, AI enabled search and recommendations, and significant network effects created by the interactions of hundreds of millions of suppliers and users on both sides of the platform.5 It is these factors that also create barriers to entry for newcomers. Whilst the technological costs of setting up a platform may be minor (although marketing costs can be significant), the first mover advantages and network effects created as a result mean it becomes difficult for new competitors to compete effectively. Once a platform gains scale, there is a near zero marginal cost for adding additional users; as platform user bases get larger, their average costs reduce significantly. In traditional firms, the intrinsic scalability and economies of scope that could be derived from technology were limited by the operating architecture of the organisation in which it was deployed. But over the past decade, we have seen the emergence of firms that are designed and architected to release the full potential of digital networks, data algorithms and AI. The more a firm is designed for scale and scope in its operating model, the more value it can create and capture.

Uber reported 2018 revenue of over US$11 billion, an astonishing growth of over forty percent over the previous year. The company posted net income of close to US$1 billion in 2018, but an adjusted EBITDA loss of US$1.85 billion, spending huge sums on geographic expansion and marketing. Uber has also expanded into adjacent markets, leveraging economies of scope. Uber Eats operates in more than thirty countries; the UK is one of its largest international markets, with over 8000 restaurants on board in approximately forty cities. Uber Eats is estimated to have brought in US$3 billion in gross revenue. Such growth is a result of economies of scale, economies of scope and Uber's ability to transfer learning from one market to another. Growth has, however, been accompanied by ongoing investigations and criminal inquiries conducted by the USA Department of Justice and other domestic and foreign agencies, including over a 2016 breach of consumer data, a violation of competition laws in Singapore for its acquisition of Grab, and an increasing number of governments and local authorities reviewing Uber's transportation licence given concerns over passenger safety.

Lyft, a competitor to Uber, has collected data from over one billion rides and over ten billion miles driven to inform its Machine Learning (ML) algorithms. The more rides Lyft facilitates, the better it is able to match drivers and riders efficiently in its ridesharing marketplace, which reduces arrival times and maximises availability to riders.6 It has established relationships with over 10,000 organisations, cities and municipalities to facilitate rides for their employees, customers and constituents. Examples include the Blue Cross Blue Shield Institute, where rides can be requested on behalf of patients to reduce the number of missed appointments, and the University of Southern California, where students can request Lyft rides that are paid for as part of the university's safe rides program. Lyft, just as Uber is attempting to do, is investing significant sums in autonomous vehicles. It is hoping to deploy an autonomous vehicle network capable of delivering a portion of rides on the Lyft platform within the next five years and, within ten years, to have deployed a low-cost, scaled autonomous vehicle network capable of delivering a majority of the rides on the Lyft platform.

Most platform businesses are global in nature, seeking to leverage global economies of scale and network effects to make it difficult for new entrants to compete (e.g. Amazon, Facebook, Netflix). However, the more a network is fragmented into local clusters, the less the impact of scale and network effects and the easier it becomes for challengers to enter (whilst Uber was one of the earlier entrants into the space and has spent billions creating global scale, it does not benefit from global network effects, as new entrants such as OLA encroach into its markets in a select number of cities worldwide). Clustered networks end up being highly competitive. For a truly global operation that can benefit from network effects, there should not be local clusters that competitors can target selectively.

Airbnb's business model is based on brokerage. The company started small but grew rapidly, going from 200,000 guest arrivals in 2009 to around 100 million in 2018. The company reported an impressive forty percent revenue growth rate in 2018 compared with the previous year. As of 2019, it was present in 65,000 cities in 191 countries around the world. It had over three million listings, compared to Marriott's one million rooms and Hilton and IHG with around 750,000 rooms each. The business model is based on Airbnb taking a three percent cut from the host and 6–12% from the traveller, depending on the price and quality of the property. For this charge, Airbnb provides customer service, payment handling and damage insurance coverage for its hosts. Guests and hosts can rate each other, thus increasing trust and the quality of service.

Outside of the well known platforms, there are many others attempting to use platform business models. SK Telecom, one of South Korea's largest telecom carriers, was an early entrant into the space, launching SK Planet in 2011. By the end of 2012, SK Planet had earned more than US$1 billion in revenue, driven by its '11st' online marketplace. Since then, the company has made significant investments, establishing a presence in Silicon Valley to access start-ups based there.7

5 McKinsey. (2015). Competition at the digital edge: Hyperscale businesses.

6 Lyft prospectus.

7 Lessin, J. (2013). South Korea's SK Planet to invest up to $1 billion in US. The Wall Street Journal, 26 June 2013. See: https://blogs.wsj.com/digits/2013/06/26/south-koreas-sk-planet-to-invest-up-to-1-billion-in-u-s/


More recently, the company launched an O2O marketing platform, offering data-driven advertising and marketing solutions. The UK's National Health Service (NHS) recently announced an ecosystem effort to empower entrepreneurial ventures to connect to the NHS and provide a range of services, from remote consultation and the use of 'smart inhalers' to monitor patients remotely, to the use of AI to interpret CT and MRI scans. Nevertheless, how the platform model will work within the NHS, which is not necessarily categorised as a two-sided market, is an open question. Another programme launched by the UK Secretary of State is NHSX, intended to drive digital transformation across the NHS, an outcome of a policy paper published in October 2018, 'The future of healthcare: our vision for digital, data and technology in health and care'. I don't want to sound pessimistic, however the NHS has not exactly had a good experience when it comes to technology innovation or transformation. Yes, there may be good clinical technology used in pockets of the NHS, but looking holistically across the NHS, its digital technology remains very much last century and its collective mindset is that of a large utility monopoly, very much in 'if it ain't broke, don't fix it' mode.

There are also many established firms, as well as start-ups, in the area of IOT platforms, such as Microsoft, IBM and SAP, and several industrial companies with similar aspirations, such as GE, Bosch and Siemens. Here, platforms are designed to collect and manage data from one side of the platform and connect to service providers on the other side.

7.4.2 Platform Business Models

The two-sided nature of platforms leads to direct network effects, where increased use or involvement by one group leads to benefits for other members of that same group. In that sense, platforms as a concept are not new. Traditional yellow pages are platforms: they connect firms looking to offer their products and services with consumers looking to find suitable suppliers. However, platforms now enabled by digital technologies and driven by data have taken these network effects to a new level. These network effects in the digital era can be seen in action in the staggering growth of firms like Facebook, Amazon, Airbnb and Google, all of whom have been able to achieve rapid scale and impact. The intermediary role played by platforms puts them in a position of power over both sides of the platform: suppliers and consumers. Though they come in many varieties, platforms all have an ecosystem with the same basic structure, comprising four types of functions: (1) the owners of platforms, who build the platform and provide technology leadership; (2) providers, who serve as the platform's interface with users and may provide a governance role (these may be the same as the owners, but not necessarily so); (3) producers, who create their offerings; and (4) consumers, who use those offerings.
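As a rough back-of-the-envelope illustration of why these network effects compound, the toy calculation below counts the possible pairwise interactions as a user base grows. It is a simplification (platform value does not grow exactly with the square of users), offered only to make the intuition concrete.

```python
# Toy illustration of direct network effects: possible pairwise connections
# grow roughly with the square of the user base, while costs are often linear.
def potential_connections(users: int) -> int:
    return users * (users - 1) // 2  # each pair of users can interact

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {potential_connections(users):>12,} possible connections")

# A 10x larger user base yields roughly 100x more possible interactions,
# which is one reason early scale compounds into a barrier to entry.
```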


There are essentially three types of platform business models:

• Asset sharing platforms, such as Airbnb, which create value by increasing the utilisation of existing perishable assets, whilst helping the users of such assets to search, transact or personalise their use, at a potentially reduced cost;

• Social media and content platforms, such as Facebook, which create value by helping content creators monetise content or by providing targeted advertising, whilst helping users gain access to relevant and entertaining content that is easier to find;

• Trading platforms that facilitate the exchange of goods and services, such as Ebay or Amazon, where value is created for both sides: on the supply side through standardisation, extensive distribution channels to reach previously difficult to reach customers, trust building processes and payment mechanisms; and on the demand side through the provision of easy access to trusted suppliers and, in many cases, tools to compare products and their prices easily.

With a platform, the critical asset is the community and the resources of its members. The focus of strategy shifts from controlling to orchestrating resources, from optimising internal processes to facilitating external interactions and from increasing customer value to maximising ecosystem value. Many platforms also utilise economies of scope through the use of existing customer and supplier relationships, branding, the sharing of technical expertise and, possibly most importantly, the sharing and merging of consumer data. These strong economies of scope are one reason why the same small number of large digital companies have successfully built ecosystems across several adjacent markets.8 Take Uber, which started off as a platform to intermediate taxi rides within a city and has now expanded into intermediating freight services and food delivery.

While employees help define strategies, design user interfaces, develop algorithms, code software and interpret data, the actual processes that drive customer value are fully digitised. Beyond removing human bottlenecks, digital technologies are intrinsically modular and can easily enable business connections. When fully digitised, a process can easily be plugged in to an external network of partners and providers, or even into external communities of individuals, to provide additional complementary value. Digitised processes are intrinsically multi-sided.

As Van Alstyne et al. highlight,9 we need to understand the underlying microeconomics in order to understand the platform business model. Traditional large and successful brick and mortar companies were driven by supply side economies of scale, whereas platforms are driven by demand side economies of scale. Understanding this transformation means understanding 'the inverted firm'. Network effects cause companies to 'invert': that is, to shift production from inside the company to outside, because network effects cannot scale within a company as easily as they can outside it, given there are simply more users than employees. This means companies shift from vertical integration to the open orchestration of entities or customers outside the company in order to create value. These platforms seek to aggregate disconnected suppliers, products or services in fragmented industries and make them available to the other side of the platform. As they shift production outside, they have near zero marginal costs: Uber does not own its cars, Airbnb does not own its rooms and Facebook does not produce its own content. Not incurring the costs of production, they can scale as fast as they can add partners. Network effects also mean that platform value appreciates through use, whereas product value depreciates through use.

Platforms also seek to create positive interactions between users and suppliers, sometimes blurring the distinctions. Facebook generates its content from the users themselves, with its platform orchestrating this content to other users. The focus of the platform shifts from customer value to ecosystem value; the business model shifts from dictating processes to persuading more and more participation.

In traditional business, size is a double edged sword. As it grows, a business can usually deliver more value at a lower price. However, the advantages of scale tend to be limited by the firm's operating model, which encompasses all the assets and processes it uses to deliver the value it promised to its customers (after a certain scale, diseconomies of scale are seen). As the firm gets bigger, its operating model becomes increasingly complex, and with complexity comes all kinds of problems. Complexity becomes the downfall of traditional organisations, increasing operational costs and decreasing service levels. Ultimately, as traditional firms grow, they suffer diseconomies of scale, scope and learning. Once the digital model is established, by contrast, most of what these firms need for growth is additional computing power. Growth bottlenecks move to the technology layer or to the ecosystem of partners and suppliers, not employees.

Successful digital companies are experts in leveraging skills and capabilities that lie beyond their core organisation.10 Global results from Accenture Technology Vision 2017 indicate that seventy-five percent of C-level executives believe their competitive advantages will not be determined by their own organisation, but by the strength of the partners and ecosystems they choose.11 These leaders foresee the importance of business networks growing exponentially in the next five years, with seventy-four percent even believing that ecosystems will be the basis for new partnership models.

8 UK HM Treasury. (2019). Unlocking digital competition: Report of the Digital Competition Expert Panel. March 2019. See: https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel

9 Van Alstyne, M., Parker, G., Choudary, S. (2016). Pipelines, Platforms, and the New Rules of Strategy. Harvard Business Review.

10 Accenture. (2015). Harnessing the Power of Entrepreneurs to Open Innovation. See: https://www.accenture.com/t20151005t162506__w__/us-en/_acnmedia/accenture/next-gen/b20/accenture-g20yea-2015-open-innovation-executive-summary.pdf

11 Accenture. (2017). Technology Vision 2017. See: https://www.accenture.com/_acnmedia/accenture/next-gen-4/tech-vision-2017/pdf/accenture-tv17-full.pdf


Some platforms also have the potential to drive sector level competition, leading to the commoditisation of products and services (though not always). This commoditisation drives down prices and delivers business to companies willing to supply products most cheaply (the Amazon model). This model works where there are a large number of suppliers supplying similar goods and services (e.g. Uber), but in markets in which players are few and the offerings are complicated, commoditisation is unlikely and the platform model may not scale. The long-tail nature of platform models, where revenue maximisation is a key objective, implies that prices must be as low as possible to attract the maximum number of users, thereby extending the network effects. It is in the interest of the platform to drive competition between the platform participants to drive down prices and generate the economies of scale. However, platform owners can't just focus on price. They have a variety of levers to manage, including the user interface and user experience, reputation systems, marketing budgets and core network technology. Almost all digital firms are successful because they innovated on the existing business model, experimenting with and recombining various aspects of value creation and value capture until they found the right balance.

7.4.3 Platform Revenue Management and Governance

Platform business models are really about revenue management: using algorithms and technologies to deal with finite capacity and perishable inventory. Airlines and hotels have been doing this for a while, initially selling seats or rooms to customers with the highest willingness to pay, then moving on to selling off the rest to customers further down the demand curve. The trick to platforms, however, is to offer a multitude of products and services that are complementary to each other: selling one creates demand for the others. Single product or service platforms cannot do this and thus their growth potential becomes stunted. To maximise revenue, firms need to understand complementary products and seek to maximise overall revenues. Complementary products are important to understanding the economics behind the likes of Apple's Apps and the iPhone. Any drop in the price of an App has the effect of shifting the demand curve for the iPhone outwards, increasing the number of people who are willing to pay for the iPhone. The existence of free Apps like Angry Birds has two effects: it generates consumer surplus, which is great for users, but it also nudges the iPhone's demand curve outward, which is great for Apple. This explains why Apple's move into the App market was vital for the iPhone's success. It is projected that the global App economy will grow almost five-fold from US$1.3 trillion in 2016 to US$6.3 trillion in 2021, as the number of App users increases to upwards of 6.3 billion people.12

12 European app economy: State of play, challenges and EU policy. European Parliament Briefing, May 2018. See: http://www.europarl.europa.eu/RegData/etudes/BRIE/2018/621894/EPRS_BRI(2018)621894_EN.pdf
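A toy numerical illustration of the complementary goods logic described above: as the price of apps falls, the demand curve for the device shifts outward, and the platform's total revenue can rise even though its cut of app sales falls. The demand function and every number in it are invented for illustration; they are not Apple's figures.

```python
# Toy model of complementary goods: cheaper apps shift device demand outward.
# All numbers are illustrative assumptions.
def device_demand(device_price: float, app_price: float) -> float:
    # Demand falls with device price and rises as complementary apps get cheaper
    return max(0.0, 1000 - 0.5 * device_price - 200 * app_price)

DEVICE_PRICE = 800
APPS_PER_USER = 5
PLATFORM_CUT = 0.30  # the platform keeps 30% of each app sale

for app_price in (2.0, 1.0, 0.0):  # lowering app prices, including free apps
    units = device_demand(DEVICE_PRICE, app_price)
    revenue = units * DEVICE_PRICE + units * APPS_PER_USER * app_price * PLATFORM_CUT
    print(f"app price {app_price:.2f}: {units:.0f} devices, total revenue {revenue:,.0f}")

# Device revenue growth swamps the lost app commission as apps get cheaper.
```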


Apple not only keeps thirty percent of the price of paid Apps, but the lowering of App prices increases the demand for iPhones. In 2015, Apps yielded US$6 billion in revenue for Apple. Steve Jobs, however, also realised that offering a marketplace like the App store could result in the entry of bad actors with bad behaviour and bad content, which would damage not only the App store but also the iPhone, whose security and functionality could be compromised. Jobs, who was originally reluctant about the App store idea, compromised by allowing outsiders to write Apps, provided they met strict standards, were tested and approved by Apple and were sold only through the iTunes Store. It was a way to reap the advantage of having thousands of software developers (utilising the inverted firm), whilst retaining enough control to protect the integrity of the iPhone and the simplicity of the customer experience. Apple realised that its role must be that of a governance leader as well as a technology leader.

Over time, platforms have developed a number of practical and innovative governance mechanisms to build consumer trust and address potential inhibitors to consumer engagement. The most common trust mechanisms developed by these platforms include:

• Review and reputation systems: helping consumers to make informed choices through review and reputation systems, as Amazon and Ebay do;

• Guarantees or insurance: to provide a level of trust in the system, as Airbnb does;

• Verified identities: to eliminate fraud and help resolve disputes, as Uber does;

• Pre-screening: of peer providers, through verification against external databases (e.g. motor vehicle records or criminal background checks), as Apple's App store does;

• Secure payment systems: often in co-operation with established external payment systems, as Ebay does;

• Education, checklists and forms: including with respect to possible legal or other obligations that may apply, as Airbnb does.

Well governed platforms lead to customers forming stronger associations with the platform than with the companies on the other side of the platform (Amazon has a stronger brand association than the suppliers of products sold through it). That leads to better brand identity, but also a need for brand management. A critical enabler for enforcing good governance is the use of two way reputational systems: the customer rates the supplier and vice versa. Dual reputational systems can enforce good behaviour even in the absence of laws or regulations. However, as the platform improves the accuracy of its reputation system to foster stronger trust between its clients and freelancers, more disintermediation is likely to occur, which offsets revenue gains from better matches. After sufficient trust is established between a user and a service provider, services such as payment escrow and dispute resolution are no longer valued, and the need for the platform gradually diminishes.


Another important factor determining the value a platform can capture and sustain is the ability of users to multi-home: that is, the viability of users or service providers forming ties with multiple competing platforms at the same time. When multi-homing is common on each side of the platform, it becomes almost impossible for the platform to generate a profit from its business.

7.4.4 APIs Crucial for Platform Business Models

The most successful platform businesses are early to the space, build and leverage network effects, and take advantage of the economies of complementary goods whenever possible. They open up their platforms to a broad range of participants (whilst putting safeguards in place to ensure quality). They offer some services to users for free, thereby pushing the overall demand curve outward. Fundamentally, they use and open up their application programming interfaces (APIs); those doing so experience, on average, a four percent gain in market capitalisation.13 APIs allow platforms to monetise data and integrate a wide range of partners, allowing them to access new value outside of the business. Salesforce.com's partner ecosystem, for example, offers a developer friendly toolbox that has spurred partners to build a huge number of employee and customer applications that rely on APIs. As a result, more traffic comes through the Salesforce APIs than through its website. APIs are essential for powering such systems as Google's documents and maps, Amazon's voice and web services, Apple's online market and Facebook's authentication services. Using APIs, however, requires a new way of thinking about partnerships, a new way for business and technology to work together and a new pace of development, funding and coordination. It also comes with new challenges to data privacy and security.

13 Benzell, S., LaGarda, G., Van Alstyne, M. (2017). The Impact of APIs in Firm Performance. Boston University Questrom School of Business Research Paper No. 2843326.
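As a minimal sketch of what 'opening up an API' means in practice, the Flask endpoint below exposes a slice of a platform's data to partners behind an API key. The endpoint path, data and key scheme are invented for illustration and are not any real platform's API.

```python
# A toy platform API: partners access internal data through a guarded
# endpoint. Path, data and API-key scheme are illustrative only.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

LISTINGS = [  # stand-in for the platform's internal data
    {"id": 1, "city": "London", "price": 120},
    {"id": 2, "city": "Paris", "price": 95},
]

@app.get("/api/v1/listings")
def listings():
    # Partners authenticate with an API key; a real platform would also
    # meter, rate-limit and bill usage at this layer
    if request.headers.get("X-API-Key") != "partner-demo-key":
        abort(401)
    city = request.args.get("city")
    return jsonify([l for l in LISTINGS if city is None or l["city"] == city])

if __name__ == "__main__":
    app.run()  # e.g. GET /api/v1/listings?city=London with the header set
```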

7.4.5 Data Protection Concerns

Platform businesses are driven by data: by the collection and use of data to drive their AI algorithms and match each side of the platform to the other. Platforms need to navigate the complex challenge of appropriate governance for the collection and use of such data, whilst providing security safeguards to avoid the data breaches and other privacy or data protection problems that have become commonplace in digital businesses. Users should have confidence that the protection of their privacy does not depend on the whims and best efforts of the CEO, as has been the case with Facebook. Whilst the conversation until now has largely focused on the largest platforms, it might be appropriate to ensure that privacy regulations are applied uniformly: given the rapid scale with which these platforms can grow, what is small today could suddenly become large in a few years.

7.4.6 Competition Concerns

Platforms are not exempt from many aspects of the law (competition law, data privacy laws, consumer law, commercial law and more specialised laws such as those that govern e-commerce), but they have been subject to virtually no ex-ante regulation. There is now recognition that the rise of these platforms has also created problems, and a growing consensus that some form of regulation is required to protect consumers against harm and preserve the competitive process.

Given the scale of platform businesses, we inevitably come to issues of competition. In many cases, once one or two platforms reach a certain scale, the market can 'tip', making sustainable competition difficult. Behaviours that are harmless, or even positive, in competitive markets may become anticompetitive or anti-consumer when a market becomes concentrated. These harms include higher effective prices for consumers, reduced choice, or possibly a downward spiral in quality. Even when consumers pay no monetary price for a service, there is a harm: were the market not concentrated, consumers might have given up less in terms of privacy, or might even have been paid for their data.

The European Commission found in 2017 that Google had abused its dominant position in the supply of comparison shopping services by using its search engine to give undue prominence to its own shopping services at the expense of rivals; it was fined €2.4 billion. In July 2018, it fined Google €4.3 billion for abusing its dominant position in the market for smart mobile operating systems to increase its market power in the supply of search engine services. Then in March 2019, it fined Google a further €1.5 billion for the way it controlled how advertisements from rival search engines were positioned on the websites of content publishers which used the Google search algorithm.

The key issue when it comes to concentration is that of 'contestability'. Monopolies are not ideal, but they can deliver value to consumers as long as potential competition keeps them on their toes. They will then be forced to innovate, and possibly even to charge low prices, so as to preserve a large installed base and make it difficult for entrants to dislodge them. This theory works only as long as two conditions are met: efficient rivals must be able, first, to enter the market and, second, to exit it when they need to. With most platforms using AI and having access to massive amounts of data to feed their AI algorithms, the barriers to entry become increasingly difficult to surmount. An interesting policy and regulatory question that arises is whether certain key sets of data gathered by dominant platforms and used to feed their algorithms should be qualified as common goods (or essential facilities), i.e. enablers for the digital economy that should be made accessible to third parties rather than exploited by dominant operators alone (as has been the case for dominant telecommunications operators, who had to unbundle their networks to enable competitors to enter and use this underlying infrastructure to provide their own services).

An important difference with many platforms is that many are beginning to open up their data through APIs in an effort to invert the firm. It is, however, not clear whether all required data is being opened up and whether that will enable sufficient entry. It is worth differentiating between access to data and access to algorithms. Platforms tend to keep the details of their algorithms secret, as these are their real IP, and will try to keep much of their customers' data secret, using data protection laws as an umbrella of cover.

As Jean Tirole reiterated in his book Economics for the Common Good, the State's main role with respect to the economy is to create the climate of trust required for trade to flourish. For public authorities, the task is therefore not to build regulatory walls but rather, first and foremost, to provide all of the players with a maximum sense of security and trust to fully unleash innovation.

Governments have approached these concerns differently. The USA has encouraged global platform businesses with very little regulation (understandable, given most global platform businesses are USA-based firms deriving much of their revenues from outside of the USA). Some, such as India and South Africa, are keeping a close eye on these businesses. The EU and Singapore have sought to introduce some form of licensing or horizontal regulations for these global platforms. Others, such as China and many Middle Eastern countries, have sought to block these global platforms from their markets. Many, particularly in the EU, are also taking a keen interest in possible anti-trust issues, especially when it comes to mergers and acquisitions; although some may argue it is a little too late. Between 2011 and 2016, Apple acquired seventy companies, Facebook more than fifty and Google nearly two hundred. Often the acquirer already had a competing offering, and it could be inferred that the desire to eliminate future competition and further concentrate the market was the driver for these acquisitions. Over the last ten years, the five largest firms have made over four hundred acquisitions globally. None has been blocked, very few have had conditions attached for approval, and few have even been scrutinised by competition authorities. Table 7.1 details the more prominent acquisitions to date.

The EU Merger Regulation focuses upon mergers which would 'significantly impede effective competition (the SIEC test), in particular by the creation or strengthening of a dominant position'. However, these tests typically work by examining the effect on prices pre- and post-merger. Some services provided by online platforms are provided to the consumer at no monetary cost, and the absence of a measurable monetary price poses a challenge for traditional competition policy analysis.

The impact these platforms may have on consumers is also a pertinent question. The powerful negotiating position that certain platforms hold over their users was highlighted by the Australian Competition and Consumer Commission's preliminary report on digital platforms, published in December 2018. The report highlighted the 'bargaining power imbalances that exist between digital platforms and consumers', while also concluding that 'advertisers have a limited ability to negotiate with Google and Facebook'.
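The 'tipping' dynamic described earlier in this section can be made concrete with a toy simulation. The sketch below is a minimal, purely illustrative model (the joining rule and all parameters are assumptions, not a model of any real market) in which each new user picks one of two platforms with a probability that rises super-linearly with its current user base, a crude stand-in for network effects.

```python
# A toy simulation of market 'tipping' (all parameters are hypothetical).
# Each arriving user picks one of two platforms with probability rising
# super-linearly with its current user base: a crude stand-in for
# network effects.
import random

def simulate(start_a=50, start_b=50, new_users=10_000, strength=2.0):
    a, b = start_a, start_b
    for _ in range(new_users):
        wa, wb = a ** strength, b ** strength
        if random.random() < wa / (wa + wb):
            a += 1
        else:
            b += 1
    return a / (a + b)   # platform A's final market share

random.seed(1)
print([round(simulate(), 2) for _ in range(5)])
# With strength > 1 the market nearly always tips: one platform ends up
# with almost all users, even though both started identical.
```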


Table 7.1 Acquisitions in the platform/digital world

Year  Acquiring company  Acquired company   Transaction value (US$ billions)
2006  Google             YouTube            1.65
2007  Google             DoubleClick        3.1
2011  Microsoft          Skype              8.5
2011  Google             Motorola Mobility  12.5
2012  Facebook           Instagram          1
2012  Microsoft          Yammer             1.2
2013  Google             Waze               0.97
2014  Apple              Beats Electronics  3
2014  Google             Nest Labs          3.2
2014  Google             DeepMind           0.625
2014  Facebook           WhatsApp           19
2014  Facebook           Oculus             2
2016  Microsoft          LinkedIn           26.2
2017  Apple              Shazam             0.4
2018  Amazon             Ring               1
2019  FIS                Worldpay           35
2019  Salesforce         Tableau            15.7
2019  Uber               Careem             3.1
2019  Twilio             SendGrid           3
2019  Google             Looker             2.6

Competition authorities globally need to pay more attention to their policy toolkits when examining platform businesses. USA senator and presidential candidate (at the time of writing) Elizabeth Warren has also taken on big technology, including Facebook, Google, Amazon and Apple. Warren's proposals amount to a total rethink of the United States' permissive merger and acquisition policy over the past four decades. Warren is not alone in thinking that the technology giants have gained excessive market dominance; in fact, it is one of the few issues in Washington on which there is some semblance of agreement. Other candidates, most notably Senator Amy Klobuchar of Minnesota, have also taken principled stands. USA technology monopolies have a global impact, as they have often achieved global market dominance before local regulators and politicians know what has happened.

Recently, the UK commissioned an expert group, chaired by Barack Obama's former chief economist, Jason Furman, which produced a very useful report on approaches to competition policy in the face of platform businesses.14 There appears to be an emerging consensus that there is a role for a sector-specific regulator for digital platforms; this might be independent, part of the competition authority, part of the data protection authority, or part of the national telecommunications regulator. Such an authority would need to be endowed with a number of anti-trust and regulatory tools if it is to make a genuine difference. The experience of regulating the telecommunications sector provides a useful perspective on the tools such a digital authority might need:

• Customer or supplier data portability: to reduce the switching costs of moving from one platform to another, including the right to delete data from the 'donor' platform. However, it will be important to define what is meant by customer or supplier data: is it only what has been volunteered, or everything actually acquired? Will it also include metadata? For such a regulatory remedy to work, operational processes and interoperable standards may need to be defined, mandated or agreed by the sector (a hypothetical sketch of such an export appears at the end of this section);

• Rights of interoperability between platforms: including mandated open APIs that allow rival platforms to communicate and interconnect with each other. This may allow smaller rival platforms to benefit from the economies of scale that the larger platform enjoys;

• Mandatory fair, reasonable and non-discriminatory licensing: for essential intellectual property, including access to certain data sets. Given that platform businesses are fuelled by data that drives their AI, without access to this underlying building block no rival platform can effectively compete with dominant platform operators. However, the critical question arises as to what constitutes 'certain data sets'. Should this include only data provided by customers themselves, or a broader category of data? Requiring open access to wider data sets risks opening up the core intellectual property of the platform and may affect the innovation process itself. Other questions arise, such as whether such access should be a one-off exercise or a continuous one, whether such access would be permissible under data protection laws, and whether customers would be willing to have their data provided to companies they may not trust. Would such access need to be set through some form of regulated price, and if so, how would that be determined?;

• Non-discrimination rules: applied to all dominant platforms, which may well require separation between what may be called wholesale and retail operations, akin to what has been done in telecommunications networks. Even when dominant platforms are asked to open up their customer/supplier data sets to competitors, the danger will be their continuing incentive to discriminate in the supply and use of such information towards their own business at the expense of rivals. Mandating and monitoring separation may be a means of protecting competitors from such potential abuse. Concepts such as equivalence of outputs and equivalence of inputs, as used within the telecommunications sector, may serve as useful starting positions;

• Restrictions on size and limits on vertical integration: through modifying the merger review process, especially given the recent history of large dominant
platforms acquiring possible future competitors. Whilst competition authorities already have powers to review and stop mergers, the methodologies and economic tests used need to be revised. Today's balance in favour of approving mergers, based on the belief that it is better to avoid false positives than false negatives, may need to be reversed. The use of standard market assessment techniques, such as the definition of relevant markets using the so-called SSNIP test, may not be relevant in markets where services are perceived to be provided for free. New tests need to be devised. At the same time, regulatory/competition authorities must be empowered to break up existing dominant platforms as a last resort if necessary, as is the case with regulated telecommunications operators (the obvious question here is whether national regulatory or competition authorities would have the power to do so with multinational firms). Whilst this is a last resort, it serves as an effective regulatory signal that can change behaviour. The burden of proof should also be weighted towards the merging parties: they must prove that the merger will be welfare enhancing, given the asymmetry of information between regulatory/competition authorities and the entities themselves;

• Product unbundling requirements: platform businesses benefit from both economies of scale and scope through their use of complementary products/services. Competitors may need a level playing field in scenarios where they cannot provide the bundled offerings which the dominant platforms can. This may require that the dominant platform unbundles its products/services and possibly makes these available to competitors;

• Introducing 'Privacy by Design' principles: into the operation of platforms. The EU GDPR already incorporates this; many other data protection laws do not as yet. There does, however, appear to be a movement for many countries to amend their data protection laws to be more aligned with the GDPR, as it has set a new standard for data protection globally.

The EU is also looking at whether it needs to regulate platform businesses. It expects to publish its Digital Services Act in late 2020. It has already signalled some form of regulation in its digital strategy, where it refers to so-called 'gatekeeper' platforms and suggests that markets need to remain fair and contestable for innovators, businesses and new entrants. These gatekeeper platforms are likely to include Google and Amazon. The idea is to allow innovation by smaller platforms, but for dominant platforms to be regulated on an ex-ante basis.

Whilst the development of national policies and regulations (or regional ones, as is the case with the EU) is called for, it may not be enough. Given the global nature of platforms, international harmonisation of intellectual property rights, taxation and consumer protection is also required. Whilst some discussions have started, especially in terms of digital taxes, we are unlikely to see major consensus anytime soon.

There may, however, be change afoot to the platform business model, enabled by advances in Blockchain and token economics. Currently, users who want to hail a ride-sharing service have to rely on an intermediary like Uber. By enabling peer-to-peer transactions, the Blockchain opens the door to direct interaction between parties; a truly decentralised sharing economy results, where the existing platform business may no longer be relevant. As I have previously commented, however, much more work is required to make Blockchains more nimble and scalable, without the wasteful side products of today's consensus protocols. In addition, a number of governance functions, including guaranteeing the safety of passengers, would need novel solutions to be developed in Blockchain-enabled peer-to-peer ride sharing.

14 https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel
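Returning to the data portability remedy listed above: as a purely hypothetical illustration, the minimal sketch below shows what a mandated export of a user's data might look like. The schema, the field names and the volunteered/observed split are assumptions for illustration only, not any platform's real API.

```python
# Hypothetical sketch of a mandated data-portability export.
# Nothing here reflects a real platform's API; the schema and the
# volunteered/observed split are illustrative assumptions only.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class PortableUserData:
    user_id: str
    volunteered: dict = field(default_factory=dict)  # data the user entered themselves
    observed: dict = field(default_factory=dict)     # behavioural/meta data the platform captured
    # Regulators would still need to decide whether 'observed' (or even
    # inferred) data falls within the scope of portability.

def export_for_transfer(record: PortableUserData) -> str:
    """Serialise a user's data in an agreed, machine-readable format so a
    recipient platform can import it; pinning down a common schema like
    this is exactly what an interoperability standard would have to do."""
    return json.dumps(asdict(record), indent=2)

record = PortableUserData(
    user_id="u-123",
    volunteered={"display_name": "Asha", "interests": ["cycling"]},
    observed={"sessions_last_30_days": 42},
)
print(export_for_transfer(record))
```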

7.5 Autonomous Vehicles

Autonomous vehicles offer the potential to improve traffic flow and free up time spent in the car for other activities. The self-driving car has been in development for several years, and some manufacturers already offer IOT-based features in production models, such as automatic braking for collision avoidance. Some car makers had expected to have self-driving cars on the road by 2020, pending regulatory approval; that now appears a rather too ambitious target.

Some claim that traffic accidents could be reduced by ninety percent with the adoption of fully autonomous vehicles and by forty percent with partially autonomous vehicles. In the USA alone there are around 5.6 million car accidents per year, with property damage from collisions in excess of US$277 billion per year, which is currently factored into insurance premiums. By using technology to avoid low-speed collisions, twenty-five percent of the annual property damage they cause could be avoided. Autonomous vehicles have the potential to save 95,000 lives per year, with an estimated economic impact of US$180 billion to US$200 billion per year. A 2-year National Highway Traffic Safety Administration (NHTSA) survey from 2005 to 2007 estimated that ninety-four percent of the accidents on USA roadways are due to human error, with over 35,000 people dying on USA roadways in 2015. From inattention to intoxication, from speeding to sleepiness, the human factor is responsible for most of the current dangers on our roadways. According to the NHTSA study, motor vehicle crashes in 2010 cost around US$250 billion in economic activity.15

Autonomous vehicles can also reduce fuel consumption by driving more efficiently. NHTSA estimates that Americans wasted 6.9 billion hours and 3.1 billion gallons of fuel in traffic delays in 2014.16 Under computer control, autonomous vehicles would not indulge in wasteful driving behaviours, and with vehicle-to-vehicle communications, cars can travel close together at highway speeds, reducing wind resistance and raising average speed.

15 US Transport NHTSA. (2015). The Economic and Societal Impact of Motor Vehicle Crashes (DOT HS 812013). See: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812013
16 Texas A&M Transportation Institute. (2015). 2015 Urban Mobility Scorecard. See: https://static.tti.tamu.edu/tti.tamu.edu/documents/umr/archive/mobility-scorecard-2015-wappx.pdf


Conventional driving imposes not only costs borne by the driver (e.g. fuel, depreciation, insurance), but also substantial external costs, or 'negative externalities' as economists like to call them, on other people. For example, every additional driver increases congestion for all other drivers and increases the chance that another driver will have an accident. These externalities have been estimated at approximately thirteen cents per mile.17 If a hypothetical driver drives 10,000 miles, he or she imposes US$1300 worth of costs on others, in addition to the costs that he or she bears. Autonomous vehicle technology has the potential to substantially reduce both the costs borne by the driver and these negative externalities.

Autonomous vehicles will also dramatically alter a nation's transportation network. In the short term, they will impact transportation safety, efficiency and accessibility. They will also create second- and third-order effects related to jobs, urban planning and economic models. Along with the many benefits of this technology, they will raise public concerns about the safety of these vehicles on public roadways, and about the potential displacement of jobs related to transportation. For policy makers, the most pressing challenge will involve crafting a regulatory regime that fosters innovation, ensures safety and protects the privacy of motorists whose movements can be tracked.

Advances in autonomous vehicles have accelerated as the costs of key equipment have fallen. A Google autonomous car incorporates several sensing technologies, but its most important 'eye' is a Cyclopean LIDAR (a combination of Light and Radar) assembly mounted on the roof. This equipment, manufactured by Velodyne, contains sixty-four separate laser beams and an equal number of detectors, all mounted in a housing that rotates ten times a second. It generates 1.3 million data points per second, which can be assembled by onboard computers into a real-time 3D picture extending one hundred meters in all directions. Some commercial LIDAR systems available around the year 2000 cost up to US$35 million, but by mid-2013, Velodyne's assembly for self-navigating vehicles was priced at approximately US$80,000, a figure that has fallen and will fall much further in the future. In recent court proceedings, Waymo revealed that it has lowered the cost of its in-house LIDAR system, Grizzly Bear 3, to just US$4000.18 These costs are likely to fall further once manufacturing ramps up and achieves economies of scale. ARK Invest believes the cost of autonomous sensors and computer systems could decline to US$1000–US$2000 by the time self-driving vehicles are commercialised.19 David Hall, Velodyne's founder and CEO, estimates that mass production would allow his product's price to drop to the level of a camera: a few hundred dollars.
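As a quick check of the externality arithmetic above, a minimal sketch (the thirteen cents per mile figure is the estimate cited in the text; everything else is simple multiplication):

```python
# Reproducing the back-of-the-envelope externality figures above.
externality_per_mile = 0.13        # US$ imposed on other people per mile
miles_per_year = 10_000

cost_imposed_on_others = externality_per_mile * miles_per_year
print(f"US${cost_imposed_on_others:,.0f} per year")   # -> US$1,300 per year
```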
17 Anderson, J., et al. (2016). Autonomous Vehicle Technology: A Guide for Policymakers. RAND Corporation. https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR443-2/RAND_RR443-2.pdf
18 Jeong, S. (2018). I'm not sure Waymo's going to win against Uber. The Verge. 08 February 2018. https://www.theverge.com/2018/2/8/16993208/waymo-v-uber-trial-trade-secrets-lidar
19 Keeney, T. (2017). Mobility-as-a-service: Why self-driving cars could change everything. See: https://research.ark-invest.com/hubfs/1_Download_Files_ARK-Invest/White_Papers/Self-Driving-Cars_ARK-Invest-WP.pdf
Waymo might be leading in terms of technological superiority and potential scalability. This is perhaps not surprising given Waymo's first-mover advantage: originally part of Google's self-driving project, the team has been researching and developing autonomous vehicles since 2009. Waymo has invested a significant amount of capital in self-driving cars. According to leaked court filing documents, Alphabet spent US$1.1 billion developing self-driving software and hardware between 2009 and 2015.20 Moreover, the company recently announced agreements to purchase as many as 62,000 Chrysler Pacifica minivans and 20,000 Jaguar I-Pace SUVs over the next three years for self-driving applications. This represents a potential investment of over US$4 billion. Uber is also taking steps to make autonomous vehicles a reality; it has struck a deal with Volvo to purchase as many as 24,000 Volvo XC90 SUVs in the 2019–21 time frame, representing a total capital investment of over US$1.4 billion.21

Waymo is likely to deploy vehicles on networks like Uber and Lyft in conjunction with building out its own network, utilising a hybrid model of automated and human operators. If Waymo were to focus exclusively on its own ridesharing network, its autonomous vehicles would probably not be able to service demand, considering that there are routes the vehicles still cannot complete given inclement road and weather conditions. However, if Waymo partnered with a ridesharing service, it could selectively deploy its vehicles in conditions and on routes where it made sense to do so, and roll out its offering in stages to various local markets.

However, whilst Silicon Valley has thrown its technological and deal-making weight behind getting humans away from the steering wheel, progress has been relatively slow. AI, a key component of driverless car technology, is far from the stage required for it to parse every situation that may arise on the road. Consider the case of the first fatal crash of a vehicle being operated in 'self-drive' mode: a Tesla Model S, which in May 2016 drove at full speed into the side of a truck; its human driver was killed in the collision. According to investigators, the car's sensors were confused by sunlight reflecting off the white paint of the truck's trailer, which it was unable to distinguish from the sky. The system neither braked nor warned the human driver of the impending collision.22

On 18 March 2018, a 49-year-old woman was struck and killed by a self-driving Uber-operated car as she crossed a street at night. The USA National Transportation Safety Board, in its initial findings, found that the vehicle had in fact identified the pedestrian six seconds before impact but failed to stop or even slow.23

20 Harris, M. (2017). Google has spent over $1.1 billion on self-driving tech. See: https://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-has-spent-over-11-billion-on-selfdriving-tech
21 Pollard, N., Somerville, H. (2017). Volvo cars to supply Uber with up to 24,000 self-driving cars. See: https://www.reuters.com/article/us-volvocars-uber/volvo-cars-to-supply-uber-with-up-to-24000-self-driving-cars-idUSKBN1DK1NH
22 Cummings, M., Roff, H., Cukier, K., Parakilas, J., Bryce, H. (2018). Artificial Intelligence and International Affairs: Disruption Anticipated. Chatham House Report.

Whereas Waymo utilises two operators, one to drive and one to monitor data, Uber had just one operator doing both, who was distracted at the time of the crash. The governor of Arizona suspended Uber's ability to test autonomous vehicles in the state pending the investigation, and Uber subsequently pulled out of the state. Clearly, there are many challenges to overcome before vehicles can accurately perceive the state of the environment from sensor systems and before their AI can react accordingly. While many of these technologies are still in a developmental or testing stage, it is already clear that when they are finally ready, vehicles will be able to perceive the road ahead in far greater detail than human drivers and deliver an unprecedented level of transparency when incidents do occur.

7.5.1 V2X Communication

Another area where technology development is progressing is connectivity for autonomous vehicles. V2X communications is the generic term for the various forms of communication required to connect vehicles to other vehicles and to connect vehicles to infrastructure (communicating with surrounding infrastructure so they can receive information about hazardous conditions such as icy roads or crashes, nonhazardous conditions such as congestion, or route recommendations). Figure 7.1 describes the different communication requirements for autonomous vehicles.

7.5.2 Ridesharing and Autonomous Vehicles

Uber and Lyft, both ridesharing platforms, are eager to establish themselves in the autonomous vehicles market. Ridesharing has risen to prominence partially driven by a demographic shift away from car ownership. In general, millennials seem to be less interested in driving than prior generations. The percentage of 20 to 24 year olds with driving licenses in the USA has steadily decreased from 91.8% in 1983 to 76.7% in 2014.24 This trend appears to extend to car ownership; the proportion of new car purchases by young adults has trended downwards recently, with members of Generation Y twenty-nine percent less likely to own a car than members of Generation X.25 The economics support this movement away from car ownership. In 2017, the average cost to own a car in New York City was almost US$1600 per month,26 whilst the average monthly amounts adults spent on Uber and Lyft were US$84 and US$54, respectively.27

23 National Transportation Safety Board. (2018). See: https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx
24 Riley, N. (2016). Why are fewer young people getting driver's licenses. https://nypost.com/2016/01/31/why-are-fewer-young-people-getting-drivers-licenses
25 Cortright, J. (2015). Young people are buying fewer cars. http://cityobservatory.org/young-people-are-buying-fewer-cars/


Fig. 7.1 V2X communications. V2V: communication of vehicles with each other (e.g. collision avoidance safety systems); V2I: communication of vehicles with infrastructure (e.g. traffic signal timing/priority); V2P: communication of vehicles with people in the vicinity, via their smartphones (e.g. safety alerts to pedestrians, bicycles); V2N: communication of vehicles with servers (e.g. real-time traffic routing, cloud services).

These developments have significant implications for car manufacturers. The available market for cars is likely to shrink over time; at the same time, as digital platforms become intermediaries delivering entertainment and collecting IOT data, the role of manufacturers will only diminish, as discussed in more detail later.

The first truly autonomous vehicles will likely be robo-taxis, geofenced or physically confined to limited areas such as a campus or business complex, until their use is more widespread and people start to get comfortable with the technology and its safety record. Ride-hailing platforms could be the first to utilise them, because they offer a gentler slope towards full autonomy, one that correlates with advances in technology. Level 5 automation, in which vehicles take care of all the tasks of driving in every situation, is still a long way off. For an example of why, look at two often talked-up sensors at the core of driverless technology: LIDAR and cameras. LIDAR doesn't see through snow, or through steam coming off a manhole; and glare and certain long-distance conditions can fool cameras.

Ride sharing may also spur new models for car ownership and taxation. People seem not to object to paying by the mile when they are being driven by taxis and services like Uber and Lyft. If a driverless world is one in which people generally buy rides rather than cars, then not only might fewer unnecessary journeys be made and political resistance to road pricing be overcome, but car sales will likely fall dramatically. Apart from this major policy consideration for countries that are producers of cars, there are a host of other policies that must be looked at. These include the need to extend cyber security to autonomous vehicles, data sharing and licensing, and liability regimes. These are briefly detailed below.

26 Korosec, K. (2018). The 10 most expensive cities. Fortune, 10 April 2018. See: http://fortune.com/2018/04/10/expensive-cities-car-ownership/
27 Elk, K. (2018). Here's how much Americans spend on Uber and Lyft in 32 major cities. See: https://www.cnbc.com/2018/05/11/how-much-americans-spend-on-uber-and-lyft.html

7.5.3 Cyber Security

Autonomous vehicles and networked roadways will present new targets for those who might try to disable or hijack vehicles, or use vehicles and infrastructure to host botnets. Discussions on these issues are already ongoing between the automotive industry and information technology companies. Government entities are involved in many of these discussions; however, they must take a lead in them, mandating foolproof controls to be built into the autonomous vehicle ecosystem.

7.5.4 Testing Environments

Companies testing autonomous vehicles in a number of countries are concerned about current and proposed requirements that would mandate the sharing of testing data with regulators. That data, in turn, would be subject to public records requests. Such data could create a competitive disadvantage for companies, with competitors able to learn about proprietary data and testing information through public records. Whilst data sharing is a critical regulatory tool for improving the operation of autonomous vehicles and ensuring their AI capabilities are reliable, policies that force the sharing of proprietary information or intellectual property may reduce investment and reduce transparency between the private sector and government. At this stage of technology maturity, regulators need to ensure that data mandated as part of the regulatory testing phase does not form part of any public information request and is not inadvertently shared with competitors.

7.5.5 Licensing

Without government permits, testing autonomous vehicles on public roads is almost universally illegal. The Vienna Convention on Road Traffic, an international treaty that has regulated international road traffic since 1968, stipulates that a human driver must always remain fully in control of, and responsible for, the behaviour of their vehicle in traffic. The Inland Transport Committee (ITC) of the United Nations Economic Commission for Europe (UNECE) is examining, under the Global Forum for Road Traffic Safety (WP1), how to adapt the conventions to allow for the use of autonomous vehicles.

There are essentially two mechanisms adopted when thinking about autonomous vehicles: (1) 'something everywhere', which seeks to improve the driver assistance systems installed in vehicles and gradually shift driving tasks from drivers to systems; and (2) 'everything somewhere', which seeks to operate driverless vehicles in a limited geographical area and progressively expand it to wider areas.


The majority of recent licences have initially been granted in the 'everything somewhere' mode, which will migrate towards the 'something everywhere' approach over time, as regulators become comfortable with the technology and AI capabilities of autonomous vehicles. European and North American countries such as the USA, Germany, the UK and the Netherlands were pioneers of autonomous vehicle licensing. Asian countries quickly caught up and have been enacting similar legislation over the last three years. KPMG's 2018 Autonomous Vehicles Readiness Index report ranks twenty countries' preparedness for an autonomous vehicle future.

The Netherlands' Council of Ministers first approved autonomous vehicle road testing in 2015 and updated its bill in February 2018. The Dutch government is also spending €90 million adapting more than 1000 of the country's traffic lights to enable them to communicate with autonomous vehicles. The German parliament passed a law in May 2018 that allows companies to begin testing autonomous vehicles on public roadways. Whilst drivers are allowed to remove their hands from the wheel, they are required to remain ready to take control in order to handle possible emergencies. The new legislation also requires a black box, designed to record system data and actions for review in the case of accidents.

While most European countries adhere to the Vienna Convention on Road Traffic, the UK is not a signatory and so is believed to have an advantage in adopting legislation to attract autonomous vehicle manufacturers and technology start-ups. The UK government is aiming for wide adoption of autonomous vehicles on its roads by 2021. In 2013, the Department for Transport allowed semi-autonomous vehicles to operate on lightly used rural and suburban roads. In 2018, the UK introduced a bill to draw up liability and insurance policies related to autonomous vehicles. Legislation to promote the safe use of autonomous vehicles is being developed by the UK Law Commission and will be ready as early as 2021.

South Korea is possibly the most aggressive country in terms of government investment in autonomous vehicles: it allows licensed autonomous vehicles to operate on public roads and is building an entire artificial town for autonomous vehicle testing. In November 2018, the country opened K-City, the largest town model ever built for autonomous vehicle experimentation.

7.5.6 Liability Regimes

Autonomous vehicles bring to the forefront the need to revisit and amend current frameworks for automobile liability and compulsory insurance, to nurture consumer acceptance of autonomous vehicles.28

There are essentially two extremes of product liability regime: negligence and strict liability. The negligence liability framework (traditionally used in the USA and UK29) evaluates a driver's liability under a negligence standard.

28 World Economic Forum. (2019). Filling Legislative Gaps in Automated Vehicles.


A negligence standard primarily depends on a driver breaching a duty of reasonable care, and the burden of proof is put on the victim. For a victim to obtain compensation for a loss, they must prove that the driver breached this duty in operating their vehicle and thereby caused the harm. In the case of autonomous vehicles, the victim would need to show that the manufacturer of the autonomous vehicle system breached the relevant duty of care. However, if the autonomous vehicle industry agrees on technical standards, even standards that some may consider below par, it becomes unlikely that a victim can prove negligence. It is therefore likely that most countries will move towards a strict liability regime for autonomous vehicles, to strengthen manufacturers' responsibility. For instance, in July 2018 the UK government established the Automated and Electric Vehicles Act, which strengthens the function of automobile insurance and the obligations of insurance companies. The act obligates insurers that underwrite automobile liability insurance for autonomous vehicles to pay compensation for a loss caused to victims, whether the liable party is specified or not.30

7.6 Drones

Drones are aircraft that operate without a crew or pilot on board. Drones were originally developed for military use, but drone technology has been developing rapidly in recent years, so that even private individuals can use drones for leisure and recreation. Commercial use cases are increasing, from photography and remote observation to product delivery. Insurers and public bodies are utilising drones to survey infrastructure and disaster zones. Drones are being used for photography and filming on movie sets, rescue operations, pipeline inspections, and crop spraying in farming. The disruptive factor is the fall in price, which is bringing longer-range and easier-to-control models within the reach of more and more businesses, public bodies and remote communities. In Switzerland, Swiss Post is connecting hospitals and medical labs by drone, lifting medical supplies and cargo into the skies to avoid transport gridlock and ensure the security of vital cargo. In Rwanda, the Ministry of Health is saving lives through aerial supply chains that bring blood products to clinics around the country, preventing stock-outs and eliminating the expiration of product and wastage from the system.31

29 General Insurance Rating Organization of Japan. (2017). Automobile Liability Insurance System in Major Countries, August 2017.
30 UK Government, Automated and Electric Vehicles Act 2018.
31 World Economic Forum. (2018). Advanced Drone Operations Toolkit: Accelerating the Drone Revolution.


In late 2015, the 95-year-old Japanese firm Komatsu, the second largest construction equipment company in the world, announced a partnership with the USA drone start-up Skycatch to precisely map a site in three dimensions and continuously send this information to the cloud, where software matches it against the plans for the site. The resulting information is then sent directly to an autonomous fleet of bulldozers, dump trucks and other earth-moving equipment.

Agriculture too could soon be transformed by drones. Drones can fly over fields, scanning them at different wavelengths to provide information about crop health. Information gained from these drones can enable much more precise targeting of water, fertiliser and pesticides. Ninety percent of all crop spraying in Japan is currently done by unmanned helicopters.

A significant number of National Aviation Authorities have started establishing new aviation safety legislation regulating the use of drones in their national airspace. In December 2015, the EU Commission proposed a revision of the EU legislative framework. The outcome is a new Regulation, through which the European Union will regulate civil operations of all kinds of drones, progressively replacing national regulations on civil operations of drones lighter than 150 kg. Higher-risk drone operations will require certification. As a general principle, EU rules state that drones must be safely controllable and manoeuvrable and should be operated without putting people at risk. Based on risk, some drones would need automated landing in case the operator loses contact with the drone, or collision avoidance systems. All drones should also be designed with privacy in mind (privacy by design). There will also be rules on the noise and emissions generated by drones, as is the case for any other aircraft. Drone operators will need to be enrolled in national registers and their drones marked for identification, so they can be identified in case of incidents or privacy violations.

Further afield, in early 2018, Rwanda passed regulations developed by the Rwandan Civil Aviation Authority, focused on the actual risk presented by drone operations, which will enable Zipline's blood delivery service to reach ninety-five percent of the country. In Malawi, UNICEF and the Civil Aviation Authority of Malawi worked together to create a drone corridor where the national and international communities were invited to test, fly and learn in a real-world environment, leading to the expansion of drone operations for mapping, delivery and multiple other use cases.

From a policy and regulatory perspective, a number of decisions will need to be taken: from the civil aviation regulations that determine the type of obligations placed on different drones, to the spectrum that would need to be designated for drone operations and interference management, to how drones are operated, with specific requirements for line of sight between the drone and its operator. However, as drone technologies evolve rapidly, the policies that govern them must also be able to evolve consistently. Providing a planned revision of policy, or a continuous process for evaluating the implementation of policies for drone technologies, is a new approach to governance. Table 7.2 details some of the differing mechanisms used to regulate what are called Unmanned Aircraft Systems (UAS) in regulatory parlance.


Table 7.2 Drone regulation examples

Australia: UAS weighing less than 2 kg are exempt from the need for a remote pilot's license or operator's certificate. Operating UAS weighing 2–25 kg will similarly be exempt if flown over a person's own land for certain purposes and in compliance with standard operating conditions. Operating UAS weighing between 25 and 150 kg requires a remote pilot's license. Operators of large UAS, as well as smaller UAS used for other non-recreational purposes, will still be required to obtain a remote pilot license and operator's certificate. Large UAS must also obtain airworthiness certifications. The operation of any type of UAS usually requires that the operator maintain a Visual Line of Sight (VLOS) unless prior approval is granted.

Japan: UAS that weigh 200 grams or less are exempt from the licensing and conditions in the country's Aviation Act. The Act requires operators of all UAS that weigh over 200 grams to monitor the UAS and its surroundings with their own eyes at all times.

China: Regulations on the operations of civil drones took effect in June 2018. UAS weighing 250 grams or over are subject to the regulations: (1) the legal person should be a Chinese citizen; (2) the drone should be registered; (3) the operator must be covered by insurance against liability for third parties on the surface. China allocates specific radio frequency spectrum to UAS flights. UAS flying within VLOS must be operated in the daytime; such a requirement does not apply to UAS flying beyond visual line of sight (BVLOS).

New Zealand: UAS weighing 25 kg or more may not be operated unless constructed under the authority of, or inspected and approved by, a designated person or organisation. UAS may be flown without the need for an operating certificate if they weigh less than 25 kg and are operated within VLOS. Flying any aircraft BVLOS requires an operating certificate.

South Africa: Permits the use of what are known as Class-1 and Class-2 UAS, which are further divided into sub-classes for private, commercial, corporate and non-profit operations. The level of both technical and operational requirements that apply to UAS differs based on the type of operation. For instance, a private operation may only be conducted using a Class-1A UAS (less than 1.5 kg in weight) or Class-1B UAS (less than 7 kg in weight). Private operations are exempt from various requirements applicable to other operations, including the need to obtain a letter of approval, certificate of registration and UAS operator certificate. All UAS operations must be conducted within a radio line of sight. Additionally, the air-band radio must have the required output and be configured in such a way that the range, strength of transmission and quality of communication extend beyond the furthest likely position of the UAS from the pilot.

UK: The regulations for recreational UAS are contained within the Air Navigation Order 2016. A set of specific, simpler regulations apply to aircraft that have a mass of 20 kg or less; these UAS must not fly more than 400 ft above the surface. Users are required to register and pass a theory test to get a flyer ID, and to register to get an operator ID. UAS are subject to the use of frequencies in the 35 MHz band, which is solely dedicated to aeronautical modelling; regulations require that model control equipment must not cause undue interference with other wireless telegraphy equipment. UAS weighing less than 20 kg are required to maintain a direct VLOS.


What the table shows is the complex and fragmented nature of drone regulation globally. History informs us that a sector only becomes truly successful, with greater economies of scale and greater consumer adoption, when there is some harmonisation of the global regulatory framework. Further work is required in this respect.

Part VI Other Enabling Disruptive Technologies


8 Other Disruptive Technologies

This section looks, very briefly, at other emerging disruptive technologies which, although not part of the digital ecosystem, are prominent technologies driving disruption more widely and should not be ignored, given their impact on changing business models. These include: nanotechnologies, which will drive the march towards miniaturisation and enable advances in wearable technology and healthcare more generally; genome analysis, which has the potential to revolutionise how we tackle life-changing diseases through targeted interventions; quantum computing, which will lead to a step change in the power of computing capabilities, enabling further innovation and new applications, as well as posing serious challenges to cyber security; 3D printing, which will revolutionise the manufacturing of products and potentially redefine and redistribute geographic economic clusters; and renewable energy, which has the potential to change how we produce and consume energy, and with it the global order.

8.1 Nanotechnology

Nanotechnology is the term given to those areas of science and engineering where phenomena that take place at dimensions on the nanometre scale are utilised in the design, production and application of materials, structures, devices and systems. In the natural world there are many examples of structures with nanometre dimensions, including essential molecules within the human body and components of foods. Although many technologies have incidentally involved nanoscale structures for many years, it is only in the last decade or two that it has become possible to actively and intentionally modify molecules and structures within this size range. It is this control at the nanometre scale that distinguishes nanotechnology from other areas of technology. Many of the applications at the nanoscale involve new materials which provide radically different physical properties. These include materials in the form of very thin films used in catalysis and electronics, two-dimensional nanotubes and nanowires for optical and magnetic systems, and nanoparticles used in cosmetics, pharmaceuticals and coatings.

Using nanotechnology, materials can be made stronger, lighter, more durable, more reactive, more sieve-like, or better electrical conductors, among many other traits. Many everyday commercial products currently on the market and in daily use rely on nanoscale materials and processes. Nanoscale additives or surface treatments of fabrics provide lightweight ballistic energy deflection in personal body armour. Clear nanoscale films on eyeglasses, computer and camera displays, windows and other surfaces make them anti-reflective, resistant to ultraviolet or infrared light, scratch-resistant, or electrically conductive. Nanoscale materials are beginning to enable washable, durable 'smart fabrics' equipped with flexible nanoscale sensors and electronics, with capabilities for health monitoring, solar energy capture and energy harvesting through movement. Nanotechnology is helping to make lighter yet stronger materials for use in cars, trucks, airplanes, boats and spacecraft, leading to significant fuel savings. Nano-bioengineering of enzymes aims to enable the conversion of cellulose from wood chips and corn stalks into biofuels. It is these advances in nanoscale technologies, when combined with some form of connectivity, that will not only drive the Internet of Things (IOT) even further, but also drive digital disruption further.

At the turn of the century, a typical transistor was 130–250 nanometres in size. In 2014, Intel created a 14 nanometre transistor, IBM created the first seven nanometre transistor in 2015, and Lawrence Berkeley National Lab demonstrated a one nanometre transistor in 2016. Smaller, faster and better transistors may mean that soon a computer's entire memory may be stored on a single tiny chip. Using magnetic random access memory (MRAM), computers will be able to 'boot' almost instantly. Ultra-high definition displays and televisions now on sale use quantum dots to produce more vibrant colours while being more energy efficient. Flexible electronics have been developed using, for example, semiconductor nano-membranes for applications in smart phone and e-reader displays.

Nanotechnology is already broadening the medical tools and therapies available to clinicians. Commercial applications have adapted gold nanoparticles as probes for the detection of targeted sequences of nucleic acids, and gold nanoparticles are also being clinically investigated as potential treatments for cancer and other diseases. Better imaging and diagnostic tools enabled by nanotechnology are paving the way for earlier diagnosis, more individualised treatment options and better therapeutic success rates.

Nanotechnology is finding applications in traditional energy sources and enhancing alternative energy approaches to help meet the world's increasing energy demands. Many scientists are looking into ways to develop clean, affordable and renewable energy sources, along with means to reduce energy consumption and lessen toxicity burdens on the environment. Nanotechnology is improving the efficiency of fuel production through better catalysis. It is also enabling reduced fuel consumption in vehicles and power plants through higher-efficiency combustion and reduced friction.


In addition to the ways that nanotechnology can help improve energy efficiency, there are also many ways that it can help detect and clean up environmental contaminants. Nanotechnology could help meet the need for affordable, clean drinking water through rapid, low-cost detection and treatment of impurities in water. Many airplane cabin and other types of air filters are nanotechnology-based filters which provide mechanical filtration, in which the fibre material creates nanoscale pores that trap particles larger than the size of these pores. Nanotechnology-enabled sensors and solutions are now able to detect and identify chemical or biological agents in the air and soil with much higher sensitivity than ever before. For example, NASA has developed a sensor, as a smartphone extension, that fire-fighters can use to monitor air quality around fires. Nanotechnologies will enable an explosion of IOT sensors and their capabilities.

Like many things in life, however, sometimes things may be too good to be true. Newly identified nano processes and their products may expose humans, and the environment in general, to new health risks, possibly involving quite different mechanisms of interference with the physiology of human and environmental species, due to free nanoparticles generated in nanotechnology processes being released into the environment, or actually delivered directly to individuals. Exposure to nanoparticles having characteristics not previously encountered may well challenge the normal defence mechanisms associated with, for example, human immune and inflammatory systems.1

Governments again need concrete policies and regulations for the development of nanotechnology. Whilst there are clearly significant benefits, including environmental benefits, to be derived from advances in nanotechnologies, the possible side effects need to be better understood and managed. The risks of not doing so may bring advances in nanotechnologies to a grinding halt.

1 See: http://ec.europa.eu/health/scientific_committees/opinions_layman/en/nanotechnologies/l-3/1-introduction.htm

8.2 Quantum Computing

Quantum computing is a type of non-classical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The innovation behind quantum computing lies in the way it takes advantage of certain phenomena that occur at the subatomic level. Knowing the fundamental differences between classical and quantum computing helps one understand (well, sort of) how it works:

• Complex information: in classical computing, a computer runs on bits that have a value of either 0 or 1. Quantum bits or 'qubits' are similar, in that for practical purposes we read them as a value of 0 or 1, but they can also hold much more complex information, or even be negative values;


• Entanglement: in a classical computer, bits are processed sequentially, which is similar to the way a person would solve a mathematical problem by hand. In quantum computation, qubits are entangled together, so changing the state of one qubit influences the state of others regardless of their physical distance. This allows quantum computers to converge on the right answer to a problem very quickly;

• Probabilistic: in classical computing, only specifically defined results are available, inherently limited by algorithm design. Quantum answers are probabilistic, meaning that because of superposition and entanglement, multiple possible answers are considered in a given computation. Problems are run multiple times, giving a sample of possible answers and increasing confidence in the best answer provided (a toy simulation of this sampling behaviour appears below).

The parallel execution and exponential scalability of quantum computers means they excel with problems too complex for a traditional approach, or where traditional algorithms would take too long to find a solution. Research partnerships between large companies and top universities are forming, most notably Google and the University of California-Santa Barbara; Lockheed Martin and the University of Maryland; and Intel and Delft University of Technology. Governments around the world are forging ahead with quantum computing initiatives as well:

• Australia's government: in early 2016, announced an AU$25 million investment over 5 years toward the development of a silicon quantum integrated circuit.2
• The United States: based on a 2016 report from the National Science and Technology Council, recommended significant and sustained investment in quantum information science by engaging with academia, industry and government.3
• The European Commission: planned to launch a €1 billion project in 2018 to support a range of quantum technologies.4

Whilst there is progress, we are far away from quantum computers becoming usable. We are still very much at the research and development phase.

2 American Leadership in Quantum Technology: Joint Hearing Before the Subcommittee on Research and Technology & Subcommittee on Energy. See: https://www.govinfo.gov/content/pkg/CHRG-115hhrg27671/pdf/CHRG-115hhrg27671.pdf
3 National Science and Technology Council Report: Preparing for the Future of Artificial Intelligence. November 20, 2016. See: https://publicintelligence.net/white-house-preparing-artificial-intelligence/
4 European Commission will launch €1 billion quantum technologies flagship. See: https://ec.europa.eu/digital-single-market/en/news/european-commission-will-launch-eu1-billion-quantum-technologies-flagship
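As a classical illustration of the probabilistic behaviour described in the list above, the minimal sketch below samples repeated measurements of an idealised entangled pair. It mimics only the output statistics; a real quantum computer is not secretly flipping classical coins.

```python
# A classical toy that mimics the probabilistic, sampled behaviour of
# quantum measurement, using an idealised entangled pair ('Bell state').
import random
from collections import Counter

def measure_bell_pair():
    """Measuring the entangled state (|00> + |11>)/sqrt(2) yields '00' or
    '11' with equal probability; the two qubits always agree."""
    return random.choice(["00", "11"])

random.seed(0)
counts = Counter(measure_bell_pair() for _ in range(1000))
print(counts)   # roughly 500 '00' and 500 '11'; never '01' or '10'
```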


Whilst quantum computing is seen as a major development that will benefit many sectors, it will also cause a few to take a deep breath. Much of today's digital services, including digital IDs, financial transactions and the mechanisms for keeping information safe and secure, rest on cryptography. The best public key cryptography systems link public and private keys using the factors of a number that is the product of two incredibly large prime numbers. To determine the private key from the public key alone, one would have to figure out the factors of this product of primes. Even if a classical computer tested a million keys a second, it would take many thousands of years to crack. However, if processing power were to greatly increase with advances in quantum computing, it might become possible for an entity exercising such computing power to generate a private key from the corresponding public key. If this becomes possible, many previously secure services will all of a sudden become 'open' and vulnerable. These are known problems, and work has already started to develop cryptographic algorithms that are resistant to quantum computing. The NSA announced in 2015 that it was moving to implement quantum-resistant cryptographic systems; how far we are from achieving this is, however, unclear.
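To make the asymmetry concrete, the minimal sketch below builds a toy 'public modulus' from two small primes and then recovers them by brute force. The primes, sizes and attack shown are illustrative assumptions only; this is not a real cryptosystem.

```python
# Toy illustration of why public-key cryptography resists classical
# attack (not a real cryptosystem; the primes are tiny and the attack
# shown is the naive one).

def trial_factor(n):
    """Brute-force trial division: recover p and q from odd n = p * q."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return None

p, q = 104723, 104729          # two small primes, for illustration only
n = p * q                      # the 'public' modulus
print(trial_factor(n))         # -> (104723, 104729), found instantly

# A real RSA modulus is ~2048 bits, so trial division would need on the
# order of 10**308 candidate divisors. Even at a million divisions per
# second that is hopeless classically; Shor's algorithm on a large
# quantum computer would factor it in polynomial time, hence the push
# for quantum-resistant schemes.
```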

8.3 3D Printing

Invented three decades ago, 3D printing has only recently become a viable technology for global manufacturers to produce critical parts for airplanes, wind turbines, automobiles and other machines. It has become viable due to the reduction in its costs, developments in computer-aided design, better connectivity and cloud computing, and improvements in materials for manufacturing. 3D printing produces objects through a simple process of layering and is sometimes referred to as additive manufacturing, in contrast with traditional (subtractive) manufacturing, which creates parts by removing material from raw stock. As well as global manufacturers, tens of thousands of early adopters are now experimenting with 3D printers or starting mini-manufacturing enterprises.

3D printing can be used in a range of applications, including healthcare, aerospace and construction, to name a few. It is being used by countless companies every day to make prototypes and model parts. It is also being used for final parts, ranging from plastic vents and housings on NASA's next generation moon rover to a metal prosthetic jawbone for the elderly. In the future, it might be used to print out replacement parts for faulty engines on the spot, instead of maintaining stockpiles of them in inventory. The industry was worth just over US$4 billion in 2014,5 with some estimating an average growth rate of twenty-five percent per annum; if that continues, it could be worth close to US$50 billion by 2025.

5 UNCTAD. (2018). Technology and Innovation Report 2018, Harnessing Frontier Technologies for Sustainable Development.


The real societal impact of 3D printing may be even more profound over the long term. The very design of towns, cities and population centres has been largely determined by the industrial revolution. Mass manufacturing required large plants, access to labour and proximity to public transportation links. Populations migrated to these locations over time and communities were built around them. These manufacturing plants created clusters, with critical supply chain partners locating themselves nearby. Schools and universities were created both to educate and to feed these manufacturing plants with the skilled labour required to work in them and mass produce their products. If 3D printing really does become a substitute for mass manufacturing, these large clusters of labour around large plants will no longer be necessary. It would be more cost effective to build regional or local 3D printing facilities, to reduce logistical costs, reduce lead times and become less reliant on pools of labour only available in certain geographies. Not only will the business and operational models of manufacturers change, but so will the need for large industrial towns. This may well result in mass unemployment in certain towns or drive the need for relocation for many workers, at considerable inconvenience. Public and private investment into certain regions and towns, which has traditionally pivoted around industrial centres, may well need to be diverted.

8.4 Genome Editing

Genome editing is a type of genetic engineering in which DNA is inserted, deleted, modified or replaced in the genome of a living organism. Genome editing technologies enable scientists to make changes to DNA, leading to changes in physical traits, such as eye colour, and to reduced risks of disease or birth defects. The genome editing technology CRISPR may develop a market of over US$10 billion by 2027.6 Unlike early genetic engineering techniques that randomly insert genetic material into a host genome, genome editing targets the insertion to specific locations. Its potential for improving human well-being is seen to be immense, from increasing life expectancy to reducing healthcare costs. However, the recent controversy over the use of the CRISPR-Cas9 technique7,8 to edit the genes of twins to help make them resistant to HIV has highlighted the lack of established formal norms in this promising, but potentially risky new technology. Researchers in different parts of the world have the potential to make decisions about experiments that could have

6 Global CRISPR Technology Market 2018–2027: Market is Expected to Reach $10.55 Billion. See: https://www.prnewswire.com/news-releases/global-crispr-technology-market-2018-2027-market-is-expected-to-reach-10-55-billion-300636272.html
7 CRISPR is a family of DNA sequences found within the genomes of prokaryotic organisms such as bacteria and archaea.
8 Cas9 allows for a reliable method of creating a targeted break at a specific location.


global consequences, especially in the event of an error, accident or other unforeseen outcome. In 2015, the USA National Academies of Sciences and Medicine, the Royal Society and the Chinese Academy of Sciences hosted the first International Summit on Human Gene Editing in an attempt to develop some guiding principles through collaborative dialogue amongst academics, national policy makers, religious communities and others.9

8.5 Renewable Energy

According to a report by Allied Market Research,10 the global renewable energy market was worth some US$930 billion in 2017 and was expected to grow to over US$1500 billion by 2025, a CAGR of just over six percent. The World Economic Forum reports that whilst the last decade saw renewable energy sources such as solar power and onshore wind playing a key role in providing energy access, as well as reducing greenhouse gas emissions, growth has been slowing. Global investment in renewable energy peaked in 2017 at approximately US$325 billion, fell by 11.5% to US$288 billion in 2018 and dropped a further 14% in the first half of 2019 compared to the same period in 2018.11 However, the slower growth is attributed mainly to falling costs in solar and wind technology globally and to the change in market conditions with reduced subsidies in many countries. The unit cost of solar photovoltaic (PV) systems and onshore wind power has been declining rapidly. The global weighted average installed cost for solar PV was approximately US$4600/kW in 2010 and had fallen to approximately US$1200/kW by 2018, according to the International Renewable Energy Agency. A decline was also seen in the installed cost of onshore wind power, but at a smaller rate, from around US$1900/kW to US$1500/kW. On a wider geopolitical level, the renewable energy market is likely to reshape global power and foreign policies. The GCC, as a key contributor to oil and gas, has been a pawn in USA foreign policy and has managed to build its economies on the back of the energy sector. This may change as nations find alternative sources of energy. Decentralisation of energy through renewables (solar panels, fuel cells, etc.) may also help many who currently lack access to the national grid. It may also enable

9 International Summit on Human Gene Editing: A Global Discussion. Committee on Science, Technology, and Law; Policy and Global Affairs. See: https://www.nap.edu/read/21913/chapter/1#6
10 Global Renewable Energy Market, opportunities and forecasts 2018–2025. See: https://www.globenewswire.com/news-release/2019/11/04/1939942/0/en/The-global-renewable-energy-market-was-valued-at-928-0-Billion-in-2017-and-is-expected-to-reach-1-512-3-Billion-by-2025-registering-a-CAGR-of-6-1-from-2018-to-2025.html
11 World Economic Forum. (2019). Investment in renewable energy is slowing down. Here's why. See: https://www.weforum.org/agenda/2019/09/global-renewable-energy-investment-slowing-down-worry/


firms to de-risk the real potential for cyber attacks on the national grid. A 2017 survey of North American utility executives found that more than three-quarters believe it likely that the USA grid will be subject to a cyber attack in the next 5 years.12 Renewable energy may help firms and governments mitigate such risks.

12 Accenture. (2017). Outsmarting Grid Security Threats, Accenture Consulting. See: https://www.accenture.com/_acnmedia/PDF-62/Accenture-Outsmarting-Grid-Security-Threats-POV.pdf

Part VII Enterprise Strategies


9 Enterprise Digital Transformation

As digital disruption takes hold and diffuses across sectors, old business models need to be re-examined and organisations need to reinvent themselves or the way they interact with a changing world. As old competitive advantages and what appeared in the past to be strategic resources start to lose favour in a world driven by data and platform businesses, organisations need to re-evaluate the value chains and ecosystems in which they operate, or might soon operate. Organisations need to disrupt or be disrupted. They need to transform into ambidextrous organisations: agile and adaptable enough to compete in an ever-changing world. With greater digitalisation and the further encroachment of digital players in their markets, most traditional intermediaries will lose out; all firms need to add genuine value to remain relevant. Most organisations are aware of digital disruption but unfortunately don't really understand what it means for them and how they should digitally transform. Most are very conservative when it comes to new technology. We are probably unlikely to see much change in the next 2–3 years, but thereafter, significant change and upheaval is likely. This section examines how organisations, primarily existing firms, need to adapt their business models, visions, structures, mindsets and governance frameworks in order to compete in a world driven by digital technology.

9.1 The Need to Metamorphose into Ambidextrous Digital Organisations

Just because you are successful today, or may even dominate your sector, does not mean you will be successful moving forward. Just looking to see how advances in digital technologies can be incorporated into current processes and practices entirely misses the point of digital disruption. As the previous chapters have demonstrated, digital disruption is not about advances in a particular digital technology, but about how all


those technology advances converge to create new markets, new products or new competitors. Factory owners who considered electrification simply a better power source missed the point entirely, and over time they found themselves falling behind those that re-examined their business more holistically. Organisations that thought the Internet was simply about creating a website again missed the point entirely. Those that adopted the Internet more holistically redefined how products are designed, how they are distributed and how they are priced. The same will hold true for this new digital revolution: simply doing more of what you do, slightly better, with a specific digital technology is a guaranteed strategy for losing competitive advantage over time.

In a world of digital ecosystems, as industry boundaries blur, strategy needs a much broader frame of reference. CEOs need a wider lens when assessing would-be competitors or partners. The latter will become even more important as it becomes increasingly difficult for a single organisation to have the digital technologies or the capabilities to exploit them. They will need partners to help them grow. Relying on existing barriers to entry, such as high physical infrastructure costs, regulatory protection or brands, is no longer a robust strategy. User demand will change regulations, companies will find collaborative uses for expensive infrastructure, and the power of brands will be eroded through social validation enabled by social media platforms and other similar technologies. Simply investing in digital technologies to digitalise existing functions and processes is not enough to thrive. Organisations need to start from a blank sheet and look at how they can deliver greater value to customers, examine what customers will demand in the near future, who their real competitors are and how they can maintain their competitive advantage by being smarter, cheaper or faster, or indeed all three. Organisations will need to look seriously at how they can leverage their 'networks' to good effect, as the platform businesses are attempting to do.

Digital technologies will fundamentally alter the supply and demand dynamics in the economy. Ecosystems, not sectors, will define economic activities. In ecosystems, value chains will converge. Barriers to entry will shift from large capital investments to large customer networks as distribution models shift from a single point to multiple nodes. In 1958, the average tenure of companies in the S&P 500 was over 60 years. By 2012, it had fallen to under 20 years.1 Digital transformation will further accelerate the pace of disruption. Everyone has heard the stories of companies felled by digital disruption, such as Kodak and Blockbuster. However, there are a few

1 Sheetz, M. (2017). Technology killing off corporate America. CNBC, 24 August 2017. See: https://www.cnbc.com/2017/08/24/technology-killing-off-corporations-average-lifespan-of-company-under-20-years.html


incumbents that are winning at the digital game, as they start leveraging their resources, customer relationships and global scale.2 DBS Group estimates that by 2020, a telecommunications operator that executes a well-defined digital strategy and fully integrates digitisation with its operations will see a forty percent rise in its enterprise value.3 However, doing this is simply not easy when you have a large bureaucratic organisation that moves at the pace of an elephant rather than a tiger, as the new digital players are able to do.

It can take quite some time for digital operating models to generate economic value that comes anywhere near the value generated by traditional operating models. This explains why executives in traditional firms have a difficult time at first believing that the digital model will ever catch up. But after the digital operating model scales beyond critical mass, the value delivered can be significant and firms operating with digital models can easily overwhelm traditional firms. As value delivered increases for digital operating models, the space left for traditional players shrinks. Whilst many accept the need for change, most simply don't know what to do. Figure 9.1 illustrates the cause and effect relationship driving digital transformation within organisations.

The Economist Intelligence Unit recently found that forty percent of CEOs place digital transformation at the top of the boardroom agenda.4 But there is no uniform way in which CEOs are thinking about this. A key challenge for organisations is how to bring together and leverage these technologies to create meaningful value and a positive return on investment. Another problem is measuring the return on investment from adopting digital technologies. Studies have found that it takes an average of 5–7 years before the full productivity benefits of investment in technologies are visible, reflecting the time and effort required to make the other complementary investments that bring the digital technology to life. The lag reflects the time that it takes for managers and workers to figure out new ways to use the technology.5 Some estimates find that for every dollar of investment in technology, another nine dollars is required for training and business process redesign.6

2 Boston Consulting Group. (2019). The Five Rules of Digital Strategy. See: https://www.bcg.com/publications/2019/five-rules-digital-strategy.aspx
3 Industry Focus, Asian Telecom Sector, DBS Group Research. Equity, 18 Jan 2018.
4 BT and EIU Research, Digital transformation top priority for CEOs, 12 September 2017. See: https://www.prnewswire.com/news-releases/digital-transformation-top-priority-for-ceos-says-new-bt-and-eiu-research-300517891.html
5 Brynjolfsson, E., Hitt, L. (2000). Computing productivity: firm level evidence. See: http://ebusiness.mit.edu/erik/cp.pdf
6 Brynjolfsson, E., Hitt, L., Yang, S. (2002). Intangible Assets: computers and organizational capital. Brookings Papers on Economic Activity, 2002. See: http://ebusiness.mit.edu/research/papers/138_Erik_Intangible_Assets.pdf and Brynjolfsson, E., Saunders, A. (2013). Wired for Innovation: How Information Technology is Reshaping the Economy. Cambridge MA, London: MIT Press, 2013.

Fig. 9.1 Digital transformation. Digital technologies (Internet; mobile; embedded sensors and IoT; connected homes/cars, etc.; big data; cloud; blockchain/smart contracts; platform business models; artificial intelligence; VR/AR; drones), integrated with data, drive changes to business models and connect organisations, people, physical assets, processes, etc. in new ways, enabling firms to rapidly develop new products, services, markets and business models and to meet emerging customer needs. Digital transformation raises fundamental questions (What business are we in? Who/where are our customers? What do they need/want? Who are our competitors? Who are our partners? Where are our organisational boundaries? How should we be structured?) and has impacts on vision, business model, operating model, strategy, governance, organisational structure, technology systems, data capture and analysis, and skills/training.


Successful digital transformation must rest on a foundation of smart digital strategy. And smart digital strategy, like traditional business strategy, is about making wise investment choices to maximise competitive advantage, growth, profit and value, and then implementing with discipline. It is not about implementing the next wave of technologies being pushed by vendors or those making the most headlines. Consider Domino's Pizza. No amount of digital technology will replace a pizza, but the company realised that digital could strengthen its advantage in speed and convenience. Its mobile app streamlined the steps for ordering and receiving a pizza. Whilst Domino's may not be at the top of the list of high-value digital-enabled businesses, if you'd invested in the digital heavyweight Google when both it and Domino's Pizza went public in 2004, you'd have made more money with Domino's.

9.2 New Digitally Driven Operating Models

The operating model is one of the biggest challenges of digital transformation. Successful operating models in the digital era enable speed of both action and decision making, collaboration across functions and with external partners, and effective risk taking. In other words, successful operating models bring about a step change in the organisation's agility. A key design principle for successful digital transformation is to avoid digitising complex legacy operations unless these are core to your competitive advantage. Introducing untested technologies or processes can do more damage than good. Many organisations start at the perimeter of the organisation, revamping customer facing technology and supply chain technologies, and then move on towards the core. Some start with the middle layer, which links the customer with the core legacy systems. However, there is no uniform approach. For one company, digitising the core might be critical to how it delivers customer value or remains competitive, whilst for others just digitising the customer experience may be sufficient. However, looking at these in isolation also misses the point. If digitising the customer experience does not enable the organisation to access detailed customer data and history, or to make the necessary changes to improve customer experience without digitising the core, then you might as well ask what the point of digitising the perimeter is.

A key challenge in driving digital transformation is that the whole organisation is optimised for the current business model. Fundamentally, the organisation itself prohibits breaking out from the old models. Transforming the organisation requires a break from the old model, old people, old metrics and old investment profiles. These transformations typically start off with a separate initiative which is run like a venture, with separate governance processes, separate resources and separate funding provided by the organisation. A good example of this is BP, which has initiated what it calls 'launch-pad'. Launching separate initiatives helps in the beginning, but the long-term goal must be to integrate these 'ventures' back into the organisation so that they become catalysts for wider organisational transformation.


Successful digital transformation uses cross-functional, agile teams with an outward looking mandate to deliver customer value. The aim is to deliver change at a pace and scale that allows them to evolve with their customers' needs and desires. A big bang approach may be too risky to the business; however, being too slow to adapt can be equally dangerous. Organisations face a critical decision: whether to create separate digital groups that allow for rapid progress (but in isolation from the rest of the business), or to strive for a digital infusion approach everywhere and accept that it might sharply slow transformation. Even when separate digital initiatives are launched, they need to be infused back into the wider organisation. Organisations need to capture the benefits of both approaches through coordination without necessarily centralising. Creating cross-functional collaborative teams is called for; however, this is easier said than done. With different departments and individuals being given performance targets that do not generally reward collaboration, it is near enough impossible to bring such teams together.

It is not just about bringing cross-functional teams together, but also about bringing cross-functional data together. In traditional firms, software applications and data are still embedded in individual, largely autonomous and siloed organisational units. IT and data are most often gathered in a distributed and inconsistent fashion, separated and isolated by existing organisational subdivisions and by generations of highly specialised and often incompatible legacy technology. Large firms often have thousands of enterprise applications and IT systems, working with a variety of scattered databases and supporting diverse data models and structures. Fragmented data is virtually impossible to safeguard for privacy and security. If data is not all held in a single centralised repository, then the organisation must have an accurate catalogue of where the data is, with clear guidelines for what to do with it (and how to protect it) and clear standards for how to store it so that it can be used and reused by multiple parties. Whilst the goal is to ensure data is centralised, the company's experimentation capability needs to be highly decentralised. Almost anyone, wherever they are based or whatever function they work within, must be able to launch live experiments, test their hypotheses and use the results to implement meaningful changes to products or services.

In becoming a data driven business, the company must strive to become a different kind of organisation, one accustomed to ongoing transformation. This is not about spinning off ventures or creating an AI department. It is about fundamentally changing the core of the firm by building a data-centric operating architecture supported by an agile organisation that enables ongoing change. The organisation needs to holistically design and instil systems that encourage cross-functional collaboration, nimbleness and risk taking. That will typically require a major overhaul of performance management systems. The desire to break down silos needs to expand beyond the organisational boundaries. As mentioned earlier, rarely do organisations have the talent or expertise required to drive forward with a redesigned digitally anchored organisation. Organisations need to engage with the


wider ecosystem to fill talent, technology and capability gaps. Innovation is needed not only in terms of technology, but in organisational processes and mindsets. Developing an innovation culture means cultivating a culture of core values, one that rewards collaboration, hard work and continuous learning. Study after study tells us that the companies able to innovate effectively are those that share certain characteristics: a high tolerance for risk, agile project management, empowered and trained employees, collaborative cultures, a lack of silos and effective decision making structures. As the 2016 MIT Sloan study on digital transformation found, the one trick that will help is developing an effective digital culture.7 What do I mean by digital culture? That is a good question. To be digital really means to be agile and innovative. You have seen the pace of change in key digital technologies, and the only way to ride this wave is to be agile and innovative. You need an innovation culture, where employees are encouraged to constantly learn and where the organisation and its employees are open minded rather than set in their ways: you need a growth mindset. It is that growth mindset that ensures your employees constantly seek improvements and resist the temptation to be satisfied with the mediocre. Such a digital culture drives employees to seek out and maintain a good understanding of technology trends and how these correlate with your business. In order to instil such a culture, the organisation needs the right leaders: those that live by the above values, those that are bold enough to ask what they are doing wrong and how they can improve, those that develop an environment for people to thrive, and those who are forward thinking and can see around the corner, rather than being blinkered by old frames of reference.

9.3 Reviewing Your Business Model: A Five Step Plan

The first decision for the organisation to examine is whether the company is even in the right industry, given blurring boundaries. In competition policy, economists use what they call the SSNIP test (a 'small but significant non-transitory increase in price'). This is used to define the relevant market, which is the market in which a particular product or service is sold: the intersection of a relevant product market and a relevant geographic market. Economists start from a fairly small candidate market and ask themselves: if there were a small increase in price, say ten percent, would marginal customers move to another supplier that was not within the original stated market? If so, the relevant market is wider than originally thought. That thought process continues, with the concentric circle widening with each iteration. A similar approach is required by organisations, asking themselves what market they are in: not just looking at price, but also at a number of other factors, such as changes to quality, convenience and customer experiences enabled by technology advances. If players from outside of your traditional markets can enter and start serving your customers, your market is likely to be broader than you may have contemplated. A stylised sketch of this iterative test follows.

7 Kane, G., et al. (2016). Aligning the organization for its digital future. MIT Sloan Management Review, 26 July 2016. See: https://sloanreview.mit.edu/projects/aligning-for-digital-future/
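The Python sketch below illustrates the hypothetical monopolist iteration described above. All product names and figures are hypothetical stand-ins; a real SSNIP exercise would rest on estimated demand elasticities and margins:

```python
# Stylised SSNIP iteration. 'switch_rate' is a hypothetical estimate of the
# share of customers who would defect to a product *outside* the candidate
# market after a 10% price rise; all figures are illustrative only.
candidate_market = ["branded cola"]
wider_products = ["other colas", "all soft drinks", "all beverages"]
switch_rate = {"branded cola": 0.30, "other colas": 0.20, "all soft drinks": 0.05}
CRITICAL_LOSS = 0.10  # above this loss of customers, the price rise is unprofitable

# Widen the candidate market until a hypothetical monopolist over it could
# profitably sustain a small but significant non-transitory increase in price.
for product in wider_products:
    if switch_rate[candidate_market[-1]] <= CRITICAL_LOSS:
        break  # few enough customers escape: the relevant market is found
    candidate_market.append(product)

print(candidate_market)  # ['branded cola', 'other colas', 'all soft drinks']
```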


The advent of elastic cloud computing, for instance, will allow many organisations that may previously have been excluded to enter the market. The same technology might also allow you to reallocate resources from infrastructure capex to other areas of the business and to design new products and services. The problem many listed firms face is that investors reward organisations for working the old model rather than investing in digital models. Many firms, including telecommunications operators, oversold their digital prospects and delivered little. These firms now need to regain the trust of investors before they can start investing in digital models at any scale.

The second decision is asking what customers need, or would want if they were aware of it. A deficiency that many traditional organisations suffer from is truly understanding what customers really want or desire. They try to identify these through surveys and focus groups. However, this suffers from a number of deficiencies. Firstly, customer preferences change more frequently than an organisation's ability to survey and respond; most organisations are just not nimble enough. Secondly, most customers don't really know what they want or like until they have tried it. Sometimes organisations need to create a need rather than wait for it. Many digital innovators are doing just that.

The third decision requires a re-evaluation of the competitive landscape. You may find existing competitors are no longer your real competitors. New entrants may become your competitors, and they may come from unexpected places. We're seeing banks get into the travel business, travel agents get into the insurance business, retailers get into the media business, technology businesses get into the transport/travel business and so on. Your competitors are unlikely to be what they used to be.

The fourth decision is how you structure your operating models. The operating model delivers the value promised to customers. Whereas the business model creates a goal for value creation and capture, the operating model is the plan to get it done. As such, the operating model is crucial in shaping the actual value of the firm. Ultimately, the goal of an operating model is to deliver value at scale, to achieve sufficient scope and to respond to changes by engaging in sufficient learning.

The final decision is about identifying those projects that help kick off your digital transformation journey. Organisations should not just start projects for the sake of starting one. The digital transformation journey is a major change management program, and as with all change management programs, it needs to be driven from the top and must demonstrate quick wins to sustain the momentum. If you cannot identify projects that return significant economic value within 1 year, you need to keep looking. If a solution cannot be delivered within 1 year, it will not deliver quick wins and the overall program will soon lose momentum.

How organisations adapt, and the speed with which they do so, depends on a number of factors including their history and legacy, whether they are regulated, whether they have hierarchical structures, the degree to which employees trust management, whether they have a visionary leader, the extent to which they already have modular digital technologies and their inherent culture, to name a few. Take for example many Indian conglomerates: many still have a 'trader' mindset and make decisions based on gut feel, don't have modular digital systems and haven't traditionally collected the full set of data generated within their businesses. These businesses are likely to take longer to digitally transform than others.

9.4 A Vision with Purpose

Westerman, Bonnet and McAfee, in their global survey of 431 executives at 391 companies, found that only forty-two percent said their senior executives had a digital vision. Only thirty-four percent said the vision was shared among senior and middle managers.8 These numbers are surprisingly low given the rapid rate at which digital transformation is reshaping companies and industries. However, it also goes to show the disparity between successful businesses embracing digital transformation and the many others that are being left behind. Their study found that among what they term 'digital masters', eighty-two percent agreed that their senior executives shared a common vision of digital transformation and seventy-one percent said it was shared between senior and middle managers. Digital masters were far more transformative in their visions, with two-thirds agreeing they had a radical vision and eighty-two percent agreeing their vision crossed organisational silos. The so-called 'digital laggards', as you would expect, scored low on these fronts.

I am not advocating creating a digital vision, far from it. Most organisations already have quite useless business visions; having yet another one will provide no useful value. What I mean is having a vision with digital at the core. Vision is what business leaders must give their organisations. A vision of the future is not a decision, nor a strategy; rather, it defines a context or environment of assumptions in which strategies can be developed and decisions made. A vision must consider the needs and expectations of customer groups, suppliers, employees, the community as a whole and, of course, the shareholders. Business leaders will evolve their vision, and changes in the business environment will dynamically influence this business vision. A vision can change and be changed. Those changes in vision create a different environment in which strategies and strategic decisions need to be assessed and reassessed. Without a vision, however, a strategy is aimless and strategic decisions pointless. Competitors can and will inevitably attack a business strategy, and so a business strategy and strategic decisions inevitably have to evolve. By comparison, it is difficult for a competitor to attack a business vision, particularly if the business vision is itself evolving in reaction to the changing environment. Business leaders must first look to their vision, clearly understand and articulate that to their organisations, then judge the value or worth of decisions and strategies in the context of that vision.

8 Westerman, G., Bonnet, D., McAfee, A. (2014). Leading Digital: Turning Technology into Business Transformation. Harvard Business Review Press, 2014.


Digital visions usually take one of three perspectives: re-envisioning the customer experience, re-envisioning operational processes, or combining these approaches to reinvent business models. The right approach is the one that reflects the organisation's capabilities, your customers' needs and the nature of competition in the industry. In crafting your new vision with digital at its core, you need to identify your strategic assets (and they must remain strategic in a digital world). If the new vision does not build on your strategic assets, then there is no sense in trying to implement it. Strategic assets are valuable, rare, inimitable and non-substitutable. Valuable assets are those you can use to exploit opportunities or neutralise threats. Assets must be rare and not available to most competitors. They must be inimitable, otherwise competitors can copy them and beat you at your own game, and they must be non-substitutable, otherwise someone may find a different way to do what you do, but better and at a lower price. One of the most valuable strategic assets is your customer base and the network effects that can be generated from it; old organisations need to start thinking like platform businesses in reassessing where true value comes from. Crafting a well-designed and well-worded vision takes time and effort. It should not just be left to the strategy department to craft; it is an organisation-wide effort, driven from the very top. If you want people to engage with a vision, you need to make it real for them. What does good look like once the vision is realised? How will people know they've achieved it? Why should they care? Great visions include both intent and outcome. Intent is a picture of what needs to change. Outcome is a measurable benefit to the company, its customers, or its employees.

9.5 Agility and Adaptability: The DNA of the Digital Firm

In a fast paced digital world, strategic decisions will need to be made under conditions of substantial uncertainty, particularly when complex strategic objectives must be reformulated in the face of a dynamic, sometimes volatile digital environment. Initial assumptions about the environment and other players may be incorrect or incomplete. The range of factors relevant to these decisions is unlikely to be fully known, at least to any one player in the decision process. And the total range of possible effects (direct, second and third order) of a given strategic decision may be so complex that even the most exacting search misses something. In the past, when companies witnessed rising levels of uncertainty and volatility in their industry, a perfectly rational strategic response was to observe for a little while, letting others incur the costs of experimentation, and then to move as the dust settled. That will not work in fast moving digital markets.

Strategies in most organisations develop primarily in two distinct ways: intended and emergent strategies. By intended strategies, we are talking about strategies that appear out of a formal process, like the annual planning cycle. Emergent strategies are different: they appear as a day-to-day product of the organisation's learning process. It might be through an unexpected development in the marketplace, the experience of developing a new product or just simple customer interactions.


Table 9.1 Traditional versus emergent strategies

Intended strategies (the old world):
• Assumes the future is relatively steady and knowable, so you can set reliable long-term objectives and control the environment sufficiently to meet them
• Focuses on setting goals that will pull an organisation into the desired future
• Divides goals, objectives, roles, and responsibilities into discrete organisational areas
• Works in annual cycles to plan the path for the coming year, set expectations, and evaluate performance
• Focuses on major threats, shifts, or transformations of the whole organisation at the same time
• Driven by insights and needs of leaders (top-down)
• A single set of measures for success of the organisation as a whole
• Establishes a singular description of the current state, desired state, gaps, and strategies to close gaps
• Assumes that the forces and influences on the organisation will remain stable in the foreseeable future
• Establishes priorities

Emergent strategies (the digital world):
• Assumes that the future is unpredictable, so you must work effectively with the circumstances that surround you
• Focuses on defining actions that push the organisation to live within its desired future
• Focuses on individual and group actions toward common goals
• Works in shorter bursts of multiple cycles of planning and evaluation that are as frequent as feasible to encourage adaptive change in response to the shifting environment
• Focuses on individual and small-group actions that make a difference in the 'here and now' with an eye toward the preferred future
• Reflects the insights and knowledge of the professionals and service-delivery personnel (bottom-up)
• Individuals and groups establish measures that are significant to their work
• Incorporates diverse perspectives and strategies that encourage on-going learning through continuous improvement in all parts of the organisation
• Assumes that the environment inside and outside the organisation will evolve continually
• Articulates actions required to accomplish priorities

More and more organisations (though still not enough) are realising that they can't predict everything during the annual planning process and are relying more on organising a series of planned experiments. In a fast paced, digitally disruptive world, annual strategy reviews need to be compressed to a quarterly time frame, with real-time refinements and sprints to respond to triggering events. Table 9.1 describes some of the differences in thinking and acting that these two extreme approaches require. The leadership team must learn to live with increasing levels of uncertainty, complexity and ambiguity as the pace of change accelerates with digital disruption. Determination of cause and effect relationships is made more difficult by uncertainty about the time lag of effects in complex systems. Organisations must expect to encounter ambiguity as they transition to more complex situations in the face of digital disruption, and more ambiguity leads to more resistance. Leaders will be required to do more consensus building as a normal part of their leadership roles. The


consensus building process should be designed to uncover information not previously held, perspectives not previously understood and knowledge not previously applied to the solution-generating task. This ensures not only that the intelligence that resides within the organisation is captured, but that organisation-wide buy-in will be better achieved.

9.6 Good Governance at the Heart of Digital Transformation

Organisational transformation involves making important, sometimes disruptive changes in how organisations get things done. Core organisational processes will need to be redesigned, new technology tools will replace old ones, new skills will be developed and new ways of working introduced. All of this requires careful management and good governance. Despite all the interest and the large and growing literature, the concept of governance remains elusive. It can be very challenging to explain what governance is, to identify examples of good governance and to articulate how governance improves organisational performance; however, its role becomes vital in managing an organisation in the face of significant uncertainty and radical change. Table 9.2 illustrates what I call the seven A's of high performance organisations, taken from my earlier book Enterprise Governance, published by Springer. A more strategic and integrated approach to governance can transform an organisation: integrating the vision and mission with strategic objectives; with policies, processes, controls, decisions, issues and risks; and linking these ultimately to actions and tasks that are carried out by stakeholders across the enterprise. A more integrated enterprise governance framework can create a fully aligned and agile business, where governance is no longer a cost centre but a value creation centre. A well-developed governance structure and system will lead to better decision making and better decisions.

Table 9.2 Pillars of high performance organisationsa
• Accountability: boards, teams and individuals in terms of control, risk and performance
• Awareness: listening to signals which suggest the need to adjust
• Agility: strategy, implementation plans, workforce, delegation
• Adaptability: pliable structures, including command and control and planning
• Alignment: to vision and strategy and across departments, functions and stakeholders
• Action: concrete visible action and tracking
• Achievement: collecting the right dots and connecting them (objective and benefit realisation)

a Taken from my other book: Enterprise Governance: Driving Enterprise Performance Through Strategic Alignment. Springer. See: https://www.springer.com/gp/book/9783642385889


• Joined up: where decision makers and the stakeholders responsible for implementation are fully aligned and informed
• Informed: where the wisdom of market and organisational knowledge has been fully harnessed
• Bought into: by consulting and engaging with all stakeholders that need to be considered
• Clear: where information is presented visually, giving stakeholders instant insight into the decision choices and optimal outcomes
• Holistic: where all realistic options have been considered
• Rigorous: where an appropriate level of analysis has been undertaken and documented
• Transparent: where decisions can be defended in the face of third party scrutiny
• Rational: where personal bias by individual stakeholders has been eliminated or harnessed effectively
• Compliant: where the chosen strategy demonstrates compliance with applicable laws, regulations and requirements
• Risk mitigated: where all risks have been identified and risk mitigation actions defined
• Manageable: where actions can be distributed amongst stakeholders, who can then be held to account for those actions
• Feasible: where options presented can be demonstrated to be implementable

Fig. 9.2 Joined up governance

The importance of instilling a formal governance structure and systems to support good governance becomes even greater as organisations attempt to 'invert' and seek value from outside of the organisation and develop an array of partnerships. Good governance needs to become good joined-up governance. Figure 9.2 illustrates the attributes of a well-instituted governance structure within and between partner organisations. One of the most important elements of governance in driving the organisation when faced with digital disruption is to ensure alignment throughout the organisation. In deciding the right governance model, you need to decide what resources and initiatives should be coordinated and shared, and then figure out how to make that happen in your company.

9.7 Leading the Transformation

Many CEOs are choosing to lead the digital transformation themselves. That is particularly true in industries that are being heavily disrupted, because in those cases it is a do-or-die situation. The other reason why CEOs often need to be personally involved is that digital disruption impacts many of the different functions of an organisation. The challenge for CEOs is carving out enough time to do this whilst running the business and meeting quarterly targets.


More recently, the chief marketing officer has frequently been seen as the locus of digital transformation in many large enterprises. A 2016 survey9 found that at thirty-four percent of large enterprises, ownership of digital transformation resided with the Chief Marketing Officer (CMO). This is likely because other business operations, such as sales, HR and finance, have already digitalised with tools like CRM and ERP; marketing is one of the last support functions to digitalise. However, that does not represent an organisation-wide transformation. One solution that is gathering momentum is for the CEO to delegate the digital agenda to a Chief Digital Officer (CDO). The CDO's job is to effectively take the governance mantle and turn it into something real. His or her job is to create synergies and drive transformation, being responsible for creating a unifying digital vision, energising the company around digital possibilities, coordinating digital activities, helping rethink products and processes for the digital age and sometimes providing critical tools or resources. The CDO needs to bring value by understanding the organisation's technology footprint and its digital footprint, understanding the art of the possible and integrating this with business need. Whereas a Chief Information Officer's job was very much about deploying technologies to various parts of the business, the CDO role is more of a leadership role, anticipating the needs of the organisation and the impact various digital technologies and data can have on the business. Success in digital transformation is as much about what you don't do as what you do. It can become too easy to get distracted. The role of the CDO is to bring a focused approach to digital transformation to deliver real business value. Success only comes when leaders are able to earn employees' trust, engage them and mobilise them into action. To mobilise the organisation and achieve high impact, the CDO and the broader leadership team need to ask themselves a number of probing questions:

• Signalling: are you making the ambitions and the benefits of digital transformation sufficiently clear to the organisation and the individual business units? Whether we admit it or not, most organisations unfortunately still operate in silos. Unless you can articulate the real value that digital transformation has for the operating efficiency or capability of an individual business unit, it will not be wholeheartedly adopted. If you cannot do this for each individual business unit, you need to demonstrate the value it will have for the organisation as a whole and what might happen if digital transformation is not successful;
• Earning the right to engage: are you building sufficient momentum with employees by co-creating solutions and involving those who will have to make the change happen, and are the leaders walking the talk? Rarely do top-down edicts deliver success in an organisation. Where employees feel they are not

9 Solis, B. (2016). Who Owns Digital Transformation? According To A New Survey, It's Not The CIO. Forbes, 17 October 2016. See: https://www.forbes.com/sites/briansolis/2016/10/17/who-owns-digital-transformation-according-to-a-new-survey-its-the-cmo/#33d8db2a67b5


involved and solutions are being imposed, they will do their best to avoid the solution and undermine the initiative. The old adages of 'not made here' and 'why change it if it ain't broke' are unfortunately alive and kicking in most organisations;
• Setting new behaviours: are you actively encouraging a cultural shift by using digital technologies to change the way people work, collaborate and communicate? Are you breaking down silos and, more importantly, making changes to the organisation's decision making processes and performance management systems such that they encourage employees to share and coordinate, rather than hoard, information? Many organisations will do the easy bit: they will deploy organisation-wide collaboration tools, and some will deploy multiple collaboration tools in an effort to show progress. Most will be treated like a new toy by employees: they will be used for a few weeks, but interest will wear off. If collaboration does not make a real difference to the way employees work (and here I mean saving them time or effort), they will rarely adopt it. Far too many ignore the tougher elements, such as making changes to the organisation's decision making processes. These remain top-down and highly controlled, whilst management sing the virtues of collaboration and agility.

9.8 A Roadmap for Digital Transformation

Many companies have come to realise that before they can create wholesale change within their organisation, they have to find an entry point that will begin moving the needle. This is done by starting to build a roadmap that leverages existing assets and capabilities. Once your initial focus is clear, you can start designing your transformation roadmap. This involves: identifying which investments and activities are necessary (and easiest) to close the gap to achieving the (new) organisational vision; identifying the specific initiatives required to close the gap; developing appropriate timing/scheduling of each initiative; and determining the dependencies between these and what organisational resources are available. The last of these tasks is the most difficult, yet most important. You need to build the following essential insights into your analysis and roadmap:


• Skills and capabilities: do you know what your skills gaps are, and do you have a plan for ramping up digital competences within your organisation (training, hiring, acquiring, partnering, incubating people and skills)?
• Modular IT platforms and agile technology: do you have the ability within your technology stack to improve your IT systems in an agile way in order to digitise them? Customer facing technology needs to be modular and flexible enough to move quickly, whilst the core IT infrastructure may need to be designed for the stability and resiliency required to manage transactions and support systems;
• Clean data: when organisations attempt to become digitally savvy, their first problem is to ensure they are gathering all the data generated within their organisation, from their customers and their supply chain partners. The data that is gathered is usually far from useable. Some estimates claim up to seventy-five percent of effort on digital programs is spent cleansing this data to make it useable (a flavour of this groundwork is sketched after this list). You need to be realistic in terms of the effort and time required just to get the house in order. During this period, there will be very little visible output to demonstrate progress, but this is a foundational layer which will drive future value;
• Incentive management: do you have appropriate incentives for employees, customers, suppliers etc. to share data? Do you understand how different participants (customers, suppliers, internal business units) see the problem differently and are motivated by different values?
• Aligning reward structures: are your incentives, rewards and recognition aligned to your transformation objectives? Digital technologies are also enabling new forms of non-financial rewards, such as gamification. Does the reward system and culture motivate your employees to come out of their comfort zones and try new things?
• Measuring, monitoring and iterating: do you have a management process that allows you to measure and monitor the progress of your digital transformation journey in near real-time, and do you have enough visibility to adapt as needed?
• Focused resources: do you have the ability to pull people out of the short-term pressures that keep them too narrowly focused on incremental efforts and refocus them on longer-term objectives? It is better to bring in people who have curiosity and drive to work on digital projects than those believed to have the right skills but not the attitude;
• Quick wins: have you identified outcomes that can be achieved relatively quickly? The ability to have a swift impact with relatively modest effort provides strong reinforcement to participants that these efforts are worth pursuing and motivates them to seek even more challenging goals. Do you have mechanisms in place to share these stories of quick wins with a broader group of people?

9.9 Learn Fast, Act Faster

As customer expectations rise, inefficient processes or flows will only increase the rate at which customers get frustrated and leave you for better managed organisations. It is no good just relying on market research as an input to the design of your products or services. Customers will not know what frustrates them until they see a product or service or have experienced your processes. It is important to use a test-and-learn approach, using agile methodologies to adapt your products and services based on constant real customer feedback, rather than just focus groups or uniform market research. An example of an agile development company is Tesla. When the influential USA product reviewer Consumer Reports declined to recommend Tesla's Model 3 due to the car's long stopping distance after its first test drive, Elon


Musk vowed to get the problem fixed within days. One week later, Tesla had sent out an over-the-air software update to all Model 3 vehicles that improved their braking distance by almost 20 feet. Consumer Reports called the turnaround ‘unprecedented’ and quickly reversed its rating on the car.

9.10 Concluding Remarks

It is quite normal for such books to include case studies of organisations that are doing digital transformation well. I contemplated doing this, but concluded it would provide little value. Each organisation is different, their challenges are unique and the most appropriate response is very much personal to them. Trying to replicate what others have done may simply be impossible and, at worst, disastrous. I am afraid the digital transformation journey is different and personal to each organisation. It requires significant thought, analysis, effort and persistence. It is a long hard journey that almost all organisations will need to pursue; many will succeed, some will however fail. Such is the nature of transformation. I hope this chapter provides at least a canvas on which to examine what you might need to do to adapt.

Part VIII Policy Responses


10 Global Policy Responses: A Snapshot

From a policy maker and regulator’s point of view, the emergence of the digital economy changes the landscape. As industries, markets and pricing strategies are transformed, the traditional industry-specific approach to policy setting will increasingly fail to enable expected economic growth and social development outcomes. Is Uber a taxi company or a software company? Is Alipay a bank or non-bank financial institution, or is it a technology (or e-commerce) company? Moreover, what is a ‘monopoly’ and what is adequate market competition when looking at platform businesses? Market definitions that were vital to regulators when identifying dominance are increasingly failing to work effectively. This section briefly reviews global policy responses from governments to date. The review is purposely brief given the rapid changes taking place in terms of policy responses from governments. For readers that would like to keep up to date on policy developments in the Artificial Intelligence (AI) space, at the time of writing, the OECD announced the launch of its AI Policy Observatory, which it claims will collate real-time information on policies and innovations in AI from around the world.

10.1 Shift in Regulatory Focus

As is often the case with emerging technologies, technological advances may be faster than the frameworks that seek to regulate them. Perhaps, instead of playing catch-up, policy and regulation should try to anticipate the possible yet unknown implications of technology. Either approach (being proactive or waiting) may lead to too much or too little regulation. Various governments talk about new digital technologies such as 5G, AI and autonomous vehicles with optimism, espousing their potential benefits. Many governments are announcing significant sums they will invest in these new digital technologies. In September 2018, China announced that it will invest close to US$15 billion in the digital economy over the next five years [1] and, in June 2018, Europe committed some US$10 billion to the first ever Digital Future Program [2]. However, holistic cross-sector policies and regulations to nurture these digital technologies, or limit the potential for adverse consequences, are still in the early stages.

10.2 Government Led Versus Private Sector Led Approaches

The EU and the USA take different approaches to who drives policy and regulatory change for the digital economy. The European Commission (EC) often takes the lead in regulating emerging digital technologies, with Brussels adopting a government- and regulator-led approach to digital development. This can be seen, for example, in the areas of privacy and data protection, net neutrality and the regulation of the search engine market. The EU takes the position that regulating the digital economy needs to happen through frameworks set and developed by national governments. This is evident in the EC White Paper on artificial intelligence published in February 2020. By contrast, in the USA it is the private sector that leads much of the digital regulatory agenda, with the government seeking to create an enabling environment for private sector initiatives. For example, the majority of sharing economy apps were first launched and allowed to flourish in the USA, where these app-based companies and regulators work together to balance innovation with the public interest [3]. Whether the USA approach to digital technology policy is genuinely more enabling and nimble, responding to signals from the market and using policy and regulation as tools to foster innovation and market growth (as opposed to stability), or whether this is actually due to the clustering effects of Silicon Valley rather than government policy and regulation, is open to question.

China and Japan present interestingly different attitudes and approaches to how government and the private sector can work together to enable, empower or oversee and manage the pace of innovation. The Chinese model appears to be modelled on the USA, whilst the Japanese model has similarities to the EU model. In China, innovation in the digital payments industry has flourished as key public and private sector actors rapidly and exponentially grow digital payment ecosystems [4]. The growth of the digital payment ecosystem was enabled by the government’s ‘wait and see’ approach to regulation, which allows for innovation by industry participants within informal limits, under careful supervision by the relevant regulators. By contrast, Japan has a more risk-averse and regulation-centric approach to managing technological change, coupled with strong government involvement and a top-down approach. This has resulted in a mix of coexisting legacy technology and advanced high-tech solutions [5]. The following section provides a high-level snapshot of policy actions initiated by various countries to manage, control or nurture their digital ecosystems. Given the pace of change in digital technologies and the policies that follow, there may well be further updates by the time this book is published, so please view these as snapshots in time.

[1] Xia, L. (2018). China to invest multi-billion dollars to develop digital economy. See: http://www.xinhuanet.com/english/2018-09/19/c_137478730.htm
[2] European Union. (2018). EU budget: Commission proposes €9.2 billion investment in first ever digital programme. See: http://www.europa.eu/rapid/press-release_IP-18-4043_en.pdf
[3] Brookings India. (2017). The Current and Future State of the Sharing Economy. Impact Series No. 032017. See: https://www.brookings.edu/wp-content/uploads/2016/12/sharingeconomy_032017final.pdf
[4] Better than Cash Alliance. (April 2017). Social Networks, e-Commerce Platforms, and the Growth of Digital Payment Ecosystems in China: What It Means for Other Countries. See: https://btca-prod.s3.amazonaws.com/documents/284/english_attachments/ChinaReportApril2017Highlights.pdf?1492606527
[5] McKinsey. (2015). The Future of Japan: Reigniting Productivity and Growth.

10.3 Europe

• The EU has commenced a series of policy and regulatory reforms for the digital market. In January 2016 it announced the Digital Single Market Act, followed by the GDPR, which became effective in May 2018. In September 2018, a code of practice was issued by online platforms, social networks and the advertising industry to curb what the EC calls the spread of disinformation. Reform of the telecommunication sector has also commenced, with mandated wholesale products for fixed networks, spectrum for wireless broadband, fibre rollout and 5G. In the area of AI, the EC has established a high-level expert group on AI to make recommendations on policy and investment and to set guidelines on the ethical development of AI, considering principles such as data protection and transparency. This forms part of the EC’s proposed three-pronged strategy to increase public and private investment in AI, prepare for socio-economic changes and ensure an appropriate ethical and legal framework. In December 2018 it issued its ‘Coordinated Plan on the Development and Use of Artificial Intelligence Made in Europe’, which states that ‘the ambition is for Europe to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI, promoting a human-centric approach in the global context’. The EC report ‘Artificial Intelligence: The European Perspective’, published in the same month, provides an overview of European policies, AI challenges and AI opportunities, and aims to support the development of European action in the global AI context. The EC then published its White Paper on AI in February 2020.


• In 2017, the EU passed the resolution on ‘Civil Law Rules on Robotics’ [6], setting a global precedent by proposing a comprehensive and tailored legislative approach in this field. It emphasises the need to define the legal status of cyber-physical systems and to address the issues surrounding liability in cases of accidents caused by robots and AI-driven technology, also highlighting other problems such as privacy and data protection, ethics, safety and standardisation, as well as the restructuring of the workplace.
• The EU resolution on ‘Fundamental rights implications of Big Data’ [7] pays particular attention to the role of algorithms and of other analytical tools, and raises concerns regarding the opacity of automated decision-making and its impact on privacy, data protection, media freedom and pluralism, non-discrimination and justice. The European Parliament supported the proposal to establish a digital clearing house to enact a coordinated, holistic regulatory approach between data protection, competition and consumer protection bodies [8].
• In the GDPR, the EU has enshrined an explicit right for individuals to challenge decisions made based solely on automated processing of personal data. The GDPR addresses the right to privacy, the need for transparency, information and control by citizens over how their personal information is to be used, and the need for explicit user consent.
• The EU Horizon 2020 (2014–2020) project provides European funding for the creation of a platform to host the European AI ecosystem that allows available knowledge, algorithms, tools and resources to be combined. Scientists have drawn up ambitious plans for a multinational European institute devoted to world-class AI research, designed to nurture and retain top talent in Europe. This institute is to be called ‘ELLIS’ (the European Laboratory for Learning and Intelligent Systems).
• In France, the government published its AI Plan on 21 March 2017, which included about fifty recommendations. This was followed by a report, ‘For a Meaningful Artificial Intelligence: Towards a French and European Strategy’, delivered in March 2018, promoting better access to data with a focus on health, transport, ecology and defence. In June 2017, the world’s largest incubator for start-ups opened in Paris: the Station F campus covers 34,000 square metres, can accommodate up to 3000 workstations available to 1000 start-ups and directly integrates venture funds and other services.
• In the UK, the Digital Economy Act 2017 and the UK Digital Strategy 2017 form the foundations for all things digital. The ‘Growing AI in the UK’ report recommended establishing the Alan Turing Institute as a national institute for AI and data science, to work together with other public research entities or councils to coordinate demand for computing capacity for AI research and to negotiate on behalf of the UK research community.
• Germany’s Ethics Commission on Automated Driving is addressing the need for ‘people-centered’ AI development, proposing to observe the impact of automated driving on employment and to design a national strategy aimed at retraining to reduce the negative effect of AI on the workforce. It has published principles dealing extensively with AI accountability and the allocation of liability, and has acknowledged the need to avoid bias and discrimination when AI is applied to public decision-making, stressing the importance of the protection of citizens and citizens’ rights in the public use of AI.
• The Nordic-Baltic states made a joint statement on AI collaboration in May 2018 to enhance access to data for AI while developing ethical and transparent guidelines, standards, principles and values, enabling interoperability, privacy, security and trust. The signatories stated a wish to avoid any unnecessary regulation that could get in the way of this fast-developing field.

[6] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). See: http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
[7] European Parliament resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law enforcement (2016/2225(INI)). See: http://www.europarl.europa.eu/doceo/document/TA-8-2017-0076_EN.html
[8] European Data Protection Supervisor. Opinion 8/2016 on coherent enforcement of fundamental rights in the age of big data. See: https://edps.europa.eu/sites/edp/files/publication/16-09-23_bigdata_opinion_en.pdf

10.4 USA

• The White House, acknowledging the growing role of AI for the future, released three reports in 2016: ‘Artificial Intelligence, Automation and the Economy’; ‘Preparing for the Future of Artificial Intelligence’; and ‘The National Artificial Intelligence Research and Development Strategic Plan’. In May 2018, the establishment of a select committee on AI was announced to advise the White House on inter-agency AI R&D priorities and to support the government in achieving its goal of maintaining USA leadership in AI.
• In September 2017, a Presidential Memorandum was signed which prioritised high-quality science, technology, engineering and maths (STEM) education, with a particular focus on computer science education.
• At the May 2018 Summit on AI for American Industry, the government announced its objective to enable the creation of new American industries by removing regulatory barriers to the deployment of AI-powered technologies. Other recent USA initiatives include an update to the 2016 ‘Federal Automated Vehicles Policy’; various strategies in the 2016 ‘Big Data Plan’ devoted to open innovation; and the 2016 report ‘Preparing the Future of AI’, which included an open data for AI initiative. In early 2020, the White House released its ten principles for government agencies to adhere to when proposing new AI regulations for the private sector.

10.5 Asia

• In China, the State Council issued the ‘Next Generation AI Development Plan’ in 2017, setting the goal for the country to become the world’s primary AI innovation centre by 2030, with the output of AI industries surpassing RMB 1 trillion. This is supplemented by local government policies designed to promote different regions and provide incentives for AI companies to base themselves in the respective provinces. In December 2017, the Ministry of Industry and Information Technology released a ‘Three-Year Action Plan to Promote the Development of New-Generation AI Industry’. The plan proposes strengthening research on the framework of AI standards, in cooperation with the world’s top universities and public research organisations.
• In India, the 2018 ‘Report of the Artificial Intelligence Task Force’ focuses on public research, including the funding of an inter-ministerial ‘National Artificial Intelligence Mission’ to coordinate AI-related activities. In March 2018, India also launched a plan for enabling policies for socially relevant projects, especially a data policy covering ownership, sharing rights and usage, as well as tax incentives for income generated through the adoption of AI technologies and applications. This was followed by a discussion paper, ‘National Strategy for AI’, in June 2018, recommending a data protection framework, sectoral regulatory guidelines and the creation of open platforms for learning.
• In Japan, the ‘Japan Revitalisation Strategy’ has engaged the AI Technology Strategy Council to draw up a roadmap defining objectives for R&D related to AI technologies and their commercialisation, conducted through cooperation between the government, industry and academia. The Ethics Committee of the Japanese Society for Artificial Intelligence (JSAI) was established in 2014 to explore the relationship between AI research/technology and society; it published the JSAI Ethical Guidelines in 2017.
• In South Korea, the Fourth Industrial Revolution Commission, which is directly answerable to the President, developed an AI R&D strategy to train 5000 AI personnel over the following five years with a fund of US$2 billion. The Commission also plans to establish six AI graduate schools from 2019 to 2022 and to nurture 1400 students by strengthening support for AI research at existing university research centres. The Regulatory Reform Plan raises questions about ethics and trust, and concerns about the possible unethical use of AI; it established an ethics charter with respect to intelligence information technology and a study of standards and procedures for data collection and algorithm development. In March 2018, the government announced that the Ministry of Science and ICT and the National IT Promotion Agency are to promote the development of AI and big data, notably through the application of open software to traditional industries and the development of application software for open operating systems. Financial support is to be given to software companies to commercialise services and products related to AI and other core technologies, and support given to the opening of source code by companies with IP rights.


• In Singapore, ‘The Smart Nation’ initiative, first announced by Prime Minister Lee Hsien Loong in 2014, aims to make Singapore ‘an outstanding city in the world… for people to live, work and play in, where the human spirit flourishes’. Building a smart nation is a whole-of-nation effort comprising three pillars: digital government, digital economy and digital society. Apart from the Smart Nation initiative, there are two others: the e-Government action plan and Infocomm Media 2025. Recent innovations include the national digital identity project, which is due to be completed by 2020. Other government initiatives include the launch of a unified QR code payment system; legislation and infrastructure for autonomous vehicles; and mandating 5G standalone networks ahead of most countries. The government is also pushing AI: it has developed a dedicated data science consortium and will invest some US$110 million in industry research.

10.6 Middle East

• In the UAE, in October 2017, the ruler of Dubai announced the UAE Strategy for AI as a major part of the country’s centennial 2071 objectives. This strategy aims to make the UAE first in the world for AI investment across various sectors. The first Minister of State for AI was also appointed. There are plans to replace immigration officers at airports with an AI system by 2020, and the Ministry of AI is working with other ministries to include AI in the national curriculum. In January 2018, the government announced its plan to train 500 Emirati men and women in AI. Separately, the Smart Dubai strategic plan is transforming Dubai into a smart city.
• In Saudi Arabia, the megacity project known as the ‘King Abdullah Economic City’ is being engineered to accommodate autonomous vehicles and IoT, and is expected to be finished by 2020. Sophia, a humanoid robot designed by the company Hanson Robotics, became a full citizen of Saudi Arabia in October 2017.
• In Israel, the government has authorised a five-year programme worth about US$66 million to promote smart transportation. In early 2017, the government passed a resolution to implement a national programme for increasing the number of skilled personnel in the high-tech industry. Well-known universities, including the Open University of Israel, and private platforms are offering AI training.