Software Networks: Virtualization, SDN, 5G, and Security [2nd ed.] 9781119694724, 1119694728, 9781786304582



Software Networks

Revised and Updated 2nd Edition
Advanced Networks Set
coordinated by Guy Pujolle

Volume 1

Software Networks: Virtualization, SDN, 5G and Security

Guy Pujolle

First edition published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc., © ISTE Press Ltd 2015. This edition published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address: ISTE Ltd 27-37 St George’s Road London SW19 4EU UK

John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2020
The rights of Guy Pujolle to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2019950464
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-458-2

Contents

Introduction

Chapter 1. Virtualization
1.1. Software networks
1.2. Hypervisors and containers
1.3. Kubernetes
1.4. Software networks
1.5. Virtual devices
1.6. Conclusion

Chapter 2. SDN (Software-Defined Networking)
2.1. The objective
2.2. The ONF architecture
2.3. NFV (Network Functions Virtualization)
2.4. OPNFV
2.5. Southbound interface
2.6. The controller
2.7. Northbound interface
2.8. Application layer
2.9. Urbanization
2.10. Conclusion

Chapter 3. Fabric, SD-WAN, vCPE, vRAN, vEPC
3.1. Fabrics control
3.2. NSX and VMware company
3.2.1. CISCO ACI (Application Centric Infrastructure)
3.2.2. OpenContrail and Juniper
3.2.3. Brocade
3.2.4. Nokia's SDN architecture
3.3. SD-WAN
3.4. vCPE
3.5. vRAN
3.6. vEPC

Chapter 4. Open Source Software for Networks
4.1. Open source software
4.2. Open Compute Project (OCP)
4.3. OPNFV
4.4. ONAP (Open Network Automation Protocol)
4.5. Open vSwitch
4.6. OpenDaylight platform
4.7. FD.io
4.8. PNDA
4.9. SNAS

Chapter 5. MEC
5.1. eNodeB and gNodeB virtualization
5.2. C-RAN

Chapter 6. Fog Networking
6.1. Fog architectures
6.2. Fog controllers
6.3. Fog and the Internet of Things
6.4. Conclusion on the Fog solution

Chapter 7. Skin Networking
7.1. Skin networking architecture
7.2. Virtual access points
7.3. Software LANs
7.4. Participatory Internet
7.5. Conclusion

Chapter 8. Software Network Automation
8.1. Automation of the implementation of software networks
8.2. Management of a complex environment
8.3. Multi-agent systems
8.4. Reactive agent systems
8.5. Active, programmable and autonomous networks
8.6. Autonomic networks
8.7. Conclusion

Chapter 9. New-generation Protocols
9.1. OpenFlow
9.2. VXLAN
9.3. NVGRE
9.4. MEF Ethernet
9.5. Carrier-Grade Ethernet
9.6. TRILL (Transparent Interconnection of a Lot of Links)
9.7. LISP (Locator/Identifier Separation Protocol)
9.8. Conclusion

Chapter 10. Mobile Cloud Networking, the Mobile Cloud and Mobility Control
10.1. Mobile Cloud Networking
10.2. Mobile Cloud
10.3. Mobility control
10.4. Mobility protocols
10.4.1. Mobile IP or MIP
10.4.2. Solutions for micromobility
10.5. Multihoming
10.6. Network-level multihoming
10.6.1. HIP (Host Identity Protocol)
10.6.2. SHIM6 (Level 3 Multihoming Shim Protocol for IPv6)
10.6.3. mCoA (Multiple Care-of-Addresses) in Mobile IPv6
10.7. Transport-level multihoming
10.7.1. SCTP (Stream Control Transmission Protocol)
10.7.2. CMT (Concurrent Multipath Transfer)
10.7.3. MPTCP (Multipath TCP)
10.8. Conclusion

Chapter 11. Wi-Fi and 5G
11.1. 3GPP and IEEE
11.2. New-generation Wi-Fi
11.2.1. Wi-Fi 5 (IEEE 802.11ac)
11.2.2. IEEE 802.11ad
11.2.3. IEEE 802.11af
11.2.4. Halow (IEEE 802.11ah)
11.2.5. Wi-Fi 6 (IEEE 802.11ax) and super WiGig (IEEE 802.11ay)
11.3. Small cells
11.3.1. Femtocells
11.3.2. Hotspots
11.3.3. Wi-Fi Passpoint
11.3.4. Virtualization of Wi-Fi and HNB
11.3.5. Backhaul networks
11.4. Software radio and radio virtual machine
11.5. 5G
11.5.1. 5G radio
11.5.2. The core network
11.5.3. C-RAN

Chapter 12. The Internet of Things
12.1. Sensor networks
12.2. RFID
12.3. NFC (Near-Field Communication)
12.4. NFC contactless payment
12.5. HIP (Host Identity Protocol)
12.6. Healthcare Internet
12.7. Case study: the smart city
12.8. Conclusion

Chapter 13. Vehicular Networks
13.1. 5G
13.2. 5G standardization
13.2.1. 5G vehicular networks
13.2.2. Technological presentation of C-V2X
13.3. VLC
13.4. Conclusion

Chapter 14. Tactile Internet
14.1. Tactile internet applications
14.2. Functionalities required for the tactile internet
14.3. Technical specifications for 5G
14.4. Tactile internet in Industry 4.0
14.5. Conclusion on tactile internet

Chapter 15. Security
15.1. Secure element
15.2. Secure elements-based solution
15.2.1. Virtual secure elements
15.2.2. The TEE (Trusted Execution Environment)
15.2.3. TSM
15.2.4. Solution without a TSM
15.2.5. HCE
15.2.6. Securing solutions
15.3. The blockchain
15.4. Conclusion

Chapter 16. Concretization and Morphware Networks
16.1. Accelerators
16.2. A reconfigurable microprocessor
16.3. Morphware networks
16.4. Conclusion

Conclusion

References

Index

Introduction

Currently, networking technology is experiencing its third major wave of revolution. The first was the move from circuit-switched mode to packet-switched mode, the second from hardwired to wireless mode, and the third, which we will examine in this book, is the move from hardware to software. Let us briefly examine these three revolutions, before focusing more particularly on the third, which will be studied in detail throughout this book.

I.1. The first two revolutions

A circuit is a collection of hardware and software elements allocated to two users – one at each end of the circuit. The resources of that circuit belong exclusively to those two users; nobody else can use them. This mode has been used, in particular, in the public switched telephone network (PSTN): telephone voice communication is a continuous application for which circuits are very appropriate.

A major change in traffic patterns brought about the first great revolution in the world of networks, pertaining to asynchronous and non-uniform applications. The data transported for these applications make only very incomplete use of circuits, but are well suited to packet-switched mode. When a message needs to be sent from a transmitter to a receiver, the data for transmission are grouped together in one or more packets, depending on the total size of the message. For a short message, a single packet may be sufficient; for a long message, several packets are needed. The packets then pass through intermediate transfer nodes between the transmitter and the receiver, and ultimately make their way to the endpoint. The resources needed to handle the packets – memories, the links between the nodes, and the sender and receiver – are shared between all users.


Packet-switched mode requires a physical architecture and protocols – i.e. rules – to achieve end-to-end communication. Many different architectural arrangements have been proposed, using protocol layers and associated algorithms. In the early days, each hardware manufacturer had its own architecture (e.g. SNA, DNA, DECnet, etc.). Then, the OSI (Open Systems Interconnection) model was introduced in an attempt to make all these different architectures mutually compatible. The failure to achieve compatibility between hardware manufacturers, even with a common model, led to the re-adoption of one of the very first architectures introduced for packet-switched mode: TCP/IP (Transmission Control Protocol/Internet Protocol).

The second revolution was the switch from hardwired mode to wireless mode. Figure I.1 shows that, by 2020, terminal connections should be essentially wireless, established using Wi-Fi as well as 3G/4G/5G technology. In fact, the two techniques are increasingly used together, as they are becoming mutually complementary rather than competing with one another. In addition, the curve shown in Figure I.2, which plots worldwide user demand against the growth of what 3G/4G/5G technology is capable of delivering, shows that the gap is so significant that only Wi-Fi technology is capable of absorbing the demand – very strongly until 2020, and then less and less, thanks to the massive opening of new frequencies, especially those above 20 GHz. We will come back to wireless architectures, because the third revolution also has a significant impact on this transition towards radio-based technologies, especially 5G technology.

Figure I.1. Terminal connection by 2020

The figure plots the capacity demand for mobile networks, rising from ×1 in 2012 to ×512 in 2022, against the gains from additional spectrum and spectral efficiency; the shortfall between the two is labeled "missing capacity".

Figure I.2. The gap between technological progress and user demand. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

I.2. The third revolution

The third revolution, which is our focus in this book, pertains to the move from hardware-based mode to software-based mode. This transition is taking place because of virtualization, whereby physical networking equipment is replaced by software fulfilling the same function. Let us take a look at the various elements which are creating a new generation of networks.

To begin with, we can cite the Cloud. The Cloud is a set of resources which, instead of being held at the premises of a particular company or individual, are hosted on the Internet. The resources are de-localized and brought together in resource centers, known as datacenters. The reasons for the Cloud's creation stem from the low degree of use of server resources worldwide: only 10–20% of servers' capacity is actually being used. This low value derives from the fact that servers are hardly used at all at night-time, and see relatively little use outside of peak hours, which represent no more than 4–5 hours each day. In addition, the relatively low cost of hardware meant that servers were generally greatly oversized. Another factor that needs to be taken into account is the rising cost of the personnel needed to manage and control the resources. In order to optimize the cost of both resources and engineers, those resources need to be shared. The purpose of Clouds is to facilitate such sharing in an efficient manner.


Figure I.3 shows the growth of the public Cloud services market. That growth is certainly impressive but, in the final analysis, it is relatively low in comparison to what it could have been were it not for problems of security. Indeed, as the security of the data uploaded to such systems is rather lax, there has been a massive increase in private Clouds, taking the place of public Cloud services. In Chapter 15, we will examine the advances made in terms of security, with the advent of secure Clouds.

The figure plots the market, in billions of dollars (from 0 to 400), over the years 2012 to 2019.

Figure I.3. Public Cloud services market and their annual growth rate

Virtualization is also a key factor, as indicated at the start of this chapter. The increase in the number of virtual machines is undeniable: in 2019, three quarters of the servers available throughout the world were virtual machines. Physical machines are able to host increasing numbers of virtual machines, as shown in Figure I.4; in 2019, each physical server hosted approximately 10 virtual machines.

The use of Cloud services has meant a significant increase in the data rates being sent over the networks. Indeed, processing is now done in datacenters, and both the data and the signaling must be sent to these datacenters and then returned to the user after processing. We can see this increase in data rate requirement by examining the market of Ethernet ports for datacenters. Figure I.5 plots shipments of 1 Gbps Ethernet ports against those of 10, 40 and 100 Gbps ports. As we can see, 1 Gbps ports, which are already fairly fast, are being replaced by ports that are ever more powerful.


Figure I.4. Number of virtual machines per physical server

Figure I.5. Ethernet port shipment


The world of the Cloud is, in fact, rather diverse, if we look at the number of functions which it can fulfill. There are numerous types of Clouds available, but three categories, which are indicated in Figure I.6, are sufficient to clearly differentiate them. The category that offers the greatest potential is the SaaS (Software as a Service) cloud. SaaS makes all services available to the user – processing, storage and networking. With this solution, a company asks its Cloud provider to supply all necessary applications. Indeed, the company subcontracts its IT system to the Cloud provider. With the second solution – PaaS (Platform as a Service) – the company remains responsible for the applications. The Cloud provider offers a complete platform, leaving only the management of the applications to the company. Finally, the third solution – IaaS (Infrastructure as a Service) – leaves a great deal more initiative in the hands of the client company. The provider still offers the processing, storage and networking, but the client is still responsible for the applications and the environments necessary for those applications, such as the operating systems and databases.

Figure I.6. The three main types of Cloud

More specifically, we can define the three Cloud architectures as follows:
– IaaS (Infrastructure as a Service): this is the very first approach, with a portion of the virtualization being handled by the Cloud, such as the network servers, the storage servers and the network itself. The Internet network is used to host PABX-type machines, firewalls or storage servers and, more generally, the servers connected to the network infrastructure.


– PaaS (Platform as a Service): this is the second Cloud model whereby, in addition to the infrastructure, there is an intermediary software program corresponding to the Internet platform. The client company's own servers only handle the applications.
– SaaS (Software as a Service): with SaaS, in addition to the infrastructure and the platform, the Cloud provider actually provides the applications themselves. Ultimately, nothing is left to the company, apart from the Internet ports. This solution, which is also called Cloud computing, outsources almost all of the company's IT and networks.

Figure I.7 shows the functions of the different types of Cloud in comparison with the classical model in operation today.

Figure I.7. The different types of Clouds

The main issue for a company that uses a Cloud is security. Indeed, there is nothing to prevent the Cloud provider from scrutinizing the data, or – as much more commonly happens – the data from being requisitioned by the countries in which the physical servers are located, with which the providers must comply. The rise of sovereign Clouds is also noteworthy: here, the data are not allowed to pass beyond geographical borders. Most states insist on this for their own data.

The advantage of the Cloud lies in the power of the datacenters, which are able to handle a great many virtual machines and provide the power necessary for their execution. Multiplexing between a large number of users greatly decreases costs. Datacenters may also serve as hubs for software networks and host the virtual machines used to create such networks. For this reason, numerous telecommunications operators have set up companies that provide Cloud services for the operators themselves and also for their customers. Among the techniques that we will examine in detail hereafter is SDN (Software-Defined Networking), whereby multiple forwarding tables are defined, and only datacenters have sufficient processing power to perform all the operations necessary to manage these tables. One of the remaining problems is determining the necessary size of the datacenters, and where to build them. Very roughly, there is a whole range of sizes, from absolutely enormous datacenters, with a million servers, to femto-datacenters, with the equivalent of only a few servers, and everything in between.

I.3. "Cloudification" of networks

Figure I.8 shows the rise of infrastructure costs over time. We can see that increasing speeds imply a rise in infrastructure costs, whereas the income of telecommunication operators stagnates, partly due to very strong competition to acquire new markets. It is therefore absolutely necessary to find ways to reduce the gap between costs and income. Among other reasons, two aspects are essential to start a new generation of networks: network automation using an autopilot, and the choice of open source software, in order to decrease the number of network engineers needed and to avoid license costs for commercial software. Let us examine these two aspects before studying the reasons to turn to this new software network solution.

The automation of the network pilot is the very first reason for the new generation. The concept of the autopilot created here is similar to that of a plane's autopilot. However, unlike a plane, a network is very much a distributed system. To achieve an autopilot, we must gather all knowledge about the network – that is, contextualized information – either in all nodes, if we want to distribute the autopilot, or in a single node, if we want to centralize it. Centralization was chosen, for two obvious reasons: simplicity, and the congestion that knowledge-carrying packets would cause in the distributed case. This is the most important paradigm of this new generation of networks: centralization. In this way, the network is no longer a decentralized system; it becomes centralized. It will be necessary to pay attention to the security of the center, by doubling or tripling the controller, which is the name given to this central system. The controller is the control device that must contain all knowledge about users, applications, nodes and network connections. From there, smart systems will be able to pilot packets through the infrastructure to obtain the best possible quality of service for all the clients using the network. As we will see later on, the most promising autopilot for the 2020s is being finalized: the open source ONAP (Open Network Automation Platform).


The second important aspect of the new generation of networks is open source software. The rise of open source software stems from a need to reduce costs, and also to implement standards that can easily be followed by companies. The Linux Foundation is one of the major organizations in this area, and most of the software shaping future networks comes from this Foundation, among which is the OPNFV (Open Platform for Network Functions Virtualization) platform. This is the most important one, since it gathers the open source software that will act as a basic framework. This tendency towards open source software raises questions such as: what will become of network and telecom suppliers if everything comes from open source software? Is security ensured with these hundreds of thousands of lines of code in which bugs will occur? And so on. We will answer these questions in Chapter 4, on open source software.

The rise of this new generation of networks, based on datacenters, has an impact on energy consumption in the world of ICT. In 2019, this consumption was estimated to account for 7% of the total carbon footprint. However, this proportion is increasing very quickly with the rapid rollout of datacenters and antennas for mobile networks. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with 10 such datacenters would consume 1 GW, which is the equivalent of one unit of a nuclear power plant. This total number of servers has already been reached or surpassed by 10 well-known major companies. Similarly, the number of 2G/3G/4G antennas in the world is already more than 10 million. Given that, on average, consumption is 1,500 W per antenna (2,000 W for 3G/4G antennas but significantly less for 2G antennas), this represents around 15 GW worldwide. Continuing in the same vein, the carbon footprint produced by energy consumption in the world of ICT is projected to reach 20% by 2025 if nothing is done to control the current growth. It is therefore absolutely crucial to find solutions to offset this rise. We will come back to this in the last chapter of this book, but there are solutions that already exist and are beginning to be used. Virtualization is one of them: multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby (low-power) mode when not in use. Processors also need to be able to drop to very low operating speeds whenever possible, because power consumption is strongly proportional to processor speed. When the processor has nothing to do, it should almost stop, and speed up again when the workload increases.

Mobility is another argument in favor of adopting a new form of network architecture. Figure I.8 shows that, by 2020, the average speed of wireless connections will be several tens of Mbit/s. We therefore need to manage the mobility problem. The first order of business is the management of multihoming – i.e. being able to connect a terminal to several networks simultaneously. The word "multihoming" stems from the fact that the terminal receives several IP addresses, assigned by the different networks it is connected to. These multiple addresses are complex to manage, and the task requires specific functionalities. Mobility must also make it possible to handle simultaneous connections to several networks. On the basis of certain criteria (to be determined), the packets of the same message can be separated and sent via different networks. They then need to be re-ordered when they arrive at their destination, which can cause numerous problems.

Figure I.8. Speed of terminals based on the network used

Mobility also raises the issues of addressing and identification. If we use the IP address, it can be interpreted in two different ways: for identification purposes, to determine who the user is, and for localization purposes, to determine the user's position. The difficulty lies in dealing with these two functionalities simultaneously. Thus, when a customer moves far enough to go beyond the sub-network with which he/she is registered, a new IP address must be assigned to the device, which is fairly complex from the point of view of identification. One possible solution, as we will see, is to give two IP addresses to the same user: one reflecting his/her identity and the other his/her location.

Another revolution that is currently under way pertains to the Internet of Things (IoT): billions of things will be connected within the next few years. The prediction is that 50 billion things will be connected by 2020. In other words, the number of connections will likely increase tenfold in the space of only a few years. The "things" belong to a variety of domains:
1) domestic, with household electrical goods, home health care, domotics, etc.;
2) medicine, with all sorts of sensors both on and in the body to measure, analyze and perform actions;
3) business, with light level sensors, temperature sensors, security sensors, etc.
Numerous problems arise in this new universe, such as identity management and the security of communications with the sensors. The price of identification is often set at $40 per object, which is absolutely incompatible with the cost of a sensor, which is often less than $1. Security is also a complex factor, because the sensor has very little power, and is incapable of performing sufficiently sophisticated encryption to ensure the confidentiality of the transmissions.

Finally, there is one last reason to favor migration to a new network: security. Security requires a precise view and understanding of the problems at hand, which range from physical security to computer security, with the need to lay contingency plans for attacks that are sometimes entirely unforeseeable. The world of the Internet today is like a bicycle tire made up entirely of patches, having been punctured and repaired numerous times. Every time an attack succeeds, a new patch is added. Such a tire is still roadworthy at the moment, but there is a danger that it will burst if no new solution is envisaged in the next few years.

Near the end of this book, in Chapter 15, we will look at the secure Cloud, whereby, in a datacenter, a whole set of solutions is built around specialized virtual machines to provide new elements aimed at enhancing the security of applications and networks. An effective security mechanism must include a physical element: a safe box to protect the important elements of the arsenal necessary to ensure confidentiality, authentication, etc. Software security is a reality and, to a certain extent, may be sufficient for numerous applications. However, secure elements can always be circumvented when all of the defenses are software-based. This means that, for new generations, there must be a physical element, either local or remote. This hardware element is a secure microprocessor known as a "secure element". A classic example of this type of device is the smartcard, used particularly prevalently by telecom operators and banks. Depending on whether it belongs to the world of business or consumer electronics, the secure element may be found in the terminal, near it or far away from it. We will examine the different solutions in the subsequent chapters of this book. Virtualization also has an impact on security: the Cloud, with its specialized virtual machines, gives attackers a remarkable striking force. In the last few years, hackers' ability to break encryption algorithms has increased by a factor of 10^6.


Another important point that absolutely must be integrated into networks is "intelligence". So-called "intelligent networks" have had their day, but the intelligence in that case was not really what we mean by "intelligence" in this field. Rather, it was a set of automatic mechanisms used to deal with problems perfectly defined in advance, such as a signaling protocol providing additional features in the telephone system. In the new generation of networks, intelligence pertains to learning mechanisms and intelligent decisions based on the network status and user requests. The network needs to become an intelligent system, capable of making decisions on its own. One solution to help move in this direction was introduced by IBM in the early 2000s: "autonomic". "Autonomic" means autonomous and spontaneous – autonomous in the sense that every device in the network must be able to independently make decisions with knowledge of the situated view, i.e. the state of the nodes surrounding it within a certain number of hops. The solutions that have been put forward to increase the smartness of networks are influenced by Cloud technology. We will discuss them in detail in the chapter about MEC (Mobile Edge Computing) and, more generally, about the "smart edge" (Chapter 5).

Finally, one last point, which could be viewed as a fourth revolution, is concretization – i.e. the opposite of virtualization. Indeed, the problem with virtualization is a significant reduction in performance, stemming from the replacement of hardware with software. A variety of solutions have been put forward to regain that performance: software accelerators and, in particular, the replacement of software with hardware, in the step of concretization. The software is replaced by reconfigurable hardware, which can transform itself depending on the software needing to be executed. This approach is likely to create morphware networks, which will be described in Chapter 16.

I.4. Conclusion

The world of networks is changing greatly, for the reasons listed above. It is changing more quickly than might have been expected a few years ago. A suggestion to redefine network architecture by starting again from scratch was put forward, but failed. This is known as the "Clean Slate Approach": forget everything we know and start over. Unfortunately, no concrete proposition has been adopted, and the transfer of IP packets continues to be the solution for data transport. However, among the numerous propositions, virtualization and the Cloud are the two main avenues which are widely used today and upon which this book focuses.

1 Virtualization

In this chapter, we introduce virtualization, which is at the root of the revolution in the networking world, as it involves constructing software networks to replace hardware networks. Figure 1.1 shows the process of virtualization: we simply need to write code that performs exactly the same function as the hardware component. With only a few exceptions, which we will explore later on, all hardware machines can be transformed into software machines.

The basic problem associated with virtualization is the significant reduction in performance. On average (although the reality is extremely diverse), virtualization reduces performance by a factor of 100: i.e. the resulting software, executed on a machine similar to the machine that has been virtualized, runs 100 times more slowly. In order to recover from this loss of performance, we simply need to run the program on a machine that is 100 times more powerful. This power is to be found in the datacenters hosted in the Cloud environments under development in all corners of the globe.

It is not possible to virtualize certain elements, such as an antenna or a sensor, since no piece of software is capable of picking up electromagnetic signals or detecting temperature. Thus, we still need to keep hardware elements such as metal wires and optical links, or the transmission/reception ports of routers and switches. Nevertheless, all of the signal-processing operations can be virtualized perfectly well, and we increasingly find virtualization in wireless systems.

In order to speed up the software processing, one solution is to move to concretization, i.e. the reverse of virtualization, but with one very significant difference: the hardware must behave like software. It is possible to replace the software, which is typically executed on a general-purpose machine, with a machine that can be reconfigured almost instantly, and thus behaves like a software program. The components used are derived from FPGAs (Field-Programmable Gate Arrays) and, more generally, reconfigurable microprocessors. A great deal of progress still needs to be made in order to obtain extremely fast concretization, but this is only a question of a few years.

The virtualization of networking equipment means we can replace hardware routers with software routers, and do the same for any other piece of hardware that can be made into software, such as switches, LSRs (Label Switching Routers), firewalls, diverse and varied middleboxes, DPI (Deep Packet Inspection) devices, SIP servers, IP PBXs, etc. These new machines are superior in a number of ways. To begin with, one advantage is their flexibility. Let us look at the example given in Figure 1.1, where three hardware routers have been integrated in software form on a single server. The size of the three virtual routers can change depending on their workload: a router uses few resources at night-time, when there is little traffic, and very large resources at peak times, in order to handle all the traffic.

Figure 1.1. Virtualization of three routers. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Energy consumption is another argument in favor of virtualization. Although consumption initially rises, because we are adding an extra piece of software (the hypervisor or the container manager), resources can be shared more effectively: virtual machines can be moved and grouped together on fewer physical machines, and the machines that have become idle can be put on standby.

A physical machine can accommodate virtual machines if, as mentioned above, we add a hypervisor or a container manager – a software program that enables multiple containers, and hence multiple virtual machines, to run simultaneously. In fact, the word "simultaneously" applies on a macroscopic scale: on a microscopic scale, the virtual machines are executed sequentially, one after another. In the context of virtual servers, this serial execution is not a problem. In the area of networks, it may become a problem for real-time applications, which require a very short response time. Each virtual machine's processing time must be sufficiently short to give the impression that all the virtual machines are being executed in parallel. Figure 1.2 shows the architecture of virtualization.

The figure contrasts the two approaches: hardware equipment running a hypervisor that hosts Dom0 and the virtual machines, each with its own operating system, and hardware equipment running a single operating system that hosts the containers.

Figure 1.2. A virtualized machine. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

In this section, we will go over the two solutions for obtaining virtual machines, as shown in Figure 1.2. The hypervisor is a virtual machine monitor (VMM), which is often open source. Hypervisors operate on standard hardware platforms. In addition to the VMM, running directly on the physical hardware, the architecture generally comprises a number of domains running simultaneously. These domains execute the virtual machines, isolated from one another. Each virtual machine may have its own operating system and applications.

The VMM controls access to the hardware from the various domains and manages the sharing of the resources between them. Thus, one of the VMM's main tasks is to isolate the different virtual machines, so that the execution of one virtual machine does not affect the performance of the others. All peripheral drivers are kept in an isolated domain specific to them. Known as "domain zero" (dom0), this domain offers reliable and effective physical support. Dom0 has special privileges in comparison with the other domains, known as "user domains" (domU), and, for example, has unfettered access to the hardware of the physical machine.

User domains have virtual drivers and operate as though they have direct access to the hardware. In reality, however, those virtual drivers communicate with the dom0 in order to access the physical hardware. The hypervisor virtualizes a single physical network interface, de-multiplexing the incoming packets from the physical interface to the user domains and, conversely, multiplexing the outgoing packets generated by those user domains. In this procedure, known as virtualization of the network input/output, domain 0 directly accesses the input/output peripherals, using their native drivers, and performs input/output operations on behalf of the domUs.


The user domains use virtual input/output peripherals, controlled by virtual drivers, to ask the dom0 for access to the peripheral. Each user domain has its own virtual network interfaces, known as foreground interfaces, which are required for network communications. The background interfaces are created in the dom0, corresponding to each foreground interface in a user domain, and act as proxies for the virtual interfaces in the dom0. The foreground and background interfaces are connected to one another via an input/output channel, which uses a zero-copy mechanism to map the physical page containing the packet to the target domain. Thus, the packets are exchanged between the background and foreground interfaces. The foreground interfaces are perceived by the operating systems running in the user domains as real interfaces. The background interfaces in the dom0, however, are connected to the physical interface and to one another via a virtual network bridge. This is the default architecture, called "bridge mode", used, for instance, by the Xen hypervisor, which was certainly one of the first to appear. Thus, both the input/output channel and the network bridge establish a path for communication between the virtual interfaces created in the user domains and the physical interface. We will come back to hypervision techniques later in this chapter.

Before that, let us introduce the second solution for supporting virtual machines, which seems to be taking the lead thanks to its simplicity and efficiency, while offering slightly less functionality. This solution is based on a single operating system supporting containers that host the virtual machines. More precisely, a container is an abstract data structure, class or type that makes it possible to collect objects. The container technique is more flexible and simpler than embedding an operating system in each virtual machine. Containers can migrate from one piece of hardware to another, thus performing virtual machine migrations. The open source software called Kubernetes, which we will study later in this chapter, makes it possible to orchestrate the migration of containers from one piece of hardware to another in the same cluster. The Kubernetes orchestrator seems set to become the standard way to implement virtual machines in the new generation of networks.

1.1. Software networks

Virtual machines, in turn, can be used to create virtual networks, which are also known as software networks. For this purpose, we need to link virtual machines together in the same way as we would connect different physical machines. Of course, the communication links must be shared between the different software networks. A set of software networks is represented in Figure 1.3.


Figure 1.3. A set of software networks. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Each software network may have its own architecture and its own characteristics. One software network could be devoted to a VoIP service, another to an IPTV service, a third to a highly secure application, a fourth to support professional applications, a fifth to asynchronous applications such as electronic messaging, etc. We could, in fact, practically create a software network for each user: the personalized software network would be set up at the moment the user connects and eliminated when the user signs out. However, this solution does not scale, and today we are limited to a number of software networks suited to the hardware capacity of the underlying physical infrastructure. Each software network is allocated resources on the basis of user demand. However, the resources remain shared: various techniques allow a virtual network to recover resources left unused by other virtual networks.

It should be noted that, in general, the virtual nodes are found in datacenters, which may be of varying size and importance: enormous central datacenters, regional datacenters, local datacenters and small datacenters such as femto-datacenters. We will come back later to the choices which may be made in this area. One of the characteristics of software networks is that the virtual machines can be migrated from one physical machine to another. This migration may be automated based on whether a node is overloaded or out of order.


In the physical nodes that support the software networks, we can add other types of virtual machines, such as firewalls, SIP servers for VoIP, middleboxes, etc. The networks themselves, as stated above, may obey a variety of different protocol architectures, such as TCP/IPv4, UDP/IPv4, IPv6, MPLS, Carrier-Grade Ethernet, TRILL, LISP, etc.

Isolation is, of course, a crucial property, because it is essential to prevent a problem on one software network from having repercussions on the other networks. The handover of streams from one software network to another must take place via a secure gateway outside of the data plane. This is absolutely necessary to prevent contamination between networks, such as the complete shutdown of a network attacked, for example, by a distributed denial of service (DDoS).

1.2. Hypervisors and containers

Clearly, virtualization needs hardware, which can be standard. We speak of commodity hardware (white boxes), with open specifications, produced en masse to achieve particularly low prices. We will talk further about it in the chapter on open source software (Chapter 4). There are various ways of placing virtual machines on physical equipment, and they can be classified into three broad categories, as shown in Figures 1.4–1.6. The first two figures correspond to hypervisors and the third to containers.

Figure 1.4. Paravirtualization. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


A paravirtualization hypervisor is a program that is executed directly on a hardware platform and hosts virtual machines whose operating systems have been modified so that the virtual machines' instructions can be executed directly on that platform. The platform is able to support the guest operating systems with their drivers. The classic hypervisors in this category include Citrix Xen Server (open source), VMware vSphere, VMware ESX, Microsoft Hyper-V Server, Bare Metal and KVM (open source). These programs are also known as type-1 hypervisors.

The second category of hypervisor, the type-2 hypervisor, is a program that is executed on top of the hardware platform and supports native guest operating systems, which means operating systems without any modification. The guest operating system is executed through an emulator provided by the hypervisor, so that the underlying device can take all of its instructions into account. The guest operating systems are unaware that they are virtualized, so they do not require any modification, as opposed to paravirtualization. Examples of this type of virtualization include Microsoft Virtual PC, Microsoft Virtual Server, Parallels Desktop, Parallels Server, Oracle VM VirtualBox (free), VMware Fusion, VMware Player, VMware Server, VMware Workstation and QEMU (open source).

Figure 1.5. Virtualization by emulation. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
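As a brief illustration of this kind of hypervisor (KVM/QEMU, for instance), the following minimal Python sketch uses the libvirt bindings to enumerate the virtual machines it hosts. It assumes the libvirt-python package is installed and that a local hypervisor is reachable at the qemu:///system URI; it is given for illustration only.

import libvirt

# Connect to the local KVM/QEMU hypervisor (the URI is an assumption of this sketch).
conn = libvirt.open("qemu:///system")
try:
    # listAllDomains() returns every defined domain, i.e. every virtual machine.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():<20} {state}")
finally:
    conn.close()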


The third type leaves behind the previous hypervisor systems, running several machines simultaneously in the form of containers. In this case, we speak of an isolator. An isolator is a program that isolates the execution of applications in an environment called the context, or execution zone. The isolator is thus able to run the same application multiple times, in multi-instance mode. This solution performs very well, because it causes no overhead, but the environments are more difficult to isolate.

Figure 1.6. Virtualization by containers. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

In summary, this last solution facilitates the execution of applications in execution zones. In this category, we can cite Linux-VServer, chroot, BSD Jail, OpenVZ and most container solutions, such as Docker.
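To make the container approach concrete, here is a minimal sketch, assuming Docker and the Docker SDK for Python (the docker package) are installed locally; the image and instance names are purely illustrative.

# Minimal sketch: running the same image twice as two isolated
# instances with the Docker SDK for Python. The image name and the
# instance names are illustrative assumptions.
import docker

client = docker.from_env()  # connect to the local Docker daemon

for name in ("instance-1", "instance-2"):
    # Each call creates an isolated execution context for the same
    # application ("multi-instance" mode), then removes it on exit.
    output = client.containers.run(
        "alpine:3.18",
        ["echo", f"hello from {name}"],
        name=name,
        remove=True,
    )
    print(output.decode().strip())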


1.3. Kubernetes

Kubernetes (also called K8s) is an open source system for deploying, scaling and managing containerized applications. It was originally created by Google, which donated it to the Cloud Native Computing Foundation. The platform automates the deployment, scaling and operation of application containers across clusters of servers, and works with a whole range of container technologies, such as Docker.

The Kubernetes architecture is shown in Figure 1.7. Pods are containers, or groups of containers, hosted by the servers (nodes) that belong to a hardware cluster. etcd is the persistent store for the cluster's configuration data. The scheduler's goal is to share the workload across servers, thus managing the execution of the Pods in the best possible way. Finally, the Kubelet is responsible for the execution state of each server.

Figure 1.7. Architecture of the Kubernetes orchestrator: a master (API server, controller, scheduler) accessed by a Kubernetes client, and nodes each running a Kubelet, Docker, a Kube-proxy, Pods and volumes
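As an illustration of the architecture in Figure 1.7, the following sketch uses the official Kubernetes Python client to query the API server for the Pods placed by the scheduler; it assumes a reachable cluster and a local kubeconfig file, and is only a minimal example, not part of Kubernetes itself.

# Minimal sketch with the Kubernetes Python client (pip install kubernetes).
# It assumes ~/.kube/config points to a reachable cluster.
from kubernetes import client, config

config.load_kube_config()        # read the local kubeconfig
v1 = client.CoreV1Api()          # talk to the API server of Figure 1.7

# List the Pods that the scheduler has placed on the cluster's nodes.
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(pod.metadata.namespace, pod.metadata.name,
          pod.spec.node_name, pod.status.phase)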

1.4. Software networks

Software networks have numerous properties that are novel in comparison to hardware networks. To begin with, we can easily move virtual machines around, because they are simply programs. Thus, we can migrate a router from one physical node to another. Migration may occur when a physical node begins to fail, or when a node is overloaded, or for any other reason decided on in advance.

Migration of a node does not actually involve transporting the whole of the code for the machine, which would, in certain cases, be rather cumbersome and time-consuming. In general, the program needing to be migrated is already present in the remote node, but it is idle. Therefore, we merely need to begin running the program and send it the configuration of the node to be moved. This requires the transmission of relatively little data, so the latency before the migrated machine starts up is short. In general, we can even let both machines run at once, and change the routing so that the data only flow through the migrated node. We can then shut down the first router.


More generally, we carry out what is known as urbanization: we migrate the virtual machines between physical machines until we obtain optimal performance. Urbanization is widely used for optimization in terms of energy consumption or workload distribution, as well as to optimize the cost of the software networks or to make the network highly reliable or resilient. For example, in order to optimize energy consumption, we need to bring together the virtual machines on shared nodes and switch off all the nodes that are no longer active. In actual fact, these machines would not be shut down but rather placed on standby, which still consumes a small amount of energy, but only a very small amount. The major difficulty with urbanization arises when it is necessary to optimize several operational criteria at the same time, because they are often incompatible – for example, optimizing consumption and performance simultaneously.

A very important characteristic mentioned earlier is isolation: the software networks must be isolated from one another, so that an attack on one network does not affect the other networks. Isolation is complex because, at the same time, we need to share the common resources and be sure that, at all times, each network has access to its own resources, negotiated at the time of establishment of the software network. In general, a token-based algorithm is used. Every virtual device on every software network receives tokens according to the resources attributed to it. For example, for a physical node, ten tokens might be distributed to network 1, five tokens to network 2 and one token to network 3. The networks spend their tokens as they perform certain tasks, such as the transmission of n bytes. At all times, each device holds its own tokens and thus has a minimum data rate, determined when the resources were allocated. However, a problem arises if a network has no packets to send, because then it does not spend its tokens. A network may still hold all of its tokens when the other networks have already spent all of theirs. In this case, so as not to immobilize the system, we allocate negative tokens to the other two networks, which can then exceed the usage rate defined when their resources were allocated. When the sum of the remaining tokens minus the negative tokens reaches zero, the machine's basic tokens are redistributed. This enables us to maintain isolation while still sharing the hardware resources. In addition, we can attach a certain priority to a software network while preserving isolation, by allowing that particular network to spend its tokens as a matter of priority over the other networks. This is relative priority, because each network can, at any moment, recoup its basic resources. However, the priority can be accentuated by distributing any excess resources to the priority networks, which will then always have a token available to handle a packet. Of course, isolation also requires characteristics of the hypervisors and virtualization techniques which we will not discuss in this book. A simplified sketch of this token mechanism is given below.
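In the sketch, the token counts, the cost of a transmission and the redistribution test are illustrative assumptions; the real mechanism sits inside the hypervisor's scheduler.

# Simplified sketch of token-based isolation between software networks.
# Allocations and costs are illustrative only.

BASIC_TOKENS = {"network-1": 10, "network-2": 5, "network-3": 1}

class TokenScheduler:
    def __init__(self, allocation):
        self.allocation = dict(allocation)   # basic tokens per network
        self.tokens = dict(allocation)       # current balance

    def can_send(self, network):
        """A network holding positive tokens is always allowed to send."""
        return self.tokens[network] > 0

    def send(self, network, cost=1):
        """Spend tokens for a transmission; a network may go negative when
        it borrows capacity left unused by the others."""
        self.tokens[network] -= cost
        # When positive and negative balances cancel out, redistribute
        # the basic tokens, as described in the text.
        if sum(self.tokens.values()) <= 0:
            self.tokens = dict(self.allocation)

scheduler = TokenScheduler(BASIC_TOKENS)
scheduler.send("network-2", cost=3)
print(scheduler.can_send("network-2"), scheduler.tokens)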


Virtualization needs to be linked to other features in order to fully make sense. SDN (Software-Defined Networking) is one of the paradigms strongly linked to virtualization, because it involves uncoupling the physical part from the control part. The control part can be virtualized and deported onto another machine, which enables us, for example, to have both far greater processing power than that of the original machine and a much larger memory available.

1.5. Virtual devices

All devices can be virtualized, with the exception of those which handle the reception of terrestrial and wireless signals, such as electromagnetic signals or atmospheric pressure. For example, an antenna or a thermometer cannot be replaced by a piece of software. However, the signal received by that antenna or thermometer can be processed by a virtual machine. A sensor picking up a signal can select an appropriate virtual processing machine in order to achieve a result that is appropriate for the demand. One single antenna might, for example, receive signals from a Wi-Fi terminal as well as from a 4G terminal. On the basis of the type of signal, an initial virtual machine determines which technology is being used, and sends the signal to the virtual machine needed for its processing. This is known as SDR (Software-Defined Radio), which is becoming increasingly widely used, and enables us to delocalize the processing operation to a datacenter.

The networking machines that we know can always be virtualized, either completely or at least partially. A partial virtualization can correspond to the processing part, the control part or the management part. Thus, today, we can split a physical machine that, in the past, was unique, into several different machines – one of them physical (e.g. a transceiver broadcasting along a metal cable) and the others virtual. One of the advantages of this uncoupling is that we can deport the virtual parts onto other physical machines for execution. This means that we can adapt the power of the resources to the results we wish to obtain. Operations originating on different physical machines can be multiplexed onto the same software machine executing on a single physical server. This solution helps us to economize on the overall cost of the system, as well as on the energy expended, by grouping together the necessary power in a single machine that is much more powerful and more economical.

Today, all legacy machines in the world of networking have either been virtualized already or are in the process of being virtualized – Nodes-B for processing the signals from 3G, 4G and 5G mobile networks, HLRs and VLRs, routers, switches, different types of routers/switches such as those of MPLS, firewalls, authentication or identity management servers, etc. In addition, these virtual machines can be partitioned so that they execute on several physical machines in parallel.

We can appreciate the importance of the Cloud and associated datacenters, because they are placed where the processing power is available at a relatively low cost, as is the memory space needed to store the virtual machines and a whole range


of information pertaining to the networks, clients and processing algorithms. For the past few years, with server virtualization, the tendency has been to focus on huge datacenters but, with the help of distribution, the size of datacenters is decreasing. Their size varies more and more, and some are becoming smaller, turning into skin datacenters or femto-datacenters, or Fog and MEC (Mobile Edge Computing) datacenters.

Another interesting application of virtualization is expanding: digital twins. A piece of hardware is associated with a virtual machine executed in a datacenter located either near or far from the hardware. The virtual machine executes exactly what the hardware does. Obviously, the hardware must feed the virtual machine whenever its parameters change. The virtual machine should produce the same results as the hardware. If the results differ, this reveals a malfunction of the hardware, and this malfunction can be studied in real time on the virtual machine. This solution makes it possible to spot malfunctions in real time and, in most cases, to correct them. Digital twins are already in use or under development, such as the twin of an aircraft engine executed in a datacenter. Similarly, vehicles will soon have a twin, allowing us to detect malfunctions or to understand an accident. Manufacturers are developing digital twins for objects but, in this case, the digital twin can be much more powerful than the object and can perform actions which the object is not powerful enough to perform. Scientists dream of human digital twins which could keep working while the human sleeps.

1.6. Conclusion

Virtualization is the fundamental property of the new generation of networks, where we make the move from hardware to software. While there is a noticeable reduction in performance at the start, it is compensated for by more powerful, less costly physical machines. Nonetheless, the opposite move to virtualization is also crucial: concretization, i.e. enabling the software to be executed on reconfigurable machines so that the properties of the software are retained and top-of-the-range performance can again be achieved.

Software networks form the backbone of the new means of data transport. They are agile, simple to implement and not costly. They can be modified or changed at will. Virtualization also enables us to uncouple functions and to use shared machines to host algorithms, which offers substantial savings in terms of resources and qualified personnel.

2 SDN (Software-Defined Networking)

The SDN (Software-Defined Networking) technology is at the heart of this book. It was introduced with strong control centralization and virtualization, enabling physical networking devices to be transformed into software. Associated with this definition, a new architecture has been defined: it decouples the data level from the control level. Up until now, forwarding tables have been computed in a distributed manner by each router or switch. In the new architecture, the computations for optimal control are performed by a different device, called the controller. Generally, the controller is centralized, but it could perfectly well be distributed.

Before taking a closer look at this architecture, let us examine the reasons for this new paradigm. The limitations of traditional architectures are becoming significant: at present, networks no longer optimize costs at all (i.e. CAPEX and OPEX). The networks are not agile, the time to market is much too long, and the provisioning techniques are not fast enough. In addition, the networks are completely unconnected to the services. The following points need to be taken into account in the new SDN paradigm:
– overall needs analysis;
– dynamic, rather than static, configuration;
– dynamic, rather than static, policies;
– much greater information feedback than is the case at present;
– precise knowledge of the client and of his/her applications and, more generally, his/her requirements.



2.1. The objective

The objective of SDN (Software-Defined Networking) is to reduce costs through virtualization, automation and simplification. For this purpose, SDN facilitates the customization of networks, a very short set-up time and network deployment with the right quality of service rather than a general quality of service. The architecture of SDN can be summarized by three fundamental principles, as shown in Figure 2.1. The first is the decoupling of the physical and virtual layers (hardware and software). This enables virtual devices to be loaded onto hardware machines provided, of course, that these hardware machines can host a hypervisor or containers. The second principle is the move from a hardware view to a logical, Cloud-based view. This new environment enables us to change the network spontaneously, by adding or removing networks at will. Finally, the third principle is automation – the best possible automation – of the operations carried out on the network, whether for management or for control. This automation is obtained thanks to centralization.

Storage, processing and the network use virtual machines associated with each of these domains, which share the hardware resources. We can thus find these three types of virtual machines on the same server. In order for the environment to run without problems, we must add security, management and control virtual machines. Today, a company needs to have all five of these groups of virtual machines set up in order to constitute an operational information system. These five domains are shown in Figure 2.2.

Figure 2.1. The three basic principles: decoupling of software from hardware, cloudification and automation. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


Figure 2.2. The five domains necessary for the life of a company

The five domains described above can be put in place by way of virtual machines associated with each of the domains. The whole computing environment of a company can thus be concentrated in the Cloud in the form of virtual machines distributed in datacenters. This environment is illustrated in Figure 2.3 by datacenters containing the virtual machines necessary for the construction of the company’s computing system.

Figure 2.3. Virtualization of the five domains

In addition to this environment, there are applications that may be of two types: business applications, and applications to control or orchestrate the environment itself. The search for new products has therefore turned towards autopilot systems, also referred to as orchestrators in the literature. The complete environment, which is vital to the functioning of a company, is presented in Figure 2.4, which highlights the importance of the orchestrator in the general architecture of informational and operational systems.


Figure 2.4. The pilot program

2.2. The ONF architecture

In order for this new world of SDN to have a chance of being successful, it has to be standardized. This standardization was carried out by the ONF (Open Networking Foundation), which was set up under the auspices of large companies in the USA, following the proposal of this architecture by Stanford University and Nicira. The architecture proposed by the ONF is shown in Figure 2.5. It comprises three layers.

The bottom layer is an abstraction layer, which decouples the hardware from the software and is responsible for data transport. This level describes the protocols and algorithms that enable IP packets to advance through the network to their destination. This is called the infrastructure plane.

The second layer is the control plane. This plane contains the controllers providing control data to the data plane so that the data are channeled as effectively as possible. The ONF's vision is to centralize control in order to facilitate the recovery of a great deal of information on all the clients; the centralized controller thus acquires a sort of global intelligence. The infrastructure to be managed is distributed between the controllers. Of course, we need to take account of the problems caused by a centralized environment, and therefore duplicate the decision elements. Controllers carry out different functions, such as the provisioning of infrastructure or the distribution of loads on different network devices to optimize performance or reduce energy consumption. The controller is also in charge of the configuration of


network equipment such as firewalls, authentication servers and, more generally, all the servers necessary for the proper operation of the network. These different machines must be put in the most appropriate places in order to enhance the overall functioning of the network.

Finally, the uppermost layer, the application plane, is responsible for the applications needed by the clients, and for the storage, computation, network, security and management applications. This layer introduces the programmability of the applications and sends the controller all of the information necessary to allow the opening of the software networks meeting the needs of the applications. This layer also includes control, orchestration and management applications that are vital to the good functioning of the company's computing system. The application plane must be able to channel towards the controller the information required to open up the network that corresponds to the application. Any new service can be introduced quickly, and will give rise to a specific network if it cannot be embedded on a pre-existing network.

The ONF architecture is shown in Figure 2.5, with its three layers: the application layer and programmability, the control layer with centralized intelligence, and abstraction at the infrastructure layer. We will come back to look at the interfaces between these layers, which are important for the compatibility of products from different vendors. The ONF has standardized the intermediary layer and the interfaces. Certain parts of the architecture are taken up by other standardization organizations so as to conform to the legal standards.

Figure 2.5. The ONF architecture

The ONF’s general architecture can actually be more detailed, as shown in Figure 2.6. Once again, we see the infrastructure layer, but it is expanded into two


planes: the physical plane and the logical plane. The physical plane is in charge of all the hardware and, more generally, the physical infrastructure. The logical plane corresponds to the establishment of the software networks constructed on the basis of virtual machines, sharing the physical infrastructure in accordance with the rules deriving from the higher layers. This vision of the architecture enables us to clearly distinguish the hardware and the networks that exist in companies from the software, which is added to offer the necessary flexibility. This architecture requires datacenters ranging in size from very small to very large, depending on the size of the company and on the distribution of resources towards the periphery. Telecom operators have not missed this opportunity, and have entered the market as Cloud providers. Companies such as Amazon and Google have gone directly for the goal, putting in place the infrastructure necessary to become major players in the world of telecommunications.

Figure 2.6. The SDN architecture

In the architecture shown in Figure 2.6, we see the control layer and the application layer with the northbound and southbound APIs (Application Programming Interfaces) between those layers, and the eastbound and westbound APIs with other controllers. The northbound interface facilitates communication between the application level and the controller. Its purpose is to describe the needs of the application and to pass along the commands to orchestrate the network. Later on, we will describe the current standards governing this interface. The southbound interface describes the signaling necessary between the control plane and the virtualization layer. With this aim in mind, the controller must be able to determine the elements that will make up the software network to set up. In the other direction, the current network resource consumption must be fed back so that the controller has


as full a view as possible of the usage of the resources. The bandwidth necessary for feeding back these statistics may represent a few percent of the network's capacity, but this is crucial for an optimization which will improve performance by much more than a few percent.

In addition to the two interfaces described above, there are also the eastbound and westbound interfaces. The eastbound interface enables two controllers of the same type to communicate with one another and make decisions together. The westbound interface must also facilitate communication between two controllers, but ones which belong to different sub-networks. The two controllers may be compatible, but they may also be incompatible and, in this case, a signaling gateway is needed.

Figure 2.7 shows a number of important open source programs that have been developed to handle a layer or an interface. Starting from the bottom, in the virtualization layer, network virtual machines were standardized by the ETSI in a working group called NFV (Network Functions Virtualization), which we will revisit in detail later on. Here, let us simply note that the aim of NFV is to standardize all network functions with a view to virtualizing them and facilitating their execution in different places from the original physical machine. To complete this standardization, open source software has been developed which allows full compatibility between virtual machines.

The control plane includes the controllers. One of the best known is OpenDaylight – an open source controller developed collaboratively by numerous companies. This controller, as we will see later on, contains a large number of modules, often developed in the interest of the particular company that carried out the work. Today, OpenDaylight is one of the major pieces in the Cisco architecture, as well as in that of other manufacturers. Later on, we will detail most of the functions of OpenDaylight. Of course, there are many other controllers – open source ones as well – such as OpenContrail, ONOS, Floodlight, etc.

The uppermost layer represents the Cloud management systems. It is roughly equivalent to the operating system on a computer. It includes OpenStack, which has been the system most favored by developers, but many other products exist, both open source and proprietary.

The southbound interface is often known through its standard naming from the ONF: OpenFlow. OpenFlow is a signaling system between the infrastructure and the controller. This protocol was designed by Nicira and has led to a de facto standard from the ONF. OpenFlow transports information that precisely defines the stream in question, in order to open, modify or close the associated path. OpenFlow also determines the actions to be executed in one direction or the other over the interface. Finally,


OpenFlow facilitates the feeding back of measurement information gathered on the different communication ports, so that the controller has a very precise view of the network.

The northbound and southbound interfaces have been standardized by the ONF to facilitate compatibility between Cloud providers, the control software and the physical and virtual infrastructure. Most manufacturers conform to these standards to a greater or lesser degree, depending on their interests and on the range of hardware already in operation. Indeed, one of the objectives is to allow companies that have an extensive installed base to upgrade to the next generation of SDN without having to purchase an entirely new infrastructure. A transitional period is needed, during which we may see one of two scenarios:
– the company adapts the SDN environment to its existing infrastructure. This is possible because the software layer is normally independent of the physical layer. The company's machines must be compatible with the manufacturer's hypervisor or container products. However, it is important to add to or update the infrastructure so that its power increases by at least 10%, but preferably 20 or 30%, to be able to handle numerous logical networks;
– the company implements the SDN architecture on a new part of its network, and extends it little by little. This solution means that both the old and the new generation of the network need to be capable of handling the demand.

Figure 2.7. Example of open source developments

Now let us go into detail about the different layers. Once again, we will start with the bottom layer. The physical layer is essential and is designed around the most generic possible hardware devices in order to obtain the best cost/benefit ratio, but these devices are still suited for networking, i.e. with the necessary


communication cards and the appropriate capacity to host the intended software networks. It is clear that performance is of crucial importance, and that the physical infrastructure has to be able to cope with what is at stake. One of the priorities, though, is to minimize the physical layer to avoid consuming too much in the way of energy resources. With this goal in mind, the best thing to do is to have an algorithm to appropriately place the virtual machines, with a view to putting the highest possible number of physical machines on standby outside of peak hours. Urbanization is becoming a buzzword in these next-generation networks. Unfortunately, urbanization algorithms are still in the very early stages of development, and are not capable of dealing with multi-criteria objectives: only one criterion is taken into account, e.g. performance. An algorithm can be executed based on the criterion of load balancing; it can also take energy consumption into account by doing the opposite of load balancing, i.e. channeling the data streams along common paths in order to be able to place a maximum number of physical machines on standby. The difficulty in the latter case is being able to turn the resources back on as the workload of the software networks increases again. Certain machines, such as virtual Wi-Fi access points, are difficult to wake from standby mode when external devices wish to connect: first, we need electromagnetic sensors that are capable of detecting these mobile terminals and, second, we need to send an Ethernet frame over a physical wire to the Wi-Fi access point with the Wake-on-LAN function.

2.3. NFV (Network Functions Virtualization)

The purpose of NFV (Network Functions Virtualization) is to decouple the network functions from the network equipment. This decoupling enables us to run the software performing the control of a device on a different machine than the device itself. This means we can place the operational programs of a machine in a datacenter within a Cloud. Standardization is carried out by a working group of the ETSI, a European standardization body, but in this particular case, the project is open to all operators and device manufacturers from across the world. Over 200 members are taking part in this standardization effort. The objective of this initiative is to define an architecture that is able to virtualize the functions included in the networking devices, and to clearly identify the challenges needing to be overcome.

The standardization tasks are being carried out by five separate working groups, described below. The first group, "Architecture of the Virtualization", has the objective of producing a reference architecture for a virtualized infrastructure and reference points to interconnect the different components of that architecture. The second group, "Management and Orchestration", is charged with defining the rollout, instantiation, configuration and management of network services that use the NFV


infrastructure. The third group, “Software Architecture”, aims to define the reference software architecture for the execution of virtualized functions. The fourth group, “Security Expert Group”, as the name suggests, works on the security of the software architecture. Finally, the last group, “Performance and Portability Expert Group”, needs to provide solutions to optimize performances and manage the portability of the virtual machines representing the functions of this new environment. Figure 2.8 shows a number of appliances handled by NFV, such as firewalls or SBCs (Session Border Controllers). These functions are moved to a powerful physical machine or to a virtual machine in a datacenter.

Figure 2.8. NFV (Network Functions Virtualization)

Figure 2.9 goes into a little more detail as to the machines whose functions are externalized. ETSI’s aim is to standardize the virtual machines used to perform these functions. The power of the server determines the rate at which the functions are executed. This makes sense for the functions of a firewall or deep packet inspection (DPI), which require extremely high processing power to determine the applications passing through the network in real time.

Figure 2.9. NFV machines: hardware equipment (routers, testers, etc.) replaced by software equipment


2.4. OPNFV

OPNFV, or Open NFV, is a collaborative project from the Linux Foundation. The objective is to create a platform to speed up the development of NFV. Another objective of OPNFV is to increase the agility of services by facilitating better usage of the underlying functions. Telecom operators are particularly interested in this initiative; they hope to obtain a reference open source platform to validate the interoperability of NFVs in a multi-provider context. OPNFV is also an excellent opportunity for the creation of an open platform that could be used by all participants. OPNFV also encourages cooperation between manufacturers, with the aim of driving NFV forward and ensuring consistency, performance and interoperability between virtualized network infrastructures. The OPNFV project cooperates closely with the NFV program at ETSI to ensure a coherent implementation of the NFV standard.

The first thing that is expected of OPNFV is to provide an NFV Infrastructure (NFVI), Virtualized Infrastructure Management (VIM) and APIs to other NFV elements. This collection forms the basic software necessary for Management and Orchestration (MANO). The standards are being drawn up in collaboration with the main open source projects, and OPNFV is working with a large number of these projects to coordinate the integration of NFV. We will study this platform in more detail in the chapter on open source software (Chapter 4).

2.5. Southbound interface

The southbound interface is situated between the controller and the devices on the virtualization plane. This signaling protocol passes the configuration commands in one direction and manages the statistical information feedback in the other. There are many proposals, some open source and some still under development. An initial list of protocols for this southbound interface is:
– OpenFlow from the ONF;
– I2RS (Interface to Routing Systems) from the IETF;
– Open vSwitch Data Base (OvSDB);
– NetConf;
– SNMP;
– LISP;
– BGP.
We will detail the most significant protocols in Chapter 4. However, to introduce this southbound interface, let us look at the most iconic protocol: OpenFlow.


This signaling is shown in Figure 2.10. It takes place between the controller and the network devices. The data transported obey the trilogy "match-action-statistics" – in other words, we need to:
– determine the streams uniquely by matching a certain number of elements such as addresses or port numbers;
– specify the actions transmitted from the controller to the networking devices, such as updating a forwarding table or switching table or, more generally, a forwarding device;
– transfer tables containing statistics on the degree of use of the ports, transmission lines and nodes, such as the number of bytes sent through a given port.
This solution has been standardized by the ONF (Open Networking Foundation), and we will describe it in greater detail in Chapter 4. A minimal controller-side sketch of this match-action-statistics trilogy is given after Figure 2.10.

Figure 2.10. The signaling protocol OpenFlow
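The sketch uses the open source Ryu controller, one of many OpenFlow controllers; the match fields, the output port and the priority are assumptions and are not taken from any particular deployment.

# Hedged sketch of the OpenFlow trilogy with the Ryu controller
# (pip install ryu). Match fields, ports and priority are illustrative.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MatchActionStats(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # MATCH: identify a stream (here, TCP traffic towards port 80).
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)
        # ACTION: forward the stream on output port 2.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
        # STATISTICS: ask the switch for its per-flow counters.
        dp.send_msg(parser.OFPFlowStatsRequest(dp))

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def flow_stats_reply_handler(self, ev):
        for stat in ev.msg.body:
            self.logger.info("flow %s: %d bytes", stat.match, stat.byte_count)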


2.6. The controller

The controller, as its name indicates, is designed to control the data plane, and receives from the application layer the necessary elements to determine the type of control that needs to be exerted. This position is shown in Figure 2.11. Thus, the controller must receive the policy rules to be respected. On the basis of the description of the applications to be taken into account, it deduces the actions needed on the networking equipment. The actions can be carried out on routers, switches, firewalls, load balancers, virtual VPNs and on any other hardware that is necessary for the good functioning of the network.

A very great many open source controllers have been developed. OpenDaylight is a good example; this open source software was developed under the Linux Foundation. A good 40 companies devoted experienced developers to the project. The controller as such has numerous decision-making modules and various northbound and southbound interfaces. We will describe the latest release of OpenDaylight hereinafter.

Figure 2.11. The control layer and its interfaces

The controller contains modules that handle different functions necessary for the proper operation of the network. Of these functions, one of the most important is that of a “load balancer”. In fact, this term denotes algorithms that determine the


best paths to follow on the data plane. This module must decide, on the basis of the statistics received, which nodes in the network infrastructure the packets should be sent through. This decision should optimize the user demands or, more specifically, the user applications. Load balancing as such mainly makes sense during peak hours. At other times, the load balancer must still determine the most appropriate paths on the basis of the users' requirements, but load balancing then becomes load unbalancing: unlike at peak times, we need to channel as many streams as possible through common paths so as to be able to shut down the maximum possible number of intermediary nodes. This function is represented diagrammatically in Figure 2.12, and a simple sketch of the peak/off-peak policy is given after the figure.

Figure 2.12. The load balancing protocol
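In this sketch, the paths, node names and load figures are illustrative assumptions; a real controller module would derive them from the statistics fed back over the southbound interface.

# Hedged sketch of the load balancing / load unbalancing policy.
# Topology and load values are illustrative only.

def choose_path(candidate_paths, node_load, peak_hours):
    """candidate_paths: list of paths (lists of node names);
    node_load: dict node -> load in [0, 1]."""
    def path_load(path):
        return max(node_load.get(node, 0.0) for node in path)

    if peak_hours:
        # Peak hours: spread the streams, pick the least-loaded path.
        return min(candidate_paths, key=path_load)
    # Off-peak: concentrate streams on paths already carrying traffic
    # so that idle intermediary nodes can be placed on standby.
    active = [p for p in candidate_paths if path_load(p) > 0.0]
    return max(active or candidate_paths, key=path_load)

paths = [["A", "R1", "B"], ["A", "R2", "B"]]
load = {"R1": 0.4, "R2": 0.0}
print(choose_path(paths, load, peak_hours=True))    # ['A', 'R2', 'B']
print(choose_path(paths, load, peak_hours=False))   # ['A', 'R1', 'B']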

2.7. Northbound interface

The northbound interface between the application plane and the controller passes information pertaining to the applications' needs so that the controller can open up the best possible software network, with the appropriate quality of service, adequate security and the management essential for operations to be carried out with no problems. The basic protocol for these transmissions is based on the REST (Representational State Transfer) API. This application interface must enable us to send the information necessary for the configuration, the operations and the measurement reports. We find this protocol, based on RESTful principles, integrated into numerous Cloud management systems and, in particular, in the interfaces of most of these systems, such as:
– OpenStack, in which the REST API is integrated;
– those of the service providers;
– the Virtual Private Cloud (VPC) API.
A hedged sketch of such a northbound exchange is given below.
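In the sketch, which uses the Python requests library, the controller address, the resource path and the JSON fields are hypothetical, since each controller defines its own northbound API.

# Hedged sketch of a REST northbound exchange. URL, path and JSON
# schema are hypothetical and only illustrate the REST style.
import requests

CONTROLLER = "http://controller.example:8181"       # hypothetical address
AUTH = ("admin", "admin")                            # hypothetical credentials

# Describe the application's needs so the controller can open a network.
service_request = {
    "application": "voip-service",
    "bandwidth_mbps": 10,
    "max_latency_ms": 30,
    "isolation": True,
}

resp = requests.post(f"{CONTROLLER}/api/v1/software-networks",  # hypothetical path
                     json=service_request, auth=AUTH, timeout=5)
resp.raise_for_status()
network = resp.json()
print("software network created:", network)

# Each resource is identified individually and can be read back.
state = requests.get(f"{CONTROLLER}/api/v1/software-networks/{network['id']}",
                     auth=AUTH, timeout=5)
print(state.json())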


The representations facilitated by the REST protocol enable us to pass the information over the northbound interface with the following properties:
– each resource is identified individually;
– the resources can be manipulated through representations;
– the messages are self-descriptive: they explain their nature by themselves;
– each access to the subsequent states of the application is described in the present message.

2.8. Application layer

The application layer is essentially formed of Clouds that contain the application virtual machines. These may be business machines or network element management machines – responsible, for instance, for managing handovers or determining the best access, at any given time, for a multi-technology terminal. Thus, this layer essentially contains the operating systems for the Cloud. The best-known such system is, undoubtedly, OpenStack. It is a Cloud management system that controls large pools of resources offering processing power, storage and network resources. OpenStack has already had more than twenty releases in only nine years (start of 2011 to start of 2019). The latest version is Stein, released at the start of 2019. OpenStack is open source software under the Apache license.

The OpenStack architecture is shown in Figure 2.13. It is modular and contains numerous modules, such as Nova for computation, Swift for storage, Glance for the imaging service, Dashboard for the settings and control panel, etc.

Figure 2.13. The OpenStack system


The part of most interest to us in this book is Neutron, which handles the network module. In this context, OpenStack provides flexible network models to take care of the applications. In particular, OpenStack Neutron manages the IP addresses, and allows for static addresses or uses DHCP. Users can create their own networks, in which SDN technology is used. OpenStack has numerous extensions, such as IDS (Intrusion Detection Systems), load balancing, and the option to deploy firewalls and VPNs. A hedged example of this Neutron workflow is sketched after Figure 2.14.

To conclude this section and summarize the SDN architectures in place, Figure 2.14 shows the different components that have been put in place to achieve overall operation. The top and bottom parts represent the Cloud and the physical/logical networks. Between these two points, management and control of the network and of the applications need to take place. On the business application side, we find sets of software modules – mostly open source – to deploy cloud-computing infrastructures and, more specifically, IaaS (Infrastructure as a Service). At the other end, we find the applications to establish a virtualized network structure, with the commands necessary to handle the business applications.

Figure 2.14. The overall architecture of SDN solutions
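As announced above, here is a hedged sketch of the Neutron workflow using the openstacksdk library; the cloud name, the network names and the address range are assumptions used only for illustration.

# Hedged sketch: creating a tenant network and subnet through Neutron
# with openstacksdk (pip install openstacksdk). "mycloud" refers to an
# entry in clouds.yaml and is an assumption.
import openstack

conn = openstack.connect(cloud="mycloud")

network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="192.0.2.0/24",     # documentation prefix, illustrative only
    enable_dhcp=True,        # let Neutron hand out addresses via DHCP
)
print(network.id, subnet.cidr)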

2.9. Urbanization

We have already mentioned the issue of the urbanization of virtual machines in a network. Let us now look at this concept in a little more detail. It involves placing the virtual machines in the network, i.e. in the Cloud, so that optimum performance is attained. While performance is obviously very important, in today's world the cost of datacenters, and therefore of networks, is driven mainly by energy expenditure. To clarify the issue, Figure 2.15 shows the cost of a datacenter environment, split between hardware costs and energy consumption (including air conditioning).

Figure 2.15. The cost of a datacenter environment (in billions of dollars, 2001–2020), split between energy consumption/air conditioning and hardware. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Urbanization is an initial response to this demand to economize on power supply: it attempts to bring together the virtual machines on common servers so as to be able to put a large number of servers, which have become idle, on standby. This solution is very useful at night and at low-demand times of the day. During peak hours, the servers need to be woken again, with a wider distribution of the virtual machines between the different servers. Urbanization also takes account of cost factors, leading us to use the physical machines at night, and to migrate the virtual machines, having them go all around the world in the space of 24 hours so as to stay in night mode. Evidently, this solution is viable only for light virtual machines that can be shifted with no problems, and is absolutely not appropriate for virtual storage machines processing "Big Data". A simple consolidation sketch illustrating the energy-saving case is given below.

Urbanization may affect other criteria, such as the availability of the network, by facilitating access to emergency paths and virtual machines, distributed appropriately so that there are multiple access paths. Reliability also pertains to the same sensitive points, and solutions to reliability issues may be found through virtualization and urbanization. Security considerations may also be behind an urbanization. For example, certain sensitive machines may regularly change places so as to avoid having to deal with DDoS attacks. Similarly, a network may be cloned and, in the event of an attack, strongly authenticated clients are switched to the clone, while the original network is gradually deprived of its resources to prevent it from becoming a honeypot for the attacker.
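The sketch makes strong simplifying assumptions: one capacity figure per server, one load figure per virtual machine, and a first-fit decreasing heuristic standing in for a real urbanization algorithm.

# Hedged sketch of an urbanization pass: first-fit decreasing consolidation
# of virtual machines onto as few servers as possible, so that the
# remaining servers can be placed on standby. Figures are illustrative.

def consolidate(vms, server_capacity):
    """vms: dict VM name -> load; returns dict server index -> VM names."""
    placement, residual = {}, []
    for name, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(residual):
            if load <= free:                  # first server with enough room
                residual[i] -= load
                placement[i].append(name)
                break
        else:                                 # otherwise open a new server
            residual.append(server_capacity - load)
            placement[len(residual) - 1] = [name]
    return placement

vms = {"vRouter": 0.6, "vFirewall": 0.3, "vSIP": 0.2, "vIDS": 0.5}
print(consolidate(vms, server_capacity=1.0))
# Servers that receive no virtual machine can be put on standby.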


Figure 2.16 shows a set of software networks, supplemented by diverse and varied virtual machines (servers, IP-PBX, Wi-Fi access points, etc.), which must obey an urbanization algorithm to optimize a set of criteria. Later, we will revisit the question of the intelligence to be introduced into networks to optimize the processes of urbanization.

Figure 2.16. The urbanization of a network environment. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

2.10. Conclusion

The new generation of networks is presented in Figure 2.17. It takes the form of datacenters of varying size, from gargantuan datacenters to femto-datacenters located in the user's pocket. These datacenters contain virtualized networking devices or networking functions that are decoupled from the hardware machines. This ensemble is controlled by pilot, orchestrator or controller machines. Figure 2.17 only shows one controller but, in fact, given the centralized vision of control, the number of devices handled by a single controller can be no greater than a few hundred. For a rather big network, we need a great many different controllers, which will be more or less mutually compatible. The eastbound and westbound interfaces are likely to play an increasingly important role in facilitating the connection of the different sub-networks. It is clear that a very great many new proposals are likely to emerge in this direction, with extensions of protocols already in use, such as BGP.


Figure 2.17. Vision of tomorrow’s SDN networks. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

One difficult problem that is beginning to be considered is the urbanization of virtual machines that are in the datacenters. Depending on the optimization criterion, the time of day or the performance requirements, the virtual machines migrate so that they are located in the best possible place. Most typically, urbanizations are done for the purpose of saving energy, which tends to consist of grouping together the virtual machines on common servers, or for the purpose of optimizing performance, which involves the opposite operation – distributing the resources to the servers as much as possible. It should be noted that in terms of networks, the optimum placement of the virtual machines is no longer an equal distribution between all the physical machines, but on the contrary, the filling of those physical machines with the maximum possible number of virtual machines, obviously without causing a degradation in performance. The other physical machines are placed on standby to save energy. In the networks, this involves sending the streams along common paths, rather than dispersing them in the infrastructure network.

The SDN market is fairly difficult to predict, because while demand is high, so too are the costs involved in making the move to this new architecture. These high costs are due to novelty and to the difficulty in integrating SDN into the existing networks. In particular, operators who possess a significant physical infrastructure are wondering about the integration of SDN: how can it be introduced without an additional costly structure? Indeed, it is not an option to demolish everything and rebuild from scratch in this new context. Therefore, we need to introduce SDN in specific environments such as the renovation of a part of the network or the construction of a new localized infrastructure.


Figure 2.18 gives an indication of what the SDN market could be in the coming years. In this figure, we can see that there is a good distribution between service providers, the companies offering datacenter services, Cloud providers and, to a lesser extent, the companies' access networks. Finally, if we look at the manufacturers' vision, we see two avenues, represented by VMware and Cisco: software-orientation and hardware-orientation. The software-oriented vision can be installed on the physical machines of practically all manufacturers, while the hardware-oriented vision greatly favors the manufacturers – especially the well-established ones.

Figure 2.18. OpenFlow market and, more generally, SDN market (in billions of dollars, 2012–2020), broken down between enterprise access, Cloud access, Cloud access providers and service providers. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Even though SDN is today designed for high-capacity networks, there is a strong chance that it will become widely used on the periphery and that SDN will prove to be the right solution for controlling all peripheral devices. In particular, OpenFlow Wi-Fi access points are coming onto the market and are likely to make their presence felt fairly quickly, with SDN following in their wake. This will be the subject of Chapters 4 and 9, which focus on the smart edge.

3 Fabric, SD-WAN, vCPE, vRAN, vEPC

In the previous chapters, we introduced SDN, its centralized architecture and its use of virtual machines. In this chapter, we will study the main categories of products linked to SDN. They can be divided into five main classes. The first class corresponds to fabrics, which constitute the access networks to the servers of datacenters. SD-WANs (Software-Defined Wide Area Networks) are the second class; they relate to a solution that optimizes the use of WAN networks in a company. The third class gathers solutions for virtualizing a local network, under the name vCPE (virtual Customer Premises Equipment). The fourth class, the vRAN (virtual Radio Access Network), is the virtualization of the network that extends between the core network and the antennas to which users are connected. Finally, the last class is the virtualization of the core network, or vEPC (virtual Evolved Packet Core). We will go over these different classes in the next sections.

3.1. Fabrics control

A fabric is the network linked to a datacenter that allows rapid movement from one server to another within the same datacenter. The fabric is also used in order to access any server from the outside. The word fabric comes from the fact that there are many paths to move from one server to another, in order to ease load balancing and traffic distribution. The goal of SDN is to centralize the control that distributes the flows over all the fabric paths. Two fabrics are described in Figure 3.1: the old generation and the new generation used with SDN. The left-hand figure shows the solution that is still frequently used: a level of switches and a level of routers. The new generation has two levels of switches to reach even higher capacities, thanks to the switches' high speed. The switches that are closest to the servers are called leaf switches, like the leaves of a tree, whereas the switches that are closer to the exit are called spine switches, forming the spine of the



network. Thanks to these switches, any server in the datacenter can be linked to any other server in the datacenter through different paths. These paths are provisioned and set up automatically by the SDN controller in order to optimize the overall throughput. A simple sketch of how flows can be spread over these paths is given after Figure 3.1.

Figure 3.1. Old and new generation fabric architecture
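In the sketch, the number of spine switches and the hash-based selection are illustrative assumptions: each flow is mapped deterministically to one spine so that its packets follow a single path, while different flows are distributed over the fabric.

# Hedged sketch: hashing flows onto the spine switches of a leaf-spine
# fabric. Switch names and the 5-tuple values are illustrative.
import hashlib

SPINES = ["spine-1", "spine-2", "spine-3", "spine-4"]

def pick_spine(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow (5-tuple) to one spine switch."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return SPINES[int.from_bytes(digest[:4], "big") % len(SPINES)]

# Two flows between the same pair of servers may take different paths.
print(pick_spine("10.0.1.10", "10.0.2.20", 44231, 443))
print(pick_spine("10.0.1.10", "10.0.2.20", 51112, 443))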

The SDN controller can also control several fabrics simultaneously: it manages intra- and inter-datacenter flows. Figure 3.2 shows such a case, with two fabrics interconnected through a WAN (Wide Area Network). The protocols used in intra- and inter-fabric communications will be introduced in the chapter on Cloud networking protocols.

Figure 3.2. Fabrics interconnection


Since 2015, many architectures have been brought to market to manage fabrics. We will describe some of them.

3.2. NSX and VMware company

VMware is the first major company to have turned towards virtualization. It is a software company, not a hardware company, which explains why its NSX architecture is entirely software-based and relies on the hardware of the network equipment manufacturers and on Cloud providers. The NSX architecture is presented in Figure 3.3. This architecture, as we have just said, is completely software-based and open, making use of open source software machines.

The VMware platform is essentially based on the virtualization of the NSX architecture. This platform contains the main networking and control elements. On the networking side, the machines may be level-2 or level-3 routers or switches, depending on the user requirements. On the control side, the platform contains the basic elements, with the possibility of putting virtual firewalls in place, managing the streams to channel them along paths determined by the load-distribution machine, and opening virtualized VLANs. (Note that here, the word "virtual" appears twice in "virtualized VLAN": indeed, a VLAN – Virtual Local Area Network – is a local software network that uses physical machines which may not necessarily be located in the same place, and that local network is, itself, situated in a virtualized universe.)

The uppermost layer is formed of Cloud management systems, which can either be VMware systems or come from open source Cloud management systems. The northbound interface is a REST-type API. The software network part is based on datacenters with internal virtual devices, or on physical machines from different manufacturers with which VMware has developed partnerships. The protocol of the southbound interface is OpenFlow, produced by Nicira, which was bought by VMware for over a billion dollars. The hypervisors or containers supporting the virtual machines may be of any type, as may the physical hardware. The basic virtual device is Open vSwitch, which was also originally developed by Nicira. It is one of the very first virtual switches available on the open source market. Today, it is an extremely powerful switch with numerous functions, which we will now discuss.


Figure 3.3. The NSX architecture

Figure 3.4 gives a somewhat more detailed view of the NSX architecture with its various components. Let us briefly come back to Open vSwitch, which represents one of the great de facto standards of virtual network machines. This open source switch is shown in Figure 3.5, with the different modules associated with its management. It enables us to group together virtual machines that manage security, quality of service, monitoring and automated control.


Figure 3.4. Detailed NSX architecture

Figure 3.5. The characteristics of Open vSwitch


From the standpoint of security, the switch offers VLAN isolation and traffic filtering. Monitoring takes place by way of various protocols such as NetFlow, or specific versions provided by manufacturers linked to VMware. Quality of service is handled by traffic-shaping and traffic-queuing techniques. The southbound interface with the controller uses several protocols – including, of course, OpenFlow, which is privileged, as well as OvSDB (Open vSwitch Data Base) and various network machine configuration protocols. A hedged configuration sketch for Open vSwitch is given at the end of this section.

The key functions of NSX VMware can be summarized as follows:
– logical switching: NSX performs complete level-2 or level-3 switching in a virtual environment, dissociated from the underlying hardware;
– NSX gateway: NSX offers level-2 gateways for a transparent connection to the physical loads and the legacy VLANs;
– logical firewall: NSX opted for a distributed, virtualization-oriented firewall with identification and activity monitoring;
– logical load distributor: NSX provides a complete load distributor with SSL termination;
– logical VPN: NSX facilitates the opening of site-to-site or remote-access VPNs in software form;
– NSX API: NSX provides a RESTful API for integration with any Cloud management platform.

The mode of operation means that we can have complete dissociation of the physical infrastructure from the software networks formed of virtual machines. Virtualization of the networks enables us to superpose them on any physical hardware whose input/output ports are suited to the transmissions. In addition, NSX works with any type of hypervisor, which makes it easy to virtualize the servers. The NSX system facilitates the reproduction of the physical network model in software form, whether at level 2, 3, 4 or 7. NSX also includes automation by way of a RESTful (REpresentational State Transfer) API, which enables the Cloud management platforms to automate the supply of network services. Finally, NSX offers a platform that is capable of adding in services from other suppliers. These software and hardware products, stemming from the integration of physical machines from VMware's partners, include network gateways, application provisioning and security services.
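As announced above, here is a hedged sketch that drives Open vSwitch from Python through the standard ovs-vsctl command-line tool; the bridge name, port names, VLAN tags and controller address are illustrative, and in an NSX-like deployment these operations would be performed by the controller over OvSDB and OpenFlow rather than by hand.

# Hedged sketch: configuring Open vSwitch via ovs-vsctl from Python.
# Names, VLAN tags and the controller address are illustrative only.
import subprocess

def ovs(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                                 # create a virtual switch
ovs("add-port", "br0", "vnet0", "tag=10")            # VM port isolated on VLAN 10
ovs("add-port", "br0", "vnet1", "tag=20")            # second VM on VLAN 20
ovs("set-controller", "br0", "tcp:192.0.2.1:6653")   # hand control to an SDN controller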


3.2.1. CISCO ACI (Application Centric Infrastructure)

The company CISCO took rather a long time to come out with its own SDN architecture, which it did in 2014. This architecture has the application at its center. The objective is to automatically construct the networks needed by the applications in an environment that essentially comprises CISCO's own physical machines. This architecture is presented in Figure 3.6. As its name indicates, it is "application-centric", because the networks depend precisely on the characteristics of the service to be performed. However, it is also a hardware-based architecture, because CISCO hardware is needed in order to make it work, although physical machines from other manufacturers can be used to supplement the hardware part.

Figure 3.6. The ACI architecture

In Figure 3.6, we can see the heart of the architecture, which goes from the applications to a controller with numerous functions to create customized software networks. The northbound interface is particularly developed at CISCO. It uses a REST representation, like most SDN products. The Cloud supports the applications and – by way of an orchestrator, which might be found, for example, in OpenStack – communicates with the controller using a CISCO API. The controller includes numerous intelligent modules to interpret the demands from the northbound interface and transform them into software networks associated with each application or set of applications with the same characteristics. This intelligent module, or APIC (Application Policy Infrastructure Controller), is at the heart of CISCO’s architecture. It uses artificial intelligence techniques to determine the relations between the needs of the applications and the networks that need to handle them. The complete ACI architecture is presented in Figure 3.7. We can see that it is based on a CISCO physical infrastructure in the form of the Nexus 9000 machines. The protocols for the southbound interface are of varying types,

with no particular preference given to OpenFlow. The protocols between the servers and, more specifically, between virtual machines, are also of varying types, but they include TRILL and LISP, which we will detail in Chapter 9.

Figure 3.7. Detailed ACI architecture

Thus, the essential components of the ACI architecture are: – the APIC (Application Policy Infrastructure Controller) made by CISCO, which is the main component of the architecture; – the application sequencing profiles: they correspond to the logical representation of all the components necessary to provide service and of their interdependence on the hardware; – the physical devices necessary for CISCO’s ACI architecture: they belong to the CISCO Nexus range. Cisco Nexus 9000 machines function either in Cisco NX-OS mode for purposes of compatibility and consistency with the older Cisco Nexus switches, or in Cisco ACI mode so as to take full advantage of the services based on the policies and functions of infrastructure automation. 3.2.2. OpenContrail and Juniper In 2014, Juniper also released its own SDN architecture, essentially based on open source software, but with Juniper’s own addition. The heart of this architecture is the OpenContrail controller. OpenContrail from Juniper is an open, simple and agile SDN solution that is able to automate and orchestrate the creation of virtual networks on demand. More

precisely, there was a separation between the open source, now called Tungsten Fabric, and the Juniper part, which kept the name OpenContrail. Virtual networks are perfectly well suited for applications that require networks to be opened. In this architecture, we can emphasize its simplicity for the establishment of virtual networks that integrate smoothly with the existing physical networks, and are simple to manage and orchestrate. The environment is also open. There is no dependence on one particular provider. In addition, this open architecture eliminates the costs related to device manufacturers. The Juniper platform works with a broad range of open hypervisors, orchestrators and physical networks. Finally, an enormous advantage is the agility of the solution, which reduces the time to market of new services by automating the creation of software networks, facilitating the interconnection of private, public and hybrid clouds. Service providers can use Contrail to offer a whole range of new and innovative services – particularly Cloud-based and virtualized services. The Juniper platform is shown in Figure 3.8. It should be noted that, as this figure shows, the platform tends to use OpenStack more than anything else, but this is not obligatory; other Cloud management programs are also acceptable.

Figure 3.8. The OpenContrail platform from Juniper


3.2.3. Brocade Brocade, which was acquired by Broadcom in 2016, is a company that is investing heavily in SDN. They offer a platform that is more or less identical to that described above. It has a four-level architecture that is fairly typical in the world of SDN. The highest level pertains to the Cloud management system, which may be OpenStack, VMware Clouds or other solutions of the same type. This level contains the processes of orchestration and automation for requesting the opening of software networks. The underlying control level contains the decision-making algorithms to determine which paths to open, and additional algorithms, linked to the costs, for example. In this category, we could point to cost management algorithms that determine the most cost-effective placement of the paths, or those which will likely generate most revenue. The southbound interface can use different protocols, but OpenFlow and OpenScript are favored for this relation between the controller and the software networks. The controller can be based on open source technology such as OpenDaylight or come from other companies such as VMware. The next layer in this architecture from Brocade is the virtualization layer, with a strong NFV component and the possibility of creating software networks on machines from Brocade or other manufacturers. This hardware part represents the physical layer. Various transport protocols are used in the physical network, including VXLAN, NVGRE, TRILL and NV03 (protocol from the IETF’s workgroup Network Virtualization overlay) which we will examine in detail in Chapter 9. Brocade’s solution offers flexibility and agility in the SDN architectures. It is shown in Figure 3.9.

Figure 3.9. Brocade’s architecture


3.2.4. Nokia’s SDN architecture Nokia’s solution is, to a large extent, attributable to its daughter company: Nuage Networks. The architecture is shown in Figure 3.10. The basic outlines are the same as those described above in reference to other manufacturers. The part where we find something highly specific to Nokia pertains to optical networks and mobile networks. In particular, Nokia offers a vEPC (virtual Enhanced Packet Core) – i.e. a core network for a latest-generation mobile network, which can be seen as a core network for 5G. With its products, Nokia wishes to encourage network functions virtualization and to have a significant impact on the standardization of NFV.

Figure 3.10. The SDN platform from Nokia

3.3. SD-WAN

SD-WAN (Software-Defined Wide Area Network) is a solution to control and manage a company's multiple WANs. There are different types of WAN: Internet, MPLS, LTE, DSL, fiber, wired network, circuit link, etc. SD-WAN uses SDN technology to control the entire environment. As in SDN, the data plane and the control plane are separated. A centralized controller must be added to manage flows, routing and switching policies, packet priorities, network policies, etc. SD-WAN is an overlay technology: it builds an overlay of nodes on top of the underlying networks. Figure 3.11 shows an SD-WAN network. We can see a group of WANs represented by three networks: the MPLS network, usually set up by a telecom operator or an integrator; the Internet, which is a very cheap or free network but whose quality of service is not always known; and the LTE network, i.e. the mobile operator's network, which can be accessed via 3G, 4G or 5G. The choice between these three networks is not an easy one, since it depends on the quality of service required by the application, the time of day, the security level, etc. The goal of SD-WAN is to direct the flow of packets towards the appropriate networks in order to optimize costs. The idea is to use the Internet as much as possible in order to decrease the company's communication costs.

Figure 3.11. SD-WAN network

Using the following figures, let us see how SD-WAN works. The company's networks are described in Figure 3.12. The company has several networks, in particular several WANs, to provide communication between its campuses, affiliates, headquarters and datacenters. Access networks can be added to these WANs.

Figure 3.12. Company networks


The overlay network must be built from the network topology, as has been done in Figure 3.13. For this, we must add sensor-equipped networking devices able to measure performance with their neighboring sensors. The main performance indicators are the response time between two sensors, one at the entry and the other at the exit of a network, the throughput, the level of security, etc. In other words, the sensors must measure the performance of the networks that make up the company's infrastructure.
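A minimal sketch of such a measurement is given below: the "probe" simply times a TCP connection towards the sensor at the other end of each WAN. The addresses are fictitious, and a real SD-WAN probe would also measure loss, jitter and throughput.

    # Sketch: a crude response-time measurement between two SD-WAN probes.
    # The measurement here is simply the TCP connect time towards the remote probe.
    import socket, time

    def probe_rtt(host, port=443, samples=5):
        """Return the average TCP connect time towards a remote probe, in ms."""
        total = 0.0
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=2):
                pass
            total += (time.monotonic() - start) * 1000
        return total / samples

    # Hypothetical exit probes of the MPLS, Internet and LTE accesses.
    for name, host in [("MPLS", "10.0.1.1"), ("Internet", "192.0.2.10"), ("LTE", "198.51.100.20")]:
        print(name, round(probe_rtt(host), 1), "ms")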

Figure 3.13. Overlay network of the SD-WAN network

Once we have obtained the performances of different networks, these networks can be removed so as to keep only the overlay network, that represents the global network and the associated measures, to go from one sensor to the other. Figure 3.14 shows the overlay network with only the intermediary devices left to separate the company’s networks.

Figure 3.14. Overlay network


The next step is to send all the measures to a controller. It will automate the decisions on which network to use to obtain the service quality required by the application. Figure 3.15 describes this architecture with the controller. The controller can be located in different places such as the company’s headquarters, its datacenter, a focal point located at the heart of the network, a server located in the MPLS network or a datacenter that belongs to an integrator or a service supplier.

Figure 3.15. SD-WAN network with its controller
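The sketch below illustrates, under simplified assumptions, the kind of decision this controller can automate: choose the cheapest WAN whose latest measurements still satisfy the application's requirements. The thresholds, costs and measurements are invented for the example.

    # Sketch: an SD-WAN controller decision based on the measures fed back by the probes.
    WAN_MEASURES = {               # latest measures reported by the probes (illustrative)
        "MPLS":     {"latency_ms": 18, "loss_pct": 0.0, "cost": 3},
        "Internet": {"latency_ms": 35, "loss_pct": 0.4, "cost": 1},
        "LTE":      {"latency_ms": 55, "loss_pct": 1.0, "cost": 2},
    }

    def select_wan(app_profile):
        """Return the cheapest WAN that satisfies the application's requirements."""
        eligible = [name for name, m in WAN_MEASURES.items()
                    if m["latency_ms"] <= app_profile["max_latency_ms"]
                    and m["loss_pct"] <= app_profile["max_loss_pct"]]
        if not eligible:                      # nothing fits: fall back to the best latency
            return min(WAN_MEASURES, key=lambda n: WAN_MEASURES[n]["latency_ms"])
        return min(eligible, key=lambda n: WAN_MEASURES[n]["cost"])

    print(select_wan({"max_latency_ms": 40, "max_loss_pct": 0.5}))   # -> Internet
    print(select_wan({"max_latency_ms": 20, "max_loss_pct": 0.1}))   # -> MPLS

As the second call shows, demanding applications fall back on the MPLS network, while the others are steered towards the cheaper Internet access, in line with the cost objective described above.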

The key elements of SD-WAN are the following:
– optimization of the choice of the WAN through which each packet must pass, so as to make the best use of the networks;
– use of probes with integrated DPI (Deep Packet Inspection) to determine the response time and throughput of each flow;
– the possibility of switching flows according to the application and the required quality of service;
– optimization of access to the company's datacenters for all the flows coming from all the SD-WAN access points;
– communication security through the opening of secure channels, end to end or over specific networks;
– automation of configurations thanks to the controller;
– improved availability, thanks to multiple connections between two network points;
– three SD-WAN approaches, defined according to the controller's position, which are described below.
The first approach, the CPE-centric approach, revolves around a controller located in one of the company's local networks, with several virtual machines also located in the user's local network. The second approach, the WAN-centric approach, revolves around the use of virtual machines located in one or several datacenters located in the WAN. The third possibility is a hybrid approach in which some virtual machines are located in the CPE and others are located in the WAN. For example, in the CPE, we can find a WAN optimizer, cache servers, a NAC, etc. In the WAN, we can find an IPS, a malware detection system, etc.
SD-WAN actors comprise several categories of suppliers, among which are:
– software vendors who sell their software to major accounts, and mostly to integrators;
– telecom equipment manufacturers who offer clients the hardware and software needed to build an SD-WAN;
– integrators who offer a full solution built from the vendors' software and the telecom equipment manufacturers' hardware. The telecom operators are often part of the integrators;
– SD-WAN operators who own all of the elements needed to set up a full solution.
Figure 3.16 shows the main aspects that are taken into account in SD-WAN and the names of some suppliers. This is not an exhaustive list.

Figure 3.16. Advantages of SD-WAN and some associated manufacturers

3.4. vCPE

vCPE (virtual Customer Premises Equipment) involves virtualization of the local network. The CPE itself includes equipment located in the company, such as the router, the firewall, intermediary devices such as middleboxes, etc. This equipment is virtualized in the vCPE, and the main goal is to determine where to place these virtual machines. The choices are similar to those made for the controller in SD-WAN. The LAN virtual machines must be positioned in a datacenter, itself located either in the local network, or in the PoP (Point of Presence) of the operator who manages the access interface to the local network, or in one of the operator's datacenters, or in one of the company's datacenters reached through the operator's network. A specific case, called uCPE (Universal Customer Premises Equipment), is the implementation of the virtual machines in a white box, i.e. a small datacenter built from open source equipment. This open source equipment with open specifications can, for example, come from the OCP (Open Compute Project). We will study this project in further detail in the chapter on open source software. This white box contains virtual machines, the VNFs (Virtual Network Functions), which carry out the CPE's functions. We can see an example of a vCPE in Figure 3.17. In this figure, the main virtual machines are positioned in a hybrid way, partly in the local network and partly in the core network.


Figure 3.17. Example of a vCPE with the hybrid architecture

More and more often, these two products, SD-WAN and vCPE, are integrated into one single product. The virtual machines carrying out the CPE are managed by the SD-WAN controller. In this way, a service can be opened that takes into account both the local network part and the WAN part, which allows service quality and security to be managed from end to end.

3.5. vRAN

vRAN (Virtual Radio Access Network) is the virtualization of RAN equipment, that is, of the network which links the antennas to the core network. This equipment essentially consists of RRHs (Remote Radio Heads). The RRH takes care of the conversion of the radio signal into a digital signal, and then of its transformation into an optical signal in order to send the digital signal to the BBU (BaseBand Unit). The BBU can then connect to the core network in order to pass the packets on to the recipient. The BBU processes the digital signal coming from the RRH. Many solutions can be implemented to cluster these functions. Figure 3.18 shows the solution offered by C-RAN (Cloud RAN), which we will study in the chapter on 5G. In this case, the BBU equipment is virtualized and deported to a datacenter that can be located up to a hundred kilometers from the antenna. The BBUs of several antennas are clustered in the datacenter to perform load balancing and optimize the use of resources. This solution is one form of vRAN.

Figure 3.18. Cloud-RAN (C-RAN): a 5G-type vRAN

vRAN is an NFV application at the level of the access network (RAN). It is a centralization and virtualization, in the Cloud, of the hardware part of the base stations. MEC (Mobile Edge Computing) datacenters should ensure the best compromise between the distance from the datacenter to the antenna and the RRH multiplexing, in order to optimize latency, energy consumption and costs.

3.6. vEPC

vEPC (virtual Evolved Packet Core) is the virtualization of the core network of mobile networks. This virtualization involves all the equipment needed for the core network to function, virtualized in one or several datacenters according to the network size and the required multiplexing. We find equipment such as the SGWs (Serving Gateways) that route the packets coming from the access networks, the PGWs (PDN Gateways) that provide the interfaces between the core network and external data networks, and the PCRFs (Policy and Charging Rules Functions) that handle policy and charging control for the clients.


More generally, the core network is virtualized by positioning virtual machines (as in the previous cases) in one of the company’s datacenters located in the core network or in an access network. As indicated in Figure 3.19, virtualized equipment comprises routers, switches, LSR, firewall, controller, load balancer, sensor, etc.


Figure 3.19. Virtualization of a core network

4 Open Source Software for Networks

One of the major developments expected in the 2020s is operating open source software. If many open source software are already being used, they are only operating through companies that take care of their maintenance and developments. These companies can come from the world of open source software, but they can also be integrators or manufacturers that have part of their architecture integrated in open source software. In the beginning, the difficulty will be that there are many developments, often competitors, and they end up joining each other. In this chapter, we will examine the main open source software in the field of networks. 4.1. Open source software The Linux Foundation is one of the main organizations collecting developments in the network world. More generally, it takes the open source software of the computing world into its care. Figure 4.1 shows the main ongoing projects.

Figure 4.1. Linux Foundation’s projects. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


However, the Linux Foundation is not the only organization to focus on these developments. For the network part, Figure 4.2 shows a wider and more exhaustive range of major projects regarding networks.

Figure 4.2. Range of open source software projects in the field of networks. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Facing so many projects, we decided to focus on presenting only the most important open source software throughout a study presented in Figure 4.3.

Figure 4.3. Most important open source software


The first is not really open source software, rather open source hardware. But let us start with this one. Given their importance, the next two open source software will be further explained in this book. After examining OCP, we will describe the OPNFV project, which gathers most of the software, and then we will give some information on the main open source software.

4.2. Open Compute Project (OCP)

The main goal of the OCP is to give open access to hardware specifications, and more precisely to hardware intended for datacenters of different sizes. Open access to these specifications encourages manufacturers to produce the machines in mass quantities in order to obtain very low costs. ODMs (Original Design Manufacturers) manufacture these machines as white-label products. More specifically, white boxes are unbranded servers manufactured in very large quantities in order to support virtualization. The market for these white boxes is booming, growing from $900 million in 2017 to $4 billion in 2020. We studied a specific case of white boxes in the previous chapter, on vCPEs. The project was first launched by Facebook in order to produce low-cost datacenters. After the purchasing cost, the second goal is to produce equipment with the lowest possible consumption, perfectly adapted to the users' needs. Among the general goals are a 480 V electrical supply in order to reduce energy losses, the removal of everything which does not contribute to the servers' efficiency, the reuse of the hot air emanating from the servers, the optimization of cooling in the machine room, the elimination of the central uninterruptible power supply, etc. For the network part, the subjects are the full disaggregation of software and hardware for open network environments, the use of Linux and REST interfaces, automated configuration management and provisioning of bare-metal servers – meaning servers that are delivered to the client without any software – motherboards for all-purpose or multifaceted switches, fully open connectivity, energy efficiency, the design of the cooling system and, finally, compatibility with SDN architectures. This open hardware should become the basis of companies' datacenters of any size, after having been the basis of the datacenters of big Clouds such as Facebook, Google, Amazon and Microsoft.


4.3. OPNFV The goal of the OPNFV (Open Platform NFV) is to create an open source software platform verified through NFV (Network Functions Virtualization). OPNFV must be an architecture allowing us to perform a carrier-grade platform. Carrier grade implies strong availability, which allows us to reach the value of at least 99.999%, which means a few minutes’ stops per year corresponding to the failure and maintenance times. The initial phase of OPNFV should allow us to define the NFVI (NFV Infrastructure) and the environment’s VIM (Virtualized Infrastructure Management). NFV’s APIs should also be standardized. Overall, the first phase should stop at the definition of the infrastructure based on the VNF (Virtualized Network Functions) and on MANO (Management and Network Orchestration). After this initial phase, which will take a few months, the European standardization organization ETSI, in charge of the project, decided to widen the platform field by offering an open source software for everything necessary to have a full and operating platform. The first version directly uses developing open source projects. The first ones to be chosen are OpenDaylight, OpenStack, KVM, Open vSwitch, Ceph, etc. This first version is completed by regular releases obtained along with developments. The framed architecture is presented in Figure 4.4.
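As a quick sanity check of the carrier-grade figure mentioned above, the following few lines convert 99.999% availability into yearly downtime:

    # Quick check: 99.999% availability leaves roughly five minutes of downtime per year.
    availability = 0.99999
    minutes_per_year = 365.25 * 24 * 60
    downtime = (1 - availability) * minutes_per_year
    print(round(downtime, 2), "minutes of downtime per year")   # about 5.26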

Figure 4.4. Basic frame of the OPNFV platform. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


This platform is made up of three big parts: MANO for management and orchestration, NFVI to define virtualization-based infrastructure and finally, the virtual machines used: VNFs (Virtual Network Function). Figure 4.5 presents the MANO sub-group. The NFV orchestrator (NFVO) aims to perform the same automated network setup. The orchestrator must be able to choose the virtual machines capable of performing the service required. For this, the virtual machines must be chained, which means that they must be put one behind the other in order to form a chain that will represent the order in which the application’s package flow will go through them. Finally, the virtual machines must be urbanized (meaning placed) on hardware, in order to perform the service. The NFV manager is in charge of the management of VNF instances, i.e. of virtual machines, of their configuration and of the event reporting from virtual machines and their local management system. Finally, the VIM manages the control and management of the NFV infrastructure (NFVI). For this, it controls and manages the network resources, computing and storage. Finally, it gathers and sends performance measures to the VNFM.

Figure 4.5. Description of MANO. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
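To illustrate the notions of chaining and urbanization handled by MANO, the sketch below represents a service chain as a simple ordered list of VNFs and places it greedily on the available servers. This is illustrative pseudo-data under assumed resource figures, not the ETSI descriptor format.

    # Sketch: a service chain as an orchestrator might represent it before placement.
    service_chain = [                      # order = order in which packets traverse the VNFs
        {"vnf": "firewall",      "vcpu": 2, "ram_gb": 4},
        {"vnf": "dpi",           "vcpu": 6, "ram_gb": 12},
        {"vnf": "load-balancer", "vcpu": 2, "ram_gb": 4},
    ]

    servers = {"server-1": {"vcpu": 8, "ram_gb": 16}, "server-2": {"vcpu": 4, "ram_gb": 8}}

    def place(chain, hosts):
        """Greedy urbanization: put each VNF on the first host with enough resources."""
        plan = {}
        for vnf in chain:
            for name, free in hosts.items():
                if free["vcpu"] >= vnf["vcpu"] and free["ram_gb"] >= vnf["ram_gb"]:
                    free["vcpu"] -= vnf["vcpu"]
                    free["ram_gb"] -= vnf["ram_gb"]
                    plan[vnf["vnf"]] = name
                    break
        return plan

    print(place(service_chain, servers))
    # {'firewall': 'server-1', 'dpi': 'server-1', 'load-balancer': 'server-2'}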


In June 2015, the ARNO release was introduced. The release integrates OpenStack (VIM), KVM (hypervisor), Ceph (distributed storage), OpenDaylight (controller) and Open vSwitch (software switch). This release describes in detail the way thousands of developers work. Three phases are suggested: definition of new needs and the characteristics associated with these needs, development, deployment and new modules tests, and finally, integration of these new modules in the platform. This first release is presented in Figure 4.6.

Figure 4.6. ARNO release

The second phase, Brahmaputra, is described in Figure 4.7. It integrates new controllers: in addition to OpenDaylight, two other controllers are integrated, ONOS (Open Network Operating System) and OpenContrail. ONOS was introduced because this open source software provides eastbound and westbound interfaces through its distributed operating system. However, this controller does not have a clearly established interface, since the interface is located inside the operating system. OpenContrail is open source software from the Contrail company, which was bought by Juniper. Despite belonging to Juniper, the OpenContrail open source software continued to be developed until the split between the open source part and the Juniper controller. The new name given to the open source part of OpenContrail is Tungsten Fabric. After this second release, performance tests showed flaws due to the complexity of the architecture. This is the reason for the emergence of accelerators like DPDK or ODP. The basic idea of accelerators is to minimize the number of copies made in the servers' memories and to find fast paths in order to optimize the execution of the forwarding algorithms.


Figure 4.7. The Brahmaputra release

The next release, Colorado, follows the module choices of the general architecture. This release is described in Figure 4.8. It has some extensions, such as a Fast Data input/output (FD.io) accelerator and the extension of architectures X86 to ARM. Moreover, a new module of virtual machines chaining has been added: SFC (Service Function Chaining) and VPNs support.

Figure 4.8. Colorado release


The Danube release shown in Figure 4.9 brings the Open-O module, or open-Orchestrator, to the MANO environment. This open source software project for orchestration was gathered with another project led by ATT and Amdoc on automation. The result of this grouping was the creation of ONAP (Open Network Automation Platform) which we will mention again later. Even after important discussions, there has been no link between controllers and eastbound–westbound interfaces’ definition.

Figure 4.9. Danube release

The Euphrates release from March 2017 presented in Figure 4.10 shows the rise of techniques of containerization with Kubernetes as the orchestrator of containers. We described this orchestrator in detail in Chapter 1. OVN (Open Virtual Network) is an open source project that defines switch functions for virtual switching. OVN is a possible instantiation of OvS (Open vSwitch). From the side of the orchestration and of the platform management, we witness the appearance of Open Baton, which is an open source platform implementing MANO specifications. It can be considered as a go-between between the Open-O of the previous version of OPNFV and ONAP, which is the promising standard for the 2020s.


Figure 4.10. Euphrates release

The Fraser release is shown in Figure 4.11. The changes are about the introduction of the last OpenStack release and the interfaces update with MANO, which is going towards ONAP. The OpenContrail controller is replaced with Tungsten Fabric. As OpenDaylight and Tungsten Fabric depend on the Linux Foundation, they should converge. Moreover, OpenDaylight has an eastbound– westbound interface, which makes it the perfect choice to become a unique standard.

Figure 4.11. Fraser release


The next release, Gambia, dates back to November 2018 for the R0 version and then December 2018 and January 2019 for the R1 and R2 versions. Gambia does not really introduce new modules, but it extends already selected software. In particular, Kubernetes is extended; ONAP, in its Casablanca release, is being closely examined and favorably considered; and the platform experiment is facilitated with the development of Lab-as-a-Service to allow many more tests. Native applications for the Cloud (Cloud-native applications) are one of the main subjects of study for this release. Cloud-native computing is a movement orchestrated by the CNCF (Cloud-Native Computing Foundation). It aims to simplify and shorten delays in order to set up applications executed in the Cloud. This solution uses open source software in order to deploy applications that take the form of microservices. A microservice is executed in its own container. This container is dynamically orchestrated in order to optimize the use of resources. Cloud-native technologies allow software developers to create quality products faster. This technology is presented in Figure 4.12.

Figure 4.12. The four components of Cloud native

DevOps is a movement to simplify cooperation between software developers and system administrators to constantly supply new software releases and allow the constant development according to the clients’ needs. Microservices form a solution to develop an application as a collection of very small services. Each microservice is

executed in its own process and communicates with the other microservices through APIs. This solution allows microservices to be developed and used in parallel; in this way, they can be modified without affecting the other services. This approach simplifies the setup of applications and the management of their lifecycle. The drawback is the large number of APIs needed for the microservices to communicate. The Hunter release came out in June 2019 and introduced many scenarios in order to test the completeness of the platform and to discover its weaknesses. The end of 2019 saw the I release, Iruya, which relies on the OpenStack Rocky version and expands automation with ONAP.

4.4. ONAP (Open Network Automation Platform)

Network automation is the basis of the entire new generation of networks. Several parallel projects emerged in order to create this network autopilot: the Open-O project, coming from China Telecom, Huawei and ZTE, and OpenECOMP (the open source software version of ECOMP, ATT's SDN platform), which merged in 2017 to form ONAP. The goal is the design, creation, orchestration, automation and management of the networks' lifecycles. The main companies leading this project are ATT, Amdocs, Orange, Huawei, ZTE, VMware, China Mobile, Bell Canada and Intel. The first releases are Amsterdam (November 2017), Beijing (May 2018), Casablanca (November 2018) and Dublin (May 2019). ONAP must fully automate the network environment. For this, three major sub-systems are brought together: service creation, orchestration and maintenance. These three sub-systems are connected as shown in Figure 4.13.

Figure 4.13. The three ONAP sub-systems. VIM: Virtualized Infrastructure Manager. NFVI: Network Functions Virtualization Infrastructure. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


Figure 4.14 shows the modules of the ONAP project, gathered into the three sub-systems. Service creation is performed with the SDC (Service Design and Creation) module, which manages the following elements:
– resources (infrastructure, network and application);
– services, by considering all the information available to start, destroy, update and manage services;
– user products, by taking costs into account;
– product configurations, by gathering and considering the recommended specifications.

Figure 4.14. ONAP modules

The service creation sub-system is described in Figure 4.15.

Figure 4.15. Service creation sub-system


The orchestration sub-system is described in Figure 4.16.

Figure 4.16. Orchestration sub-system

The orchestration sub-system contains the service orchestrator and different ONAP autopilot controllers such as the SDN controller (SDN-C), the choice of which is OpenDaylight, the application controller (APPC), the virtual function controller (VF-C) and the infrastructure controller (Infra-C) which can handle several infrastructures. The third sub-system about network exploitation (operation) is described in Figure 4.17. It contains the DCAE (Data Collection Analytics & Events) module with the Holmes sub-module for alarm correlation and the NFV-O sub-module for monitoring.

Figure 4.17. Operating sub-system


The policy module handles all the rules in order to set them up, distribute them and ensure their maintenance. Among these policy rules, we can mention virtual machine urbanization, data management, access control, actions to perform, interactions, etc. Finally, the CLAMP (Closed Loop Automation Management Platform) module handles the feedback loops in order to update configurations in case the network state is modified. To conclude this section on ONAP, we must mention that ONAP is the central piece of the new generation of networks. Its goal is to automate the operation of the network, and of the storage and computing, from the service creation to its deployment and operation.
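The closed-loop idea behind CLAMP can be illustrated with a few lines of code: collect a metric, compare it with the policy and trigger a corrective action. The names, thresholds and random metric below are invented for the example; ONAP's real implementation relies on DCAE, the policy framework and the controllers described above.

    # Sketch of the closed-loop principle: monitor, compare with the policy, reconfigure.
    import random, time

    POLICY = {"max_cpu_pct": 80, "scale_step": 1}

    def collect_metric(vnf):
        """Stand-in for telemetry collection; here a random CPU load."""
        return random.uniform(40, 100)

    def reconfigure(vnf, extra_instances):
        """Stand-in for the controller call that would scale the VNF out."""
        print("scaling", vnf, "out by", extra_instances, "instance(s)")

    def control_loop(vnf, iterations=3):
        for _ in range(iterations):
            cpu = collect_metric(vnf)
            print(vnf, "cpu =", round(cpu), "%")
            if cpu > POLICY["max_cpu_pct"]:      # policy violated: close the loop
                reconfigure(vnf, POLICY["scale_step"])
            time.sleep(0.1)                      # a real loop runs continuously

    control_loop("virtual-firewall")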

4.5. Open vSwitch Open vSwitch is the most widely used switch throughout the world of virtual networks. Almost all devices use this open source software as it has so many functions. It can be executed as a switch with a hypervisor or as a distributed switch on several hardware servers. It is the default switch in XenServer 6.0, Xen Cloud Platform and it supports KVM. Open vSwitch is integrated in many Cloud systems such as OpenStack, OpenQRM, OpenNebula and oVirt. It is distributed by Ubuntu, Debian, Fedora Linux, FreeBSD, etc. An overview of Open vSwitch architecture is presented in Figure 4.18.

Figure 4.18. Open vSwitch virtual switch
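In practice, Open vSwitch is driven either by a controller or by its command-line tools. The sketch below wraps a few standard ovs-vsctl/ovs-ofctl invocations in a small script; the bridge name, the attached interface and the controller address are examples, and the commands require Open vSwitch to be installed and root privileges.

    # Sketch: driving Open vSwitch with its standard command-line tools.
    import subprocess

    def sh(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    sh(["ovs-vsctl", "add-br", "br0"])                       # create a software bridge
    sh(["ovs-vsctl", "add-port", "br0", "eth1"])             # attach a physical NIC
    # Point the switch at an SDN controller (OpenFlow over TCP, default port 6653).
    sh(["ovs-vsctl", "set-controller", "br0", "tcp:192.0.2.100:6653"])
    # A static OpenFlow rule, useful when no controller is present.
    sh(["ovs-ofctl", "add-flow", "br0", "priority=100,in_port=1,actions=output:2"])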


Open vSwitch is used in several products and is executed in many large-scale production environments. The Open vSwitch community manages the OVN (Open Virtual Network) project. OVN complements the existing functions of OvS and adds the native management of virtual network abstractions such as multiple layers, level-2 and level-3 virtual security functions, overlay tunnels and gateways towards the virtualized world.

4.6. OpenDaylight platform

The OpenDaylight (ODL) platform is a project hosted by the Linux Foundation in order to create an open source platform whose aim is to perform control capable of configuring SDN switches through a southbound interface. As the OpenDaylight platform is multiprotocol and modular, users can create a tailor-made SDN controller in order to meet their specific needs. This modular and multiprotocol approach gives the possibility of choosing a single southbound interface or of selecting several protocols in order to solve complex issues. The platform handles all the southbound interfaces found in the literature, whether commercial or open source, including OpenFlow, OVSDB, NETCONF and BGP. OpenDaylight also offers all the northbound interfaces used by the applications. These applications use the controller to gather information about the network, carry out analysis and then use it to create new rules applicable to the network. The OpenDaylight controller runs in a Java virtual machine. This means that it can be deployed on almost any device and platform that supports Java. The controller uses the following tools:
– Maven, to manage the automated build of Java software projects;
– OSGi, for the dynamic loading and packaging of JAR files;
– a Java interface for registering events, specifications and models;
– a REST API exposing the topology manager, host tracking, flow programming and the associated static routing.
The Oxygen release includes a plugin for the P4 southbound interface. It allows the abstraction of the data plane to be handled in the OpenDaylight platform. The signaling protocol of the P4 southbound interface, sometimes considered the successor of OpenFlow, was thus added to the signaling systems already available in OpenDaylight. Beyond the P4 plugin, the Oxygen version is dedicated to the orchestration of containers. It includes a Kubernetes plugin to orchestrate container-based workloads, and extensions of the Neutron northbound interface for environments with several containers hosting virtual machines.

4.7. FD.io


OpenDaylight supplies a BGP stack with improved management of multibroadcasting BGP/CEP and BGP/MPLS. OpenDaylight is thus a mature solution to perform SD-WAN control. Works on optical transport have also been carried out within OpenDaylight. The Fluorine version that dates back to August 2018 allows us to use the PCE-type control (Path Calculation Engine). Moreover, Fluorine has a new implementation for optical infrastructures control. Several new functions were added to the latest version of OpenDaylight. They aim to improve the management of network virtualization in Cloud, Fog and Skin environments. It includes the management of IPv6, the installation of security groups with or without status and the hardware offloading SR-IOV (Single Root I/O Virtualization) for OvS (Open vSwitch) which strongly improves the performance of the virtual switch. This work was mostly developed for OpenStack environments. It is now used to integrate OpenDaylight to the orchestration engine of Kubernetes containers. The SFC (Service Function Chaining) updates done on OpenDaylight’s Fluorine version speed up service implementation by automatically chaining virtual machines. For OpenDaylight to be operational on large networks, a development of eastbound–westbound interfaces is necessary. It is one of the functions developed in the mid-2019 Neon release. For this purpose, OpenDaylight offers an interface between controllers called SDNi (inter SDN controller communication), which uses BGP4 protocol. 4.7. FD.io FD.io (Fast Data input/output) is an open source software project to obtain very high performance in inputs/outputs of networking equipment. This project was launched in 2016. The basic VPP (Vector Packet Processing) module offers a code executed in the user space on the x86, ARM and power architectures. VPP’s design is independent from the hardware, kernel and deployment (bare metal, hypervisor and container). VPP allows us to extend the boundaries in terms of performance. Independent tests show that when using VPP, the FD.io software is at least twice as fast as technologies that are currently available. VPP processes the biggest packet vector of the network’s IO layer available which belongs to the same session and executes it fully. As these packets have all the same supervision data, it is very simple to process them together. Therefore, instead of processing the first packet on the whole graph which represents inputs/outputs and then proceeding to the second packet once the first process is

complete, VPP processes all packets in one session located in the graph node. This way, the system can process all packets of a same vector faster since the supervision zone gives data to the first packet which is the same as in the next packets. 4.8. PNDA PNDA (Platform for Network Data Analytics) is a platform to perform Big Data analytics on data coming from a network that uses several open source software. PNDA offers an innovative approach for the collection, processing and analysis of Big Data. The platform is equipped with a simplified data pipeline that facilitates the availability of the right data at the right time. By differentiating data sources from data users, it is possible to integrate all data sources and make them available for the whole application wishing to process them. This type of platform can process data according to the specific incoming flows. Client applications can use one of the interfaces to transmit structured requests or directly use flows. The PNDA platform is inspired from models of Big Data processing architecture. It stores data as raw data in a resilient and distributed files system. PNDA gives the necessary tools to constantly process data in real-time and to perform in-depth analysis on voluminous data records. The PNDA architecture is shown in Figure 4.19.

Figure 4.19. PNDA platform


There are many PNDA plugins. The most important ones are shown in Figure 4.19. We find flows coming from the OpenDaylight controller, which gathers the network statistics through its southbound interface, especially via OpenFlow. A second possibility is Logstash, a tool that collects, analyzes and stores logs; it is usually associated with Elasticsearch, a distributed search engine. We also find OpenBMP, an implementation of the BGP Monitoring Protocol, which forwards the BGP messages of one or several BGP peers in a single TCP flow towards one or several databases. The pmacct plugin is a collection of monitoring tools that, among other functions, export network flows. More generally, any telemetry software can be used to feed data into the PNDA platform.

4.9. SNAS

The SNAS (Streaming Network Analytics System) software is an open source project to collect, track and access tens of millions of routing objects (routers, peers, prefixes) in real-time. SNAS notably allows the routing of packets to be examined, visualized and analyzed in a very simple way. Figure 4.20 shows the SNAS software environment.

Figure 4.20. SNAS software environment

5 MEC

MEC (mobile edge computing) was defined by the European standardization organization ETSI, in order to take into account very short reaction times between the user and the datacenter which controls this user’s applications. Among the applications targeted by the environment, we can mention autonomous vehicles, which require latency time of milliseconds, local video images processing to decide whether to transmit them to a video processing platform located further away, augmented reality, real-time mobile applications, etc. Thanks to MEC, we can obtain the equivalent of Cloud computing on the endpoints of the network and especially on the radio access network (RAN), thus very close to the user’s mobile. The RAN offers a service environment with a very low latency and a large bandwidth, as well as real-time access to data linked to wireless (radio, location, cell load, intercellular change, etc.). Mobile edge computing can be seen as the setup of a Cloud server installed very close to the user’s mobile. Its first use is to virtualize equipment located between the client and the MEC datacenter in this datacenter. Moreover, it is also possible to virtualize some RAN equipment in the MEC server. The ETSI defines the MEC environment as described in Figure 5.1. The mobile edge host contains the following elements: – Virtualization infrastructure supplies computing, storage and network resources in order to execute applications on platforms located on the periphery. – The mobile edge platform gathers all the functions required to execute applications of the mobile users on a Cloud-type infrastructure. The platform for mobiles can also provide various services.



– MEC applications for mobiles are instantiated on the MEC datacenter and validated by the management system. Mobile management is executed at two levels: the mobile edge system and the mobile edge host. – The mobile edge host level management includes an orchestrator for mobiles as the main component. It offers an overview of the peripheral system. It comprises the platform manager for mobiles and the virtualization infrastructure manager. This module also manages applications of the platform in charge of the mobiles attached to the platform itself.

Figure 5.1. MEC environment as defined by the ETSI

5.1. eNodeB and gNodeB virtualization Figure 5.2 represents an eNodeB (4G) or gNodeB (5G) that is the equipment to process signals emanating from clients to transform them into binary elements, themselves generating packets to send to the core network. The eNodeB also processes basic mobile network functions such as client attachments, handovers, etc. One of the goals of the MEC datacenter is to virtualize the eNodeB or the gNodeB, which means to have a software version of the processing and to execute this software in the datacenter. For this reason, the antenna does not have any important tasks to perform, only to collect the signal coming from a user and transmit this signal to the datacenter. Signal processing to transform it into a digital signal is done in the datacenter’s MEC servers, along with all applicative processing.


Figure 5.2. eNodeB or gNodeB of an antenna of a network of mobiles. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

The datacenter cannot be located too far from the antenna because the signals coming from the antenna represent the whole flows of all the connected clients, as well as supervision flows. The order of magnitude is of 1 Gbps for 4G and will reach 10 Gbps, and then 20 Gbps for 5G. On copper wires, it is possible to reach speeds in the range of 1 Gbps on a very short distance, which is the reason why the eNodeB is located at the foot of the antenna or along the post supporting the antenna. To go further, the use of a RoF (Radio over Fiber) is required in order to extend the distance and go up to 20 kilometers, or even further away with high quality single mode fibers. In other words, we deport the eNodeB processing into the datacenter. This architecture is described in Figure 5.3.

Figure 5.3. eNodeB deporting in an MEC datacenter


The MEC datacenter is used by many applications that can be virtualized. Overall, it is possible to virtualize all equipment located between the client and the datacenter, especially ADSL boxes for Internet access, Internet boxes, intermediary gateways, etc. For an operator, the advantage of this solution is that they do not have to deploy millions of boxes and maintain and upgrade them regularly. The Internet box can easily be personalized. In the long run, when smartphones and other tablets are constantly connected from everywhere, it will be time to virtualize the terminal equipment itself: functions located on smartphones will be located in the MEC datacenter. Among the applications that will be impacted by the MEC datacenter, we can mention performance management, augmented reality applications, geo-tracking and video stream management. Video processing is shown in Figure 5.4. Video streams coming from surveillance cameras transit through datacenters where they can be analyzed and trigger alarms. For example, cameras can count down the number of people located on a platform and trigger an alarm if the number that has been set up is overcome. It is also possible to look for movements on an image, to perform face recognition or to process Big Data analytics on several videos in order to find some correlation. Some streams can be selected and be transmitted to a central datacenter where larger-scale correlations can be looked for.

Figure 5.4. Example of video processing by the MEC datacenter

MEC datacenter will have a great influence on companies’ local networks and operators’ access networks. Two major impacts are: LAN (Local Area Network) or (Customer Premises Equipment) virtualization and potentially the disappearance of these intermediary networks. In the first case, LAN or CPE equipment are virtualized in MEC datacenters. The load balancers, firewalls, Wi-Fi access points,

intermediary appliances, etc. are replaced with virtual machines. In the second case, LAN or CPE clients own a machine with a SIM card to access the operator and all the company’s clients are directly connected to the antenna. This solution is particularly interesting with 5G since the difference between client’s data rates towards the antennas and the other way around are of several Mbps. The drawback is, of course, that the company will lose control over applications that will be deported on MEC servers, but the advantage is being able to outsource LAN deployment to the operator. Another application also ranked in the MEC servers is about the so-called Industry 4.0. This industry of the future deals with the control of production means. The goal is to synchronize all processes to manage to drastically lower the production cost. For example, it can be the manufacture or vehicles that are different from one another, but located on the same assembly chain without any speed decrease compared to the case where all vehicles are similar. The location of the MEC datacenter can vary, according to the load or the need to reach the endpoint while being closer to the client. The different locations are presented in Figure 5.5. The MEC datacenter can be located in the user’s premises by replacing the Internet box by a server embedding virtual machines. The box can be an HNB (Home-NodeB) corresponding to the 4G/5G antenna that serves a very small cell named femtocell, located in the home of the user or in the company. When talking about this technology, the terms skin datacenter, home datacenter, or mist datacenter are referred to which means a tiny datacenter where the access antenna is located. It can also be the Internet box with its Wi-Fi antenna, or a broadband gateway with an ADSL modem, a wire modem or an optical fiber access. The MEC datacenter can also be located right next to the antenna, a few meters from it, and be linked to it by copper wires, thus allowing a data rate of a few Gbps. It can be devoted to a single antenna located a few meters further away. In such a scenario, the optical fiber supporting the RoF (Radio over Fiber) technology is necessary. RoF is a technology for which light is modulated by a radio frequency signal and transmitted on an optical fiber connection. The main technical advantages of the use of optical fiber connections are that there are very few transmission failures, and the sensitivity to electromagnetic interference sounds is considerably reduced compared to an electric signal transmission. Finally, the datacenter can be located much further away, a few kilometers away, and gathers several antennas to allow better multiplexing and lower cost per user.


Figure 5.5. MEC datacenter location

5.2. C-RAN In Chapter 3, we have already partially described the Cloud-RAN, or C-RAN, as being one of the products coming from SDN networks. This C-RAN (Cloud-Radio Access Network) solution is one of the revolutions in the world of networks. Indeed, all processing is done in an MEC datacenter where decisions are centralized. This solution requires MEC datacenters located 5–40 kilometers from the antennas and gathering several antennas. This case is described in Figure 5.6. We will examine this in detail in the chapter on 5G, but we will already give some important aspects.

Figure 5.6. C-RAN: the antennas are connected to the MEC datacenter by RoF (Radio over Fiber), and the datacenter is connected onwards by EoF (Ethernet over Fiber)


The revolution comes from the access network to the Cloud, which focuses on a RoF-type optical network. In that case, the local loop is formed with hyper-simplified antennas that do not process the signal, but which only send the electromagnetic signals back on an optic fiber towards the MEC datacenter. The signals reach the datacenter that has virtual machines to process the radio signals. We should note that this solution allows us to have a great diversity of receiving antennas, the first virtual machine to only deal with the recognition of signals received, which allows us to detect an antenna type: Wi-Fi, 3G/4G/5G and Bluetooth, which are a major innovation for the future. Then, the MEC datacenter can receive all virtual machines necessary to process applications. This solution requires a hardware optical fiber splitter. There is no more concentration or control equipment. There is no more intermediary box or intermediary processing machine. The signal is transmitted as such to the MEC datacenter. In other words, there is no more network, no more message, no more packet, no more frame, only a succession of radio signals. The cost of C-RAN systems connected to an MEC datacenter is certainly one of the lowest on the market since all intermediary machines are eliminated. However, the drawback of this solution is that it is centralized, which means that it requires all the standard precautions of centralized environments. For example, it is necessary to double or triple the resources located on different datacenters. Another issue comes from the latency time for simple or local actions such as handover. If we take a distance of 50 kilometers between the periphery and the center, the return propagation time is of approximately 0.5 millisecond. Time required for the execution of control algorithms must be added, which brings the reaction time to a value of approximately 5 milliseconds. Most of the control applications can adapt to this type of values. However, some applications must have a latency time less than 5 milliseconds such as the control of autonomous vehicles or machine tool synchronization. The solution presented by the centralized C-RAN architecture is certainly interesting, but it might be rejected in developed countries where antennas are already linked to the core network thanks to MPLS networks or other systems. However, in developing countries where there are only few infrastructures, C-RAN is a good opportunity to enter 5G environments directly and with low costs. This centralization can be compensated by a light distribution on the periphery level. In this case, we can see the emergence of small Clouds (Cloudlets) located near the relay antennas. This proposal is described in Figure 5.7.
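The propagation figure quoted above can be checked with a simple calculation, assuming roughly 200,000 km/s for light in optical fiber:

    # Quick check: 50 km of fiber gives about 0.5 ms of propagation there and back.
    distance_km = 50
    speed_km_per_ms = 200        # ~200,000 km/s in fiber
    round_trip_ms = 2 * distance_km / speed_km_per_ms
    print(round_trip_ms, "ms")   # 0.5 ms, before any processing time is added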


Figure 5.7. C-RAN with an antenna-level distribution. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

The second solution to be closer to users is to exploit femtocell-type antennas, or more precisely HNB (Home-NodeB). In this case, signals emerge in a more classic way towards MEC datacenters. The standardized PON (Passive Optical Network) architecture uses optical fiber and passive optical splitters to move towards the datacenter. This solution is described in Figure 5.8. On the user’s side, the box is called ONT (Optical Network Termination), and on the network side, it is called OLT (Optical Line Termination). Femtocells can support a cloudlet that asks to be executed, as we will see later, on a femto-datacenter, a skin datacenter or a mist datacenter. These datacenters can handle several virtual machines to manage processes that need to happen in real-time or closer to the user.

Figure 5.8. PON architecture


Moreover, this solution does not make the local loop disappear, it keeps its functions. This local loop, optical fiber with passive splitters, only takes over standard solutions where the electromagnetic signals are transformed into digital signals, which are themselves put in frames and packets. The control system is partly distributed, and coordination is necessary between the different MEC datacenters for all communications between two different networks. In this context, the eastbound and westbound interfaces become especially important. Figure 5.9 introduces this architecture in which MEC datacenters are interconnected.

Figure 5.9. Interconnection of datacenters by the core network

Coming back to Figure 5.8, it suggests that Cloudlets are within greater reach of the user than Cloud services on MEC datacenters. Cloudlets are in the access antennas. This is another solution which we will study in detail in Chapter 7 about skin networking using femtocells.

6 Fog Networking

Fog computing is also called Edge computing. In this model, data, computing and storage are gathered in devices at the edge of the network rather than at its core. CISCO introduced the phrase Fog Networking to define this new model which allows objects to connect to edge devices. 6.1. Fog architectures Before splitting Cloud architecture into four main levels (Cloud, MEC, Fog and Skin), it was first strongly centralized around a central datacenter, then split into two between the Cloud and the Fog, as shown in Figure 6.1.

Figure 6.1. Cloud architecture split into two levels



Fog covers everything located at the endpoints of the network from the edge router, from the operator’s side or from the company’s side. If located on the company’s side, the Fog server is then called NAC (Network Access Controller). If it is located on the operator’s side, the Fog server is an SBC (Session Border Controller), which today has greater functions than it originally had, to secure VoIP and play the part of a SIP firewall. The word Fog can sometimes be used for very small datacenters, which we will mention in Chapter 7 under the name “skin datacenters”. Indeed, the users are close to the Wi-Fi or the femtocell access points that are embedded into the skin datacenter. This case is described in Figure 6.2.

Figure 6.2. Fog server, here called skin server

Many examples of Fog datacenters can be described. For example, the case in Figure 6.3 is a datacenter inside a hospital. In this example, all sensors, access points for Wi-Fi, Bluetooth and other connection gateways are connected to the Fog datacenter via land wiring, wire cable or optical fiber. Doctors who are connected to the Wi-Fi can execute their many application softwares through this cabling system. Objects, such as, for example, hospital beds, are easily classified. This way, it is very easy to locate a specific bed. Patients use a virtual network that is also connected to the Fog server to play, read or go on the Internet. These two networks are isolated so that if one network suffers from an attack, the attack cannot propagate to the other network.



Figure 6.3. A Fog datacenter to manage a hospital

Figure 6.4 shows a second case, with the connection of objects in the industrial processes of Industry 4.0. Industry 4.0 is the automation of the means of production in order to personalize products without affecting their price compared to mass-produced ones. This case is rather similar to the case presented in the previous chapter on MEC servers. Here, instead of a connection via the antennas of a telecommunications operator to an MEC datacenter, the control is performed through the factory network in order to access the Fog datacenter.

Figure 6.4. Process of Industry 4.0. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


With smart production, a client can act on the production: for example, they can modify the values obtained by the sensors in order to create specific and personalized products. This measurement management system, or CDAS (Control and Data Acquisition System), is the basis of the new generation of factories. As shown in Figure 6.4, decisions are taken at the same levels as before: in the skin for highly reactive processes, such as machine-tool piloting or the execution of embedded processes, and in the Fog for processes using virtualization or processes covering companies and smart cities. The Cloud level deals with processes that require a very high capacity of resources. Other examples could be mentioned, such as a Fog server in a train to manage the clients' connections in the cars, or a smart city swarming with sensors that connect to the Fog datacenter.
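To make this multi-level placement logic concrete, here is a minimal Python sketch, under stated assumptions: the latency figures, capacities and process profiles are invented for illustration and are not taken from this book.

```python
# Hypothetical placement sketch: choose the datacenter level (skin, Fog,
# MEC or Cloud) for a process from its latency tolerance and its resource
# needs. All threshold values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProcessProfile:
    name: str
    max_latency_ms: float   # latency the process can tolerate
    compute_units: int      # abstract measure of required resources

# (level, typical round-trip latency in ms, available capacity), closest first.
LEVELS = [
    ("skin", 2, 10),
    ("fog", 10, 100),
    ("mec", 30, 1_000),
    ("cloud", 100, 1_000_000),
]

def place(process: ProcessProfile) -> str:
    """Return the closest level whose latency and capacity fit the process."""
    for level, latency_ms, capacity in LEVELS:
        if latency_ms <= process.max_latency_ms and process.compute_units <= capacity:
            return level
    return "cloud"  # fall back to the central Cloud

if __name__ == "__main__":
    for p in (ProcessProfile("machine-tool control", 5, 4),
              ProcessProfile("plant-wide analytics", 50, 800),
              ProcessProfile("long-term optimization", 500, 50_000)):
        print(p.name, "->", place(p))
```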

6.2. Fog controllers

Figure 6.5 shows the different scenarios for the placement of the controller in a software network environment. We will see three main scenarios. The first is highly centralized around an SDN controller; the control data are fed back automatically to that center. The second solution is to use intermediary machines to replace the border routers. The border router does not disappear; it is virtualized in the new Fog datacenter that replaces it. These machines could be SBCs (Session Border Controllers) – a term that tends to be used in operator networks – or NACs (Network Access Controllers), corresponding to the solution favored by companies. In this scenario, the solution sends the control data to these datacenters. Finally, the last example of architecture consists of positioning the control as close as possible to the user, in femto-datacenters. The advantages are clearly the very short response time, the agility of the solution, because of the possibility of changing the virtual machines used in the femto-datacenters, and the addition of a form of network intelligence very close to the user. The disadvantage here is that the solution involves a widespread distribution of the information, with all the complications that this can entail for controls that, obviously, will not all take place locally. The three main solutions described above can be combined, with two levels, or even all four levels, acting simultaneously: certain virtual machines may be migrated to a femto-datacenter or a Fog datacenter, while others remain in a central (Cloud) or regional (MEC) datacenter.


Figure 6.5. Architecture of an environment with controllers

We will now focus on Fog servers and datacenters with their main virtual machine: the controller. However, there are other virtual machines in the Fog: the machines that manage company functions, such as the firewall, middleboxes, authentication servers, SIP servers, etc. Another important use of Fog servers is proximity marketing; we will analyze this case below. First, let us look at the functions the Fog controller manages. They are presented in Figure 6.6. The first function takes care of the connection to the network via the Wi-Fi access point, but any other connection solution, such as cable or Bluetooth, can also be managed by the controller. In the case of a Wi-Fi connection, which is the most common case, the connection includes an authentication that allows us to determine the identity of the user who is connecting. This authentication is done through the IEEE 802.1x standard, which implies that the authentication server is located in the controller. The controller might only host a proxy; in this case, the authentication server is further away in the network. Other forms of connection authorization exist, such as requiring a form to be filled in so as to know who wants to connect. Authentication through several means is of course accepted, such as receiving
a password through one's smartphone and thus being able to identify the client through their phone number. Authentication via usernames from other websites, such as Google, Facebook and LinkedIn, is also widely used. The controller then manages the access points (APs). This management covers several levels, from the lower layers, such as the power and frequency used by the access point, up to the upper layers managing the application level. There are many functions at the application level, such as sending messages through the controller's SMTP server, using a local printer without the client needing the corresponding driver, or giving access to a datacenter from the access point.

Figure 6.6. The main functions of a Fog controller: manage hotspots, deploy, connect, collect, engage, analyze and monetize

There is also the data collection function, used to create user profiles. These profiles can help the system quickly adapt to the clients by gathering precise information on each client's usual behavior. These profiles are also used for proximity marketing. Proximity marketing is used to send precise and personalized advertisements to a user: the marketing message is unique and is only sent to one person. To target the user with such precision, accurate information on their location and on what they are doing is required. These profiles must comply with the existing laws, which vary according to
countries and continents. Overall, for privacy reasons, companies carrying out smart marketing must destroy the data they collect as soon as the client switches context, which means, for example, as soon as they leave a shopping center. The next function of the controller is data analysis. This analysis is carried out with tools from artificial intelligence and, mostly, from Big Data analytics. The analysis allows us to determine many characteristics, especially by detecting correlations that human beings cannot see, given the very large amount of information. The last function is to monetize the information coming from the analysis, for example, to optimize the placement of the goods sold in a supermarket. Figure 6.7 describes the process of proximity marketing in more detail.

Figure 6.7. Proximity marketing: Wi-Fi connectivity, geolocation, analytics, DMP and mail campaign

In this proximity marketing, the Fog server hosts many virtual machines. The first machine is the controller itself; then, another machine takes care of the precise geolocation of the users connected to the Fog server. The controller collects user profiles from the analysis carried out, for example, by Big Data analytics. The client data are then stored, sorted and analyzed by a DMP (Data Management Platform). These solutions allow us to develop marketing campaigns that are precisely targeted. This platform allows us to send notifications to the user through, for example, a BLE (Bluetooth Low Energy) connection or a specific application linked to the store. Finally, this platform facilitates the strategic purchase of advertising space on all the channels available in the advertising field.
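The chain of Figure 6.7 can be sketched in a few lines of Python. This is only an illustration under simplifying assumptions: geolocation is reduced to the strongest access point, and the zone/offer mapping of the DMP is invented.

```python
# Illustrative sketch of the proximity-marketing chain of Figure 6.7:
# Wi-Fi connectivity -> geolocation -> analytics/DMP -> targeted notification.
# Function names and the zone/offer mapping are assumptions.

def locate(client_mac, rssi_by_ap):
    """Very rough geolocation: the access point with the strongest signal."""
    return max(rssi_by_ap, key=rssi_by_ap.get)

def dmp_lookup(profile, zone):
    """Pick an offer from the (hypothetical) Data Management Platform."""
    offers = {"sports": "10% off running shoes", "food_court": "free coffee"}
    if zone in profile.get("frequent_zones", []):
        return offers.get(zone)
    return None

def notify(client_mac, message, channel="ble"):
    # In practice: BLE beacon, push notification of the store application, etc.
    print(f"[{channel}] to {client_mac}: {message}")

def on_measurement(client_mac, rssi_by_ap, profiles):
    zone = locate(client_mac, rssi_by_ap)
    profile = profiles.setdefault(client_mac, {"frequent_zones": []})
    profile["frequent_zones"].append(zone)          # feeds later analytics
    offer = dmp_lookup(profile, zone)
    if offer:
        notify(client_mac, offer)

profiles = {}
on_measurement("aa:bb:cc:dd:ee:ff", {"sports": -45, "food_court": -70}, profiles)
on_measurement("aa:bb:cc:dd:ee:ff", {"sports": -40, "food_court": -75}, profiles)
```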


6.3. Fog and the Internet of Things

We will now discuss the basic use of Fog networking: the Internet of Things. This is the Fog's number one goal: to get closer to the periphery in order to collect and process the flows of information coming from objects quickly. A virtual machine can be located in the server to process an object; another virtual machine can secure the object, yet another can manage object failures, etc. Here, we will focus on Fog networking, whose goal is to connect objects to the Fog datacenter. The chapter on the Internet of Things will provide further information; here, we limit ourselves to describing the situation.

The network that allows objects to connect can be split into three parts: short-range networks such as Bluetooth or ZigBee, long-range networks such as Sigfox or LoRa, and telecom networks such as LTE-M. The first part is itself split into two sub-parts: high-data-rate networks, and networks carrying signaling and data at quite low rates. In the first sub-part, we find Wi-Fi, which is the simplest solution, as well as Bluetooth versions such as BLE (Bluetooth Low Energy). In the second sub-part, we find ZigBee and 6LoWPAN, with much lower data rates and energy consumption. The range of these networks is a few meters, or a few dozen meters in direct view. Their data rates are more or less significant, ranging from 7 Gbps for WiGig solutions down to Bluetooth, with a capacity of a few hundred Kbps. HaLow, a new version of Wi-Fi designed for objects, offers data rates from a hundred or so kilobits per second up to a few megabits per second. Among the very low data rate solutions, used to carry commands as well as some data, we find 6LoWPAN, which uses IPv6.

The long-distance networks include Sigfox, which is a proprietary solution, and LoRa, which is a more open solution. The access cards require little energy, and the data rate is kept very low so that the signal can reach several dozen kilometers. The techniques coming from the telecommunications operators' mobile networks are very late compared to the two other solutions. However, they should become the most widely used solutions, thanks to their diversity of data rates and the ease with which they can be rolled out. The first solution is LTE-M, defined during the 4G standardization. It is followed by a second generation, NB-IoT (Narrowband Internet of Things), in 5G.

Figure 6.8 describes the different categories of networks for the Internet of Things. Fog networking and Fog computing play a part in the first category, that of LANs (Local Area Networks): networks with a reach of a few dozen meters allowing objects to connect to Fog servers. The storage and processing of the data flows coming from the sensors are performed in small datacenters. The second category of networks for the IoT (Internet of Things) can also rely on Fog servers or on larger datacenters, such as MEC datacenters. This solution goes
through telecommunications operators, who initially chose LoRa (most of the time) or Sigfox (sometimes) to fill the gap left by the late launch of the 4G and 5G networks devoted to the IoT. Finally, the third category relies entirely on MEC datacenters, sometimes reached via Fog servers.

Figure 6.8. Networks for the Internet of Things (data rate versus range in meters). For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

6.4. Conclusion on the Fog solution

The Fog solution is starting to emerge in the world of Cloud applications. It eases the use of the virtual machines that clients need at the endpoints of the network. Response times drop to around a millisecond with the 2020 generations of equipment. In this way, new usages can be handled, such as a game distributed in a stadium with thousands of users, or the connection of objects, whose number is going to increase by the millions around the world. The Fog layer is used more and more often by companies in order to replace LAN network equipment by virtualizing it and to take charge of most of the company's applications.

7 Skin Networking

Skin networking is the newest and least developed paradigm. It represents the Cloud level that is closest to the user, since it is defined as reachable over Wi-Fi or a femtocell: the skin datacenter directly hosts the users' connection antenna. Skin networking is not a standardized name, and names such as mist networking can also be found. These femto-datacenters can also fit in a pocket and, in the future, be mobile. Of course, their power is reduced because of their size and mobility, and such a very small datacenter may run on a battery, but it can nevertheless host many virtual machines. An example of a skin datacenter would be a smartphone with a very large memory and many multi-core processors. This kind of mobile datacenter should only emerge around 2025. For the time being, the skin datacenter is a very powerful hardware environment of reduced size. This datacenter must be able to receive all the virtual machines necessary for the functioning of a home, a small company or a department of a big company.

7.1. Skin networking architecture

Femto-datacenters are located very close to clients, i.e. in the access points or in boxes situated on the user's premises. The immediate advantage of femto-datacenters is the extremely short response time to run a control algorithm, and therefore the ability to carry out operations in real time. However, this requires devices with significant amounts of resources. Given the reduction in the price of hardware, this solution could be preferred to the Fog. In addition, with the arrival of femtocells, it is possible to take advantage of them to harness a little local computing power, running virtual machines to establish connections and perform local computation in those boxes. In this case, we can speak of Metamorphic Networks: the network metamorphoses depending on the client who is connecting.


In Figure 7.1, a metamorphic network is shown using a femto-datacenter-type box. The network is represented in different ways depending on the client or the application, because the network changes according to the client and the application. The box is a femto-datacenter in which the clients' virtual machines are located to handle the networking applications (networking apps). These networking apps include all the network-level applications that may be found in the literature and in Clouds, such as IP-PBXs, routers, switches, SIP servers, firewalls, etc., and in particular the virtual machines defined by the NFV (Network Functions Virtualization) standardization, such as DPI (Deep Packet Inspection), NAT processing, BRAS, etc.

Figure 7.1. A femto-datacenter

7.2. Virtual access points

A femto-datacenter may support diverse virtual machines – particularly virtual Wi-Fi access points. The client has a choice as to which access point he/she will use: it may be their office access point, their operator's access point, their home access point or indeed any other access point with its own individual characteristics. The box may contain a large number of virtual access points – all different from one another and, of course, isolated. The exact number depends on the power of the femto-datacenter and on the number of potential users. The data rate received by each user is dependent on the number of virtual access points, the number of users on each virtual Wi-Fi access point and the Wi-Fi technology used. The virtual access points share the physical antenna, using a token-based technique, for example. Obviously, the capacity of each virtual access point is contingent upon the number of tokens and is therefore not as powerful as a non-shared access point. However, in light of the rise in Wi-Fi power, as we will see in the chapter on 5G, each virtual access point can have sufficient capacity. In particular, the use of Wi-Fi 5 (IEEE 802.11ac) and Wi-Fi 6 (IEEE 802.11ax) standards means that each virtual access point is able to achieve sufficient data rates for all conventional applications, and even some less than conventional ones.
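The token-based sharing of the physical antenna mentioned above can be sketched as a simple weighted round-robin in Python; the weights, SSIDs and frame queues are illustrative assumptions.

```python
# Sketch of token-based antenna sharing: each virtual access point
# transmits only while it holds the token, so the physical antenna's
# airtime is divided among the virtual APs. Illustrative values only.

from collections import deque

class VirtualAP:
    def __init__(self, ssid, weight=1):
        self.ssid = ssid
        self.weight = weight          # tokens granted per scheduling round
        self.tx_queue = deque()       # frames waiting for airtime

    def transmit_one(self):
        if self.tx_queue:
            frame = self.tx_queue.popleft()
            print(f"{self.ssid}: sending {frame}")

def token_round(vaps):
    """One scheduling round: each virtual AP gets 'weight' transmission slots."""
    for vap in vaps:
        for _ in range(vap.weight):
            vap.transmit_one()

office = VirtualAP("Office-SSID", weight=2)
guest = VirtualAP("Guest-SSID", weight=1)
office.tx_queue.extend(["frame-A", "frame-B", "frame-C"])
guest.tx_queue.extend(["frame-X", "frame-Y"])

for _ in range(2):                    # two rounds over the shared antenna
    token_round([office, guest])
```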


Undeniably, there are some flaws with virtual Wi-Fi access points. Each virtual Wi-Fi access point has its own signaling frames; if the number of virtual Wi-Fi access points is high, the overhead is therefore considerable. The access point described here is a virtual Wi-Fi access point, but it could perfectly well be a virtual NodeB, i.e. the device that manages the physical antenna of a mobile network. In this case, the box may be considered as an HNB (Home NodeB) or an MNB (Metro NodeB), extended with significant capacities to receive virtual computation machines. More generally, all wireless access techniques can be virtualized, thus facilitating personal connections. Depending on the hardware that can be replaced by software, major advances can be made towards multi-technology access solutions. Figure 7.2 shows the case of a virtualized HNB using Wi-Fi frames to carry 4G/5G frames between the terminal devices and the HNB.
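The signaling overhead just mentioned can be estimated with a back-of-the-envelope calculation; the beacon size, beacon interval and basic rate used below are assumed values, not measurements.

```python
# Rough estimate of the signaling overhead created by the beacon frames
# of virtual Wi-Fi access points. Beacon size, interval and the basic
# rate used to send beacons are assumptions.

def beacon_overhead(n_virtual_aps,
                    beacon_bytes=300,           # assumed beacon frame size
                    interval_ms=100,            # typical beacon interval
                    basic_rate_bps=6_000_000):  # rate used for beacons
    beacons_per_s = n_virtual_aps * (1000 / interval_ms)
    airtime_per_beacon_s = beacon_bytes * 8 / basic_rate_bps
    return beacons_per_s * airtime_per_beacon_s   # fraction of airtime used

for n in (1, 4, 16, 64):
    print(f"{n:3d} virtual APs -> {beacon_overhead(n):.1%} of airtime in beacons")
```

With these assumptions, a single virtual access point spends well under 1% of the airtime on beacons, whereas 64 virtual access points already consume roughly a quarter of it.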

Figure 7.2. Virtualized HNB

In Figure 7.3, we show the context of virtual access points, which is one of the foundations of the "smart edge" environment. Indeed, for users to connect, we need to put antennas in place, and we can exploit those antennas to do all the local computation needed for access control. In this context, a large portion of the network's intelligence is at the periphery. The difficulty lies in handling communications between users situated in different sub-networks, because we need to coordinate the networking elements that are used for the end-to-end communications. Communications through the eastbound and westbound interfaces are crucial for distributed SDN solutions to interconnect the femto-datacenters.


Figure 7.3. Context of a network around a femto-datacenter

Figure 7.3 can be seen as an example of a femto-Cloud, serving two virtual access points and capable of connecting two clients using different network operators. Each client has, in the box, his/her own virtual Wi-Fi access point or dedicated HNB to connect to the network. The two clients do not know one another, and each has their own personalized access point. In this context, other virtual machines can be placed in the box to manage a particular interface or perform a specific signal processing operation. More generally, operators are beginning to look at the possibility of rolling out their own virtual access points so as to take position in the places where their clients may be. For example, a lecture hall that has a femto-datacenter could connect all the clients of operator X after the migration of a virtual access point X into the femto-datacenter. The virtual access points belong to the network X and are managed directly by the operator X. If the physical access point breaks down, the responsibility for its repair lies with the infrastructure operator, e.g. the owner of the lecture hall. There are various business plans to provision a femto-datacenter, ranging from renting resources to install a virtual machine to charging by the amount of time and capacity used.

7.3. Software LANs

Looking again at the examples from the previous section, it is possible to describe a software LAN. This is a local area network formed of different femto-datacenters in which the virtual machines can be stored in memory to create
these software LANs. Numerous virtual machines, such as routers, switches, firewalls or application machines, can be transported to the femto-datacenters from a central Cloud or a local server. The decision to leave a machine in the central or regional Cloud, or to transport it to one of the femto-datacenters, depends on the characteristics of the application and on an evaluation of the traffic that it could generate. Figure 7.4 shows a scenario for a company that has several boxes with which to create software LANs. In this case, to begin with, the company's Wi-Fi access points need to be installed in all of the boxes. Another solution is to select a few virtual access points introducing logical networks. These virtual LANs support specific applications, such as VoIP, IPTV, bank access, messaging, file transfer, professional applications, etc. The software networks are established by the virtual machines that run in the boxes. It is also possible to add a completely isolated network for visitors to the company. This allows guests and visitors to connect to the Internet, using the company's network, with no risk of an attack. The software networks can each use different protocols, including IPv4, IPv6, MPLS, Ethernet, SDN, etc. It is necessary to choose the protocol that best suits the application whose data are being sent over the network. The connections between the boxes may be hardwired or wireless, using protocols corresponding to the type of network chosen by the user. This may be OLSR, AODV, TRILL, or indeed any other protocol developed in this field.
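The isolation of such software LANs can be illustrated with a pure-Python sketch (no real switching or OpenFlow library is used; the switch names, rule format and VLAN numbering are assumptions): a controller hosted in one box pushes per-network forwarding rules into the virtual switches of the other boxes, in the spirit of the SDN approach described in the next paragraphs.

```python
# Pure-Python sketch of software LANs built by a controller that installs
# forwarding rules into Open vSwitch-like virtual switches in the boxes.
# Rule format, names and VLAN identifiers are illustrative assumptions.

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []                   # (match, action) entries

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"            # table miss

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def build_software_lan(self, vlan_id, out_port):
        """Isolate one software LAN by tagging its traffic on every box."""
        for sw in self.switches:
            sw.install_flow({"vlan": vlan_id}, f"output:{out_port}")

boxes = [VirtualSwitch("box-1"), VirtualSwitch("box-2")]
ctrl = Controller(boxes)
ctrl.build_software_lan(vlan_id=10, out_port=2)     # e.g. the VoIP network
print(boxes[0].forward({"vlan": 10}))               # -> output:2
print(boxes[0].forward({"vlan": 99}))               # -> send-to-controller
```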

Figure 7.4. A femto-datacenter environment to create virtual LANs


Software LANs open up new possibilities, such as the installation of SDN Wi-Fi access points. Indeed, by combining a virtual Wi-Fi access point and an SDN switch such as Open vSwitch, it is possible to send OpenFlow commands from a controller to the Wi-Fi access point. It is also possible, of course, to install an SDN controller such as OpenDaylight, OpenContrail or Floodlight in one of the boxes to control the other boxes making up the software network. In this scenario, as in the previous ones, certain virtual machines may remain in the central or regional datacenters, depending on their size and their rate of use. Although it is predicted that SDN using central networks will become dominant, an ever-growing number of networking professionals are calling for the implementation of SDN at the periphery, because the control mechanism is then much simpler, the cost is much lower than that of the WAN (Wide Area Network) solutions offered by device manufacturers, and the system is just as agile.

7.4. Participatory Internet

The notion of participatory Internet refers to users taking part in the creation of autonomous networks, meaning networks that can keep working without any connection to the Internet or to external servers. Moreover, nodes can disappear and other nodes can appear. There is no network center, since such a center could disappear at any time. Participatory networks therefore use the users' terminal devices, and most of the processes and protocols are fully distributed. The advantage of this type of network is that it is strongly protected from Internet threats, since the company can unplug its network and keep working in a closed mode, without any external connection. Only services that require external servers, such as Twitter or Facebook, cannot be executed, unless there is an agreement whereby an internal server can run the application locally. However, all the classic Internet protocols and applications for which open source software exists can be executed in a distributed way. Telephony or internal video, storage, computing and even network applications can be run without any external connection, thanks to virtual machines. From time to time, a company with a participatory network can connect to or disconnect from the outside world via a gateway. Another advantage of this type of network is its very low energy consumption. Indeed, as everything happens directly between clients, there is no need to reach an antenna, a DSLAM or a server that may be located on the other side of the globe. Figure 7.5 shows such a network, in which the servers are actually skin datacenters that automatically interconnect. They host diverse virtual devices such as routers, firewalls, application machines for voice or video and professional
applications. These skin datacenters support all the Internet protocols in a distributed mode and offer many functionalities, as well as a wide range of applications, which are also fully distributed.

Figure 7.5. Participatory Internet network

There are two types of participatory Internet networks: ad hoc networks and mesh networks. Figure 7.6 describes these two types of networks.

Figure 7.6. Mesh network and ad hoc network


In a mesh network, the nodes belong to an operator, which may simply be a group of clients interconnecting to form a participatory Internet network. The nodes are not the terminal devices themselves, but intermediary machines that handle the Internet protocols and the network applications. These nodes are skin datacenters, and they create the users' Cloud. The terminal device is then relieved of these functions. The advantage is independence from the user's smartphone or tablet, and much more power can be brought to bear. In an ad hoc network, on the contrary, the nodes are directly the users' terminals, i.e. smartphones, tablets and laptops. Usually, they are not skin datacenters. These environments are therefore less interesting, because the terminals would need to be powerful enough to support all the protocols, and all the applications, in a distributed way. The drawback of the mesh network is that end users must carry both their terminal device and the mesh node. The mesh operator can also have fixed nodes, as in a company. The mesh nodes are characterized by two Wi-Fi access points: one to connect to the other nodes and one to connect the users. There are many applications in the skin datacenter (or femto-datacenter). They can be useful, on the one hand, for network control and management and, on the other hand, for applications that have to be distributed so that, if one of the nodes disconnects, for whatever reason, the system can keep functioning. Therefore, DHCP and telephony applications, for example, must be distributed. The number of virtual machines and applications the system can contain depends on the power of the nodes. With current embedded systems, this number can be very high.

7.5. Conclusion

On the cusp of the 2020s, centralization still prevails, and skin datacenters represent only a small part of today's datacenters. However, for several years, much research has focused on the distribution of these big centers. Many experiments show that networks of skin datacenters forming "uberized" networks are possible; this means that they can run without an operator. The mesh network nodes can move, they can promptly connect to new nodes, handle security by isolating networks, set themselves up almost instantly and adapt to their environment. We will show an example in the chapter on vehicular networks. Soon, each vehicle will hold a skin datacenter. Today, this is partially the case, thanks to very low-power servers with no virtualization management. These servers will interconnect in order to form mesh networks and thus support the automation and automated control of vehicles.

8 Software Network Automation

Automation is the keyword of the new NFV and SDN network generation. Until now, automation has hardly been present in the world of networks, except in some specific cases such as failure diagnosis. For automation, a network autopilot is required. This autopilot must contain three main components. First, the orchestrator has to determine which virtual machines to use, chain and deploy in the network. The second component is the virtual machine configurator, i.e. the SDN controller, which has to set up the virtual machines. The third component is the virtualized infrastructure manager, which has to supervise and diagnose the infrastructure's virtual machines. These components make up the MANO (Management and Orchestration) function mentioned in Chapter 4. The main automation software has already been described in Chapter 4 on open source software: it is ONAP (Open Network Automation Platform). In this chapter, we will instead focus on the use of artificial intelligence tools to build automatisms into the different functions that need to be automated, and on the distribution of this intelligence. Indeed, the first generation of autopilots is centralized, mainly because artificial intelligence tools work well in this context. Distributed artificial intelligence is more complex, since it requires a large volume of knowledge about all aspects of the network: the flows of the knowledge plane can become very significant compared to the network capacity, and the algorithms for sharing actions are all the more complex to implement.

8.1. Automation of the implementation of software networks

An important point in the advent of the new generation of software networks, whether in terms of the core network or of the access, relates to the automation of the establishment, management and control of the personalized networks designed in the context of software networks. These personalized networks may be SDNs or use
legacy architectures, or indeed a mix of the two, given that each software network is entirely independent of the others. With this in mind, as shown in Figure 8.1, we need to add a self-piloting system to the general architecture. The autopilot relates to the virtual computation, storage and network machines, but we also need to add the security and management elements, which have become absolutely indispensable for the proper operation of a complete system.

Figure 8.1. Self-piloting system. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
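A minimal sketch of such a self-piloting loop is given below, assuming three hypothetical components named after the description above (orchestrator, controller/configurator and virtualized infrastructure manager); the failure model and the virtual machine names are invented.

```python
# Minimal sketch of the autopilot's three components working as a closed
# loop: the infrastructure manager reports the state of the virtual
# machines, the orchestrator decides what to redeploy and the controller
# configures it. All names and states are illustrative.

class InfrastructureManager:
    def __init__(self):
        self.vms = {"firewall": "running", "router": "failed"}

    def diagnose(self):
        return [name for name, state in self.vms.items() if state != "running"]

class Controller:
    def configure(self, vm_name):
        print(f"controller: configuring {vm_name}")

class Orchestrator:
    def __init__(self, vim, controller):
        self.vim, self.controller = vim, controller

    def reconcile(self):
        for vm in self.vim.diagnose():          # failed or missing machines
            print(f"orchestrator: redeploying {vm}")
            self.vim.vms[vm] = "running"
            self.controller.configure(vm)

vim = InfrastructureManager()
Orchestrator(vim, Controller()).reconcile()
```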

The piloting system may be referred to by a number of different names, such as "orchestrator" or "high-level pilot", to indicate that this element is complemented by the controllers situated in the control layer and the managers of the management layer. The piloting system may be centralized, which is generally the case with today's Cloud architectures, fully distributed across the network nodes, which is the case we studied in the previous chapter with mesh networks, or, as is most often the case, a mixture of centralized and distributed organization. At present, numerous developments are being made in this field, with virtual machines that act as intelligent agents. Intelligence is a much-used term in computer science, which simply means the ability to communicate, to reason and to make decisions. Up until the start of the 2000s, intelligence in networks was very low. The concepts of intelligent networks, dating from the start of the 1990s, introduced a primary form of intelligence, whose role is to
automatically adapt the network components to the users' demands, but without reasoning, merely on the basis of predefined rules. The next section is devoted to this view of intelligence in networks. It also introduces autonomic networks, which replaced programmable networks and active networks. An autonomic network is one that is capable of configuring itself, and wherein the nodes may become autonomous in the case of a failure or a cut-off in communications. Since the beginning of the 2000s, true intelligence – i.e. reasoning-based intelligence – has been offered by certain network components, which are able to make decisions pertaining to control or management. The devices that make those decisions stem from the domain of artificial intelligence and smart objects. In particular, multi-agent systems, which are able to handle security and failures, have been around for a long time. Intelligent agents offer the first category of tools whose introduction on a large scale is about to modify management and control environments, making them more autonomous and more reactive. We will now examine the reasons for this power, and then the way in which we construct multi-agent systems.

8.2. Management of a complex environment

As networks have become increasingly complex, the management and control of these environments has become necessary for a variety of reasons. Network environments are becoming increasingly dynamic, as we have seen. Numerous and varied applications are interwoven, making it difficult to control the resources. The statistical gain during packet processing on a common network is undeniable but, if the data flows far exceed the network's capacities, a meltdown of performance is inevitable. Network environments are, by nature, distributed, which makes it complicated to control and manage them. In addition, their enormous scale necessitates even closer control. Sizing is a complex problem to grasp, and we can state that there are no truly effective tools available in this domain, in that the optimal values of the parameters that need to be taken into account are difficult to determine. For the parameters, we have a choice between data rate, response time, degree of usage of the couplers and of the central units, bit error rate, packet error rate, repeat rate and failure rate. In addition, the values of the mean, the variance and sometimes the higher-order moments need to be taken into account in order to gain a real idea of the performances.

The engineering of network design involves two main aspects: the qualitative and the quantitative. The qualitative aspect often corresponds to operational security, in the sense that it is possible to prove that the system is stable or that there is no
state in which the network ceases to work. The quantitative aspect refers to the values of the parameters listed in the previous paragraph, with the aim of quantitative analysis being to show that these values are reasonable for normal operation of the network.

Security is an important function, to which intelligence may contribute. Today, a certain degree of standardization enables us to have a clearer view of the problems, and a few major classes of security have been defined, corresponding to needs clearly expressed by the users. We can easily imagine the contribution made by intelligent tools in the world of security to discover anomalies, analyze them, give a diagnosis, propose a solution and resolve the problem.

Management is also a domain where intelligent agents could play a leading role. When a network is running, it needs administration – i.e. we need to be able to control all the operations that take place in the network, from breakdowns and accounting to security, performance management and username management. Various specific administrative domains already use intelligent components – in particular, the following:
– configuration management;
– security management;
– fault management;
– performance management;
– accounting management.

Intelligence of the agents may stem from different areas. The most usual model stems from the domain of distributed artificial intelligence (DAI). Artificial intelligence means that a device can take the place of a human being to perform a task. DAI is equivalent to a society of autonomous agents, working together to achieve an overall objective. There are numerous reasons to turn to DAI – notably the following:
– integration of different points of view. When the data become more specific, inconsistencies may occur in the rule base. The ways in which knowledge is expressed are different depending on whether we are addressing the user, the developer or the technician. Moreover, two experts who have the same expertise will not always come to the same conclusion. The different points of view are also often contradictory: one might attach a great deal of significance to the costs, and therefore favor a less expensive system, while another would choose to develop the
publicity, and thus opt for a dearer system. The use of DAI helps to achieve a compromise between different options by negotiation;
– representativeness of the real world. In general, it always tends to be a group of experts, with different qualifications and specialties, who work together to realize a set of goals. In addition, while it seems easy to understand and therefore to model the behavior of individuals (all of their exchanges) thanks to the numerous sociological studies that are available, the way in which the human brain works, and the reasoning process, are less well understood.

For these reasons, the application of distributed artificial intelligence is gradually becoming a reality in the context of network management.

8.3. Multi-agent systems

An agent is an autonomous entity, capable of communicating with other agents, and of perceiving and representing its own environment. A collection of these agents, interacting with one another, forms a multi-agent system. We classify such systems according to numerous criteria, such as the size of the agents, the number of them interacting, the mechanisms and types of communication, the behavior, organization and control of each agent and the representation of the environment. Based on these criteria, we distinguish two main categories of multi-agent systems:
– systems of cognitive agents;
– systems of reactive agents.

Cognitive agents have an explicit representation of the environment and of the other agents. They are able to take account of their past, and operate with a social means of organization. Systems using this type of agent will have only a small number of agents. Several levels of complexity can be envisaged:
– processes, in which the actors implement communication directives;
– communicative modules, which use specialized communication protocols (requests, commands);
– cooperative agents, which work with the concepts of skill, mutual representation and task allocation;
– intentional agents, which use notions of intention, commitment and partial plans;
– negotiating agents, which carry out conflict resolution by negotiation;
– organized agents, which act in accordance with regulation and social laws.

The agents communicate with one another using a specific language. This is intentional communication, which essentially comprises two types: communication by information sharing and communication by message sending. Communication between agents takes place by information sharing when the solution to the problem is centralized in a global data structure, shared by all the agents. This structure initially contains the data of the problem, and is enriched over the course of its resolution until the solution is reached. It constitutes the only means of communication between the agents. This type of communication is often spoken of as the blackboard model, which is discussed in detail in numerous publications. The agents deposit and read pieces of information in a shared data zone – the blackboard – as shown in Figure 8.2.

Figure 8.2. Operation of the blackboard
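As a toy illustration of the blackboard mechanism of Figure 8.2 (its three basic elements – blackboard, knowledge sources and control device – are detailed just below), here is a minimal Python sketch; the network-diagnosis content of the entries is invented.

```python
# Toy illustration of the blackboard model: knowledge sources react to
# changes on a shared blackboard under a simple control device.
# The diagnosis content is invented for the example.

class Blackboard:
    def __init__(self):
        self.entries = []

    def post(self, item):
        self.entries.append(item)

class KnowledgeSource:
    def __init__(self, name, trigger, produce):
        self.name, self.trigger, self.produce = name, trigger, produce

    def react(self, board):
        if self.trigger in board.entries and self.produce not in board.entries:
            board.post(self.produce)
            return True
        return False

def control_loop(board, sources):
    """Control device: let the sources react until the blackboard stabilizes."""
    changed = True
    while changed:
        changed = any(ks.react(board) for ks in sources)

board = Blackboard()
board.post("alarm: link down")
control_loop(board, [
    KnowledgeSource("diagnoser", "alarm: link down", "cause: fiber cut"),
    KnowledgeSource("planner", "cause: fiber cut", "action: reroute traffic"),
])
print(board.entries)
```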

A blackboard system comprises the following three basic elements:
– the blackboard, in which all the elements involved during the resolution are stored. This data structure is shared by the agents and is organized hierarchically, which enables us to consider the solution at various different levels of detail;
– the agents, which generate and store their data in the blackboard. They are independent modules, referred to as "knowledge sources". Their role is to cooperate to solve a given problem. The knowledge sources are independent, as they are not aware of one another's existence and merely react to the events of the blackboard being changed;
– a control device, which ensures that the system operates in accordance with a certain strategy. Its role, among other things, is to resolve conflicts of access to the blackboard between the agents, which may intervene without having been triggered. Indeed, in the absence of centralized control, the knowledge sources react in an opportunistic manner – i.e. they react as best they can. This control device itself functions in accordance with the blackboard model.

Blackboards have the advantage of providing structure and an automatic method (divisions and hierarchy) in the way in which we approach a field of knowledge. They also exhibit the advantage of organizing sets of rules in systems with production rules. However, their lack of local memory means that they are not able to truly function as multi-agent systems. As a general rule, multi-agent systems use a blackboard for each agent.

Multi-agent systems based on message communication are characterized by total distribution of the knowledge, the partial results and the methods used to achieve a result (see Figure 8.3). Certain actor languages offer an accurate incarnation of this type of system. Communication can take place either on a point-to-point basis or by broadcast.

Figure 8.3. Operation of a multi-agent system

Such a system is built around two components:
– local computation: quite unlike blackboard-based systems, the knowledge is no longer concentrated in the same space, but instead is compartmentalized and distributed between the different agents. An agent can only manipulate its own local knowledge base, send messages to other agents that it knows (its "acquaintances") and create new agents. At any given time, the agents do not have an overall vision of the system and only have a local point of view regarding the elements;
– message forwarding: when an agent sends a message, it specifies to which agent the response to that message should be sent. It may indeed be the agent that sent the original message, but it may just as well be another agent, specially created for the circumstance.


The agents have a more or less precise knowledge of the other agents in the system. They must be aware of and represent the abilities of those agents, and the tasks being performed at any given moment, plus the intentions and commitments of the agents. This aspect of matters raises the problem of the representation of this type of knowledge and also of its updating. The principle of task allocation constitutes one of the essential points relating to multi-cognitive-agent systems. A problem is composed of a certain number of tasks, performed by agents that bring together all of the partial solutions to obtain the overall solution (see Figure 8.4).

Figure 8.4. Problem-solving
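The task allocation principle just described can be sketched as a simple skill-based bidding procedure, in the spirit of contract-net negotiation; the agents, their skill scores and the task names are invented for the example.

```python
# Sketch of task allocation by bidding: a problem is split into
# sub-problems and each one goes to the agent announcing the best
# skill for it. Skills, tasks and scores are illustrative assumptions.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills                  # task type -> competence score

    def bid(self, task_type):
        return self.skills.get(task_type, 0)

def allocate(tasks, agents):
    """Give each sub-problem to the highest bidder."""
    allocation = {}
    for task in tasks:
        best = max(agents, key=lambda a: a.bid(task))
        allocation[task] = best.name
    return allocation

agents = [Agent("fault-agent", {"diagnose": 0.9, "reconfigure": 0.2}),
          Agent("config-agent", {"diagnose": 0.1, "reconfigure": 0.8}),
          Agent("security-agent", {"filter": 0.95})]

print(allocate(["diagnose", "reconfigure", "filter"], agents))
```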

To successfully allocate the tasks, we need to know the skills of each of the agents, decompose a problem into sub-problems, distribute the solving activities to the appropriate agents and, as required, redistribute those activities dynamically. Task allocation models can be classified into two categories:
– centralized allocation: in this modular approach, an agent breaks down a problem into sub-problems and distributes those sub-problems to other agents, which are subordinate to the original agent. In this case, the actions are consistent, but there is a problem of reliability and extensibility. In addition, an agent is entirely dedicated to the distribution of tasks. Thus, the system is not used to the full extent of its capabilities;
– decentralized, or distributed, allocation: each agent is capable of breaking up its own problem into sub-problems and thus distributing the tasks associated therewith. All the agents carry the same weight in the decision-making process. This type of allocation is appropriate for applications which already have a distributed
structure. The reliability and the possibility of extension are better than with the previous model, but it is more difficult to maintain consistency.

Three types of methods may be used to decompose a problem:
– static: with each agent knowing the skills of the other agents, the sub-problems can be attributed to the best-qualified agents;
– dynamic: the agents work together to distribute the sub-problems in the most effective way possible;
– mixed: each agent is aware of the skills of the other agents, but this knowledge is periodically updated.

The autonomy of the agents is founded on the concept of intentionality. We can differentiate the intention in an action from the intention to commit to an action in the future; in the latter case, we have a standing goal. In order for an agent to have the intention to perform an action, that agent must believe that the action is possible, envisage committing to performing it, estimate that if certain conditions are fulfilled, then the action can successfully be carried out, and finally, not attempt to bring about all of its consequences. However, we may well wonder what happens when the action has already been carried out by another agent, when an agent holds two conflicting intentions, or under what conditions an agent can give up on its intention.

8.4. Reactive agent systems

A reactive agent does not have an explicit representation of its environment and cannot take account of its past. The way in which it operates is simple, following a set of pre-programmed stimulus/response-type decisions. The system's organization is biological, and the number of agents present in such a system is very high. Communication is non-intentional: for example, the agents leave traces of their presence, or signals, which can be perceived by other agents. We then speak of environmental communication. This type of agent results from the following postulate: complex organizations can emerge from the interaction of a large number of simple agents. We can consider various levels of complexity for a reactive agent:
– stimulus/response: simple reactions to the environment;
– coordination of elementary actions: inhibition mechanisms, relations between elementary actions;
– reactive cooperation: mechanisms of recruitment between agents, aggregation of elementary agents;
– reproduction: mechanisms of reproduction of reactive agents;
– organization of reactive agents.

Eco-resolution is a problem-solving technique based on the use of reactive agents. Here, problem-solving is considered to be the result of a set of interactions. This view stands in opposition to those adopted in conventional approaches to problem-solving, such as state space exploration, which poses problems of combinatorial explosion. The distributed problem-solving approach is based on a radically different idea: that of the appearance of configurations as stable or steady states of a dynamic system, whose evolution is due to interactions stemming from the behaviors of small, fairly simple agents. In conventional systems, all the data are contained in the statement of the problem, with the system bound to find how to pass from the initial configuration to the final state. On the contrary, with eco-resolution, the determination is purely local. The agents are characterized by behaviors of satisfaction, attack and retreat. The problem itself is defined by a population of autonomous agents, all seeking satisfaction. The final result is the consequence of a non-deterministic interaction, and the model defines the combination of behaviors.

In conclusion to this section, we can argue that the study of learning, real-time operation and distribution corresponds to a need in the area of network management. However, it is the last point which seems to hold the greatest interest, in that multi-agent systems, in a manner of speaking, constitute a generalization of the techniques of expert systems. Thus, they provide added value to conventional AI systems by offering a new type of architecture, involving communication as well as internal reasoning. The aspects of openness and distribution make them interesting for a network management system. As regards the real-time aspect, the approach adopted by AI seems less clear. Nevertheless, it is a crucial step in the design of a network administration system. Indeed, the response times (to a failure, for example) must be as short as possible, although this criterion is not as important here as it is in decision-support or command-support systems.

Learning remains the Achilles' heel of knowledge-based systems. As long as these systems are unable to improve, expand and refine their knowledge on the basis
of their own experience, they will be dependent on the goodwill and availability of experts and the quality of a manual update.

8.5. Active, programmable and autonomous networks

Active networks are similar to other packet- and frame-transfer networks. The basic unit transferred in these networks is the packet. The role of the nodes is to examine the different fields in the packet, which are placed in accordance with a very rigid predefined system. In particular, the address field determines the output port. The different fields in the packet are interpreted by a virtual machine. We can consider that this virtual machine is a packet interface, which is typically called a "network API" or "NAPI" (Network Application Programming Interface). For an IP network, the IP API is the language defined by the syntax and semantics of the IP header. In typical networks, virtual machines are fixed and the language used is basic. With regard to active networks, we can say that the nodes provide a programmable NAPI. If we consider that, in an IP network, the header of the packet provides the input to the virtual machine, we can define an active network as one in which the nodes possess a virtual machine that executes the code contained in the packet headers. Numerous categories of active networks can be defined on the basis of the following attributes:
– expressive power of the language, which determines the degree to which the network will be able to be programmed. The language can range from simple orders to very highly evolved languages. The simpler the language, the shorter the computation time; conversely, the more powerful the language, the simpler the implementation of custom-tailoring;
– possibility of defining a stable state on the basis of the previous messages in the same stream, so as to increase the rate of execution without having to redefine a state of the virtual machine;
– granularity of control, which enables us to modify the behavior of a node for all packets that pass through it, regardless of the stream to which each data packet belongs, or, at the other extreme, to modify the node's behavior only for the particular packet being processed at the time. All intermediary cases may be found – in particular, common behavior on the same stream or on the same set of streams;
– means of giving programming orders: it is possible to consider that the orders are given to the active nodes by specific packets – e.g. signaling packets – rather than being indicated in a more or less highly evolved language in the packet header;
– architecture of the nodes, to examine the level of this architecture at which the commands come into play or, in other words, the level at which the programming interface is situated. The architecture can influence the choices of software and hardware; in particular, the nodes can use reconfigurable processors at higher or lower conceptual levels.

The functions of the nodes of active networks are shared between the runtime environment and the node's operating system. Figure 8.5 shows such an architecture of an active network.

Figure 8.5. Architecture of active networks

It is possible to send commands to the runtime environment using an encapsulation protocol called ANEP (Active Network Encapsulation Protocol). The header of an ANEP packet contains a field identifying the type of packet. Several runtime environments may be present in an active node; in addition to the node's address, an extra address is then required to designate the target environment. The existing interfaces include:
– the interface of access to the runtime environment;
– the interface between the runtime environment and the node's operating system;
– the interface of access to the node's operating system.
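To illustrate this type-based dispatching, here is a small Python sketch; the two-byte header layout and the names of the runtime environments are assumptions for the example, not the actual ANEP format.

```python
# Sketch of an active node dispatching incoming packets to runtime
# environments according to a type field in the encapsulation header.
# Header layout and environment names are illustrative assumptions.

import struct

RUNTIME_ENVIRONMENTS = {
    1: lambda payload: f"monitoring environment ran: {payload!r}",
    2: lambda payload: f"routing environment ran: {payload!r}",
}

def handle_active_packet(packet: bytes) -> str:
    # Assumed header: 2-byte type identifier followed by the code/payload.
    (type_id,) = struct.unpack("!H", packet[:2])
    env = RUNTIME_ENVIRONMENTS.get(type_id)
    if env is None:
        return "unknown type: forward as a normal packet"
    return env(packet[2:])

print(handle_active_packet(struct.pack("!H", 1) + b"collect-stats"))
print(handle_active_packet(struct.pack("!H", 9) + b"opaque"))
```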


Programmable networks form part of active networks; their role is to develop a set of software abstractions of the network resources, so that we can access these resources through their abstraction. The objective of these networks is to render the nodes programmable so as to adapt them to the user demands and the required services. The programming commands – which may be sent either through a signaling network or by user packets containing control programs – can act on the nodes at different levels of abstraction. The IEEE has set up a working group with the purpose of standardizing these interfaces.

The concept of an active and programmable network is gradually being supplanted by that of an autonomous network. An autonomous network is a network that does not need a management center or control center to make its decisions. Thus, an autonomous network is a network that can decide for itself how it will behave. This is a concept that was introduced for NGNs (Next-Generation Networks), whose aim is to replace all existing networks by a single IP network, thus integrating all communication media. An autonomous network must be capable of managing itself, detecting problems, repairing itself and controlling itself when no communication is possible. The network elements themselves must participate in the construction of an autonomous network with diverse properties, such as resource optimization, context awareness and automatic organization of security. The objective is to understand how to learn which are the right decisions to make, what influence the different elements of the network have and, more generally, how to optimize the network's behavior. The tools to create this type of autonomous system are drawn from multi-agent systems, which we discussed above. Self-organization of an IP network first requires an overall view of the network and an understanding of the consequences of any event that happens in the network. Then, the network must be capable of reacting to that event. Autonomous networks can be designed on the basis of networks other than IP for very specific objectives, such as networks with critical missions or interplanetary networks, wherein the control center takes so long to communicate with the probes that it becomes impossible to make decisions in real time.

8.6. Autonomic networks

Now we come to autonomic networks which, by definition, are autonomous and spontaneous networks. They are the networks which we defined earlier, but with an
added property of spontaneity – i.e. they are real-time systems: the process is capable of reacting autonomously and within an acceptable time lag. Figure 8.6 offers an initial illustration of the definition of autonomic networks.

Figure 8.6. Definition of an autonomic network

Autonomic networks are capable of self-configuring to adapt dynamically to any changes in the environment, self-optimizing to ensure that their operational efficiency is always optimal, self-repairing to ensure that they are highly reliable, and self-protecting to ensure the security of the resources and of the information passing through them. To perform these different functions, autonomic networks must have a certain number of attributes:
– self-awareness;
– environment awareness;
– self-monitoring;
– self-adjusting.

To achieve these objectives, we need to change the architecture of the networks. Thus, autonomic networks offer a new form of architecture with four planes, where a knowledge plane is added to the usual three: the data plane, the control plane and the management plane. The difference with the SDN architecture stems from the division of the control plane into two sub-planes – the knowledge plane and the control plane itself. The management plane is also an additional plane in this architecture, but it is integrated into the virtualization plane, where each software network has its own management plane. Figure 8.7 shows the new architecture of autonomic networks. The purpose of the knowledge plane is to collect all of the knowledge in the network and, for each point in that network, obtain a fairly universal view. The difference with the SDN
architecture is that, in SDN, the knowledge is centralized at a single point, rather than being distributed to each point. The purpose of the Knowledge Plane is to help run the control algorithms found in the Control Plane, which in turn controls the Data Plane. These planes correspond to the first four layers of the conventional network architecture. The Management Plane is responsible for the administration of the other three planes.

Figure 8.7. The architecture of autonomic networks

The objective of the knowledge plane is to render the network more intelligent by enabling it to understand its own behavior, which gives rise to a new generation of protocols. Up until now, every control algorithm (routing, quality of service, security, reliability, etc.) has had to go looking for the elements it needed, by itself. For example, a routing algorithm such as OSPF looks for the state of the upstream and downstream links to the input and output points of the network. This information can be exploited by other algorithms, such as an algorithm monitoring QoS, congestion or indeed admission to the network. Using a knowledge plane, this information is found in that plane, and each control algorithm can go in search of it as and when needed. In time, the standardized protocols should be altered to take account of this knowledge plane. Another advantage of the knowledge plane is the possibility of using information, for a control algorithm, which could not have been taken into account by the normal algorithm.

Evidently, it is important not to overburden the network by transporting knowledge of all kinds, and in vast quantities. With this in mind, there are two possibilities: either to centralize the knowledge, as the SDN


architecture does, or to distribute it as widely as possible, making just enough knowledge available at each point for decisions to be taken. The distributed solution comes from the situated view technique. "Situated views" come from the world of artificial intelligence and indicate the integration of pieces of knowledge that are situated in a given view. Thus, we can define a situated view by the number of hops necessary to find the information – e.g. one or two hops. Figure 8.8 shows a one-hop situated view.

Figure 8.8. One-hop situated view. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
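To make the notion of a situated view a little more concrete, the sketch below is a toy model (not taken from any particular implementation; all names are illustrative): each node keeps a timestamped knowledge base and shares it only with its direct neighbors, so that knowledge still diffuses hop by hop but arrives progressively less fresh.

```python
import time

class Node:
    """Toy node holding a timestamped knowledge base (one-hop situated view)."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []          # direct (one-hop) neighbors only
        self.knowledge = {}          # key -> (value, timestamp)

    def observe(self, key, value):
        # Locally measured knowledge (e.g. the load on an outgoing link).
        self.knowledge[key] = (value, time.time())

    def share_with_neighbors(self):
        # The situated view is limited to one hop: we only push to neighbors.
        for neighbor in self.neighbors:
            neighbor.receive(self.knowledge)

    def receive(self, remote_knowledge):
        # Integrate received knowledge, keeping only the freshest version;
        # re-sharing it later is what lets knowledge diffuse hop by hop.
        for key, (value, ts) in remote_knowledge.items():
            if key not in self.knowledge or ts > self.knowledge[key][1]:
                self.knowledge[key] = (value, ts)

# Example topology a -- b -- c. After two rounds of sharing, c knows a's link
# load, but with the timestamp at which a measured it (less fresh knowledge).
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.observe("link_load(a)", 0.7)
a.share_with_neighbors()
b.share_with_neighbors()
print(c.knowledge)
```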

The one-hop situated view exhibits the advantage of being very easily scalable, because the transport of the knowledge is limited. This does not mean that the knowledge cannot diffuse through the network, because a point that receives a piece of knowledge integrates it into its own knowledge base and forwards it on to its neighbors. This knowledge, however, is less fresh than if the situated view were a little larger. Thus, it is necessary to optimize the situated view, depending on the knowledge needed by the control algorithms. With this in mind, we need to answer the


questions: which, where and when? "Which" refers to the knowledge necessary to optimize an algorithm; "where" relates to the span of the situated view; and "when" indicates the refresh rate needed for that knowledge to be usable. On the basis of these different parameters, it is possible to generate situated views that are a little more complex than the simple one-hop definition, including, for example, link information refreshed every second.

Until 2008, networks were remarkably static, and a network engineer often needed to be present to deal with things when a problem arose. In the case of very large networks, tens of network engineers could be needed to perform maintenance and deal with the diverse problems that emerged. The objective of self-adjusting networks is to offer automated piloting of the network by a program capable of managing the control algorithms in a coordinated manner, and thereby optimize the network's operation. At the start of the 2000s, an initial attempt was made, consisting of using programmable networks and active networks. The research was not entirely conclusive, for reasons of security and cost. A new generation was launched in 2005, with the autonomic networks we have just presented.

8.7. Conclusion

The Cloud requires centralization, with its huge datacenters capable of handling enormous masses of data and of computing forwarding tables for a very high number of data streams. Unfortunately, this solution is not entirely satisfactory, because the reaction times are long, and reliability and security pose numerous problems, which increase considerably as the networks become more complex. This is the reason for the advent of distributed solutions using far smaller datacenters, much nearer to the users. However, this type of architecture requires a greater degree of intelligence to coordinate the set of controllers.

Slowly but surely, intelligence is being included in networks. This intelligence covers communication, reasoning and decision-making. Multi-agent systems provide the mainstay of this intelligence, which is able to make control or management decisions when required. We are only at the very beginning of the story but, at the time of writing, intelligence has already been present for a number of years. This intelligence can be exercised from huge interconnected datacenters. In this case, it is centralized for the management and control of a sub-network. The interconnection of those sub-networks requires a greater degree of distribution.


When the networks are controlled by datacenters of all sizes, the intelligence will need to be very widely distributed. While intelligence serves to control the networks, it also has other impacts. In particular, security is one of the main beneficiaries of this intelligence. For example, an intelligent component, such as a neural network, is capable of analyzing what a person enters on a keyboard and shutting down communication if that input is not recognized. Intelligence in networks, therefore, is a vast field for R&D, which should lead to the implementation of a much greater intelligence in all domains relating to telecommunications. Another underlying vision is that of the reduction of the costs in terms of specialized personnel for surveillance, maintenance and, more generally, all the operations necessary for the survival of a network. It should become possible to reduce the number of engineers and technicians in the networking field by 50–80%, and thereby make substantial savings. However, it should be noted that new professions will emerge, with architects and high-level engineers associated with the domain of networks.

9 New-generation Protocols

Protocols are currently undergoing numerous mutations, owing to the advent of the Cloud and of SDN architectures. The legacy protocols based on the routing of IP packets and MPLS (MultiProtocol Label Switching) are still in place for the connection of users or the data transport offered by the major operator networks. However, an entirely new generation of networks is emerging and fighting to earn a place in the new arsenal of operators and Cloud providers.

Firstly, there are the networks within datacenters, which must provide a very fast link between the servers, of which there are thousands, or even tens or hundreds of thousands. The protocol cited most at present is TRILL (Transparent Interconnection of Lots of Links). TRILL is a level-2 Ethernet network with extensions to facilitate routing in a level-2 Ethernet environment. Very broadly speaking, we can describe TRILL as an Ethernet network encapsulated within an Ethernet network, enabling us to have a hierarchy of addresses and to carry out routing within the network. We will come back to this later on.

Then, there are inter-datacenter networks, which also carry enormous quantities of data, going from one sub-network to another sub-network. A certain number of possibilities are mentioned in the literature and found in practice with Cloud management providers. In general, they are extensions of VLAN (Virtual LAN) techniques to maintain consistent VLAN numbers, even when the datacenters are not in the same sub-network. In addition, the number of VLANs is limited by the numbering zone, which only allows for 4096 VLANs, using only the 12 bits in that zone. This category of protocols includes:

– VXLAN (Ethernet in UDP), originally developed by VMware;

– NVGRE (Ethernet in IP), developed by Microsoft;

– IEEE 802.1aq (EVLAN), stemming from the standardization of Carrier-Grade Ethernet.


At the very least, the following two protocols need to be added to this list:

– MPLS, developed by operators and standardized by the ITU and the IETF;

– LISP (IP in IP), supported by numerous hardware manufacturers and operators.

The last of these protocols – LISP (Locator/Identifier Separation Protocol) – is fundamentally different from the previous ones, because it is a level-3 protocol, which is the encapsulation of an IP packet within an IP packet. The reason for this encapsulation is the need to have two IP addresses associated with an end device: the inner address to identify the intended receiver and the outer address for routing. This latter address denotes the physical position of the receiver. This solution means that a virtual machine can be moved but retain the same identity: only the IP address indicating the placement of the virtual machine is modified. We will return to this in greater detail later on.

Finally, the rising protocol corresponds to that developed for SDN: it is OpenFlow, which was introduced by Nicira, later acquired by VMware. We will begin with a look at this protocol. Remember that, as illustrated by Figure 9.1, SDN (Software-Defined Networking) is the solution that decouples the data plane from the control plane. The computations and control algorithms no longer take place directly in the nodes of the network, routers or switches, but in a server situated somewhere in the network, called the controller. The controller may be a physical device or a virtual machine situated in a datacenter of greater or lesser size.

Figure 9.1. SDN architecture

The data plane includes the forwarding of packets. This solution ensures that the hardware is independent of the software networks and that numerous parameters are taken into account. Indeed, this decoupling enables the controller, which is


usually centralized, to recover precise information from each client in the network and to perform complicated computations using load and QoS algorithms, so that each flow in the network has its own forwarding tables in the network nodes. In the extreme case, each flow could have its own tables, taking account of all the characteristics of the client who produces this flow: performance, energy consumption, reliability, security, resilience, cost, etc. This solution cannot currently be scaled, but it could become scalable in the future with greater decentralization and protocols adapted to allow communication between controllers (see the eastbound and westbound interfaces of the SDN architecture discussed in Chapter 2). Let us begin by detailing the OpenFlow protocol.

9.1. OpenFlow

Figure 9.2 shows an OpenFlow switch and an OpenFlow controller. The term "switch" is used for the OpenFlow nodes because, as we will see, paths are usually determined by the controller. The OpenFlow switch has two parts: one part contains the queues, the frame transmitters and frame receivers, with the flow tables associated therewith; the second part governs communication with the controller using the OpenFlow signaling protocol.

The first part contains all the elements necessary for the physical transport of the frames through the node, and also contains the tables used to direct the flows into the right egress queue. There may be a table for each flow, or a table for a set of flows multiplexed on the same path. OpenFlow provides the necessary elements for the controller to create, manage and destroy entries in the flow table, which could also be called a switching table.

The second part relates to the communication between the node and the controller, and the information that must be sent between them. OpenFlow uses a secure SSL/TLS channel to authenticate both ends of the communication, which greatly reduces the risk of attack on the communication under way and demands mutual authentication. OpenFlow offers the means of identifying the flows of packets using level 1, 2, 3 and 4 information. OpenFlow also transports actions to indicate what needs to be done to the flow. Finally, it feeds back very precise statistics to the controller so that the path determination algorithm can do its job with near-perfect knowledge of the state of the network. As the controller is centralized, there is a slight time lag in the transmission of the data.


Figure 9.2. OpenFlow protocol

Figure 9.3. OpenFlow protocol in a network


Figure 9.3 shows a more general OpenFlow network. The controller is on the right of the figure and opens SSL/TLS connections with all the nodes in the network. Potentially, certain nodes may not be OpenFlow-compatible and form a sub-network carrying OpenFlow signaling. In general, the nodes are compatible with OpenFlow, meaning that they are able to interpret OpenFlow commands and introduce actions on the flows in the flow tables, or transform them to conform to forwarding tables or switching tables. The packet flows never go through the controller, except in the case where a function, such as DPI (Deep Packet Inspection), is implemented in the controller to analyze the set of all packets in a flow. In Figure 9.3, no flow goes through the controller, except the first packets, which are needed to determine the demands and the type of flow being dealt with.

The fields in OpenFlow are shown in Figure 9.4. This figure presents the flow table which, for each entry, contains three types of information: a matching field, a set of actions and counters. The matching field enables matching between a row in the table and a flow. The flow is described by the ingress port number and the Ethernet addresses, VLAN number, VLAN priority, IP addresses, protocol and type of service fields, and finally the port numbers of the TCP or UDP transport layer.


Figure 9.4. Fields in the OpenFlow protocol
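To make the structure of such an entry concrete, here is a toy model of a flow table (simplified field names chosen for readability; this is not the actual OpenFlow wire format): each entry carries a match, a list of actions and counters, and a packet is handled by the first entry whose non-wildcarded fields all match.

```python
# Toy flow table: each entry = match + actions + counters.
# A match field set to None acts as a wildcard.
flow_table = [
    {
        "match": {"ip_proto": 6, "tcp_dst": 80, "vlan_id": None},
        "actions": ["output:2"],
        "counters": {"packets": 0, "bytes": 0},
    },
    {
        "match": {"ip_proto": None, "tcp_dst": None, "vlan_id": 10},
        "actions": ["set_vlan:20", "output:3"],
        "counters": {"packets": 0, "bytes": 0},
    },
]

def lookup(packet):
    """Return the actions of the first matching entry and update its counters."""
    for entry in flow_table:
        if all(v is None or packet.get(k) == v for k, v in entry["match"].items()):
            entry["counters"]["packets"] += 1
            entry["counters"]["bytes"] += packet["length"]
            return entry["actions"]
    return ["send_to_controller"]    # table miss: ask the controller

print(lookup({"ip_proto": 6, "tcp_dst": 80, "length": 1500}))  # ['output:2']
```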

The flow table is supplemented by values of the frame counters, byte counters, etc., giving a precise indication of the statistics of flows across all ports in the network. Therefore, the controller has a very precise and complete view of the whole network under its management. Among the numerous possible actions that can be transported by OpenFlow signaling, the most common ones include:

– sending a packet over a list of ports;

– adding/removing/modifying a VLAN tag;

– destroying a packet;


– sending a packet to the controller.

The first generation of OpenFlow was standardized by the ONF (Open Networking Foundation) in 2009. Since then, new versions have been released, essentially containing extensions, such as:

– version v1.1 in 2011, to take account of the MPLS and Carrier-Grade Ethernet Q-in-Q techniques, which we will examine a little later on. Multicasting and management are also taken into account, as is the possibility of having multiple routing tables;

– version v1.2, which introduces the possibility of controlling networks using the IPv6 protocol. This version also goes into much greater detail about flow statistics and introduces multiple controllers when the network can be split into several parts, with each one being handled by a separate controller;

– the next version, v1.3, which extends the range of protocols taken into account even further, including MAC-in-MAC from Carrier-Grade Ethernet and the possibility of having multiple communication channels between a controller and each of the OpenFlow switches in the network;

– version v1.4, which takes a new direction, with optical network control;

– version v1.5, which came out in 2015 and offers some extensions, especially on flow table processing and statistics feedback.

These different versions of the OpenFlow protocol are described in Figure 9.5.

Figure 9.5. The different ONF standards pertaining to the OpenFlow protocol

One of the difficulties is the matching process, which operates on a very wide field of 12 tuples. The flow table can thus become very large. In order to speed up processing, the flow table is split into 12 smaller tables, as shown in Figure 9.6.


Figure 9.6. Management of the flow table in OpenFlow

Group tables are added to these flow tables. They are located between two elementary flow tables, as shown in Figure 9.7. Thanks to these group tables, additional information can be added to the flow table in order to, for example, support multipoint delivery or destroy certain packets.

Figure 9.7. A group table integrated between two elementary flow tables


Many open source programs are OpenFlow-native or OpenFlow-compatible – to mention just a few:

– Indigo: an open source implementation which is executed on a physical machine and uses the characteristics of an ASIC to execute OpenFlow;

– LINC: an open source implementation that runs on Linux, Solaris, Windows, MacOS and FreeBSD;

– Pantou: OpenFlow for a wireless environment, OpenWRT;

– Of13softswitch: a software switch produced by Ericsson;

– XORPlus: an open source software switch;

– Open vSwitch: an open source software switch developed by Nicira and integrated into the Linux kernel (in versions 3.3 and later). Open vSwitch is widely used by numerous manufacturers in their architectures.

Similarly, many controllers are OpenFlow-compatible. This long list includes the following:

– NOX: the first OpenFlow controller;

– FlowVisor: a Java OpenFlow controller that behaves like a transparent proxy between an OpenFlow switch and multiple OpenFlow controllers;

– POX: a Python-oriented OpenFlow controller with a high-level SDN interface;

– FloodLight: a Java OpenFlow controller;

– OpenDaylight: an open source project for a modular platform which contains a controller at its center. The OpenDaylight controller is used as a primary controller by numerous manufacturers, even when those manufacturers have other proprietary solutions that they could use;

– ONOS (Open Network Operating System): one of the most recent open source controller projects. This is a distributed operating system which provides the eastbound–westbound interface. The OpenFlow nodes can then depend on two controllers: primary and secondary. The number of switches handled by ONOS can grow considerably, since control is shared among several controllers linked to one another;

– Tungsten Fabric: the open source version of OpenContrail.

In terms of controllers, in addition to those mentioned above, there are numerous developments under way, compatible with OpenFlow, but handling many other northbound and southbound interfaces that have been developed to satisfy manufacturers' needs.
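As an illustration of how a controller pushes entries into a switch's flow table over OpenFlow, here is a minimal component for POX, the Python controller cited above (a sketch only: the match values and the output port are purely illustrative):

```python
# Minimal POX component: proactively install one flow rule on each switch
# that connects. Run with: ./pox.py <this_module>
from pox.core import core
import pox.openflow.libopenflow_01 as of   # OpenFlow 1.0 messages

log = core.getLogger()

def _handle_ConnectionUp(event):
    # Build a flow_mod: match IPv4/TCP traffic to port 80, send it out port 2.
    msg = of.ofp_flow_mod()
    msg.match.dl_type = 0x0800        # IPv4
    msg.match.nw_proto = 6            # TCP
    msg.match.tp_dst = 80             # destination port 80
    msg.actions.append(of.ofp_action_output(port=2))
    event.connection.send(msg)
    log.info("Flow installed on switch %s", event.dpid)

def launch():
    # Called by POX at start-up; register for switch connection events.
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```

The same flow_mod logic could be expressed with any of the controllers listed above; only the programming interface offered to the application changes.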


OpenFlow is undoubtedly the best known of the protocols used for the southbound interface, but nothing is yet finalized as SDN advances. This signaling protocol has the advantage of being relatively simple and well suited to its intended use. Let us now examine a certain number of protocols for the Cloud and, more specifically, for interconnecting datacenters. For this purpose, it is common to use VLANs, but with extensions to avoid a significant limitation on the number of VLANs. One of the most widely used techniques is a direct extension of VLAN: VXLAN.

9.2. VXLAN

VXLAN (Virtual eXtensible LAN) is a solution capable of delivering communications in Clouds between datacenters. The technology is fairly similar to that used for VLANs and Carrier-Grade Ethernet extensions. It was originally developed by Cisco and VMware. The drawback of the basic solution of the VLAN IEEE 802.1Q standard is the limitation to 4096 networks. The VXLAN solution enables us to extend the basic technology in parallel to Carrier-Grade Ethernet. In the VXLAN solution, in order to move from one sub-network to another, we want to be able to go through absolutely any network, and therefore we need to go back up to the transport layer. For this reason, the Ethernet frame to be transported between datacenters is encapsulated in a UDP message which, itself, is encapsulated in an IP packet and then in an Ethernet frame. These successive encapsulations are shown in Figure 9.8.

Figure 9.8. VXLAN protocol


To the basic Ethernet frame, we add a 24-bit VXLAN identification zone, meaning we can have over 16 million virtual networks, and then a 64-bit UDP header, which is then encapsulated in an Ethernet frame whose source address, destination address and VLAN number fields are associated with the VXLAN. The basic Ethernet frame therefore forms the "data" part of the UDP message which, itself, is transported in an IP packet encapsulated in an Ethernet frame. This property enables us to create VLANs beyond an Ethernet domain. These encapsulations are represented in Figure 9.9. It should be noted that the overhead is significant, because the VXLAN encapsulation adds 36 bytes to the basic frame.

Figure 9.9. VXLAN encapsulation
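The layering just described can be reproduced with the scapy packet library (a sketch only: the addresses, ports and VNI value are illustrative; 4789 is the UDP port registered for VXLAN):

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: the original Ethernet frame exchanged between two VMs.
inner = Ether(src="00:00:00:00:00:01", dst="00:00:00:00:00:02") / \
        IP(src="10.0.0.1", dst="10.0.0.2") / UDP(sport=1234, dport=80)

# Outer headers added by the VXLAN tunnel endpoint: Ethernet / IP / UDP / VXLAN.
frame = Ether() / IP(src="192.0.2.1", dst="192.0.2.2") / \
        UDP(sport=49152, dport=4789) / VXLAN(vni=5000) / inner

frame.show()                    # displays the successive encapsulations
print(len(frame) - len(inner))  # size of the added outer headers, in bytes
```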

9.3. NVGRE

Another protocol, also handled by the IETF, is NVGRE (Network Virtualization using Generic Routing Encapsulation). It is backed by various industrial players, including Microsoft. This protocol, like the former, enables us to pass through an intermediary network between two datacenters, using an IP network. In order to preserve the value of the VLAN, we need to encapsulate the basic Ethernet frame in an IP packet, itself encapsulated in Ethernet frames, to pass through the IP network. The original Ethernet frame arrives on the other side of the network with its VLAN zone unmodified. Figure 9.10 shows the tunnel which is opened in the IP network to transport the original frame.

To interconnect datacenters, the classic techniques, such as MEF (Metro Ethernet Forum) Ethernet, EVLANs (extended VLANs) or indeed Carrier-Grade Ethernet, also offer entirely usable solutions. We will begin by looking at the MEF Ethernet solution.


Figure 9.10. NVGRE protocol

9.4. MEF Ethernet

MEF (Metro Ethernet Forum) networks have been on the scene for quite some time. Originally, their purpose was to interconnect company networks in a single, very high capacity hub. Hubs are also perfectly appropriate for interconnecting datacenters. MEF networks use switched Ethernet networks at 1, 10, 40 and 100 Gbps.

To achieve telephony over IP and services requiring strict time constraints, Ethernet operator networks (Carrier-Grade Ethernet) enable us to introduce priorities, with the difference that the IEEE 802.1p field, which serves for the introduction of these priorities, only has 3 bits. These 3 bits can only encode 8 different levels of priority, as opposed to the 14 levels defined by the IETF for DiffServ services. The levels of priority proposed by the MEF are as follows:

– 802.1p-6 DiffServ Expedited Forwarding;

– 802.1p-5/4/3 DiffServ Assured Forwarding;

– 802.1p-5, which presents the lowest loss;

– 802.1p-3, which presents the highest loss;

– 802.1p-2 DiffServ Best Effort.

In the Ethernet environment, flow control is generally a tricky problem. Various proposals have been made to improve it. In particular, back-pressure methods


involve the sending of control messages by overloaded switches, thus allowing the connected switches to cease their transmissions to the congested node for a period of time indicated in the control guidelines. The choice made by MEF is based on the DiffServ control scheme, where we find exactly the same parameters:

– CIR (Committed Information Rate);

– CBS (Committed Burst Size);

– PIR (Peak Information Rate);

– MBS (Maximum Burst Size).

This solution demonstrates the ubiquity of the world of Ethernet, which offers a large number of solutions capable of competing among themselves to satisfy the new generation necessary in the world of cloudified networks.

9.5. Carrier-Grade Ethernet

Ethernet was designed for computer applications, rather than for applications in the world of telecommunications, which requires particular qualities that we call "carrier grade". In order to conform to the requirements of the operators, therefore, the Ethernet environment has had to adapt. We now speak of Carrier-Grade Ethernet – i.e. a solution acceptable for telecom operators, with the control and management tools necessary in this case. This mutation essentially relates to switched Ethernet. Carrier-Grade Ethernet must possess functions that are found in telecommunication networks – most notably, the following:

– reliability, which means that there must be very few failures. The MTBF (Mean Time Between Failures) needs to be at least 50,000 hours;

– availability, which must achieve the classic values in telephony – i.e. a functioning state 99.999% of the time. This value is far from being achieved by conventional Ethernet networks, which actually only offer availability around 99.9% of the time;

– protection and restoration. When a failure occurs, the system must be able to recover within a maximum of 50 ms. This duration comes from telephony, where breakdowns lasting longer than this are unacceptable. SONET networks, for example, reach this value for their reconfiguration time. The solutions generally use redundancy, either total or partial, which involves engaging another path, arranged in advance;


– performance optimization by active or passive monitoring. The operations are not all entirely homogeneous when the packet flows vary. Thus, we need to adapt the flows so that they can circulate without a problem;

– the network must be able to accept SLAs (Service Level Agreements). The SLA is a typical notion in an operator network when a customer wants to negotiate a service guarantee. The SLA is determined by a technical part – the SLS (Service Level Specification) – and an administrative part whereby penalties are negotiated if the system does not satisfy the requirements;

– management is also an important function of operator networks. In particular, failure-detection systems and signaling protocols must be available in order for the network to work.

From a technical standpoint, Carrier-Grade Ethernet is an extension of VLAN technology. A VLAN is a local-area network in which the machines may be a very great distance apart. The objective is to make this network operate as though all of the points were geographically close to one another, so as to form a local-area network. A VLAN may contain several users. Each Ethernet frame is diffused to all the machines in the VLAN. The tables that determine the frame switching are fixed and may be viewed as switching tables in which the addresses of the recipients are references. When the VLAN only has two points, the VLAN defines a path. This is the vision that was employed for Carrier-Grade Ethernet: paths are formed by determining VLANs. The path is unique and simple if the VLAN only has two points; the VLAN introduces a multipoint if there are more than two points.

The problem with this solution stems from the limited size of the VLAN field, which is only defined by 12 bits. This is perfectly appropriate in the context of a company network with standard Ethernet switching, but becomes completely insufficient for Carrier-Grade Ethernet, which is aimed at operator networks. Therefore, the size of the VLAN field has had to be increased. Carrier-Grade Ethernet can be subdivided into various solutions for extending the VLAN zone, all of which are illustrated in Figure 9.11. The most typical solution consists of using the IEEE 802.1ad standard, which has a variety of names: PB (Provider Bridge) Ethernet, QiQ (Q-in-Q) or cascading VLAN. IEEE 802.1ah is also known as MiM (MAC-in-MAC) or PBB (Provider Backbone Bridge). The most advanced solution is called PBT (Provider Backbone Transport), or a PseudoWire (PW) over PBT. A PseudoWire is an MPLS-based tunnel whereby Ethernet frames can be transported over an IP network with QoS guarantees.


The solutions described in the preceding sections are illustrated in Figure 9.11.

Figure 9.11. The different versions of Carrier-Grade Ethernet. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
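Before examining each variant in detail, the sketch below illustrates the basic VLAN-within-VLAN stacking that all of them build on, using the scapy library (a sketch only: the VLAN numbers are illustrative, and 0x88a8 is the EtherType used for the outer service tag in provider bridging):

```python
from scapy.all import Ether, Dot1Q, IP

# Customer frame carrying its own VLAN tag (C-VID).
customer = Ether(src="00:00:00:00:00:01", dst="00:00:00:00:00:02") / \
           Dot1Q(vlan=42) / IP(dst="10.0.0.2")

# Provider Bridge (IEEE 802.1ad, Q-in-Q): the operator pushes its S-VID in
# front of the customer's C-VID, with 0x88a8 as the outer (service) EtherType.
qinq = Ether(src=customer.src, dst=customer.dst, type=0x88a8) / \
       Dot1Q(vlan=100) / Dot1Q(vlan=42) / IP(dst="10.0.0.2")

qinq.show()   # outer tag = S-VID 100, inner tag = C-VID 42
```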

Let us first examine PB (Provider Bridge) Ethernet technology. The ISP adds a VLAN number to that of the customer. Thus, there are two VLAN numbers: the C-VID (Customer-VLAN ID) and the S-VID (Service-VLAN ID). The tag introduced by the ISP enables us to extend the concept of the VLAN to the operator's network without destroying the user's VLAN. This solution enables us to define the transmission to be carried out in the operator's network. As the field of the Ethernet frame where the VLAN ID is indicated has a length of 12 bits, this enables us to define up to 4094 entities in the operator network. These entities may be services, tunnels or domains. However, while 4094 is a sufficient value for a company, it falls very short of an operator's needs. Implementations exploit a reference translation to extend the domain, but this increases the complexity of managing the whole network. Thus, this solution is not appropriate for large-scale networks.

The solution proposed by the group IEEE 802.1ah PBB (Provider Backbone Bridge) improves on the previous one by switching the frame traffic on the MAC address. This solution, said to be MIM (MAC-in-MAC), encapsulates the client's


MAC address in an operator MAC address. This means that the core operator only needs to know its own MAC addresses. In the PBB network, the matching of the user MAC addresses and the network MAC addresses is known only by the edge nodes, thus preventing a combinatorial explosion of the number of MAC addresses.

The third solution, called PBT (Provider Backbone Transport), is fairly similar to the MPLS technique, but provides the properties necessary for carrier grade, such as unavailability periods of less than 50 ms. In a manner of speaking, this is a secured MPLS tunnel. The PBT tunnel is created like an MPLS tunnel, with the references corresponding to the endpoints of the network. The client and server VLAN numbers are encapsulated in the MPLS tunnel, which may, itself, have differentiation into operator VLANs. Thus, the real reference is 24 + 48 bits, so 72 bits.

The last solution is that of the PW service (PseudoWire) offered by MPLS. In this case, the user and operator VLANs are encapsulated in an MPLS service tunnel, which may, itself, be encapsulated in an MPLS transport tunnel. This solution stems from the encapsulation of tunnels in MPLS.

Carrier-Grade Ethernet technology is of interest to numerous operators. Only time will tell which solution ultimately wins out. However, even at this stage, it can be conclusively stated that VLAN-within-VLAN encapsulation will be present in all such solutions.

9.6. TRILL (Transparent Interconnection of Lots of Links)

TRILL is an IETF standard implemented by nodes called RBridges (Routing Bridges) or TRILL switches. TRILL combines the strong points of bridges and routers. Indeed, TRILL determines level-2 routing using the state of the links. RBridges are compatible with the level-2 bridges defined in the IEEE 802.1 standard and could gradually come to replace them. RBridges are also compatible with IPv4 and IPv6 routers, and therefore perfectly compatible with the IP routers used today. In TRILL, routing done with the IS-IS protocol at level 2 between the RBridges replaces STP (Spanning Tree Protocol). Figure 9.12 shows the processes employed by TRILL.

Using level-2 routing between the Ethernet addresses means that we no longer have to configure level 3 and the associated IP addresses. The link state protocol used for routing may contain additional TLV (type, length, value) data. In order to prevent potential loops when rerouting, RBridges maintain a "hop count" – i.e. they count the number of nodes through which the message has passed. When this number reaches a maximum value, determined by the operator, the frame is destroyed.


Figure 9.12. TRILL protocol

When an Ethernet frame reaches the first RBridge in the network, called the IRB (Ingress RBridge), an extra header is added: the TRILL header. The frame is decapsulated by the ERB (Egress RBridge). In the TRILL part of the new frame, we find the address of the ERB, which makes this a routing technology, as it uses the recipient's address rather than a reference. These RBridge addresses occupy 2 bytes, as opposed to the conventional 6 bytes of an Ethernet address. Thus, addressing is done on the basis of addresses defined by the user him/herself. The added header contains 6 bytes in total, terminating with the ingress and egress addresses, preceded by the hop count and a flag. The Ethernet frames are recovered after decapsulation at their egress. Thus, we see a classic mode of forwarding – either using the VLAN number, with the Ethernet address serving as a reference, or decapsulating the Ethernet frame to retrieve the IP packet.

An interesting point is the possibility of transferring the frames of the same flow over several paths simultaneously: this is known as multipath, or ECMP (Equal Cost MultiPath), which enables us to detect the different paths of equal cost and direct the frames along these various paths.
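ECMP implementations typically keep all the frames of one flow on the same path by hashing the flow identifiers and using the result to choose among the equal-cost paths, as in this toy sketch (the field names and path labels are illustrative):

```python
import zlib

def ecmp_path(flow, paths):
    """Pick one of the equal-cost paths by hashing the flow identifiers.
    All frames of the same flow hash to the same path, which preserves
    in-order delivery while spreading different flows over all the paths."""
    key = "{src}-{dst}-{sport}-{dport}".format(**flow)
    return paths[zlib.crc32(key.encode()) % len(paths)]

paths = ["RB1-RB2-RB5", "RB1-RB3-RB5", "RB1-RB4-RB5"]   # equal-cost paths
flow = {"src": "00:aa", "dst": "00:bb", "sport": 1234, "dport": 80}
print(ecmp_path(flow, paths))   # always the same path for this flow
```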


9.7. LISP (Locator/Identifier Separation Protocol)

LISP was developed to facilitate the transport of virtual machines from one datacenter to another without changing the IP address of the virtual machine. In order for this to work, we need to separate the two interpretations of the IP address: the identifier of the user machine and the locator used for routing. If we wish to preserve the address of the virtual machine, it is necessary to differentiate these two values. This is what LISP does, but it is not the only protocol to do so: HIP (Host Identity Protocol) and SHIM6 (Level 3 Multihoming Shim Protocol for IPv6) also differentiate the two interpretations, but with mechanisms based on the destination machines.

With LISP, the routers take care of the association between the address of the destination machine – the EID (Endpoint ID) – and the address used for routing – the RLOC (Routing Locator). When a communication begins between a client machine and a server machine, the traffic is intercepted by an iTR (ingress Tunnel Router), which must determine the appropriate RLOC to access the server machine, whose address is the EID. The network egress to reach the EID is handled by the eTR (egress Tunnel Router). To perform this operation, the router consults a directory service – the EID-to-RLOC mapping. Once it has obtained the receiver's RLOC, the packet can be routed to that receiver. This process is invisible to both the client machine and the server. Figure 9.13 illustrates the main elements of LISP.

Figure 9.13. LISP protocol
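The division of labor between EIDs and RLOCs can be summed up in a toy model of an iTR (all addresses and mapping entries are illustrative; a real iTR queries the LISP mapping system and adds LISP/UDP headers rather than a Python dictionary):

```python
# Toy EID-to-RLOC mapping, as it might be cached in an ingress Tunnel Router.
eid_to_rloc = {
    "10.1.0.0/16": "203.0.113.10",   # EID prefix -> RLOC of the egress TR
    "10.2.0.0/16": "198.51.100.20",
}

def itr_encapsulate(inner_packet, dst_eid):
    """Look up the RLOC for the destination EID and wrap the packet in an
    outer header addressed to that RLOC (IP-in-IP, schematically)."""
    for prefix, rloc in eid_to_rloc.items():
        net, bits = prefix.split("/")
        # Crude prefix match on byte boundaries, enough for a sketch.
        head = ".".join(net.split(".")[: int(bits) // 8]) + "."
        if dst_eid.startswith(head):
            return {"outer_dst": rloc, "payload": inner_packet}
    raise LookupError("no RLOC known for EID %s" % dst_eid)

print(itr_encapsulate({"dst": "10.2.3.4", "data": b"hello"}, "10.2.3.4"))
```

When the virtual machine migrates, only the mapping entry changes; the EID seen by its correspondents stays the same.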


It should be noted that LISP enables us to use addresses other than level-3 IPv4 or IPv6 addresses, such as a GPS location or a MAC address.

9.8. Conclusion

The networks found in Cloud architectures are essentially Ethernet-based and generally use a level-2 architecture. In other words, less use is made of IP, which is far more complex and a good deal slower, with the exception of LISP, which is favored by numerous manufacturers and operators because of its appropriateness for the Cloud environment. Thanks to the considerable success of level 2, working group 802 at the IEEE is extending its work in other directions, such as wireless networks, for which wireless Ethernet has become the main standard for access networks and home networks. In Chapter 11, we will take a look at the new standards that are emerging in this domain.

10 Mobile Cloud Networking, the Mobile Cloud and Mobility Control

Mobile Cloud Networking is a solution that has become very important in the development of high-data-rate mobile networks. The basic element is the mobile terminal, which moves around and requests services from a Cloud which, for its part, is fixed. Conversely, the mobile Cloud refers to technologies where the Cloud itself is mobile and moves around in order to stay connected to its users: it moves around with the client. The mobile Cloud may be virtual, as well as physical. In the first section, we will begin by examining Mobile Cloud Networking, and in the second section, we will look at the mobile Cloud, before concluding this chapter with a discussion of the means of controlling the mobility of terminals and users.

10.1. Mobile Cloud Networking

The exact definition of Mobile Cloud Networking is very controversial, because there is no real definition and no clear architecture. There are two predominant orientations in the field of Mobile Cloud Networking:

– an application orientation, which means that a mobile device with limited resources is able to handle applications which require intensive computations or more memory than the terminal has;

– a network orientation, which involves the optimization of algorithms for the control of mobile services.

In the context of application orientation, we can cite various applications which correspond to this framework, such as Rich Mobile applications, mobile Cloud gaming or indeed MGaaS (Mobile Game as a Service), augmented reality or mobile


applications involving intensive computations, such as OCR (Optical Character Recognition) or natural language processing. In the context of network orientation, we might find, for example, firewalls for mobile devices, handover control or the management of mobile terminal attachment.

The architectures for Mobile Cloud Networking cover the different classifications mentioned above: Cloud networking for mobile terminals, local Clouds, virtual Clouds or indeed Cloudlets, which use small datacenters such as Skin or Fog datacenters. The first architecture is shown in Figure 10.1.

Figure 10.1. An architecture for Mobile Cloud Networking

This architecture shows a mobile terminal – a smartphone – which uses a central Cloud to handle a certain number of applications. These applications may be CPU-intensive, have a very high memory requirement or use specific resources which cannot be accommodated on the smartphone, such as a "Big Data" repository. The communications between the smartphone and the Cloud must remain reasonable so that the wireless interface can handle them without any problems. In summary, the terminal is a lightweight machine which calls on the Cloud to carry out resource-hungry processes.

Figure 10.2 shows the second architecture, which is fairly similar to the previous one, but where the Cloud – meaning the datacenter – is no longer central but local, or at least not too far from the user. This is a Skin or Fog datacenter. The resources may be hosted on other mobile devices that are connected to the Cloud. However, most of the time, these machines are not powerful enough to support heavy applications. The mobile devices may themselves form a Cloud. In other words, neighboring mobile terminals, if they are inactive, can be used for computation,


storage or even networking in this environment. We will consider these Mobile Cloud scenarios in the next section.

Figure 10.2. A local architecture for Mobile Cloud Networking

This Mobile Cloud Networking is built from connected machines and possibly a local datacenter, which may be very local with Skin or Fog datacenters. Overall, these small datacenters can be situated at the Home Gateway level (Internet Box), the access router level, the DSLAM level, or directly behind the antenna, like an MEC datacenter.

Figure 10.3 presents a third case of Mobile Cloud Networking with a virtual Cloud. In this solution, the client with his/her mobile device wishes to connect to a whole set of Clouds to cover all the services he/she requires. As a single Cloud is usually insufficient to contain all the services necessary for a user, the user needs to request connection to several Cloud providers to obtain the services required. As it may be tricky and complex to know all the solutions, one possibility is to call on an intermediary which has extensive knowledge of all the Clouds. This intermediary is able, at any given time, to choose the best possible Cloud provider for a given application. The Cloud is virtual because it is represented by an intermediary virtual machine capable of connecting the client to the best Cloud to deliver the required application. Another name given to this solution is "Sky", and the intermediary is a Sky provider. One of the difficulties for the Sky provider is intermediation with Clouds which may be very different to one another, but which must not appear any differently to the end user.


Figure 10.3. A third mobile Cloud architecture

Finally, the fourth architecture encountered under the auspices of the mobile Cloud also pertains to the use of a small Cloud – a Cloudlet – corresponding to a small datacenter which moves with the client. The datacenters on which the Cloud is deployed do not move: the client's virtual machines do and, most importantly, so does the connection towards the datacenters hosting those virtual machines. This gives the impression that the Cloud is moving along with the client. With each handover, the mobile terminal attaches to a new local datacenter. The client's virtual machine migrates and is executed in this new datacenter. This solution is shown in Figure 10.4.

Figure 10.4. A fourth architecture for mobile Cloud


A Cloudlet is a concept of a small Cloud situated in a zone with very high demand. The Mobile Cloud Networking provider assigns the most appropriate Cloudlet to quickly respond to the service request of the moving client. In summary, Mobile Cloud Networking architectures are based on a group of datacenters on which virtual machines move, thus shaping mobile Clouds in order to enhance different criteria:

– performance and reliability of applications for mobiles;

– performance of control applications for mobiles;

– minimization of energy consumption by terminals and datacenters;

– availability;

– high security:

- m-commerce,

- m-Cloud access,

- m-payment.

The architectural differences we have described can be combined to optimize the above criteria. Once again, we see the hierarchy of datacenters ranging from enormous datacenters to the minuscule skin datacenters. It is not easy to optimize these environments, even though the architecture is sometimes relatively well defined. Often, it appears that the system of the future will be Clouds that form simply and are just as easily transformed. This system, known by the name "mobile Cloud", is beginning to gain momentum; we will now examine it.

10.2. Mobile Cloud

The mobile Cloud is a set of small datacenters which, once they are connected, form a Cloudlet or a Cloud, if it is bigger. The difficulty lies in the mobility of such Cloudlets and the diversity of forms of mobility; the datacenters can attach to create a Cloudlet and detach from it. If all the little datacenters move simultaneously, they create a VANET (Vehicular Ad hoc NETwork) which supports the mobile Cloud. On the other hand, if the mobiles which transport the little datacenters move independently of one another, with no coordination between them, the mobile Cloud has greater difficulty in forming and evolving on the basis of the movements. Figure 10.5 shows an example of a mobile Cloud, wherein each vehicle has its own skin datacenter.


Figure 10.5. Example of the mobile Cloud

Communications between datacenters take place by way of a mesh or ad hoc network. An ad hoc network occurs in the case where the datacenters are hosted on the client’s terminal device. In the case of a mesh network, there is a specific network which forms the link between the network nodes on which clients are connected. Skin datacenters are then integrated in the mesh network’s nodes. This mesh network has nodes with two communication ports: one to connect with the clients, and the other with the nodes of the mesh network. In the case of a mesh network, the vehicle has a box which contains the skin datacenter, supporting inter-vehicle communication. The devices in the vehicle – objects, sensors or any other piece of equipment – are connected to the mesh box. As we have seen, the box has two communication ports: one for connecting devices inside or outside the vehicle, and a second for communication between different boxes. The second solution – ad hoc networks – can be used to form direct connections between the objects or other equipment. This solution is not advocated by operators, because the user’s device needs to include networking equipment to perform communication. In addition, user applications, running on the same machine, may interfere with this equipment. An example of the mobile Cloud is shown in Figure 10.6, where all the vehicles are traveling at approximately the same speed, which enables the datacenters to simply connect with one another at acceptable speeds to form a wider Cloud. In fact, there are two mobile Clouds: one for each direction of travel of the vehicles.


Figure 10.6. Two mobile Clouds

A large mobile Cloud can be established when a traffic jam occurs, as shown in Figure 10.7. When hundreds of vehicles are in the same place, the overall power may become quite substantial. The vehicles must be connected in a mesh network with specific protocols, such as OLSR, to allow for data transmission between the vehicles.

10.3. Mobility control

Mobility is difficult to manage because the decisions essentially need to be made in real time, and the motion means that it is impossible to use powerful entities such as datacenters, which are generally situated too far from the periphery, except perhaps if we consider new technologies such as C-RAN, where the local loop needs to be rethought and replaced by an optical network. Therefore, with a few exceptions, we must move the decisions to the endpoints, and this delocalization is done by local controllers. We again see the issue of SDN on the periphery, which we have already touched on many times. The OpenFlow protocol may play an important role in connecting the client and the controller.


Figure 10.7. A large mobile Cloud

Two types of controller may be found: low-level controllers and high-level controllers. The former handle the lower layers: frequency regulation, power regulation and adaptations of the physical and link layers – i.e. of levels 1 and 2. These controllers can also manage certain types of security applications, such as detecting rogue access points. High-level controllers are associated with SDN controllers. They are capable of handling the properties shown in Figure 10.8.

Figure 10.8. Properties of mobile device controllers


These properties include the authentication of users by any given technique, with the harvesting of a maximal number of characteristics of the user and his/her applications through the northbound interface. Gathering such information allows us to set up application profiles so that the SDN controller can exercise precise control. This may be local control at the level of a Fog server, or more central control by an MEC server. Application profiles are sometimes completed with information coming from sources other than the network or the northbound interface.

The second major property is to determine the user profile, either by harvesting information during authentication or by associating the user with a pre-defined profile. This profile enables us to determine the applications that can be run and thus facilitate access control depending on the user's demands. For example, the user profile indicates whether or not he/she has access to the Internet solely via HTTP, has the right to use VoIP services, send e-mail, use a local printer, etc.

The third property of controllers pertains to the management and control of flows by using a protocol such as OpenFlow, or another, more conventional, southbound interface protocol such as NETCONF. The controller must also be capable of satisfying the legal obligations imposed by States, such as storing the logs of clients who connect to the controller.

The last two sets of properties pertain to the management of the connection, firstly by providing the elements necessary to perform a "zero configuration": the controller is capable of standing in for the mobile terminal to provide particular properties. For example, the controller can intercept a user's electronic messages and send them using its own SMTP mail server; indeed, the network may not accept the outgoing messages of a user who is a guest of the company. Another example is the use of a printer from a mobile phone: the controller can load a virtual driver and enable the mobile to print locally. Finally, the controller can manage solutions to enable the user to recover passwords to connect to the access points.

If we examine authentication more specifically, the most conventional method is a window in which the client enters his/her username and password, using the IEEE 802.1x standard, for example. Another solution is access by authentication on another site, such as Google or Facebook, or on a payment server. Once the external authentication has been performed, a specific protocol such as OAuth can send the authentication back to the controller. One final solution may be to ask the user to fill out an information page in order to gain access. It should be noted that in the last two scenarios, the client must first be able to connect to the network without authentication. Thus, the access point must allow him/her to pass the first login request on a specific, provisional authorization of the


controller. This solution is often an option in authentication systems. However, it is somewhat risky, because once he/she is authenticated on an external server, the client can continue to navigate without returning for a firm authentication on the local controller.

Figure 10.9 presents the two solutions for application controllers: the first controller is hardware, and the second is virtual. The advantage of the virtual controller is that it uses exactly the amount of resources needed at any given time – i.e. a great deal of resources at peak hours and practically none during idle periods.

Figure 10.9. Hardware and software solutions for a controller

Figure 10.10 shows the different ways in which it is possible to obtain Internet access authorization using an access point and an access-point controller.

Case 1 corresponds to client access to an external authentication site. He/she obtains provisional authorization to pass the access point and seek authentication or recognition on an external site, which may be located anywhere on the Internet. Once this authentication has been performed, the result is transmitted back to the controller, which sends the order to the access point to allow the client to pass.

Case 2 deals with a solution using an SDN controller. When a flow comes in, the SDN access point sends a request to the SDN controller, which begins by authenticating the client on a captive portal or using another authentication solution. Once authentication has been completed, the controller grants authorization to the access point, specifying which flow is to be allowed through. The client can then access the Internet.


Case 3 concerns a local hardware controller which, after authentication, allows the client to communicate with the Internet via the access point. The IEEE 802.1x protocol can easily be used in this situation. Case 4 represents the solution where we need to access the central controller which, after authentication and consultation of the profile, allows the client to access the Internet.

Figure 10.10. Access authorization cases by a controller

10.4. Mobility protocols

The convergence of landline and mobile telephones means that any application installed on a server can be used from a fixed or mobile device. The first condition to achieve this convergence is that a mobile be able to remain connected to the network while moving around, without the functioning of the application being interrupted. Changes of cell (known as handovers) must take place transparently and seamlessly, with the same quality of service being maintained if the application requires it.


We will begin by discussing macromobility, which involves the management of the mobility of terminals between two different IP domains, and micromobility, which relates to the change of attachment point within the same IP domain. In the first case, the main protocol is Mobile IP; in the second, various protocols are in competition with one another. We will then go on to examine multihoming – i.e. the case where a terminal is simultaneously connected to several networks.

Two main types of mobility control solutions in IP networks have been put in place, depending on the mobile terminal's movement: support for macromobility and support for micromobility. In the former case, the mobile changes IP network, whereas in the latter, it stays within the same network but changes its attachment antenna. Macromobility is handled by the Mobile IP protocol. For micromobility, numerous protocols have been defined. We will examine the main ones in this chapter.

10.4.1. Mobile IP or MIP

IP is increasingly being presented as a solution to the problems posed by mobile users. The Mobile IP protocol (MIP) can be used in IPv4, but the potential lack of addresses complicates the management of communication with the mobile. IPv6 is preferable, because of the large number of available addresses, which means that temporary addresses can be assigned to the stations as they move around. The operation of Mobile IP is as follows:

– a station has a home address, with an agent attached to it, whose role is to monitor the match between the home address and the temporary address;

– when a call comes in for the mobile station, the request is sent to the database in which the home address is held;

– because of the agent, it is possible to match the home address to the temporary address and send the connection request to the mobile.

This solution is similar to that used in mobile networks. The terminology employed in Mobile IP is as follows:

– Mobile Node: terminal or router which changes its attachment point from one sub-network to another;

– Home Agent: router of the sub-network where the mobile node is registered;


– Foreign Agent: router of the sub-network being visited by the mobile node.

The Mobile IP environment is formed of three relatively separate functions:

– agent discovery: when the mobile arrives in a sub-network, it searches for an agent capable of handling it;

– registration: when a mobile is outside of its home domain, it registers its new address (Care-of-Address) with its Home Agent. Depending on the technique used, registration may take place directly with the Home Agent, or be done through the Foreign Agent;

– tunneling: when a mobile is outside of its home sub-network, the packets need to be delivered to it by way of the technique of tunneling, which links the Home Agent to the Care-of-Address.

Figures 10.11 and 10.12 show the arrangement of communication in Mobile IP for IPv4 and IPv6.

10.4.2. Solutions for micromobility

Various protocols have been put forward by the IETF for the management of micromobility, in order to improve on Mobile IP. Indeed, micromobility solutions are primarily intended to reduce the number of signaling messages and the handover latency caused by the mechanisms of Mobile IP. Generally, we distinguish two categories of approaches for micromobility: those that are based on tunnels and those that use forwarding tables.

Figure 10.11. Mobile IP for IPv4


Figure 10.12. Mobile IP for IPv6
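To make the Home Agent's role more concrete, here is a minimal sketch of the binding between home address and care-of-address described above. It is only an illustration of the principle, not an implementation of the Mobile IP specification; the class and method names are ours.

```python
# Simplified model of a Mobile IP Home Agent binding cache.
# Illustrative only: names and structure are ours, not those of the standard.

class HomeAgent:
    def __init__(self):
        self.bindings = {}  # home address -> current care-of-address

    def register(self, home_address: str, care_of_address: str) -> None:
        """Registration: the mobile node reports its temporary address."""
        self.bindings[home_address] = care_of_address

    def route(self, destination: str) -> str:
        """Tunneling decision: packets for a registered home address are
        encapsulated and forwarded to the care-of-address; other packets
        are delivered normally."""
        coa = self.bindings.get(destination)
        if coa is not None:
            return f"encapsulate and tunnel to {coa}"
        return "deliver on the home link"


ha = HomeAgent()
ha.register("2001:db8:home::10", "2001:db8:visited::42")  # after a handover
print(ha.route("2001:db8:home::10"))   # tunneled to the care-of-address
print(ha.route("2001:db8:home::99"))   # not mobile: normal delivery
```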

10.4.2. Solutions for micromobility

Various protocols have been put forward by the IETF for the management of micromobility, in order to improve Mobile IP. Indeed, micromobility solutions are primarily intended to reduce the signaling messages and the handover latency caused by the mechanisms of Mobile IP. Generally, we distinguish two categories of approaches for micromobility: those that are based on tunnels and those that use forwarding tables.

Tunnel-based approaches use local and hierarchical registrations. For this purpose, they use IDMP (Intra-Domain Mobility Management Protocol), HMIPv6 (Hierarchical MIPv6) and FMIPv6 (Fast MIPv6). HMIPv6 is a hierarchical arrangement which differentiates local mobility from global mobility. Its advantages are the improvement of handover performance, because local handovers are dealt with locally, and of the transfer rate, along with the minimization of the packet losses which may arise during transitions. HMIPv6 considerably reduces the signaling load for managing mobility on the Internet. This is because the signaling messages do not travel through the whole of the Internet, but instead remain confined to the site. The aim of FMIPv6 is to reduce handover latency by remedying the shortcomings of MIPv6 in terms of the time taken to detect the mobile's movement and register the new care-of-address, as well as by pre-emptive and tunnel-based handovers.

Approaches based on forwarding tables consist of keeping host-specific routes in the routers to transfer messages. Cellular IP and HAWAII (Handoff-Aware Wireless Access Internet Infrastructure) are two examples of micromobility protocols using forwarding tables.

10.5. Multihoming

Multihoming involves a terminal connecting to several networks simultaneously in order to optimize communications, by selecting the best network or by allowing an application to take several paths. Thus, the packets forming the same message may take several different paths simultaneously. In the slightly longer term, a terminal will be connected to several networks, and each application will be able to choose the transport network (or networks) to be used.


The term "multihoming" refers to the multiple "homes" that a terminal may have, with each "home" assigning its own IP address to the terminal. Thus, the terminal has to manage the different IP addresses assigned to it. Multihoming enables us to have several simultaneous connections to different access networks. A multihomed terminal is defined as a terminal that can be reached via several IP addresses. We can distinguish three scenarios:

– a single interface and several IP addresses. This is the case for a terminal in a multihomed site. Each IP address of the terminal belongs to a different ISP. The goal of this configuration is to improve reliability and optimize performance by using traffic engineering;
– several interfaces, with one IP address per interface. The terminal has several physical interfaces, and each interface has only one IP address. Such is the case with a multi-interface mobile terminal. The interfaces may use different access technologies and be connected to different networks. Each interface obtains an IP address assigned by its own network;
– several interfaces, with one or more IP addresses per interface. This is the most commonplace situation. The terminal has several physical interfaces, each of which can receive one or more IP addresses, attributed by the attached networks.

There are two main approaches to multihoming: one using the routing protocol BGP (Border Gateway Protocol) and one using the Network Address Translation mechanism. These solutions are based on network routing and address management techniques to achieve high connection availability and improved performance. The earliest multihoming protocols were at the transport level, with SCTP (Stream Control Transmission Protocol) and its extensions. At the network level, SHIM6 (Level 3 Multihoming Shim Protocol for IPv6) and HIP (Host Identity Protocol) are the two protocols supporting multihoming that have been standardized by the IETF.

Another way of looking at multihoming protocols for a mobile terminal is to develop a multipath transmission protocol on the basis of a single-path transmission protocol. Such is the case with the registration of multiple addresses in a mobility situation, known as multiple Care-of-Addresses (mCoA), and with MPTCP (Multipath TCP). The mCoA protocol is an evolution of Mobile IPv6, which allows multiple temporary addresses (CoAs) to be registered for the same mobile terminal. This solution defines a registration ID whose purpose is to distinguish the CoAs of the same terminal. MPTCP is a recent extension of TCP to take care of data transmission along several paths simultaneously. An MPTCP connection is made up of several TCP connections.


The objectives of a multihoming protocol for a multi-interface mobile terminal are as follows: – the available paths must be simultaneously used to increase the bandwidth; – the load-sharing algorithm that distributes the data along multiple paths must ensure that the total bandwidth is at least equal to that obtained on the best path; – the use of several paths must enhance the reliability of the communication; – the multihoming protocol must support mobility by dealing with the problems of horizontal and vertical handovers; – the interfaces of different access technologies should provide users with better radio coverage. 10.6. Network-level multihoming In this section, we present three protocols that take care of multihoming at the network level: HIP (Host Identity Protocol), SHIM6 (Level 3 Multihoming Shim Protocol for IPv6) and mCoA (Multiple Care-of-Addresses). Before going into detail about HIP and SHIM6, let us introduce the main principles of these two protocols. In an IP network, the IP address is used both as the terminal’s identifier and locator. We have already seen this property in the case of the LISP protocol. This dual function of the IP address poses problems in a multihoming environment. Indeed, as each terminal has several IP addresses associated with its interfaces, a terminal is represented by multiple IDs. In mobility, when the terminal moves outside of the coverage range of a given technology, it switches to the interface of another technology. Communication is then interrupted because the upper layers think that the different IDs represent different terminals. The simultaneous transmission of data along several paths is impossible, because those paths are not considered to belong to the same terminal. With HIP and SHIM6, we separate the two functions of the IP address in order to support multihoming. An intermediary layer is added between the network layer and the transport layer. This layer enables us to differentiate the node’s ID and its address. The ID no longer depends on the node’s attachment point. Although the node may have several addresses, it is introduced by a unique identifier in the upper layers. When one of the node’s addresses is no longer valid, the ID is associated with another address. The intermediary layer handles the link between the ID and the address. Therefore, the change of address becomes transparent to the upper layers, which associate only with the ID.
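The principle shared by these protocols, a stable identifier mapped by an intermediary layer onto whichever address is currently valid, can be sketched in a few lines. This is a toy model under our own naming; it does not reproduce the message formats of HIP or SHIM6.

```python
# Toy identifier/locator shim: upper layers see only the stable identifier,
# while the shim maps it to whichever locator (IP address) is currently valid.

class IdLocatorShim:
    def __init__(self):
        self.locators = {}  # identifier -> list of addresses, preferred first

    def attach(self, identifier: str, addresses: list[str]) -> None:
        self.locators[identifier] = list(addresses)

    def current_locator(self, identifier: str) -> str:
        return self.locators[identifier][0]

    def locator_failed(self, identifier: str, address: str) -> None:
        """When an address becomes invalid, fall back to the next one;
        the identifier seen by the upper layers does not change."""
        addrs = self.locators[identifier]
        if address in addrs:
            addrs.remove(address)


shim = IdLocatorShim()
shim.attach("terminal-A", ["2001:db8:wifi::a", "2001:db8:lte::a"])
print(shim.current_locator("terminal-A"))      # Wi-Fi address
shim.locator_failed("terminal-A", "2001:db8:wifi::a")
print(shim.current_locator("terminal-A"))      # LTE address, same identifier
```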


10.6.1. HIP (Host Identity Protocol)

HIP is an approach that totally separates the identifying function from the localizing function of the IP address. HIP assigns new cryptographic "Host Identities" (HIs) to the terminals. In the HIP architecture, the HI is the terminal's ID presented to the upper layers, whereas the IP address only acts as a topological address. As shown in Figure 10.13, a new layer, called the HIP layer, is added between the network and transport layers. This HIP layer handles the matching of the terminal's HI and its active IP address. When a terminal's IP address changes because of mobility or multihoming, the HI remains static in order to support that mobility and multihoming.

The HI is the public key in a pair of asymmetric keys generated by the terminal. However, the HI is not directly used in the protocol, because of its variable length. There are two formats of HI, obtained by cryptographic hashing of the public key. One representation of the HI, with 32 bits, called the LSI (Local Scope Identifier), is used for IPv4. Another representation of the HI, with 128 bits, called the HIT (Host Identity Tag), is used for IPv6.

Before communicating with one another, the terminals use the HIP base exchange protocol to establish the HIP association. The HIP base exchange is a cryptographic protocol which enables the terminals to authenticate one another. As shown in Figure 10.14, the base exchange comprises four messages that contain the communication context data, such as the HITs, the public key, the security parameters of IPsec, etc. I and R, respectively, represent the initiator and the responder of the exchange. In order to protect the association against Denial-of-Service (DoS) attacks, a puzzle is added into messages R1 and I2. As the HI replaces the identifying function of the IP address, the HI and one of the IP addresses available for a terminal must be published in the DNS (Domain Name System).

Figure 10.13. HIP architecture
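The idea of deriving short, fixed-length tags (LSI, HIT) from the Host Identity by cryptographic hashing can be illustrated as follows. This sketch only conveys the principle: the real HIT follows an ORCHID construction whose exact prefix and context are not reproduced here, and the key material used below is a placeholder.

```python
# Simplified illustration of deriving short identifiers from a Host Identity.
# Real HIP uses an ORCHID construction for the 128-bit HIT; this only shows
# the idea of "identifier = hash of a public key".
import hashlib

def derive_tags(host_identity_public_key: bytes) -> tuple[int, int]:
    digest = hashlib.sha256(host_identity_public_key).digest()
    lsi = int.from_bytes(digest[:4], "big")    # 32-bit tag, in the spirit of the LSI
    hit = int.from_bytes(digest[:16], "big")   # 128-bit tag, in the spirit of the HIT
    return lsi, hit

lsi, hit = derive_tags(b"example public key bytes")
print(f"LSI-like tag: {lsi:#010x}")
print(f"HIT-like tag: {hit:#034x}")
```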


Figure 10.14. Base exchange procedure for HIP

Figure 10.15 shows the base exchange procedure, involving the DNS server. Before beginning the HIP base exchange, the initiator interrogates the DNS, using the responder's DNS name, to obtain its HI and IP address. After having received the necessary data, the initiator carries out the base exchange by means of the four messages I1, R1, I2 and R2.

Figure 10.15. HIP base exchange with DNS

10.6.2. SHIM6 (Level 3 Multihoming Shim Protocol for IPv6)

Like the other network-level multihoming protocols, SHIM6 supports the separation of the identifying and localizing roles of the IP address. SHIM6 employs an intermediary SHIM sub-layer within the IP layer. Unlike HIP, the SHIM6 approach does not introduce a new namespace for the identifiers. The terminal considers one of its IP addresses as its identifier. Known as


the ULID (Upper-Layer Identifier), it is visible to the upper layers. Once the terminal’s ULID is chosen, it cannot be modified for the duration of the communication session. This mechanism ensures a transparent operation of all upper-layer protocols in a multihoming environment, because these protocols always perceive a stable IPv6 address. The SHIM sub-layer handles the matching of the ULID and the terminal’s active address during the transmission. As shown in Figure 10.16, terminal A has two addresses – IP1A and IP2A – and so does terminal B (IP1B, IP2B). The stable source and destination addresses perceived by the upper layers are, respectively, IP1A and IP1B. Suppose that the IP1A and IP1B addresses are no longer valid for the two terminals. If terminal A continues to send packets to terminal B, the application of terminal A indicates IP1A and IP1B as the packet’s source and destination addresses, because it is not aware of the change of addresses. When the SHIM sub-layer of terminal A receives that packet, it converts the indicated ULIDs into the real addresses of the two terminals. Then, the packet is sent over the address IP2A of terminal A to the address IP2B of terminal B. When the SHIM sub-layer of terminal B receives the packet, it translates the addresses back into ULIDs (IP1A for the source and IP1B for the destination) before sending data up to the next layer. Due to the matching between the ULID and the address, performed by the SHIM sub-layer, the changes of address are totally transparent for the upper-layer protocols.

Figure 10.16. Example of matching in SHIM6
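The address rewriting performed by the SHIM sub-layer in this example can be sketched as a pair of translations on the send and receive paths. This is a toy model: the labels IP1A, IP2A, IP1B and IP2B are those of Figure 10.16, and the packet representation is ours.

```python
# Toy SHIM6-style translation: upper layers keep using the ULIDs (IP1A, IP1B)
# while the shim swaps in the currently valid locators (IP2A, IP2B).

ulid_to_locator = {"IP1A": "IP2A", "IP1B": "IP2B"}   # after both initial addresses fail
locator_to_ulid = {v: k for k, v in ulid_to_locator.items()}

def shim_send(packet: dict) -> dict:
    """Outgoing packet: replace ULIDs with the real locators."""
    return {**packet,
            "src": ulid_to_locator.get(packet["src"], packet["src"]),
            "dst": ulid_to_locator.get(packet["dst"], packet["dst"])}

def shim_receive(packet: dict) -> dict:
    """Incoming packet: restore the ULIDs before passing data up."""
    return {**packet,
            "src": locator_to_ulid.get(packet["src"], packet["src"]),
            "dst": locator_to_ulid.get(packet["dst"], packet["dst"])}

pkt = {"src": "IP1A", "dst": "IP1B", "payload": "hello"}
on_the_wire = shim_send(pkt)          # sent from IP2A to IP2B
print(on_the_wire)
print(shim_receive(on_the_wire))      # upper layers still see IP1A -> IP1B
```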

10.6.3. mCoA (Multiple Care-of-Addresses) in Mobile IPv6

MIP6 (Mobile IPv6) is the protocol used to manage mobility for IPv6. In the IP mobility architecture, each mobile node has a static HoA (Home Address) which is attributed by its home network. When the mobile node enters a visited network, it


obtains a new, temporary IP address known as the CoA (Care-of-Address). It informs its home agent (HA), which is in the home network. The HA stores, in its cache (Binding Cache), the match between the CoA and the HoA of the mobile node. Then, if there is any traffic destined for the HoA, it is intercepted by the HA and redirected to the mobile node’s true address using an IP tunnel. Consequently, the mobility of the mobile node becomes transparent to the corresponding nodes. The principle of Mobile IPv6 is shown in Figure 10.17.

Figure 10.17. Mobile IPv6

The limitations of Mobile IPv6 stem from the fact that the client can only register one CoA with his/her HA. Indeed, when the mobile node moves to a new network, it obtains a new CoA. The mobile node must inform its HA of this change in order for the HA to update its cache. The HA then replaces the old CoA with the new one. Consequently, at any given time, there can only be one association between a CoA and HoA. With a view to being able to support multihoming, an extension to facilitate the registration of multiple CoAs was defined: a mobile node receiving several CoAs associated with its interfaces can register the same number of matches between its CoAs and its unique HoA with its HA. With this goal in mind, a binding identifier (BID) in the cache is used. Every instance of a registration (binding), which is equivalent to a match between a CoA and an HoA, is identified by a unique BID. By using one or more Binding Update messages, the mobile node informs its HA of its CoA/HoA matches and the corresponding BIDs. In its binding cache, the HA stores all the matches bound by the mobile node, as shown in Figure 10.18. Multipath transmission is managed by the mobile node. For outgoing data flows, the mobile node sends the traffic in accordance with its flow distribution algorithm. For incoming data flows, the mobile node defines the traffic preferences and informs the


HA of those preferences. The HA then distributes the traffic destined for that mobile node via the different CoAs in keeping with the stated preferences.

Figure 10.18. Registration of several CoAs in Mobile IPv6
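The binding cache extended with BIDs can be modeled very simply, as in the sketch below. The field and method names are ours, not those of the specification.

```python
# Simplified mCoA binding cache: one HoA may now map to several CoAs,
# each registration being distinguished by a Binding ID (BID).

class BindingCache:
    def __init__(self):
        self.cache = {}  # HoA -> {BID: CoA}

    def binding_update(self, hoa: str, bid: int, coa: str) -> None:
        self.cache.setdefault(hoa, {})[bid] = coa

    def care_of_addresses(self, hoa: str) -> dict[int, str]:
        return self.cache.get(hoa, {})


ha_cache = BindingCache()
ha_cache.binding_update("hoa::1", bid=1, coa="coa-wifi::1")   # Wi-Fi interface
ha_cache.binding_update("hoa::1", bid=2, coa="coa-lte::1")    # LTE interface
print(ha_cache.care_of_addresses("hoa::1"))
# {1: 'coa-wifi::1', 2: 'coa-lte::1'} - the HA can now split the traffic
# between the two CoAs according to the preferences stated by the mobile node.
```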

10.7. Transport-level multihoming

In the classic transport protocols TCP and UDP, each terminal uses a single address for a communication. If the terminal's address changes, the connection is interrupted. With the evolution of access technology, it is quite common for a terminal to have several IP addresses associated with its interfaces. The solutions put forward at the transport level involve each terminal maintaining a list of IP addresses. One address can be replaced by another without the communication being broken. Transport-level multihoming protocols include extensions of TCP, such as MPTCP (Multipath TCP), Multihomed TCP and TCP-MH, as well as SCTP (Stream Control Transmission Protocol). In this section, we will discuss the two most widely used standardized protocols: SCTP and MPTCP.

10.7.1. SCTP (Stream Control Transmission Protocol)

Stream Control Transmission Protocol (SCTP) is a transport protocol which was defined by the IETF in 2000 and updated in 2007. Originally, this protocol was developed to transport telephone signaling over IP networks. As a transport protocol, SCTP is equivalent to other transport protocols such as TCP and UDP. SCTP offers better reliability, thanks to congestion-monitoring mechanisms, the detection of duplicate packets and retransmission. In addition, it supports new services which set it apart from other transport protocols: multistreaming and multihoming. Unlike TCP, which is a byte-oriented protocol, SCTP is a message-oriented protocol. As shown in Figure 10.19, the data are sent and received in the form of messages.


Figure 10.19. SCTP architecture

When the SCTP transport service receives a data message from an application, it divides that message into chunks. There are two types of chunks: those containing user data and control chunks. In order to be sent to the lower layer, the chunks are encapsulated into SCTP packets. As Figure 10.20 indicates, an SCTP packet may contain several chunks of data or control chunks. Its size must be less than the MTU (Maximum Transmission Unit).

Figure 10.20. Structure of SCTP packet
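The bundling of chunks into packets bounded by the MTU can be sketched as a simple packing loop. This is purely illustrative: real SCTP adds a common header, padding and bundling rules for control chunks that are not shown here, and the sizes chosen are arbitrary.

```python
# Toy bundling of SCTP chunks into packets no larger than the path MTU.
MTU = 1500
COMMON_HEADER = 12  # bytes of SCTP common header per packet

def bundle(chunks: list[bytes], mtu: int = MTU) -> list[list[bytes]]:
    packets, current, size = [], [], COMMON_HEADER
    for chunk in chunks:
        if current and size + len(chunk) > mtu:
            packets.append(current)          # current packet is full
            current, size = [], COMMON_HEADER
        current.append(chunk)
        size += len(chunk)
    if current:
        packets.append(current)
    return packets

user_message = b"x" * 3000                      # one application message
chunks = [user_message[i:i + 1200] for i in range(0, len(user_message), 1200)]
print([len(p) for p in bundle(chunks)])         # chunks per packet, each under the MTU
```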

The two SCTP terminals establish an SCTP connection. As mentioned above, SCTP can support multistreaming on a single connection. Unlike with TCP, several flows can be used simultaneously for communication in an SCTP connection. If congestion occurs on one of the paths, it only affects the flow being sent along that path and has no impact on the other flows. Each SCTP flow independently numbers its chunks, using the sequence numbers StreamID and Stream Sequence Number (SSN). Thanks to multistreaming, transport capacity is greatly improved. One of the most important services provided by SCTP is multihoming support. In this case, each SCTP terminal has a list of IP addresses linked to its interfaces. When the connection is established, the SCTP terminals exchange their address lists. Of the IP addresses on that list, the terminal chooses one as a primary address, with the others being secondary addresses. In accordance with this principle, each SCTP connection has one primary path and multiple secondary paths. The primary path is used for data transmission. In case the primary path breaks down, SCTP uses one of those secondary paths to continue communication.
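Multistreaming rests on keeping sequence numbers per stream rather than per connection, which the following toy sketch illustrates (the names are ours).

```python
# Toy multistreaming: each stream numbers its own chunks with an SSN,
# so a loss on one stream does not block delivery on the others.
from collections import defaultdict

class StreamNumbering:
    def __init__(self):
        self.next_ssn = defaultdict(int)  # stream_id -> next SSN

    def tag(self, stream_id: int, data: bytes) -> dict:
        ssn = self.next_ssn[stream_id]
        self.next_ssn[stream_id] += 1
        return {"stream_id": stream_id, "ssn": ssn, "data": data}


tx = StreamNumbering()
print(tx.tag(0, b"voice frame"))   # stream 0, SSN 0
print(tx.tag(1, b"file block"))    # stream 1, SSN 0
print(tx.tag(0, b"voice frame"))   # stream 0, SSN 1
```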


SCTP uses the Heartbeat mechanism to check whether a path is active. The Heartbeat message is periodically sent to each address advertised by the other terminal. The consecutive absence of an acknowledgment, Heartbeat-Ack, from the addressee indicates that a path is unavailable. Figure 10.21 shows the Heartbeat mechanism. Terminal A periodically sends the Heartbeat message (every THB seconds) to check the availability of the path between two IP addresses (IP1a and IP1b). After n tests without acknowledgment, the path (IP1a, IP1b) is deemed to be inactive. If the primary path becomes inactive, a secondary path is used to send data until the primary path becomes active once more. In each SCTP connection, a single path may be used for the data transmission, with the others considered as backup paths. Multihoming stems from the possibility of using several paths simultaneously. Extensions of SCTP are needed in order for SCTP to be able to completely support multihoming.

Figure 10.21. Heartbeat mechanism
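The failure-detection rule of the Heartbeat mechanism, whereby a path is declared inactive after n consecutive Heartbeats without acknowledgment, can be sketched as follows (the threshold and the path labels are illustrative).

```python
# Toy SCTP-style path monitor: a path becomes "inactive" after n consecutive
# Heartbeat messages are sent without receiving a Heartbeat-Ack.

class PathMonitor:
    def __init__(self, n_failures: int = 3):
        self.n_failures = n_failures
        self.missed = {}    # (src, dst) -> consecutive missed acks
        self.active = {}    # (src, dst) -> bool

    def add_path(self, path: tuple[str, str]) -> None:
        self.missed[path] = 0
        self.active[path] = True

    def heartbeat_result(self, path: tuple[str, str], acked: bool) -> None:
        if acked:
            self.missed[path] = 0
            self.active[path] = True      # the path becomes active again
        else:
            self.missed[path] += 1
            if self.missed[path] >= self.n_failures:
                self.active[path] = False


mon = PathMonitor(n_failures=3)
mon.add_path(("IP1a", "IP1b"))
for _ in range(3):
    mon.heartbeat_result(("IP1a", "IP1b"), acked=False)
print(mon.active[("IP1a", "IP1b")])   # False: switch to a secondary path
```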

To support mobility, SCTP offers an extension called Mobile SCTP, which allows dynamic address reconfiguration. When a change of IP address takes place


because the terminal has moved, that terminal sends an ASCONF (Address Configuration Change Chunk) message to request the addition of the new IP address, the deletion of the old address and possibly the changing of the primary address for that connection. LS-SCTP (Load-Sharing SCTP) is another extension of SCTP which allows the aggregation of bandwidth in SCTP. The terminals can simultaneously use several available paths for the transmission, ensuring the independence of the congestion control for each path. The extension LS-SCTP is composed of two new elements: – a new parameter in the INIT and INIT-ACK chunks, which indicates whether the terminals support the extension; – two new types of chunks: an LS-Data (Load-Sharing Data) chunk, used for sending data, and an LS-SACK (Load-Sharing SACK) acknowledgment chunk, which provides selective acknowledgments on each path. Figure 10.22 shows the architecture of LS-SCTP. On each side of the connection, we see an overall Association Buffer, a Path Assignment Module and a Path Monitor. LS-SCTP separates the flow control from the congestion control. The flow control is managed at the level of the connection. The terminals use the association buffer to receive data from all paths before sending them to the corresponding terminal or to the application.

Figure 10.22. Architecture of LS-SCTP
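The Path Assignment module described in the following paragraphs distributes chunks according to the cwnd/RTT ratio of each path. The choice itself can be sketched in a few lines, with purely illustrative values.

```python
# Toy LS-SCTP-style path selection: prefer the path with the highest
# cwnd/RTT ratio, i.e. the highest instantaneous sending capacity.

paths = {
    "wifi": {"cwnd": 20_000, "rtt": 0.030},   # bytes, seconds (illustrative)
    "lte":  {"cwnd": 45_000, "rtt": 0.090},
}

def pick_path(paths: dict) -> str:
    return max(paths, key=lambda p: paths[p]["cwnd"] / paths[p]["rtt"])

print(pick_path(paths))   # 'wifi': 20000/0.030 ≈ 667 kB/s vs 45000/0.090 = 500 kB/s
```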


The fragmentation and reassembling of the data take place in the connection buffer. Congestion control is performed for each path. Each path has its own congestion control parameters, such as the size of the congestion window (cwnd), the slow-start threshold (ssthresh), the round-trip time (RTT), etc. To separate the flow control from the congestion control, LS-SCTP uses two different sequence numbers. The first is the ASN (Association Sequence Number), which is the sequence number for the active connection. It is used to order all the chunks received by the connection buffer on the receiver's side. The second is the sequence number of the data chunks on each path, called the PSN (Path Sequence Number). It ensures that each path is reliable and controls the congestion of that path.

In the LS-SCTP architecture, the Path Assignment module takes care of the distribution of the data between the available paths. This distribution is based on the availability of bandwidth: the module distributes chunks depending on the cwnd/RTT ratio of each path. Once the path is selected, the chunk carries a PID, indicating the path, and a PSN. Another difference between LS-SCTP and SCTP is the retransmission mechanism. To improve the likelihood of a retransmitted data chunk reaching the receiver, the retransmission uses a different path to that used for the previous attempted transmission.

The Path Monitor module is responsible for overseeing the available paths and updating the list of active paths. This module updates the active path list when it obtains information from the network relating to the failure of an existing path or the availability of a new path. When the transmitter detects that a path is unavailable, it changes the status of that path to "inactive". Inactive paths still remain associated with the connection, and the transmitter continues to monitor them by means of the Heartbeat mechanism. As soon as an inactive path becomes active once more, the Path Monitor adds it to the list of active paths and uses it for transmission.

10.7.2. CMT (Concurrent Multipath Transfer)

Multihoming is only partially supported by SCTP: the only reason for having multiple paths available is to provide redundancy. SCTP transmits via the primary path, and the secondary paths are used only if the primary path fails. An extension of SCTP, called CMT (Concurrent Multipath Transfer), has been put forward for transmission via multiple paths. In CMT, there is no distinction between primary and secondary paths. All paths are equivalent and are used simultaneously in data transmission. CMT employs the same system of sequence numbers as SCTP. CMT uses the round-robin algorithm to send the data along the available paths. The sending of data


to the transmitter is monitored by the size of the congestion window (cwnd) for each path. CMT sends the data along a path as soon as the congestion window for that path becomes available. When several paths can be used for the transmission, the path is selected using the round-robin model. CMT fills the congestion window of each path before moving on to the next. Despite the benefits of using multipath transmission, CMT brings with it the phenomenon of de-sequencing, which degrades performance, because of the fact that the paths in CMT may have different characteristics in terms of bandwidth and delay. CMT identifies three causes of de-sequencing of data at the receiver’s end: – needless fast retransmissions. In a multipath transmission, paths may have different bandwidths and delays. It may be that an acknowledgment from a fast path will indicate the loss of a packet while that packet is still in transit on a slower path, and only arrives at its destination later on. CMT uses the same congestion control mechanism as TCP. When the transmitter receives three duplicate acknowledgments indicating the loss of a packet, it deems that packet to be lost in the network and triggers the fast retransmission of the packet. However, this retransmission is unnecessary, because it is the de-sequencing that is the cause of the duplicated acknowledgments; – inaccurate updating of the congestion window. The mechanism by which the congestion window is updated only allows it to be increased if a new cumulative acknowledgment (CUM ACK) number is received by the transmitter. When the acknowledgments with the same CUM ACK have arrived, even if they contain new data gaps, the transmitter does not modify its congestion window. As the phenomenon of de-sequencing occurs regularly in CMT, several acknowledgments with the same acknowledgment are sent to the transmitter. When the data gaps are covered by a new acknowledgment, the congestion window is only increased with the new data acknowledged in the most recent acknowledgment. The previously acknowledged data in the gaps do not contribute to the growth of the congestion window. In other words, the congestion window update does not repeat exactly the volume of data transmitted; – increased acknowledgment traffic. The principle of delayed acknowledgments in TCP is also used in SCTP. Instead of sending an acknowledgment for each and every packet received, the use of a group acknowledgment for several packets reduces acknowledgment traffic. SCTP uses this mechanism if the packets reach the receiver in the correct order. De-sequenced packets must be acknowledged immediately. However, as it is quite common for de-sequencing to occur in CMT, if the receiver cannot delay the sending of the acknowledgments, then


acknowledgment traffic is greatly increased, which may impact the network's performance.

CMT includes the following solutions for these problems:

– in order to prevent needless retransmissions, CMT offers the SFR (Split Fast Retransmit) algorithm, which enables the transmitter to correctly interpret duplicate acknowledgments. SFR defines a virtual buffer for each destination within the transmitter's retransmission buffer. With additional information for each destination, such as the highest acknowledged TSN for that destination, the transmitter can distinguish between de-sequencing and actual data loss, with a view to correctly triggering fast retransmission. A chunk with TSN T for destination M is deemed lost if and only if T is lower than the highest acknowledged TSN for destination M;
– the CUC (cwnd Update for CMT) algorithm is proposed to correctly update the congestion windows of the paths. At the level of the transmitter, each destination has a variable known as the PSEUDO-CUMACK, which represents the smallest anticipated TSN. Upon receipt of a SACK acknowledgment, the transmitter checks whether there is a change of PSEUDO-CUMACK for each destination. An increase of a PSEUDO-CUMACK triggers the updating of the congestion window of the corresponding destination, even if the CUM ACK does not advance. Thus, the congestion window of each destination grows in line with the acknowledged data, without having to wait for a new CUM ACK;
– CMT offers the DAC (Delayed ACK for CMT) algorithm to reduce acknowledgment traffic. DAC enables CMT to delay the acknowledgments, even if the packets arrive out of sequence. As the sending of acknowledgments is often deferred, the transmitter must closely analyze each acknowledgment received in order to detect any loss of data as quickly as possible. For this reason, in each acknowledgment, the receiver informs the transmitter of the number of data packets it has received since the sending of the last acknowledgment. This proposition requires modifications to be made to both transmitter and receiver.

10.7.3. MPTCP (Multipath TCP)

TCP is the most widely used transport protocol for Internet-based applications, but it does not support multihoming. In spite of the existence of different paths, each TCP connection is limited to serving only one path for transmission. The simultaneous use of several paths for the same TCP session could improve the data rate and offer better performance by strengthening the reliability of the connection.

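MPTCP, presented below, is now exposed natively by recent Linux kernels. As a minimal sketch, assuming a kernel built with MPTCP support (5.6 or later) and a Python version that defines socket.IPPROTO_MPTCP (the numeric fallback 262 is the Linux value), an ordinary TCP client only differs by the third argument passed to socket():

```python
# Minimal sketch: creating an MPTCP socket on Linux (kernel >= 5.6 with MPTCP
# enabled). Apart from the third argument, this is ordinary TCP client code;
# the kernel manages the subflows over the available interfaces.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # 262 = Linux value

with socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP) as s:
    s.connect(("example.org", 80))
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))
```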

Multipath TCP (MPTCP), defined by the IETF, is a modified version of TCP which supports multihoming and allows the simultaneous transmission of data along several paths. As MPTCP is compatible with TCP, there is no need to modify existing applications to use its services. MPTCP is designed to gather together several independent TCP flows in a single MPTCP connection, ensuring transparency for the upper layers.

10.8. Conclusion

Mobile Cloud Networking is a paradigm which is growing enormously in importance, with the development of mobile terminals that are often not very powerful in comparison with the demands of certain applications. The Mobile Cloud is another paradigm, in which clients come together to form a Cloud. As yet, this solution is not widely used, but it is likely to grow with the installation of more powerful boxes in vehicles. Finally, we have examined network application controllers which, along with SDN access points, are becoming increasingly common; this solution represents the likely future of the discipline.

11 Wi-Fi and 5G

Standardization of 5G is a two-phase process. The first phase was finalized in 2018. This was the radio part. The second phase will end in June 2020. This second phase deals with the core network, which is the most important revolution in 5G. However, it is possible, even now, to obtain a fairly clear picture of what 5G will offer, by looking at the numerous testing environments. Figure 11.1 represents the different steps to achieve 5G. Starting from the origin of mobile networks, there was analog circuit-switched 1G, then digital circuit-switched 2G, followed by digital packet-switched 3G, with the exception of operator voice signals. With 4G, we see the move to 100% IP technology, even for voice signals, and 4G also handles multimedia applications. Finally, 5G offers increased data rates, critical missions and the connection of billions of devices.

Figure 11.1. The different generations of wireless networks

In Figure 11.1, we also see two other directions taken by wireless networks. The first is WiMAX (IEEE 802.16). However, this initiative failed – not so much from


the technological standpoint, but definitely so from the political perspective. Given the power of the mobile network operators, who have chosen the 3GPP as their standardization body, WiMAX faded away in the face of the clear progression from 1G to 5G. The second direction is Wi-Fi, with billions of access points and cards sold every year. The Wi-Fi family is currently expanding to include increasingly high data rates and increasingly long ranges. We are going to examine these new standards in detail in this chapter.

11.1. 3GPP and IEEE

The 3GPP and the IEEE are the two main standardization bodies. The different standards are marked in Figure 11.2. It should be noted that LTE (Long Term Evolution) was originally the second version of 3G. The 4G technology begins with LTE-A (LTE-Advanced), which is the first 100% IP standard. In 3G LTE, telephone signals still use GSM, i.e. 2G in digital circuit-switched mode. However, it is possible to use telephony in VoIP mode over the packet channel of the LTE interface. In this case, the voice signals are referred to as VoLTE (Voice over LTE).

Figure 11.2. The two major wireless solutions and their convergence (3GPP path: GSM, EDGE, UMTS, HSDPA, HSUPA, LTE, LTE-A, 4G Pro, 5G; IEEE path: Wi-Fi 1 to Wi-Fi 7; convergence through femtocells, Passpoint and IEEE 802.11u)

Figure 11.2 shows the convergence of the two techniques developed by the 3GPP and the IEEE. In Chapter 1, we saw that only by combining these two technologies will we be able to supply the capacity that users today demand, at least until the arrival of 5G higher data rates. Wi-Fi is becoming the offloading technique for 4G/5G – i.e. it is used to relieve the load on the 4G/5G networks, with Wi-Fi antennas being used alongside the mobile network antennas. In 2020, offloading represents three-quarters of the data rates of mobile terminal connections, but this should decrease with 5G thanks to the opening of many new frequency bands. Wi-Fi flows channeled through hotspots and then the Internet rather than through the mobile access networks represent a major part of offloading. The version of Wi-Fi for the world of telecommunications can take a variety of forms. For the moment, Wi-Fi Passpoint is the version which seems to be the most popular. Passpoint is a suggestion from the Wi-Fi Alliance, which has already shown its strength, making Wi-Fi a great


success thanks to proofs of product conformity. However, other solutions are also being studied by the standardization organizations, such as replacing Wi-Fi with femtocells with LTE antennas and equipping all terminals and servers with LTE ports, referred to as Home eNodeB (HeNB). We will see that this solution fits into the context of software networks, with the dawn of software eNodeB. We will begin by examining the new generations of Wi-Fi, which have the necessary capacity to deal with the exponentially increasing demand of users. 11.2. New-generation Wi-Fi Since 2018, Wi-Fi has new names and we will use them and give the former IEEE nomenclature, as well. The new generation of Wi-Fi began with the Wi-Fi 5 (IEEE 802.11ac) standard, which was released in late 2013 and achieves data rates higher than 1 Gbps. This minimum value of 1 Gbps marks the new generation of Wi-Fi. The Wi-Fi 5 standard, which we will go on to examine in detail, is supplemented by the Wi-Fi 6 (IEEE 802.11ax), Wi-Fi 7 (IEEE 802.11be) and the WiGig (IEEE 802.11ad), which uses a new frequency band: 60 GHz. This latter solution is primarily designed for the inside of buildings and for very directional use outside. Indeed, at these frequencies, the signal is very vulnerable to interference from rain, dust and, of course, obstacles. Thus, to use it outside, we need a direct path to the access point, with good weather conditions if possible, or at least a short enough distance for the data rate to remain correct, even in the presence of rain or fog. The new generation continued with the standard White-Fi (IEEE 802.11af), which uses cognitive radio – i.e. the possibility of using frequencies for which there is an owner (also known as a primary) who is not using them at a given time. For this to work, we must either drastically limit the range so as to remain in an environment without conflict with the primary, or have a server that records the use of frequencies and enables the cognitive access points to discover the usage sites and use the frequencies available with no danger of interference. The White-Fi (IEEE 802.11af) standard uses television bands in cognitive radio and is known as TVWS (TV White Space) or sometimes Super Wi-Fi. In this new generation of Wi-Fi, the Halow (IEEE 802.11ah) system focuses on the Internet of Things. This solution offers a wide transmission range of a kilometer, but with much lower data rates. Halow is mostly used in the connection of objects such as intelligent electric counters or health-care devices. The Wi-Fi 6 (IEEE 802.11ax) standard is an enhanced version of Wi-Fi 5 (IEEE 802.11ac) but with a capacity that is four times higher. The first Wi-Fi 6 products have been available since the start of 2019. Finally, Wi-Fi 7 is being standardized and should come out in 2022 or 2023. The corresponding working group is IEEE 802.11be EHT


(Extremely High Throughput). Its capacity will be even four times higher than the Wi-Fi 6 version and should overcome 100 Gbps. 11.2.1. Wi-Fi 5 (IEEE 802.11ac) Wi-Fi 5 (IEEE 802.11ac) is a version of Wi-Fi designed to surpass 1 Gbps, using multiple antennas, which can run simultaneously in separate beams. Two mutually complementary solutions have been put in place to obtain the target data rates. The first, which is fairly simple, consists simply of increasing the capacity of the transmission channel using the 5.15 GHz band, which is much freer than the 2.4 GHz band. The range of this higher-frequency band is lower. In addition, obstacles hamper the signals somewhat. However, the available bandwidth is 200 MHz, which is much greater than the 83.5 MHz assigned to the 2.4 GHz band, which enables us to have Wi-Fi channels whose band is 80 MHz and, optionally, 160 MHz. The second solution pertains to the directionality of the signals emitted, using a set of specific antennas. This technique involves allowing the transmission of several communications on the same frequency, but in different directions. There is spatial multiplexing, hence the name of this technique: SDMA (Space Division Multiple Access). Figure 11.3 illustrates this technology.

Figure 11.3. SDMA


The SDMA access method relies on transmission techniques that allow the signals to be directed, better known as "beamforming", whereby the signals are oriented in a specific direction, thus giving rise to directional antennas. The antennas make it possible to generate several directions simultaneously, which facilitates communication with multiple clients at the same time. The technique used in Wi-Fi 5 is also called PU2RC (Per-User Unitary Rate Control). Wi-Fi 5-conforming antennas are able to deliver MU-MIMO (Multi-User MIMO), i.e. to connect several clients to the antennas of the access point. All the antennas use the same frequency simultaneously while minimizing interference. In other words, in order to increase data rates, each directional antenna can be replaced by several antennas with the same directionality, using the same frequency to carry out communications in MIMO mode. This MIMO technology multiplies the data rate by the number of antennas. The multiplication is not exact, but it is close enough: interference increases with the number of antennas, but it remains slight.

In the example shown in Figure 11.3, the two physical antennas each give rise to three internal antennas, for a total of six antennas. These six antennas mean that six simultaneous communications can be established. With these six antennas, we can also establish two 2×2 MIMO communications (two antennas communicating with two antennas on a remote machine) and two simple communications, again meaning six communications in total. We can just as well have three simultaneous communications with, for example, one 3×3 MIMO communication, one 2×2 MIMO communication and a 1×1 communication. If the peak data rate reaches 250 Mbps per antenna, then with four physical antennas, each bearing three virtual antennas, we obtain 12 antennas and a total data rate of 3 Gbps. Also, if the bands used are wider, say 80 MHz instead of 20 MHz, then, theoretically, we can achieve over 10 Gbps. In fact, as for all Wi-Fi access points, the peak data rate is only achieved in exceptional cases, where there is no external interference and the clients are located very near to the antennas serving the different beams. In reality, the nominal data rates often have to be divided by at least a factor of 2, and often much more, to obtain the real data rates.

The IEEE 802.11ac group, which standardized Wi-Fi 5, with 433 Mbps per antenna over an 80 MHz band and MIMO with two antennas per client, delivers 866 Mbps per connection with a terminal. With four physical antennas, we achieve over 3 Gbps. With a bandwidth of 160 MHz, the data rate is doubled. The standard specifies that there may be up to eight spatial streams, which again doubles the data rate. Wi-Fi 6 (IEEE 802.11ax) allows us to have up to 16 lobes, which doubles the data rate once more. Evidently, these reported figures are peak data rates, which do not take into account the overhead due to access point control and management.
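The arithmetic behind these peak figures can be checked in a few lines (peak rates only, ignoring all protocol overhead; the per-stream value is the one quoted above).

```python
# Peak-rate arithmetic for the figures quoted above (no management overhead).
per_stream = 433                    # Mbps per spatial stream, 80 MHz channel
per_terminal = 2 * per_stream       # 866 Mbps: 2x2 MIMO towards one terminal
four_antennas = 4 * per_terminal    # ~3.5 Gbps with four physical antennas
with_160mhz = 2 * four_antennas     # doubling the channel width doubles the rate
print(per_terminal, four_antennas, with_160mhz)
```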


As is the case with all Wi-Fi networks, fallback rates become necessary when the conditions are not ideal. Compatibility also requires the adaptation of data rates when dealing with existing equipment that does not support IEEE 802.11ac.

The aim of the newer Wi-Fi 6 (IEEE 802.11ax) standard is to further increase the data rate by using more numerous and more directional antennas, going up to 16 directions, hence 16 beams, and multi-user MIMO. The next version, Wi-Fi 7 (IEEE 802.11be), will have a yet higher number of directions, generally enabling it to exceed a capacity of 100 Gbps.

11.2.2. IEEE 802.11ad

The IEEE 802.11ad standard is completely new, with frequencies in the range of 60 GHz. This standard is championed by a group of manufacturers: the WGA (Wireless Gigabit Alliance). The name of the product, which has become a standard in personal area networks, is WiGig. However, the basic product could be a tri-band WiGig, operating simultaneously on the 2.4 GHz, 5 GHz and 60 GHz bands, and therefore able to adapt to the environment. The peak data rate is 7 Gbps. The range in the 60 GHz band is less than 10 meters, so this network is said to belong to the category of PANs (Personal Area Networks). The channels used in the base mode correspond to 57.24 GHz, 59.4 GHz, 61.56 GHz and 63.72 GHz, with a bandwidth of 2.16 GHz.

To compensate for the significant attenuation of the signal, WiGig (IEEE 802.11ad) requires directional antennas that can focus the radio beam within a 6° angle. With this very narrow beam, the WiGig standard is used outdoors between directional antennas with a direct view, or indoors by counting on ricochets off obstacles to reach the addressee. Indeed, with a 6° beam, the connection between two machines is often complex. Fortunately, there is an option that enables the beam to be steered automatically, so that two machines wishing to connect can easily set up a connection.

Two applications of WiGig are mentioned in the standard. The first corresponds to wireless connections for computer peripherals, with the aim of doing away with the numerous cables that trail everywhere. This application also facilitates simple sharing of peripheral devices. The second application relates to consumer electronics associated with wireless technology, such as stereo equipment, HD televisions and online gaming systems.


11.2.3. IEEE 802.11af The White-Fi (IEEE 802.11af) technique pertains to a totally different method to increase the data rate: the use of cognitive radio. It consists of reusing frequency bands that are not being used at a given time in the television bands. More generally, recent measurements show that, in spite of their scarcity and their high price, frequencies between 0 and 20 GHz are under-used – often less than 10%. The only bands that are heavily used are those employed by telecom operators which, owing to the TDMA, CDMA and OFDMA techniques, exhibit excellent degrees of utilization. The bands reserved for TV channels, which are below 1 GHz, are particularly attractive to operators. These TV channels could be used by access points to achieve ranges of hundreds of meters. Such ranges demonstrate that Wi-Fi technology is capable of serving the needs of WiMAX and 4G clients. The White-Fi standard IEEE 802.11af – which must not be confused with IEEE 802.3af, or PoE (Power over Ethernet) – is also called TVWS (TV White Space). The expression “white space” refers specifically to those frequencies that are not being used by wireless television channels. The operational bandwidth for cognitive radio covers dozens of digital television channels. The data rates achieved by this method may be very high indeed. One important question relates to the way in which cognitive radio is used. It is for each state to regulate this usage. The main solutions pertain to the way in which unoccupied frequencies are used. A number of solutions suggest themselves, depending on the coverage of the cognitive access point. If the band is not occupied at the access point’s location, it can be assumed that it is not occupied within a radius of a few meters of that point. This category includes Wi-Fi access points inside buildings with limited power, only covering the inside of the building. A second solution is to refer to a bandwidth management server, which has the necessary information to determine whether the bands are available. Thus, a common discipline needs to be used between the primary and secondary users. With this in mind, though, will TV operators manage to reach an agreement with White-Fi (802.11af) users? Standardization work is being carried out in different working groups, such as IEEE P1900, with a view to proposing solutions acceptable for both parties. A third, more remote, solution could stem from the ability for each device to measure the interference and indicate to the secondary users whether or not they can exploit the frequency through the use of a specific sensor.
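The database option can be illustrated with a short sketch: before transmitting, the access point declares its position to a geolocation database and only uses the TV channels reported as free. The URL, the request fields and the reply format below are entirely hypothetical; real deployments follow the interface defined by the regulator.

```python
# Hypothetical query to a TV white-space geolocation database: the access point
# declares its position and only uses the channels the database reports as free.
# The URL and JSON fields are invented for illustration.
import json
from urllib import request

def free_channels(lat: float, lon: float) -> list[int]:
    query = json.dumps({"lat": lat, "lon": lon}).encode()
    req = request.Request("https://tvws-db.example/api/available",
                          data=query, headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["free_channels"]

if __name__ == "__main__":
    print(free_channels(48.85, 2.35))   # e.g. [21, 24, 32] if those channels are unused
```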


At the physical level, the method used in White-Fi (IEEE 802.11af) is FSS (Fixed Subcarrier Spacing) with OFDM. The channels used in OFDM may or may not be contiguous – i.e. they may belong to television channels that are directly beside one another or that are separated by active television channels. A channel contains 64 bearers. Four channels may be selected, each operating at 6, 7 or 8 MHz. In the latter two cases, as the data rate per channel reaches 26.7 Mbps and given that, as with Wi-Fi 5 (IEEE 802.11ac), we can have at least four directions in SDMA, the maximum base rate is 568.9 Mbps, with 8 MHz channels. The bearer access technique is the same as with normal Wi-Fi. Moreover, service classes using the EDCA (Enhanced Distributed Channel Access) technique defined by standards IEEE 802.11e are available and correspond to the four classes Background, Best-Effort, Video and Voice. A fifth, even higher, priority is added for spectrum-monitoring, to determine which channels could potentially be used. 11.2.4. Halow (IEEE 802.11ah) The IEEE also offers a standard, apt for the connection of “things” – i.e. sensors or other small devices with low consumption and often minimal cost. This solution works in the non-licensed bands available below 1 GHz. The technique should enable us to connect a large number of things with low consumption. The sharing of the transmissions takes place in classic Wi-Fi mode, but with a much lower data rate and a much greater range, of up to 1 km. The things are grouped into clearly defined sets so as to avoid too high a number of simultaneous accesses and so as not to lose too much in contention in CSMA/CA mode. In addition, Halow (IEEE 802.11ah) involves the use of classic techniques in the world of Ethernet to put stations on standby and reawaken them. IEEE 802.11ah introduces the function of relay, whereby certain things can connect through the relay, which decreases competition by limiting the number of objects attempting to transmit simultaneously. Relay increases the geographic scope of the things that can be connected. At the same time, it saves energy by having tasks locally managed by the relay. In the standard, the aim is not to create a mesh network, as the number of hops is limited to two. Energy savings are also of crucial importance in the current context, where consumption is increasing exponentially, and the Halow (IEEE 802.11ah) standard takes it into account. For this purpose, the connected machines are divided into two classes: TIM (Traffic Indication Map) terminals and non-TIM terminals. TIM terminals periodically receive information about the traffic on the access point.


Non-TIM terminals use the new mechanism TWT (Target Wake Time) to reduce the overhead of the signaling procedure. TWT is a function that allows an access point to define a time, or a set of times, for certain terminals to have access to the communication bearer. The terminal and the access point exchange information including the allotted duration of the activity, to enable the access point to control any competition and clashes between competing terminals. The access point can reserve time periods for a terminal to transmit thanks to various protective mechanisms. The use of TWT is negotiated in advance between an access point and a terminal. The TWT mechanism is also used to reduce energy consumption, because stations using it can enter a semi-sleep state until their TWT comes around. Another important mechanism in Halow (IEEE 802.11ah) is the RAW (Restricted Access Window), which limits the number of terminals that have the right to use the access point. This limitation is achieved by putting the terminals into groups. Channel access is restricted solely to those terminals belonging to a given group, over the course of a set period of time. This mechanism helps reduce competition and avoid simultaneous transmissions from too high a number of machines that are unaware of one another’s presence. The IEEE also defines a TXOP (Transmission Opportunity), wherein an access point and a client can carry out a set of exchanges during a reserved period. The purpose of this mode of operation is to reduce the number of competing channels, improve the efficacy of the channels by minimizing the number of exchanges and help extend the terminals’ battery life by minimizing their periods of activity; the rest of the time, the terminal is on standby. The division of the coverage area of a Basic Service Set (BSS) into sectors, determining subsets of terminal machines, is called “sectorization”. It is done by a set of antennas or a set of directional antenna lobes to cover different sectors of the BSS. The goal of sectorization is to decrease contention and interference by reducing the number of terminals that can transmit simultaneously. 11.2.5. Wi-Fi 6 (IEEE 802.11ax) and super WiGig (IEEE 802.11ay) Wi-Fi 6 was introduced to succeed Wi-Fi 5 technology-wise, i.e. using a great number of lobes to increase the overall data rate. As mentioned earlier in this chapter, there are 16 lobes, which allows us to have 16 simultaneous communications when there is one user per lobe. Another interesting improvement:


signal encoding is much more powerful. However, it requires more computing power, and hence more energy. Encoding can go up to 1024-QAM, which means transporting 10 bits per transmitted symbol instead of 8 with Wi-Fi 5. We must also mention the power savings achieved thanks to the TWT (Target Wake Time) timer, which allows the device to regularly enter sleep mode during transmission thanks to coordination between the sender and the receiver. The new WPA3 version is used for security.

The super WiGig, or WiGig 2 (IEEE 802.11ay), standard is an extension of WiGig which mainly enlarges the bandwidth of the previous generations, still around the 60 GHz band. WiGig 2 uses an 8.64 GHz bandwidth, and the data rate can reach 44 Gbps. In a later version, the number of antennas will rise to four in order to perform four-antenna MIMO, and the data rate will reach 176 Gbps.

11.3. Small cells

"Small cells" is the term used to denote the new generation of cells of very small dimensions. There are numerous factors that weigh in favor of this solution, starting with the reuse of frequencies, the adaptation of the cells to the size of a dwelling, the lowering of energy consumption, etc. Small cells are found in a variety of forms in network technology: femtocells, which correspond to a home; metrocells, which provide coverage in the street; hotspots, which are set up in public spaces; or picocells, for companies. The main rollout should take place between 2020 and 2025. The first small cells appeared in 2010 and became increasingly common from 2012 onwards. In certain countries, such as the United States, South Korea and France, all telecom operators already commercialize femtocells.

In this section, we begin by examining femtocells, which are devoted to connecting users in their home and its vicinity. We also present hotspots, followed by picocells, and finish with a discussion of metrocells. We then go on to describe the Passpoint technology, which is able to bring together all of these smaller cells. In the next section, we will look at backhaul networks, the aim of which is to connect small cells to the core network. Finally, we will conclude this section by presenting some relevant aspects of Software Radio, Cognitive Radio and the new generation of cells. Figure 11.4 represents a network of small cells interconnected through a backhaul network in order to allow interconnection with the core network.


Figure 11.4. Small cells and backhaul network

11.3.1. Femtocells The solution advocated since release 9 of UMTS by the 3GPP to manage the increase in data rates from users comes from femtocells. The fundamental ideas are fairly similar to the relay technique: in order to avoid new antennas and engineering work, we use the optical fiber infrastructure which has been in place since 2009 so that all homes benefit from a very high data rate. In fact, this optical network is not only designed to provide users with high data rates, but to manage femtocells in order to meet the extremely high data rates offered to users by 4G and 5G. A femtocell is an antenna mounted on the user’s Home Gateway. It serves all potential clients who are within the antenna’s coverage area, which may be approximately 10 meters, or indeed much more, depending on the obstacles and interference. Thus, the femtocell is used both by the owner of the Home Gateway and by the visitors to the cell. The obvious advantages offered by this technology are the multiplication of the cells and the densification of the network. Another very clear advantage is the drop-in power of the devices which this technology facilitates. A maximum power of 100 mW seems to be the standard that is coming to pass; this power corresponds to the power required for an omnidirectional Wi-Fi transmission.


Figure 11.5 illustrates the way in which a femtocell works.

Figure 11.5. Operation of a femtocell

Conventional antennas are linked to the core network by BSC or RNC switches, corresponding, respectively, to 2G and 3G. The femtocell antenna is connected to a Femto-Gateway, which provides access to the core network. The connection between the Home Gateway and the Femto-Gateway uses optical fiber technology, and often also an ADSL connection. The connected terminals employ 3G, 3G+ (LTE), 4G (LTE-A) or 5G. The data rates are, as yet, fairly limited by radio constraints or by the capacity of the line to the Femto-Gateway, but by 2020, the data rates available to 4G/5G mobiles had reached 100 Mbps. These capacities are made possible by the opening of many frequency bands above 20 GHz and by the reuse of frequencies, enabled by directional antennas or by the limited range of the signals in the small cells.

Questions arise as to the usefulness of a Wi-Fi antenna and the possibility of a war between Wi-Fi and 4G/5G. The supremacy of Wi-Fi at the center of the femtocell is indeed questioned by the arrival of LTE-U (Unlicensed), which is an LTE using open frequencies, most of which are currently intended for Wi-Fi. In fact, there is little chance of 4G/5G antennas completely replacing Wi-Fi antennas.


For this to happen, all computerized devices, printers, PCs, multimedia equipment, etc., would need to have a 4G/5G card. Thus, there is a high probability that the two systems will coexist, with each of them handling those devices which are most appropriate. Organizations like MulteFire are going this direction. The numerous femtocell products, which are coming on the market, offer diverse, varied options. Figure 11.6 shows the two main options. The Home Gateway is usually referred to as an “HNB” (Home NodeB) or HeNB (Home eNodeB), provided it is available with a 3G, 4G or 5G antenna. Today, these boxes essentially serve to connect 3G/4G/5G mobiles, enabling the operator to offer the client the possibility of using his/her 3G/4G/5G mobile in a location where there is no signal. The HNB/HeNB very often has two radio interfaces: Wi-Fi and 3G/4G/5G. The telecom equipment is plugged into the 3G/4G/5G antenna, and the computer equipment into the Wi-Fi. The optical fiber or ADSL modem simultaneously transports the flows from both environments. The second solution shown in Figure 11.6 corresponds to the use of Wi-Fi to connect telecom equipment and computers. In this case, the 3G/4G/5G frame is encapsulated in a Wi-Fi frame and decapsulated in the Home Gateway. The 3G/4G/5G frame is then directed to the RNC (Radio Network Controller) for forwarding to the core network. This solution is called UMA (Unlicensed Mobile Access) in general, GAN (Generic Access Network) for 3G, and EGAN (Enhanced GAN) for 4G/5G.

Figure 11.6. Access to the HNB

11.3.2. Hotspots

Hotspots are Wi-Fi access points whose purpose is to provide a high-data-rate Internet connection to a smartphone, tablet or PC. Situated in places with a great many people passing through, they are open to all clients who wish to connect, or sometimes only to those who have a subscription. If the access point is open, any and all clients can connect. However, in numerous countries, the operator of the access point must be able to monitor the clients, either in order to charge them for access or to retain their information so as to locate them later on if the connection is used in the commission of a crime. Hotspots are located in rail stations, airports, shopping centers, marinas, beaches, boutiques, etc.

Hotspots essentially use Wi-Fi technology and are increasingly relied upon by telecom operators to handle their 3G/4G/5G traffic and relieve the relay antennas which, today, are very often overloaded. This is the technique that we looked at earlier: offloading. The difficulty for a hotspot is to manage to offer QoS to the clients' application flows. More specifically, the capacity of the access point depends on how far away each client is and on the degree of interference: the further away a client is, the more the signal is attenuated and the greater the risk of interference, and therefore the access point's overall data rate degrades. In order to deal with these problems, Wi-Fi technology relies on oversizing. Still rarely sufficient when many clients are connected to the same access point, oversizing should soon become widespread, thanks to new technologies that have reached maturity since 2014: specifically, Wi-Fi 5, Wi-Fi 6, WiGig and White-Fi (IEEE 802.11ac, ax, ad and af), which should help to ease communication. In today's world, operators use Wi-Fi for the purposes of offloading, but as the solution is not standardized, each operator has developed their own technique (although they are all fairly similar to one another). Since 2014, the technology championed by the Wi-Fi Alliance – Passpoint – has served as a common standard for all operators. This technology is described in detail a little later on.

Metrocells are small cells that are installed in the streets to handle the 3G/4G/5G traffic of operators. They are the equivalent of femtocells, but instead of being situated in the private domain of the home, they are in the public domain. They need to cover a surface area slightly larger than femtocells. Indeed, as there are fewer obstacles in the public domain than in the home, the clients should be able to connect, provided they have a direct view of the access point. The size of the cell is a few tens of meters. A network of metrocells is illustrated in Figure 11.7, with the metrocells serving for offloading – i.e. taking the strain off the main 3G/4G/5G antenna.
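
The degradation mentioned above for a hotspot – a single distant client dragging down the whole access point – can be illustrated with a tiny calculation. The sketch below (in Python, with invented PHY rates) shows the well-known Wi-Fi "rate anomaly": when clients sending equal-sized frames take turns on one access point, the aggregate throughput collapses towards the harmonic mean of the individual link rates.

# Rough sketch: aggregate throughput of one Wi-Fi cell when N clients
# send equal-sized frames in turn (per-frame fairness).
# The PHY rates below are illustrative values, not measurements.
phy_rates_mbps = [300.0, 300.0, 300.0, 6.0]   # three nearby clients, one far away

def cell_throughput(rates):
    # Each frame of size S takes S/r seconds; serving one frame per client
    # per round, the cell delivers N frames in sum(S/r) seconds.
    return len(rates) / sum(1.0 / r for r in rates)

print("Aggregate with the far client : %.1f Mbps" % cell_throughput(phy_rates_mbps))
print("Aggregate with nearby clients : %.1f Mbps" % cell_throughput(phy_rates_mbps[:3]))

With these figures, the cell drops from 300 Mbps to roughly 23 Mbps as soon as one distant, slow client joins, which is exactly why oversizing and denser cells are needed.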

Figure 11.7. A network of metrocells

The purpose of these metrocells is, again, to allow offloading – i.e. to handle 3G/4G/5G communications for the operators in order to alleviate the workload of the large antennas. The metrocells are either interconnected to form a mesh network or connected by optical fibers to the core network, but the cost of implementing the latter solution may prove prohibitive. They may also be linked by PLC (power-line communication) as in Figure 11.7, where the access points are mounted on lampposts.

11.3.3. Wi-Fi Passpoint

Passpoint is an architecture proposed by the Wi-Fi Alliance. The Wi-Fi Alliance was set up when Wi-Fi was first proposed, with the aim of promoting this technology. It has played a significant part in the success of Wi-Fi and in the introduction of various standards, such as WPA2 and WPA3, which offer high security in Wi-Fi networks, or the introduction of priorities with IEEE 802.11e.

Passpoint is a solution that helps to relieve the workload of 3G/4G/5G antennas by offloading, giving users the option of connecting through Wi-Fi networks in a way that is totally transparent to the 3G/4G/5G networks. In other words, the clients will not know through which antenna they are connected to the operator network: a BTS, a NodeB, an eNodeB, a gNodeB, an HNB or a Wi-Fi access point (hotspot, femtocell, microcell, metrocell, etc.).

In home or business environments, connection to a Wi-Fi network is generally made automatically once the user has entered the authentication data upon first connecting to the network. When he/she is connected to the access point and authorized to enter the network, he/she is subject to the rules established by the IT manager or the owner of the access point. Once recognized, his/her terminal automatically joins the access points to which it connects regularly, without intervention on his/her part.

The connection to most hotspots is often different from the process described above. In a public place where there are many networks, the clients often have to begin by choosing the network to which they wish to connect and then identify themselves to the access point. Finally, in most cases, they need to enter authentication data. There are solutions in existence to simplify network selection and automatically make the connection while still ensuring good security, but these solutions are often restricted to a small number of networks and are very rarely available abroad.

Wi-Fi Passpoint technology fundamentally changes the use of public hotspots. It is a general solution that all hotspot operators can apply, which helps to overcome the limitations of the proprietary, non-interoperable solutions offered by current providers. A program, installed on certified devices, automatically manages the network connection, authentication, authorization and underlying security in a manner that is totally transparent to the user. This solution works with all Passpoint networks.

First and foremost, Passpoint takes care of the discovery and selection of a network. The terminal machines discover and automatically choose the network to which to connect on the basis of preferences determined by the users, policies defined by the operators and the availability of the network. These characteristics are based on the IEEE 802.11u standard. The network connection takes place seamlessly – i.e. without the user having to do anything at all. It is no longer necessary for the terminal to be on an active list of access points to which the user agrees to connect. Nor is it necessary to enter account information into a browser. Passpoint uses a consistent interface, and the connection process is automatic. In addition, the device can automatically choose among multiple types of credentials. Passpoint uses SIM (Subscriber Identity Module) cards for authentication; these cards are also very widely used in modern-day cellphone networks. Passpoint also uses username/password combinations and certificates. No intervention on the part of the user is needed to establish a connection to a trusted network.
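
The selection logic itself is not detailed here, but a very simplified sketch of the kind of policy-driven choice described above might look as follows. All the network names, credential types and preference values are hypothetical; real Passpoint devices obtain this information dynamically through IEEE 802.11u/ANQP queries rather than from a static list.

# Minimal sketch of policy-driven network selection in the spirit of
# Passpoint: candidates are filtered by usable credentials, then ranked
# by operator policy (home network first) and signal strength.
# All data below is illustrative.
candidates = [
    {"ssid": "CafeHotspot",  "passpoint": False, "eap": None,       "home": False, "rssi": -55},
    {"ssid": "OperatorWiFi", "passpoint": True,  "eap": "EAP-SIM",  "home": True,  "rssi": -70},
    {"ssid": "PartnerWiFi",  "passpoint": True,  "eap": "EAP-TTLS", "home": False, "rssi": -60},
]
device_credentials = {"EAP-SIM", "EAP-TTLS"}   # e.g. a SIM plus a username/password

def select_network(nets, creds):
    usable = [n for n in nets if n["passpoint"] and n["eap"] in creds]
    # Operator policy: prefer the home service provider, then roaming
    # partners; break ties on signal strength.
    usable.sort(key=lambda n: (not n["home"], -n["rssi"]))
    return usable[0] if usable else None

print(select_network(candidates, device_credentials)["ssid"])   # -> OperatorWiFi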

All Passpoint connections are secured with WPA2-Enterprise for authentication and connectivity, thus offering a level of security comparable to that of cellular networks. Passpoint improves upon WPA2-Enterprise, adding functions capable of containing the known types of attack in deployments of public Wi-Fi networks. Passpoint can also be used to open accounts instantaneously when the user does not have a subscription with a telecom operator. This process is also standardized and unified for the establishment of a new user account at the access point, using the same account-creation method employed by all Wi-Fi Passpoint service providers.

Passpoint creates a global platform centered on four protocols based on EAP (Extensible Authentication Protocol). Standardized and supported by all manufacturers in the area of security, this protocol is very widely employed today. The authentication methods EAP-SIM, EAP-AKA and EAP-TLS enable mobile operators to use the same authentication solutions for cellular and Wi-Fi.

In addition to the ease of use resulting from transparent authentication, Passpoint offers numerous advantages to service providers and users – in particular, the following:

– Internet for electronic devices without a browser. Passpoint's authentication method does not require a browser, facilitating the use of electronic devices such as cameras, onboard devices in cars, connected objects and, more generally, all "things" connected to the Internet;

– simplicity of connection and creation of new subscriptions, whether to attach them to an existing account or to introduce new services. The automation of the authentication process and the security of the connection make Passpoint small-cell access an attractive solution for content providers and manufacturers of content-oriented terminals, such as e-readers. Occasional users of Wi-Fi can use prepaid subscriptions, with a model similar to that used by mobile operators.

Service providers are increasingly eager to protect their subscribers' paying content. In order to do so, they need to know who receives that content. Passpoint authentication enables service providers to verify the identity of the person trying to connect, to access the subscriber's rights and to offer premium-quality content to subscribers connected from their home, a company network or public access points.

Passpoint hotspots offer service providers transparent roaming to other operators' networks. To activate roaming, the service operators first need to establish mutual roaming agreements, covering access validation and charging. Roaming is based on a single protocol for network selection and user authentication in the hotspot.

Once roaming between two service providers is activated, Passpoint devices can connect automatically either to the network of their own service provider or to that of the other, using the same procedure. In all cases, the Passpoint client recognizes the access point as belonging to the list of available networks and establishes a connection automatically, as happens with roaming for cellphones. The new functions are activated only if the access point and the client’s device are Passpoint-compatible. The Passpoint certification program run by the Wi-Fi Alliance ensures this compatibility. In addition, Passpoint supports connectivity with non-Passpoint devices. Clients with valid identification data and using an appropriate EAP method can connect to Passpoint access points. Clients who do not have identification data appropriate for EAP methods have to use an open system, based on authentication in a browser, where WPA2 may not be needed. In this case, security is not guaranteed. Passpoint clients can connect to any existing Wi-Fi network. However, Wi-Fi clients will not necessarily be able to use Passpoint functions and services. For example, standard hotspots offer no security of connection, whereas Passpoint automatically implements WPA2 encryption. Wi-Fi Certified Passpoint is a certification program offered by the Wi-Fi Alliance, aimed at both the access points and the clients’ terminals. Just as for Wi-Fi equipment, it ensures interoperability of the devices and terminals. In Figure 11.8, we illustrate the different types of connections that can be made by a mobile in a Passpoint environment.

Figure 11.8. The integration of 3G/4G/5G and Wi-Fi access

In Figure 11.8, three types of connections are possible: a connection with the NodeB, which is the typical case of connection of a 3G/4G terminal; a connection with a hotspot which is, itself, connected to a Wi-Fi controller; and a connection with one or more HNBs (Home NodeB), located in domestic environments. It would also be possible to add metrocells, which are similar to femtocells but situated in public spaces – e.g. a street – instead of in private areas. The connection choices are made by the Passpoint technology, which is based on a small datacenter situated near the access points. The power of this datacenter becomes essential in this scenario, because the number of actions increases: handovers between two of the access points, the establishment of multiple paths if a user's data stream can be channeled through several access points simultaneously, and, finally, the management of multiple technologies if certain Wi-Fi access points are replaced by connections such as Bluetooth, ZigBee or any other solution available on the market.

11.3.4. Virtualization of Wi-Fi and HNB

Regarding 5G access to the core network, connections are gathered on shared access points through virtualization, giving access to different slices. Access points are virtualized in order to avoid deploying one physical access point per small cell or per operator. Virtualization of the Wi-Fi access point is described in Figure 11.9. The physical antenna is shared between all the virtual access points. The physical box hosts a hypervisor on which virtual machines are built, and each virtual access point runs in one of these virtual machines. The characteristics of the virtual access points can be completely different from one another. Of course, we can replace the Wi-Fi access point with a 3G/4G/5G NodeB, which becomes several NodeBs through virtualization.

Figure 11.9. Virtualization of a Wi-Fi access point

In the same way, an HNB (Home NodeB) can be virtualized and host several operators without any conflict over the infrastructure. Mobility management devices such as the HLR or VLR can also be virtualized, which allows us to share

these servers between several operators. The final goal is, of course, to considerably lower the costs. Figure 11.10 shows the virtualization of two 5G devices: virtualization of a Wi-Fi access point and virtualization of an HLR.

Figure 11.10. Virtualization of 5G devices: (a) spatial: same operating system; (b) temporal: different operating systems

Virtualization of a Wi-Fi access point leads to the creation of several cells which overlap and are dealt with one after another, depending on how the access point's resources are split. Packets going through the same physical antenna can therefore only pass one after another. This is completely different from Wi-Fi 5 (IEEE 802.11ac) technology, in which each antenna has its own lobe that does not cover the other lobes, so that several communications can take place at the same time in different spatial directions. Similarly, in a virtualized HLR, resources are used one at a time, depending on the virtual machine that is being executed.

Figure 11.11 shows the virtualization of an HNB (Home NodeB) or an MNB (Metro NodeB), in which the box has a hypervisor supporting virtual HNBs. Each HNB or MNB uses its own technology, provided that it is compatible with the shared antenna and the box's resources. Thanks to this solution, a home or a building can share the box between several operators. Moreover, each operator could manage several software networks in the same box for different applications. The main limitation of this technology is the antenna capacity, since the different clients share the same resource. The emergence of Wi-Fi 5, Wi-Fi 6, WiGig, White-Fi and HaLow (IEEE 802.11ac, ax, ad, af and ah) should facilitate the arrival of these new boxes thanks to very high data rates. Virtualization gives each user the illusion of being alone, with the desired capacity.
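
The temporal sharing described above can be pictured with a minimal scheduling sketch: one physical antenna is handed to one virtual access point at a time, in proportion to a configured weight. The operator names, weights and packet queues below are invented for the example; a real hypervisor would of course schedule at the level of radio resources, not of a Python list.

# Minimal sketch of temporal sharing of one physical antenna between
# virtual access points: a weighted round-robin hands the radio to one
# virtual AP at a time.  Weights and queues are illustrative.
from collections import deque

virtual_aps = {
    "operatorA": {"weight": 2, "queue": deque(["a1", "a2", "a3", "a4"])},
    "operatorB": {"weight": 1, "queue": deque(["b1", "b2"])},
}

def schedule(aps, rounds):
    sent = []
    for _ in range(rounds):
        for name, ap in aps.items():
            # Each virtual AP may transmit 'weight' packets per round,
            # then must hand the antenna back.
            for _ in range(ap["weight"]):
                if ap["queue"]:
                    sent.append((name, ap["queue"].popleft()))
    return sent

print(schedule(virtual_aps, rounds=2))
# -> [('operatorA', 'a1'), ('operatorA', 'a2'), ('operatorB', 'b1'),
#     ('operatorA', 'a3'), ('operatorA', 'a4'), ('operatorB', 'b2')]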

Figure 11.11. HNB virtualization

Figure 11.12. Virtualization of a mesh access network

Virtualization is also found in other types of 5G networks, such as the mesh network. This case is described in Figure 11.12. The machines managing the mesh network have two interfaces: one towards the users and the other towards the other mesh machines. These machines are virtualized with a hypervisor which supports routers or mesh switches, depending on the technology that is used. For example, with the TRILL protocol, the machines can communicate using level-2 routing. Of course, IP-level routing is also possible; it has been thoroughly developed by the IETF's MANET (Mobile Ad hoc NETwork) group.

One of the advantages of virtualization is to give some networks more resources than they would have had if there had been only one network. Indeed, in the case of a single class of service, resources are simply shared. With a priority class, the associated clients have more resources than clients associated with the other classes.
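
One plausible way to implement this kind of sharing – each virtual network receives a guaranteed minimum, the priority network is served first, and whatever it leaves unused is redistributed to the others – is sketched below (and is illustrated by the VoIP example discussed next). The capacity figures and network names are purely illustrative.

# Sketch of capacity sharing between virtual networks on one physical
# link: guaranteed minima first, then leftover capacity handed out in
# priority order.  All numbers are illustrative (Mbps).
link_capacity = 100.0
slices = [  # lower 'priority' value = served first
    {"name": "VoIP",        "priority": 0, "min": 10.0, "demand": 6.0},
    {"name": "best-effort", "priority": 1, "min": 20.0, "demand": 90.0},
    {"name": "IoT",         "priority": 2, "min": 5.0,  "demand": 30.0},
]

def allocate(cap, nets):
    alloc = {n["name"]: min(n["min"], n["demand"]) for n in nets}
    left = cap - sum(alloc.values())
    for n in sorted(nets, key=lambda n: n["priority"]):
        extra = min(n["demand"] - alloc[n["name"]], left)
        if extra > 0:
            alloc[n["name"]] += extra
            left -= extra
    return alloc

print(allocate(link_capacity, slices))
# VoIP keeps only what it actually needs (6 Mbps); the unused part of its
# guarantee is given back and absorbed by the lower-priority networks.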

If we take the example of a VoIP (Voice over IP) application, the software network assigned to this application can use many more resources than it would have been able to use without a priority class. Since the networks are isolated, the VoIP network is sure to receive at least the resources it really needs to ensure the right level of service quality. Resources that have not been used are given back to the other networks, which have lower priority. In this way, priorities can be arranged hierarchically, knowing that, within the same infrastructure, the lowest-priority network will receive the minimum resources attributed to it when resources are assigned to each network.

We can do exactly the same for the backhaul networks that link the access networks to the core network. Backhaul network virtualization guarantees quality of service to the virtual networks that require it. Moreover, if the access networks are virtualized, the backhaul networks must also be virtualized so that clients from the different service classes keep the same priority level.

11.3.5. Backhaul networks

Backhaul networks form the intermediary networks between the access networks to which the clients are connected and the core network. Such backhaul networks are becoming increasingly important, because small-cell technology requires a very large number of connections to all the access points, instead of concentrating the connections on a small number of large antennas. It can be said that, up until now, backhaul networks have essentially consisted of links between DSLAMs and the core network, or between NodeBs and the core network. Most typically, these networks use Gigabit Ethernet or MPLS technology.

The solution developing with small cells relies on mesh networks, or networks of access points, in which the boxes are directly interconnected. For this purpose, mesh access points have two Wi-Fi ports: one to communicate with the neighboring access points and the other to connect the clients. A client wishing to connect plugs into one access point, which transmits the packets to the neighboring access point, which in turn transmits them to its neighbor, and so on until they reach the core network, over an optical fiber or an xDSL modem. Typically, the access points are connected to one another by Wi-Fi, but other solutions, with longer ranges, are developing, as we will see.

Figure 11.13 shows a conventional configuration for a backhaul network. The small cells are connected to a "Small Cell Backhaul Aggregation" station, which harvests the data streams from the different cells and aggregates them, forwarding them on to the core network, either by optical fiber or by point-to-point radio links. The flows are directed to switches in the RAN (Radio Access Network), which are, themselves,

connected to a switch in the core network. This link is often made using very high-data-rate MPLS technology.

Figure 11.13. A backhaul network

The connections between the access points and the aggregators, or between the access points and the first RAN switch, increasingly use wireless links with a very high data rate – several gigabits per second – in high-frequency bands, e.g. the 60 GHz band. The major disadvantage of using these frequencies is that there must be a direct line of sight and favorable weather conditions, or a short distance. Indeed, signals at these millimeter-wave frequencies are disturbed by drops of water and dust in the atmosphere, all the more so when the distance between the two antennas is long. More specifically, the connection can use the 45 GHz, 57–64 GHz or 70–80 GHz bands, or even approximately 105 GHz. Seemingly, the most popular band is 60 GHz, for a variety of reasons: the bandwidth available, data rates of several gigabits per second, low interference (because the connections are highly directional), low latency, the small size of the components and antennas and, finally, a low production cost. The drawback to using this band is that two communicating antennas cannot be more than a kilometer apart, so as to avoid excessive losses.

Sometimes, as explained above, this intermediate network is split into fronthaul and backhaul. Fronthaul is the part that links the antenna to its controller – for example, linking the antenna and the MEC datacenter when the MEC datacenter is located close to the antenna. The backhaul is the second part, between the MEC datacenter and the core network. With the emergence of 5G, fronthaul is disappearing, since the antennas are directly connected to the core network; the core network actually starts at the MEC datacenter.
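
The order of magnitude behind this one-kilometer limit can be checked with the free-space path-loss formula. The quick estimate below deliberately ignores rain fading and the oxygen absorption peak around 60 GHz (on the order of 15 dB/km), which make the real loss noticeably worse.

# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
# with d in meters and f in hertz.  Atmospheric absorption is ignored here.
from math import log10, pi

def fspl_db(distance_m, freq_hz):
    c = 3.0e8
    return 20 * log10(distance_m) + 20 * log10(freq_hz) + 20 * log10(4 * pi / c)

for f_ghz in (2.4, 5.8, 60.0):
    print("%5.1f GHz over 1 km: %.0f dB" % (f_ghz, fspl_db(1000.0, f_ghz * 1e9)))
# A 60 GHz link loses roughly 28 dB more than a 2.4 GHz link over the same
# kilometer, which is why highly directional antennas and short hops are needed.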

11.4. Software radio and radio virtual machine

Undoubtedly, cognitive radio represents one of the greatest revolutions of this new millennium. It facilitates much better use of the radio-frequency spectrum, which is generally very poorly exploited. However, the optimal use of the spectrum would be impossible without a certain number of other, parallel developments, which we will now go on to discuss. We have already encountered this technology with TVWS (TV White Space) Wi-Fi in the IEEE 802.11af standard, pertaining to the use of unoccupied television bands.

Software radio, or SDR (Software-Defined Radio), defines a radio transmitter or receiver which is in software, rather than hardware, form. The implications of such technology are evident, because the characteristics of the radio interface can be modified endlessly without any technical problems at all; users will have the interface they need at a given time t and a completely different interface at time t + 1. Obviously, a certain amount of hardware is needed to perform the computations of encoding, modulation and filtering, and for the transmission itself. The computational power may be hosted on the device itself, or – as we have seen – in the mobile Cloud, in a local Cloudlet, a regional Cloud or even a central Cloud. At present, progress in this area is being made very rapidly, because it is possible to condense all the computations for a large number of access points onto a single machine.

Nowadays, we are seeing the emergence of a particularly interesting, attractive solution: using a single antenna for all radio communications. The signal is sent to the appropriate virtual machine for processing. In other words, at certain times, the antenna may receive Wi-Fi signals, and then at other times, signals from a 3G communication, then 4G, then a TV signal, then ZigBee, and all of it is handled by virtual signal-processing machines. The Cloud, once again, is important in these technological advances, and we can clearly see the amount of power and memory needed to support the necessary software machines. However, sometimes there is insufficient power to process the signals in real time if the transmission rate increases. This is the reason for the current drive towards concretization – i.e. the opposite of virtualization. Software is transformed into hardware, using reconfigurable microprocessors. Whenever a specific program is needed, a hardware version of it is created, which increases the processing speed 100-fold. As the process of transforming the

hardware is, as yet, relatively slow, we generally need several reconfigurable processors in order to offer real-time applications: while one processor is transforming, the others are running. In Figure 11.14, we have represented what can likely be expected from software technologies in the world of radio over the coming years. The objectives are plotted on the basis of the flexibility that software radio can offer and the intelligence offered by cognitive radio.
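
As a toy illustration of what "radio in software" means in practice, the sketch below modulates a few bits in BPSK, adds noise and demodulates them entirely in ordinary code. It is only a caricature of an SDR chain: a real implementation would add filtering, synchronization, channel coding and much higher sample rates.

# Toy software-radio chain: BPSK modulation, additive noise, demodulation,
# all done in software rather than in dedicated hardware.
import random

def bpsk_modulate(bits):
    return [1.0 if b else -1.0 for b in bits]          # bit -> symbol

def awgn(symbols, noise_std):
    return [s + random.gauss(0.0, noise_std) for s in symbols]

def bpsk_demodulate(samples):
    return [1 if s > 0 else 0 for s in samples]        # decision by sign

bits = [random.randint(0, 1) for _ in range(1000)]
received = bpsk_demodulate(awgn(bpsk_modulate(bits), noise_std=0.5))
errors = sum(b != r for b, r in zip(bits, received))
print("bit errors: %d / %d" % (errors, len(bits)))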

Figure 11.14. The different advances made in the field of software radio

Thanks to these technologies, it is predicted that data rates will increase by a factor of 1,000 within the next 10 years.

11.5. 5G

5G is 4G's successor. It is likely to be 2020 before it is fully standardized, and a few years after that before it is widely commercialized. As mentioned earlier in this chapter, the arrival of 5G is a two-phase process. The first phase, release 15, is the radio part, also called NR (New Radio), and was finalized in mid-2018. The second phase, release 16, defines the core network; its standardization will end in June 2020. Together, these two standards form 5G. There are many new developments on the radio side of 5G, and further developments in the core network. Let us start with the developments for the radio.

The first development regards the peak data rate, which should be between 1 and 10 Gbps. The second development concerns the services available within 5G. Among them, we find services for the Internet of Things; the objects essentially come from the fields of health care, domotics, vehicles, etc., and most of the time they are sensors. We also find services with very short reaction times, such as vehicle control, and services offering high data rates under very high mobility.

The characteristics of 5G involve numerous key terms, including:

– UWB (Ultra Wide Band): use of a very wide spectral band, which can be purchased from states or used in cognitive radio;

– smart antennas: antennas capable of operating with diverse encoding techniques and in practically any frequency band;

– small cells: as we saw earlier, small cells help to exponentially increase data rates by reusing the available frequencies;

– software-based approach, using virtualization: this approach is widely used with this new generation of technology. In our coming discussion, we will look at several examples of virtualization, but this technology is also encountered in at least three main techniques:
- software-defined radio,
- software-defined networking,
- seamless combining of wide-band networks;

– virtualization: virtualization takes place in the devices themselves, such as:
- HLRs, VLRs, eNodeBs, RANs,
- Cloud-RANs;

– multi-access, multihoming, multi-technology and multi-route, which are technologies that have been developing in the form of products available since 2014;

– D2D (Device to Device), D2D2D and mesh networks, which represent simplified access techniques designed to facilitate direct communication from one terminal to another:
- optimization of energy consumption;
- low-cost connection.

Figure 11.15 represents several situations that can be handled by 5G. Starting on the left-hand side of the figure, we see multi-hop communications: the terminal is

too far away from the gNodeB, and the signals pass through intermediary machines. This solution helps to limit the number of antennas while covering a large geographical area. Mesh networks or ad hoc networks provide this type of service, using algorithms to route the signals between the different terminals. Next, we see D2D (Device to Device) – i.e. direct communication between two terminals without going through an antenna. This is a very eco-friendly solution, as it saves energy and reduces the load on the network, thus decreasing electromagnetic pollution. Then, we find high-reliability solutions, whereby one path to reach the antenna can be replaced by another path almost instantaneously. Next, the figure shows connections to machines, for M2M (Machine-to-Machine) or M2H (Machine-to-Human) communication. The connection of things is also represented, by medical devices, household appliances and other items. The figure also illustrates the connection to an intermediary relay to reach the main antenna: indeed, as the energy expenditure grows with the square of the distance, it is very much to our advantage to use devices with short ranges. Finally, the same figure raises the problem of high density – e.g. the coverage of a stadium holding 100,000 people, with each spectator being able to enjoy a data rate of approximately 10 Mbps. The total data rate of 1 Tbps, in this scenario, could easily be delivered by an optical fiber, but the difficulty lies in distributing that capacity wirelessly. 4G and 5G technologies on their own are incapable of delivering such performances: we would need either extremely small cells or extremely high bandwidths.
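
The stadium figure above is easy to reconstruct as a back-of-the-envelope calculation; the per-cell capacity assumed below (1 Gbps of usable throughput) is purely illustrative and not a 4G/5G specification.

# Back-of-the-envelope check of the stadium example.
spectators = 100_000
rate_per_user_mbps = 10
total_gbps = spectators * rate_per_user_mbps / 1000
print("Aggregate demand: %.0f Gbps (~1 Tbps)" % total_gbps)

# If each small cell can deliver, say, 1 Gbps of usable capacity
# (an assumption for the sake of the example), the stadium needs:
cell_capacity_gbps = 1.0
print("Small cells needed: %d" % (total_gbps / cell_capacity_gbps))
# i.e. on the order of a thousand very small, strongly isolated cells.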

Figure 11.15. The different 5G access solutions

Finally, Figure 11.16 shows the switch from 4G to 5G. In 4G, applications seek a datacenter located at the heart of the network. In 5G, each access to the core network bears an MEC datacenter. Of course, there are central datacenters if required.

Figure 11.16. From 4G to 5G

Integration of access to the core network can also be performed through a PON (Passive Optical Network). The most important evolution is not really visible. This is the switch to software networks for the 5G core network, which we will describe later on.

In summary, 5G networks should facilitate the integration of multiple wireless/mobile networks, which should allow for all classic applications and those of future society. This generation should also provide solutions to numerous problems; a few examples will be mentioned below. Most solutions will be heavily dependent on virtualization. The main problems are expressed by the following questions:

– how can we avoid having too great a number of access points?

– how can we avoid too cumbersome a cable infrastructure if numerous operators have developed small-cell infrastructures?

– how is it possible to ensure the quality of service in these overlapping and interlocking networks?

– how can the backhaul networks ensure continuity between heavily virtualized endpoints of the network and the core network, which is, itself, virtualized?

11.5.1. 5G radio

The 5G network has a rather heterogeneous radio in order to handle very different services, such as the connection of objects requiring particularly low data rates and connections to smartphones, for instance on a train moving at very high speed. The radio requires a set of techniques and frequency bands going from very low frequencies up to 120 GHz. The use of millimeter waves above 20 GHz allows us to move towards very high data rates and to unify the techniques used in user access and in backhaul. Radio access can also use open bands, that is, without the need for a license, provided the regulations attached to those open bands are respected. In 5G, it is thus possible to have bands with an exclusive license, a non-exclusive license or no license at all. The 5G system handles both the licensed and the license-free spectrum. In both cases, interfaces can be shared between several operators. This approach is encouraged by regulatory organizations to optimize the use of resources such as frequencies, which are expensive and sometimes under-used by one operator while another operator is overloaded for lack of them.

The various technologies for accessing the antennas, developed over a long period for 2G, 3G and 4G, are very complex, so a new unified radio interface has been developed, which offers adaptability and flexibility. This new unified access interface includes building blocks and configuration mechanisms which allow us to automatically adapt the waveforms, protocols, frame structures, coding, modulation and multiple-access schemes. Thanks to these building blocks and mechanisms, the air interface is able to easily take care of everything, from the bands of the spectrum allocated to 5G, to the traffic, the quality of service and, more generally, the properties linked to the different applications.

The key technological components described in Figure 11.17 include a new technology to optimize the use of frequency bands by shaping the waveforms, F-OFDM (Filtered Orthogonal Frequency Division Multiplexing), a new multiple-access technology, SCMA (Sparse Code Multiple Access), a new error-correction code, the polar code, the duplex mode and massive MIMO technology. The new structure of the air interface improves spectral efficiency, improves connectivity and reduces latency, thus facilitating the deployment of new use cases such as the integration of numerous sensors in the smart city, virtual reality requiring very high data rates, or the control of autonomous vehicles.

Figure 11.17. Key technological components of 5G

F-OFDM (Filtered-OFDM) is also a key element. It simultaneously exploits several subcarriers in a given frequency band. This technology defines the waveforms – more exactly, the different waveforms, access schemes and frame structures used, depending on the application scenarios and the needs of the service to be performed. The solution allows different waveforms, corresponding to different OFDM parameter sets, to coexist, as described in Figure 11.18. By allowing different parameter configurations, Filtered-OFDM can offer an optimal choice of parameters for each service group, which means better overall system efficiency.

Figure 11.18. Coexistence of three different shapes of waves linked to OFDM
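
Although the exact parameter sets are not listed here, the flavour of "different OFDM parameters for different services" can be illustrated with the scalable numerology adopted for 5G NR, where the subcarrier spacing is 15 kHz x 2^mu and the symbol (and slot) duration shrinks accordingly:

# Illustration of scalable OFDM parameters (5G NR-style numerology):
# wider subcarrier spacing -> shorter symbols and slots -> lower latency,
# at the price of shorter cyclic prefixes and smaller cells.
for mu in range(0, 4):
    scs_khz = 15 * 2 ** mu                 # subcarrier spacing
    symbol_us = 1e6 / (scs_khz * 1e3)      # useful OFDM symbol duration = 1/spacing
    slot_ms = 1.0 / 2 ** mu                # duration of a 14-symbol slot
    print("mu=%d  spacing=%3d kHz  symbol=%6.2f us  slot=%5.3f ms"
          % (mu, scs_khz, symbol_us, slot_ms))

A latency-critical service can thus be scheduled on slots of 0.125 ms, while a narrowband IoT service keeps the longer, more robust 1 ms slots, all on the same carrier.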

SCMA (Sparse Code Multiple Access) is a new waveform configuration for the new radio interface which allows more flexibility. This non-orthogonal waveform enables a new multiple-access scheme in which several codewords, coming from different peripheral devices, are overlaid in the code and power domains and carried on shared time-frequency resources. Usually, the multiplexing of several peripheral devices becomes ineffective if the number of overlaid layers is greater than the length of the multiplexed codewords. However, with SCMA, such an overload is tolerable, at the price of a more or less significant detection complexity. In SCMA, the coded bits are mapped directly to codewords selected from a codebook specific to each layer. The complexity of the detection depends strongly on the size of the codebook.

Polar codes are another major breakthrough for 5G. They form a class of error-correcting codes discovered in 2009. They can reach the maximum channel capacity defined by Shannon with simple encoders and decoders when the block size is large enough. Polar codes have raised a lot of interest, and much research has been done, mainly on code construction and decoding algorithms. A very positive aspect of this technique is its low encoding and decoding complexity, which allows encoding and decoding to be performed in real time at very high data rates. One of the most important decoding algorithms is SC (Successive Cancellation) list decoding. Many performance evaluations show that polar codes concatenated with a cyclic redundancy check (CRC) and an adaptive SC list decoder can outperform the turbo codes used in 4G. Thanks to polar codes, 5G clearly improves on 4G performance.

The last great novelty of 5G is the use of massive MIMO. This technique uses a very large number of antennas, linking M antennas to K peripheral devices, where M is very large compared to K. Figure 11.19 shows an illustration of this technique. From a commercial point of view, massive MIMO is an attractive solution because its efficiency can be 100–1,000 times higher than that of 4G techniques. The idea is to perform several simultaneous communications using the same time-frequency resource. Spectral efficiency is much higher thanks to this technique.

Figure 11.19. A massive MIMO system
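
Returning briefly to polar codes, the low complexity mentioned above can be made tangible with a minimal, non-standard-compliant sketch of the basic polar transform: the message is placed on a subset of "good" positions, the frozen positions are set to zero, and the transform itself is just log2(N) stages of XOR butterflies. The choice of information positions below is arbitrary for the example; a real design derives it from channel-reliability calculations and adds the CRC and list decoder discussed above.

# Minimal polar encoder sketch: the codeword is the message vector u
# multiplied by the n-fold Kronecker product of F = [[1,0],[1,1]],
# computed here as XOR butterflies.  Not 5G-compliant.
def polar_transform(u):
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for base in range(0, n, 2 * step):
            for j in range(step):
                x[base + j] ^= x[base + j + step]
        step *= 2
    return x

N = 8
info_positions = [3, 5, 6, 7]          # arbitrary choice for the example
message = [1, 0, 1, 1]

u = [0] * N                             # frozen bits set to 0
for pos, bit in zip(info_positions, message):
    u[pos] = bit

codeword = polar_transform(u)
print("u        :", u)
print("codeword :", codeword)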

11.5.2. The core network

The main technique of the core network is called slicing, meaning that each network forms a slice of a shared infrastructure. A slice is actually a virtual network. In this way, the core network is a set of virtual networks sharing the same infrastructure. A slice may be:

– a network to connect objects: the Internet of Things;

– a network to control autonomous vehicles;

– a network to manage industrial processes;

– an MPLS-type network;

– a network for VoIP.

However, a slice can also cover other types of networks, for example:

– a network for each large application;

– a network for a large company;

– a network for a small company, perhaps;

– a network per user, perhaps.

One of the complex problems that remain to be solved is the urbanization of the virtual machines and the overall control of the infrastructure network. Indeed, the slices must be isolated so that an attack coming from one network cannot destroy the other slices. Clearly, if the slices are very different from one another, each must have its own management and control system. Slices can also use the same protocols; in that case, a common management system is possible, but we then need to be sure that no attack on the management and control system is possible, which seems impossible to prove. However, resources must be shared in order to achieve high utilization of these resources, which means lower costs.

Figure 11.20 shows a diagram produced by the 3GPP. It presents the main initial slices and the additional slices that will be added gradually. In the long run, a slice could be set up upon a user's request for a specific service and destroyed after it has been used.

Figure 11.20. Basic slicing of 5G core network
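
Very schematically, a slice can be thought of as a record binding an isolated virtual network to the resources and functions it is allowed to use. The sketch below is only meant to fix ideas: the field names and figures are invented for illustration and are not 3GPP terminology.

# Very schematic representation of network slices sharing one
# infrastructure.  Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str
    latency_ms: float          # target latency for the slice
    min_bandwidth_mbps: float  # guaranteed share of the infrastructure
    isolated: bool = True      # slices must not be able to affect each other
    vnfs: list = field(default_factory=list)   # virtual functions of the slice

core = [
    Slice("IoT",                 latency_ms=1000, min_bandwidth_mbps=5,
          vnfs=["virtual gateway", "device management"]),
    Slice("autonomous vehicles", latency_ms=5,    min_bandwidth_mbps=50,
          vnfs=["virtual RAN", "local MEC processing"]),
    Slice("VoIP",                latency_ms=50,   min_bandwidth_mbps=20,
          vnfs=["session control", "transcoding"]),
]

for s in core:
    print("%-20s latency<=%sms  guaranteed %s Mbps" %
          (s.name, s.latency_ms, s.min_bandwidth_mbps))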

11.5.3. C-RAN

C-RAN, or Cloud-Radio Access Network, is a system put forward in 2010 by China Mobile, with the aim of profoundly changing the access and backhaul networks between the core network and the peripheral devices. We have already mentioned this technology in several chapters, such as the chapter on MEC (Mobile Edge Computing). C-RAN architecture is based on the Cloud: all the control algorithms are handled by datacenters. This architectural arrangement could have been reached simply by continuing to use the conventional access networks connecting clients to the NodeBs. Nonetheless, the proposition is fairly revolutionary, in that the access network is eliminated completely and replaced with a very simple solution whereby the radio signal is sent directly to the datacenter. The radio signal is transferred to the Cloud unmodified and without being embedded in a packet. The signal processing takes place in the Cloud.

The original idea of C-RAN is shown in Figure 11.21. The terminals are connected to the antennas, which re-transmit the signal to the central Cloud, where this signal is decoded. Once it is received by the antenna, the signal is forwarded over an optical fiber, using a technique known as RoF (Radio over Fiber). Another advantage of RoF technology is that the same antenna can be used to handle very different signals, such as 3G, 4G,

5G or Wi-Fi. It is the Cloud that unscrambles the signals and determines their characteristics in order to be able to decode them.

Figure 11.21. The fully centralized Cloud-RAN architecture. For a color version of this figure, www.iste.co.uk/pujolle/software2.zip

We can also cite other advantages, such as the low attenuation of the signal over the fiber, so the datacenter can be far away from the source. The cost of connection is very low, thanks to multiplexing on the same antenna and then on the same optical fiber. The complexity of the access network is eliminated, and the datacenter handles this complexity with appropriate virtual machines. This technology is found in the cabling solutions used at certain large arenas or shopping centers. The main disadvantage stems from the use of FTTA (Fiber to the Antenna) which is necessary for the communication. If the building containing the Wi-Fi access points is not cabled, then installing an optical cable can often be extremely costly if it requires heavy civil construction. The optical fiber is connected to the datacenter by a BBU (BaseBand Unit) and, on the other end, is connected to the physical antenna by an RRH (Remote Radio Head). The standardization currently under way for NFV (Network Functions Virtualization) includes C-RAN. Functions in the NFV environment are defined to decouple the work carried out on the node, which was previously done by the antenna, and move it to a datacenter.
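
The price to pay for carrying the raw radio signal between RRH and BBU can be estimated with a simple CPRI-style bit-rate calculation. The parameter values below are typical published figures for a 20 MHz LTE carrier with two antennas and are used here only as an order-of-magnitude check.

# Rough estimate of the fronthaul bit rate needed to carry a digitized
# radio signal from the RRH to the BBU (CPRI-style transport).
sample_rate = 30.72e6      # I/Q samples per second for a 20 MHz LTE carrier
bits_per_sample = 2 * 15   # 15 bits for I and 15 bits for Q
antennas = 2               # 2x2 MIMO
control_overhead = 16 / 15 # control words interleaved with the data
line_coding = 10 / 8       # 8b/10b line coding

rate_bps = sample_rate * bits_per_sample * antennas * control_overhead * line_coding
print("Fronthaul rate: %.2f Gbps" % (rate_bps / 1e9))
# ~2.5 Gbps of constant traffic for a single 20 MHz, 2-antenna sector,
# whatever the user traffic actually is - hence the need for optical fiber.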

Whilst the C-RAN architecture is an interesting option for countries wishing to develop a new infrastructure, it is much less attractive to countries that already have a local-loop infrastructure for RAN. In such cases, the proposition is to keep the existing structure and add MEC datacenters near to the antennas to perform computation locally. The computation pertains to the signal, but also to the signaling, with algorithms to manage handover or the choice of connection to be favored when multiple options are available. This architecture is illustrated in Figure 11.22.

Figure 11.22. The partially distributed Cloud-RAN architecture. For a color version of this figure, www.iste.co.uk/pujolle/software2.zip

12 The Internet of Things

The Internet of Things was born with the idea of connecting wireless and hardwired sensors – found in the home, at work and more or less everywhere in life today – to the Internet. One of the most widely used devices is RFID (Radio-Frequency Identification), or electronic tags. The Internet of Things enables us to connect anything and everything that is connectable, from objects which are more or less big down to smart dust. The concept is simple, but the problems are numerous, because in general, the "things" in question are not sufficiently sophisticated to handle the communications and computations associated with the applications.

We will begin by examining sensor networks, followed by RFID networks and the NFC (Near-Field Communication) interface – which is increasingly being used in the connection of "things" to the Internet – and HIP (Host Identity Protocol), which is one of the standards for interconnection gateways between "things" and the Internet. We will then touch on a few ideas about one of the most rapidly developing types of networks: medical sensor networks, which could potentially affect seven billion people and become an enormous market in the future. Finally, we will look at a case study on the smart city.

Virtualization plays an important part in the Internet of Things, even if the objects themselves can rarely be virtualized. Indeed, sensors need a hardware environment to sense temperature, pressure, light, etc. What virtualization concerns is the processing of the data produced by the devices or objects: rather than performing this processing in the object, it is offloaded to a datacenter located more or less close to the object. Through virtualization, we can have the impression that the processing of the data flows is performed directly in the sensor. Depending on the response time required, on the complexity of the computation and on the correlations to be performed with other flows, the processing will be carried out very close to the object, in skin datacenters,

otherwise in the Fog, which is the most common case. It can also be performed farther away, towards the center of the network.

12.1. Sensor networks

A sensor network is defined as a set of interconnected sensors, with each sensor having a transmitter-receiver. Sensor networks form a new generation of networks with specific properties, which do not fit into the framework of conventional architectures. However, the dawn of the Internet of Things has altered our view of sensor networks, which may, of course, still constitute a closed system, but may also connect to the Internet.

The miniaturization of the sensors poses problems in terms of communication and energy resources. The sensor needs to be sufficiently intelligent to gather the required information and transmit it correctly. In addition, the sensor's processor must not be used too intensively, so as to consume as little energy as possible. Thus, it must incorporate reactive elements, rather than cognitive ones. Finally, in order to ensure a good data rate, the range of the transmitter-receivers must necessarily be small – around ten meters. Therefore, the establishment of a sensor network poses problems of routing, error control and power management. Very special sensors are involved in what we call "smart dust". These dust particles, which are practically invisible, have a radio device in addition to performing the computations relating to the internal sensor.

From the standpoint of communication, the IP environment is too heavy, leading to excessive overhead and overconsumption. Solutions derived from terrestrial networks, or from industrial real-time networks, offer a better compromise between efficiency and power consumed. As there may be hundreds of such sensors per square meter, IPv6 routing represents one obvious solution to deal with the problem of addresses. However, the IP environment is cumbersome, and it is often impossible, in these sensor networks, to use a TCP/IP or even a UDP/IP protocol stack. For the moment, security and QoS problems are taking a back seat to energy consumption problems. In any case, a great deal of research is currently being done to render sensor networks efficient and robust.

The main radio standards are ZigBee, WiBree and 6LowPAN. WiBree is a very low-consumption technology with a range of 10 m and a data rate of 1 Mbps. This solution was developed by Nokia to compete with both ZigBee and Bluetooth. However, it was integrated with Bluetooth in 2009, with the resulting product being called Bluetooth LE (Low Energy) or BLE.

Networks for 6LowPAN (IPv6 over Low power Wireless Personal Area Networks) were proposed by a working group at the IETF. The objective, obviously, is to facilitate the continuity of IP on low-powered machines with limited electrical consumption. However, using the IPv6 standard to obtain a very large number of addresses for immense sensor networks poses a problem: the 16 bytes occupied by the transmitter's address, added to the 16 bytes of the receiver's address, plus the obligatory fields, lead to poor usage of the radio link, much of which is taken up by this control information. This could become genuinely problematic in light of how little energy the sensor has. ZigBee, moreover, limits the length of its frame to 127 bytes, which may also cause problems if the message supplied by a sensor for transmission is long.

Sensors form ad hoc networks, and they need a routing protocol. The use of a protocol such as IEEE 802.11s, in combination with IPv6 addresses, would be catastrophic for the sensors' battery life. For this reason, current propositions are much simpler, with protocols such as LOAD (6LowPAN Ad hoc On-Demand Distance Vector Routing Protocol), a simplified version of AODV, DyMO-Low (Dynamic MANET On-demand for 6LowPAN), a simplification of DyMO from the MANET working group, and Hi-Low (Hierarchical Routing over 6LowPAN). These different protocols stem from propositions by the IETF, and therefore from the standardization of ad hoc networks, but they do not include all the potential options.

Another important characteristic of the protocols used in sensor networks is service discovery, which should enable the automatic establishment of a connection. The IETF also plays an important role in this domain, having published several proposed solutions – one of which is sensor-oriented: the LowPAN Neighbor Discovery Extension. This protocol is a reduced version of the Neighbor Discovery standard, in which the energy-consuming elements, including broadcast communications and multicast management communications, are cut back.

A very special example of a sensor network is offered by smart dust, the aim of which is to develop nanotechnology sensors and to connect them to one another by an ad hoc network. Smart dust fits into a space smaller than a cubic millimeter – hence the name. Each grain of this dust contains the components necessary to constitute a communicative computer: a processor, memory, radio, battery, etc. Here, the main problem is saving energy while performing the sensor functions. In particular, the network part must carry out communications using very little energy indeed. Thus, the University of California, Berkeley, has designed a specific operating system and protocols, known as TinyOS and the Tiny protocol. TinyOS is written in a simplified C language called nesC, a dialect designed to optimize memory usage.
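
The overhead problem can be quantified in a couple of lines: a plain IPv6 header (40 bytes, of which 32 are the two addresses) in a 127-byte IEEE 802.15.4 frame leaves very little room for data, which is exactly what 6LowPAN header compression addresses. The MAC-overhead figure used below is an assumption; it varies with the addressing and security options chosen.

# How much of a 127-byte 802.15.4 frame is left for application data
# once a full IPv6 (+UDP) header has been carried?
frame = 127
mac_overhead = 25         # MAC header + FCS (assumed; depends on options)
ipv6_header = 40          # includes 2 x 16-byte addresses
udp_header = 8

payload = frame - mac_overhead - ipv6_header - udp_header
print("Application payload per frame : %d bytes" % payload)
print("Header share of the frame     : %.0f%%" %
      (100.0 * (frame - payload) / frame))

With these assumptions, more than half of every frame – and therefore of the precious radio energy – is spent on headers rather than on sensor data.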

12.2. RFID

RFID (Radio-Frequency Identification), or radio-identification, was introduced for the purpose of identifying objects, so it is also known as electronic tagging. Electronic tags are scanned by a reader, which is able to recover the identification data. The tags are used in numerous applications, from the tracking of animals to security tags for products sold in stores.

There are two main types of electronic tags: passive and active tags. Passive tags have no energy source. They are activated by a reader which supplies a sufficiently strong electromagnetic field to generate an electrical current, facilitating the radio-wave transmission of the binary elements stored in an EEPROM memory and constituting the RFID identification. A passive tag is illustrated in Figure 12.1. The antenna must be built in such a way as to be able to receive the electromagnetic field from the reader and transmit its identity.

Figure 12.1. A passive RFID

A passive RFID may be very small. As the equipment requirements are limited, sizes of around a tenth of a millimeter are sufficient. The price of an electronic tag depends on the number of elements manufactured in one batch. It is possible to go as low as several euro cents. Active electronic tags have a source of electrical supply within the component. The first, extremely significant, advantage of such tags is the quality of the transmission. A session can be established between the reader and the RFID so that the signal is automatically retransmitted in case of a problem. Another advantage is

transmission with a range of several meters between the RFID and the reader, instead of only a few centimeters. One disadvantage could be the battery life. However, with typical usage of a few readings per day, it is possible to achieve battery life of over 10 years. The active RFID may have a larger memory for storing attributes associated with the value of the ID. An active RFID is shown in Figure 12.2.

Figure 12.2. Active RFID

One very well-known application of RFIDs is the electronic passport. The e-passport is defined in a text published by the ICAO (International Civil Aviation Organization). The chip in the e-passport contains the same data that are printed on the passport itself, along with a digitized photo of the passport holder.

Transport documentation (tickets, etc.) is a second application for RFIDs. For example, the tickets sold on the Paris Metro contain an electronic tag which memorizes a set of data including the date and place of purchase. More sophisticated solutions are used on public transport in Seoul, where the tag is active and contains money, so that the same ticket can be used multiple times. Toll barriers on the roads also use this active RFID solution, with ranges of a few meters. The active tag is able to subtract the cost of the toll from the amount of money stored in the memory. Similarly, toll barriers for the ski lifts use active RFIDs.

Another fairly immediate application is the tracking of vehicles, for example to detect stolen cars when they pass near an RFID reader. Finally, two of the best-known applications of RFID are for inventory and for purchases in stores. Inventories can be taken more often and with fewer errors. Similarly, labels on products dropped into a shopping cart can be recorded by the reader, which greatly simplifies the process of payment for those items in a shop.

RFID transmission frequencies are determined by local or regional standardization bodies. The main ones are indicated in Table 12.1.

125 kHz (LF): first solution to offer a relatively large range for passive RFIDs.
13.56 MHz (HF): one of the standardized frequencies, very widely used for passive RFIDs.
400 MHz: a number of specific uses, such as the detection of stolen vehicles.
865–868 MHz (UHF): frequency band standardized in Europe for intensive RFID use.
902–928 MHz (UHF): frequency band standardized for North America.
2.4–2.4835 GHz: ISM open band in which numerous RFID applications are likely to develop.

Table 12.1. RFID transmission frequencies

The purpose of RFIDs is to give the identity of the objects to which they are attached. This identification system has been standardized by the EPCglobal consortium. Two generations are available: EPC Gen1 and EPC Gen2. We will focus particularly on the second generation, released in mid-2006, which has become an industrial standard. EPC Gen2 is short for "EPCglobal UHF Class1 Generation 2". Version 1.1 of this specification was released in May 2006. It defines the protocol between the reader, on the one hand, and the RFID and its identity, on the other. The purpose of the protocol is to read, write or kill (permanently deactivate) an RFID, so that readers sold by all manufacturers are interchangeable.

The reading procedure is defined using a timeslot-based system with an anti-collision mechanism. Specifically, a reader can simultaneously trigger a large number of RFIDs, but the simultaneous reading of different tags would lead to collisions. Signaling is used to determine the frequency, the coding used (DSB-ASK, SSB-ASK or PR-ASK) and the data rate of the channel. The anti-collision system means that, whenever tags are read simultaneously, only half of the objects that have been able to transmit are allowed to transmit again the next time. After a certain number of collisions, only one RFID is able to transmit successfully. The algorithm is designed in such a way that each RFID then takes its turn to transmit. The reading speed may be up to 640 Kbps.
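
The halving behaviour described above can be simulated in a few lines. The random coin-flip used below to decide which tags stay in the next round is a deliberate simplification of the slotted Q-algorithm actually used by Gen2 readers; it only illustrates the underlying binary-splitting idea.

# Simplified simulation of the binary-splitting anti-collision principle:
# after a collision, each tag that transmitted flips a coin and only the
# "heads" retransmit in the next round.
import random

def read_one_tag(tag_ids):
    active = list(tag_ids)
    rounds = 0
    while len(active) > 1:                 # collision: more than one talker
        rounds += 1
        subset = [t for t in active if random.random() < 0.5]
        if subset:                         # keep roughly half of the tags
            active = subset
        # if nobody answered, the same group simply tries again
    return active[0], rounds

tags = list(range(64))
winner, rounds = read_one_tag(tags)
print("tag %d read alone after %d rounds" % (winner, rounds))
# On average the number of rounds grows roughly as log2 of the tag count.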

The identity of the object is determined by the EPCglobal Electronic Product Code. Gen1 uses 96 bits, while in Gen2 the code is 256 bits in length. An example of this ID for Gen1 is illustrated in Figure 12.3. Figure 12.4 shows the ID fields in Gen2. This solution is much more complex, because it uses intermediary filters to determine the lengths of the subsequent fields. Note that the serial number grows from 36 to 140 bits, which means that, for a given article, the value never has to return to 0.

Figure 12.3. Structure of GEN1 Electronic Product Code

Figure 12.4. The structure of GEN2 Electronic Product Code

Security is a thorny issue in the world of RFID. Indeed, a passive RFID device can be easily read by a third-party reader. In addition, individuals’ privacy may be violated by monitoring and tracing of everything relating to those individuals. There are solutions available, such as encoding the ID in the tag or changing the value of the tag every time it is read. These solutions rely on middleware, which is able to interpret the values of the tags or keep track of the changing values. Active tags can establish an authentication session, facilitating an exchange with the reader, which then acts as an authentication server. In this case, we need to use an electronic circuit in the tag, capable of encrypting a text transmitted by the authentication server. However, the encryption keys used are so short that there is a not-insignificant danger of them being broken after a series of authentication attempts. Numerous propositions have been made, using the anti-collision algorithm, which serializes the reading of the tags, for authentication. MIFARE is the most widely used of contactless cards, with four billion in circulation throughout the world. The name comes from the company that developed it – Mikron; the full name of the card is “Mikron FARE”. Mikron was bought out by

Philips, which ceded ownership of this technology to its subsidiary, NXP. There are two different sub-types of MIFARE cards: MIFARE Classic, which uses only a certain portion of the MIFARE command set, and MIFARE from NXP, which implements the entire range of commands. The cards communicate with a reader, which must be at a distance of less than three centimeters. This provides a certain amount of security for the communication, which is highly localized. However, attacks have been carried out using very specific readers placed a few meters away. If we require security in a communication, the best thing to do is to encrypt the data for transmission.

These cards are not very powerful at all, so their production costs are extremely low. MIFARE Classic cards have a 4- or 7-byte serial number and a storage memory of between 512 bytes and 4 Kbytes. The cheapest version is the MIFARE Ultralight, which usually serves as a disposable ticket. MIFARE T=CL cards use secure elements of the same type as contact chip cards, and essentially provide the same type of service. The most highly developed card is MIFARE DESFire, which has a larger memory and an advanced operating system, as well as AES and 3DES encryption protocols.

MIFARE cards serve numerous applications. The most common include: the reading of a serial number, to trigger a particular action if that number is accepted by the reader; the reading of an ID memorized on the card; or data storage. For the first application, we use the card's serial number, which is encoded on 4 or 7 bytes. In the second case, the ID must be entered into the card, which may encrypt it, provided it has a secure element capable of carrying out cryptographic computations.

12.3. NFC (Near-Field Communication)

The NFC standard is a special case of RFID communications. It facilitates communication between an RFID and a reader over a distance of up to 10 centimeters. This is contactless communication, with, for instance, a chip card or a secure microcontroller. There are numerous and varied applications of this type. The best known is payment with a mobile phone. With his/her mobile, a client can top up his/her account with a simple phone call or a low-data-rate data transfer to a server. Once the account is topped up, the mobile serves as a key to pay for a whole range of services. For example, to buy a subway ticket, we need only hold the mobile near the reader for the radio communication to validate the purchase of the ticket.


NFC data rates are fairly low: 106, 212, 424 and 848 Kbps. The frequency range is around 13.56 MHz. The standards in this field were issued by the ISO/IEC. The NFC Forum was set up by Philips and Sony. The security of the communication is assured because of the very short distance between the transmitter and the receiver and the presence of the chip card. Nevertheless, in spite of the maximum distance of three centimeters, in 2012, attempts were made to spy on NFC remotely. Therefore, it is advisable to encrypt the communication. We will now look at an example of the use of NFC, with mobile keys. Car keys, house keys or hotel keys could soon be included in smartphones. One of the advantages of mobile keys is that they can quite easily be transmitted to a friend waiting outside your door, or sent to customers by car rental companies. We will describe the way in which this application works by taking the example of opening a hotel door with a smartphone. Mobile keys are stored in a secure server or in a datacenter in a Cloud. The client has a smartphone, containing a secure zone to store the key. The door to the room has an NFC lock. The environment of this example is illustrated in Figure 12.5.

Figure 12.5. The environment of a mobile key (AEE: Application Execution Environment, TEE: Trusted Execution Environment)

To begin with, the room is reserved using the mobile terminal. The smartphone must contain a secure element, i.e. a placement that cannot easily be reached from outside. This secure element may be a variety of different types, depending on the

terminal (smartphone, tablet, laptop, etc.). It could be a SIM card from the operator, but in this case we also need the operator’s authorization to use it. It may also be a so-called “embedded SIM”, which is the equivalent of a chip card, but inserted by the manufacturer that made the smartphone. Most of the time, it is an NXP component which possesses much more than 50% of the market. It may also be a memory card (SD card) which is embedded in the mobile phone. Finally, as indicated in Figure 12.5, it may be an intermediary chip card (a loyalty card from the hotel where the client wishes to stay) that communicates with the mobile through the NFC interface. The mobile, in this case, is seen as a modem which facilitates communication between the external chip card and the key server. The key server is hosted in the Cloud. The door itself is NFC and can be opened by a digital mobile key situated on the smartphone or on the external NFC card. When the client arrives in the vicinity of the hotel, it is detected by the smartphone’s GPS. The phone then transfers the key to the room into the secure zone of its security apparatus or directly to the external NFC chip card. We then merely need to bring the smartphone or the external card near to the NFC interface of the card to open the door. 12.4. NFC contactless payment At the start of 2019, the number of contactless bank cards was 65% of the total number of bank cards. Similarly, 65% of smartphones had an NFC chip. Before the dawn of NFC, other payment systems had been tried, such as SMS payment – a direct-payment model and a solution using WAP. Over the past few years, payment using NFC cards has expanded hugely, and numerous variants have been defined. In the context of NFC, the mobile must be brought to within a few centimeters of the payment terminal. Two major axes for this have been established: prepayment and direct payment. In the first case, the terminal is seen as an electronic wallet: with each transaction, money is taken from the wallet. The payment for a very small maximum sum can be assimilated to this first axis. The second case requires more security, because payment is taken directly from the user’s account, with the sums involved generally being much higher. There are four possible models of mobile payment: – Operator-Centric Model: the mobile operator alone deploys the mobile payment services. To do so, it needs to provide a mobile wallet that is independent of the user’s account. This scenario is pretty rare, because the operators are not linked to the major international payment networks. The mobile network operator must manage the interface with the bank network in order to be able to provide advanced mobile payment services. Pilots using this model have been launched in

developing countries, but they offer only very partial coverage of the numerous use cases of mobile payment services. In general, the payments are limited to transfers or to modest sums; – Bank-Centric Model: the bank runs mobile-payment applications associated with client machines and asks retailers to have the necessary equipment to effect the sales. The mobile network operators are responsible for the transport network, with the quality of service and the security needed for the payment functions; – Collaboration Model: this model involves a collaboration between the banks, the mobile operators and a trusted third party; – Peer-to-Peer Model: the mobile payment service provider acts independently of the financial institutions and of the mobile network operators. Google, PayPal, GlobalPay and GoPago use a Cloud-based approach to make mobile payments. This approach places the mobile payment provider at the heart of the operation, which involves two distinct steps. First of all, a payment method associated with the Cloud is selected, and the payment is authorized via NFC. At this step, the payment provider automatically covers the cost of the purchase. In the second step, another transaction is necessary: the user’s bank must reimburse the payment provider for that payment and any fees associated therewith. 12.5. HIP (Host Identity Protocol) The Internet Engineering Task Force (IETF) has developed various mechanisms to integrate things with the Internet. HIP is an identification technology which is part of this set of measures. This protocol is standardized by the IETF in RFC 4423 and the series RFC 5201 to 5207. In the world of IP, there are two main naming conventions: IP addresses and the domain-name system. The purpose of HIP is to separate the endpoint ID and the localization of IP addresses. This approach is of the same type as LISP, described in Chapter 9. For this purpose, HIP introduces a namespace based on a public key infrastructure (PKI). This space enables us to manage multihoming securely. The IP addresses are replaced by encoded Host Identities (HIs). 12.6. Healthcare Internet The use of the Internet in the medical domain is on the rise, with 7 billion people to monitor, diagnose and care for. The long-term idea is to be able to continuously

monitor individuals with a view to advising them or alerting them to any problems detected. A priori, in the future, everybody will be able to have uninterrupted connection to a Cloud, which will analyze the data provided by sensors both inside the body and on the surface. The power of the Cloud will be such that we will be able to analyze, compare and make diagnoses on the basis of this stored dataset. Essential characteristics in this field are security, “privacy” – i.e. the private nature of the data – and the very low energy consumption of the sensors, which must be able to operate for months on end with no need to change the batteries. This environment is being established bit by bit, and we will now go on to describe the main advances from the networking standpoint. The first element that we will discuss is the sensor network adapted for use with the human body, called a BAN (Body Area Network). An example of a BAN is shown in Figure 12.6.

Figure 12.6. A BAN (Body Area Network)

It is the working group IEEE 802.15.6 that takes care of its standardization. The main standards issued by this working group cover multiple frequencies and multiple technologies:

– 402–405 MHz for In-In-Out communications – i.e. inside the body and from the inside to the surface of the body; – Ultra Wideband (UWB), which operates on the 2.4 GHz and 3.1–10.6 GHz bands; – between 15 and 50 MHz for communications inside the body; – HBC (Human Body Communications), which operates at 13.5, 400, 600 and 900 MHz for communications on the surface of the body. This communication uses the electrical conductivity of the human body. The intensity of the electrical current, therefore, is extremely low – around a nano-ampere. To interconnect BAN networks with the Cloud, a number of solutions are envisaged: using the LTE network, or 4G, or a new generation of Wi-Fi network. In the United States, the FCC is working to develop a specific band for medical Wi-Fi, with much stricter security standards than those in force in Wi-Fi today. The bandwidth is 40 MHz, with a very low-consumption technology. The band for such external connections would be between 2,360 and 2,400 MHz – i.e. just below the current Wi-Fi band. Other research, which is progressing well, regards the use of integrated sensors in “smart dust” capable of circulating in the blood. Analyses are regularly transmitted to external sensors which, in turn, transmit to the Cloud. The home is a place where we find a large number of very diverse sensors from different domains: ICT, telecommunications, medicine and electronics. For instance, we find Internet beds which have sensors that are able to detect all movements and speech during the night to aid in the diagnosis of sleep problems. Doors, household appliances, lightbulbs, security systems, etc., are, themselves, filled with increasing numbers of varied sensors. Another example is given by presence sensors, which detect whether or not a person is in a room, with a view to automatically turning the lights on or off. Overall, we expect to find around 100 sensors in every home by around 2022. There is a major struggle going on between household appliance manufacturers, to determine which type of connection technology all those sensors will use in the home. The base network uses Ethernet, cable or Wi-Fi. Hence, most devices can be connected directly using an Ethernet port or a Wi-Fi card. Bluetooth, 6 LowPAN, ZigBee and NFC are also in the running. A communications controller should be able to personalize all the connections and ensure they have adequate security, good availability, reasonable energy consumption and as low a cost as possible. The issue stems from the new generation of Home Gateway, which raises the question of management of IP in the terminal machine. For television sets,

telephones and other fairly important machines, the presence of an IP component poses no problem: this is what we find in IPTV or IP telephony. However, for a very cheap sensor or device, it is either impossible or too costly to introduce an IP component and have an intermediary machine such as a controller. The controller can also be integrated into the Home Gateway. HIP, described a little earlier, can be used to perform communications – firstly with the sensor and secondly with the Home Gateway. In conclusion, the home contains a large number of “things” which it is possible to connect. Therefore, the Home Gateway is of prime importance, as it allows us – perhaps with a controller – to manage not only the communications, but also the security and quality of service of the “things” in the home. 12.7. Case study: the smart city The smart city is a new paradigm using technologies of the Internet of Things associated with networks and services to make the urban environment able to provide inhabitants with multiple applications with a good quality of services and to reduce operating costs. It is hard to define exactly the intelligence expected by the inhabitants of a city. However, six criteria are used to define a smart city: smart economy, smart mobility, smart environment, smart inhabitants, smart lifestyle and smart administration. The characteristics can be gathered in many different ways. Let us rather describe several main development focuses for the smart city by studying different levels of infrastructure. Let us start with the lowest level, that of the infrastructure of the smart city. In the infrastructure layer, we find buildings, companies, homes, streets, means of transportation, shops, parks and other vertical environments supposed to work with elements which are external to their own perimeter. All these vertical infrastructures, which today are siloed, must change in the prospect of the smart city and be considered within an overall viewpoint to define commercialization of new advanced services. Behind this infrastructure are many networks and devices that must allow their inter-functioning. We can find cabled access networks such as optical fiber or DSL (Data Subscriber Line) with very high speed such as VDSL. We can find all connection solutions through air interfaces, such as that of 5G, Wi-Fi, Sigfox, Lora or LTE-M, and specific interfaces with many objects using networks such as Bluetooth or ZigBee, as well as any other solution that allows from lower to higher data rates. Services enabled by these networks are also very diverse, going from inhabitants’ healthcare management to automatic meter reading or people’s mobility management. Figure 12.7 sums up the infrastructure of a smart city.

Figure 12.7. Infrastructure of a smart city: the town infrastructure (buildings, transport, streets, gardens, stadiums), the connectivity infrastructure (DSL, optical fiber, radio, Wi-Fi, Internet of Things) and the management platform (health, energy/grid, metering, mobility, governance)

Regarding terminal devices, many sensors will supply the smart city’s datacenters. Sensors form a vast unity gathering many suppliers, interfaces, etc. Sensors represent one of the most important pieces of the puzzle that is a smart city: they collect a very high quantity of data and allow us to gather vital statistics which ensure the proper run of the city. In this way, networks of sensors, cameras, wireless devices and datacenters create the key infrastructure which supplies services as fast and efficiently as possible. There are many sensors in a smart city, with many different goals, integrated or managed by different stakeholders. Among them are mobile sensors, portable sensors, meteorological sensors, pollution sensors, counting and transportation management sensors, traffic and security cameras, sensors to manage available parking spots, light, counting, waste management, etc. Behind the infrastructure layer are the interfaces to perform communication between this infrastructure layer and the services of the smart city. The interfaces are materialized between the infrastructure and many application sensors coming from the city’s public and private services. These interfaces come from telecommunications operators, standardization groups and government regulators. We must emphasize the creation of a model of secured, standardized and open infrastructure to provide services. These interfaces must be built combining application software, standards, infrastructure and gateways, and using a unified communication protocol and data semantics in order to establish a common open frame for the supply and management of services.


A further layer must be added: it acts as a platform for providing services. This platform supports services adapted to the smart city and to smart buildings, which requires quite a lot of work. This puzzle requires an overview of the application architecture and a solution to secure the interfaces and make them scalable, thanks to many connectivity options. It also requires software management of peripheral devices, collection and processing of their data, display of the results, automation of functions and delivery of new services. Moreover, a high-level API giving access to standardized data must be available, so that application developers and vertical silos can use the infrastructure to provide added-value applications at low cost and in a short time (a minimal sketch of such an API is given at the end of this section).

The upper layer is the layer of services and applications. Thanks to this layer and to all the layers below it, real-time information on the urban environment and on the vertical silos feeds the different services and applications. Obviously, there are numerous application domains. For example, we can mention:

– real-time traffic information, which is essential because it allows people to plan their journeys by public transportation or by private vehicle. The user can have real-time information about the next bus or the next train;

– parking information, which allows drivers to find parking spots easily;

– information on air quality, traffic and other values, which allows us to react when threshold levels have been reached.

Access to this kind of data is an opportunity for developers to create new applications. In this way, stakeholders can access many online services via information portals, citizen service portals, and applications offered by companies or tourist services, all based on a common infrastructure and a Big Data repository of information about the city.

Smart cities offer new services which should make them more attractive. Moreover, many benefits are expected in different fields. The most important are:

– economies of scale, thanks to cost optimization and better involvement of the city's inhabitants;

– advantages for citizens, through improved healthcare, lifestyle and transportation efficiency;

– possibilities of economic growth for companies as well as for public services, with major growth opportunities for the companies that adopt these new smart city technologies.
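To make the idea of a high-level, standardized data API more concrete, the sketch below shows a toy, purely illustrative platform object. The class, method and sensor names are invustrative inventions for this example and do not correspond to any particular city platform or product.

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical, simplified "smart city platform" exposing standardized,
# read-only views of sensor data to application developers.
@dataclass
class Measurement:
    sensor_id: str
    kind: str          # e.g. "parking", "air_quality", "traffic"
    value: float
    unit: str
    timestamp: str

class CityDataPlatform:
    """Toy in-memory stand-in for the city's open data API."""
    def __init__(self):
        self._store: list[Measurement] = []

    def publish(self, m: Measurement) -> None:
        # In a real platform this would come from the connectivity layer
        # (LoRa, Sigfox, Wi-Fi or cellular gateways, etc.).
        self._store.append(m)

    def latest(self, kind: str) -> list[Measurement]:
        # Return the most recent measurement per sensor for one data kind.
        latest: dict[str, Measurement] = {}
        for m in self._store:
            if m.kind == kind:
                latest[m.sensor_id] = m
        return list(latest.values())

if __name__ == "__main__":
    platform = CityDataPlatform()
    now = datetime.now(timezone.utc).isoformat()
    platform.publish(Measurement("parking-042", "parking", 3, "free spots", now))
    platform.publish(Measurement("air-007", "air_quality", 41.5, "µg/m3 PM10", now))
    for m in platform.latest("parking"):
        print(f"{m.sensor_id}: {m.value} {m.unit}")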


12.8. Conclusion

The fifth-generation network technology, 5G, is an environment that brings together numerous networks with different architectures and different protocols. This combination of technologies to form 5G is made possible by the arrival of the Cloud, MEC (Mobile Edge Computing), SDN (Software-Defined Networking) and SDR (Software-Defined Radio). Its design ranges from D2D-type solutions to the connection of billions of more or less intelligent things. Virtualization plays an important role because, almost everywhere, there is a decoupling between the physical device and the control function, which is usually situated in a specialized datacenter. The Internet of Things is becoming a reality, thanks to sensors and RFIDs, as well as objects like digital cameras, vehicles, drones, urban property, malls, airports, stations, etc. However, we are still a long way from being able to completely integrate all these devices because, often, they are not powerful enough to support TCP/IP; despite this limited power, they are rather energy-hungry. To achieve this integration, we need to use very specific gateways, which connect the things on the one hand and the Internet on the other. HIP could become the fundamental standard for this connection. Many intermediation platforms are also available to allow data operators to recover the flows coming from objects.

13 Vehicular Networks

The environment of vehicular networks is developing very fast. Their application to real life is getting closer and will become a reality; autonomous vehicles will need to interconnect in order to speak to each other and make joint decisions. Three types of vehicular networks will overlap: the control network allowing driving automation, the data network for passengers knowing that the driver will himself become a passenger, and finally, the manufacturers’ network – the goal of which is to follow the vehicle’s lifecycle, perform maintenance and repairs, sometimes while driving, sometimes after a fault diagnosis. As the market is very large, there is much debate on the choice of the components of these vehicular networks. Many solutions are being developed; sometimes, they supplement one another, sometimes they are competitive. Overall, we can say that three solutions compete for the vehicle interconnection part. The first solution is the use of networks which would directly connect vehicles, without going through a large antenna. This solution is shown in Figure 13.1.

Figure 13.1. Vehicle-to-Vehicle communication – V2V. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip


The second solution comes from the mission-critical properties of 5G, that is, the possibility of a latency time of about 1 ms. This solution does not require the data to go through the 5G antenna to reach a server: direct 5G communications are feasible using a D2D (Device-to-Device) solution. Figure 13.2 shows both possibilities.

Figure 13.2. V2V via 5G

The third solution is direct communication between vehicles thanks to light beams which, of course, do not go through obstacles. This technology is usually called VLC (Visible Light Communication) with products such as Li-Fi (Light Fidelity). Figure 13.3 shows this last solution.

Figure 13.3. Communication through light between vehicles. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Another great difference between these three solutions comes from the place in which control data is processed. In the case of communication between vehicles via Wi-Fi, the control is performed in servers located in the vehicles. We can call them skin datacenters. For 5G-related techniques, control is performed in the MEC

datacenter. Finally, when communication is performed through light, control is performed directly from control server to control server. We will now take a closer look at these three solutions and examine virtualization for each of the three cases: control, data network and diagnosis network.

13.1. G5

Constantly growing road traffic generates many congestion, safety and environmental issues. Information and communication technologies bring solutions to these problems by automating vehicles and making them autonomous. Intelligent Transport Systems (ITS) encompass a great variety of solutions for communication between vehicles. These solutions are intended to improve transportation safety, minimize the impact on the environment, improve traffic management and, overall, optimize transportation. Driver assistance helps owners of autonomous vehicles maintain regulatory speed and distance, stay in their traffic lane, avoid overtaking in dangerous situations and cross intersections safely, which has positive effects on traffic safety and management. However, there could be even more advantages if vehicles could constantly communicate with one another or with the road infrastructure.

First, we will look at the major technical elements of the first generation of ITS. The first component on the road to automation is radar technology, developed for use in vehicles to ease traffic flow and avoid accidents. There are several types of radar, among which are long-range radars working at 77 GHz and used, for example, for speed regulation between vehicles following one another: they allow a vehicle to maintain a constant cruising distance from the vehicle in front. Short-range radars, working at 24 GHz and 79 GHz, are used, for example, in collision avoidance systems. In the event that a collision is unavoidable, the vehicle has time to prepare for it by braking, of course, but also by activating protective equipment such as airbags to minimize injuries to passengers and third parties. A band around 79 GHz has been allocated to enable the set-up of such a radar service.

In recent years, research on smart vehicles has emphasized cooperative ITS (C-ITS), in which vehicles communicate among themselves and/or with the infrastructure. A C-ITS can considerably increase the quality and reliability of the information available in vehicles about their location and the road environment. A C-ITS improves existing services and creates new services for road users, which brings
important social and economic advantages, and increases the efficiency and safety of transportation.

The technology used to allow vehicles to communicate with one another comes from the IEEE 802.11p standard, which is an approved amendment to the IEEE 802.11 standard. It adds Wireless Access in Vehicular Environments (WAVE), a communication system hosted in the vehicle. It defines the improvements to the 802.11 standard (the working group that defines Wi-Fi standards) required to handle Intelligent Transport System (ITS) applications. This includes data exchange between high-speed vehicles and between vehicles and the roadside infrastructure, called V2I (Vehicle-to-Infrastructure) communication, in the licensed ITS band at 5.9 GHz (5.85-5.925 GHz). IEEE 1609 is a standard that applies to the upper layers built on IEEE 802.11p. It is also the basis of a European standard for vehicular communication known as ETSI ITS-G5.

The IEEE 802.11p standard uses the EDCA (Enhanced Distributed Channel Access) method of access to the MAC channel, which is an enhanced version of the DCF (Distributed Coordination Function), the basic Wi-Fi access method defined by the 802.11 group. The EDCA method uses CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), which means that a node wishing to transmit must first detect whether the medium is free or not. If it is free during the AIFS (Arbitration Inter-Frame Space) time, the node then defers its transmission by drawing a random backoff time. This procedure works as follows: (i) the node draws a random value uniformly from the interval [0, CW], where the initial CW (Contention Window) value equals CWmin; (ii) the size of the drawing interval doubles each time a transmission attempt fails, until CW equals CWmax. If the backoff value is 0, the sender transmits its packet immediately. To guarantee the transfer of urgent safety messages, the 802.11p protocol uses different access classes (AC). There are four data traffic classes with different priorities: Background traffic (BK or AC0), Best Effort traffic (BE or AC1), Video traffic (VI or AC2) and Voice traffic (VO or AC3). Different AIFS values (the waiting time before transmission) and CW values are chosen for the different access classes. This algorithm is quite similar to those of other Wi-Fi networks: it gives priority traffic a good service and non-priority data a lower-quality service. The parameters are indicated in Tables 13.1 and 13.2; a minimal simulation of this backoff procedure is sketched after Table 13.2.

AC                    CWmin              CWmax
Background (AC_BK)    aCWmin             aCWmax
Best Effort (AC_BE)   aCWmin             aCWmax
Video (AC_VI)         (aCWmin+1)/2 - 1   aCWmin
Voice (AC_VO)         (aCWmin+1)/4 - 1   (aCWmin+1)/2 - 1

Table 13.1. Priority classes

For a typical value of CWmin = 15 and CWmax = 1023, the following values are obtained:

AC                    CWmin   CWmax   AIFSN   Max TXOP
Background (AC_BK)    15      1023    7       0
Best Effort (AC_BE)   15      1023    3       0
Video (AC_VI)         7       15      2       3.008 ms
Voice (AC_VO)         3       7       2       1.504 ms
Legacy DCF            15      1023    2       0

Table 13.2. Default values of EDCA parameters
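As a complement, the following minimal Python simulation sketches the EDCA backoff procedure described above, using the CWmin, CWmax and AIFSN values of Table 13.2. The slot time (13 µs) and SIFS (32 µs) are assumed values for a 10 MHz 802.11p channel, and the model ignores channel occupancy, so it only gives a rough feel for the per-class access delay on an idle channel.

import random

# Default EDCA parameters from Table 13.2: (CWmin, CWmax, AIFSN).
EDCA = {
    "AC_BK": (15, 1023, 7),
    "AC_BE": (15, 1023, 3),
    "AC_VI": (7, 15, 2),
    "AC_VO": (3, 7, 2),
}

SLOT_TIME_US = 13  # assumed 802.11p (10 MHz) slot time, in microseconds
SIFS_US = 32       # assumed 802.11p (10 MHz) SIFS, in microseconds

def draw_backoff(ac: str, failed_attempts: int) -> int:
    """Draw a backoff counter (in slots) for one transmission attempt.

    The contention window starts at CWmin and doubles (CW = 2*CW + 1)
    after each failed attempt, up to CWmax; the counter is drawn
    uniformly in [0, CW].
    """
    cw_min, cw_max, _ = EDCA[ac]
    cw = cw_min
    for _ in range(failed_attempts):
        cw = min(2 * cw + 1, cw_max)
    return random.randint(0, cw)

def access_delay_us(ac: str, failed_attempts: int = 0) -> float:
    """Rough channel-access delay on an idle channel: AIFS plus the backoff."""
    _, _, aifsn = EDCA[ac]
    aifs = SIFS_US + aifsn * SLOT_TIME_US
    return aifs + draw_backoff(ac, failed_attempts) * SLOT_TIME_US

if __name__ == "__main__":
    random.seed(1)
    for ac in EDCA:
        mean = sum(access_delay_us(ac) for _ in range(10000)) / 10000
        print(f"{ac}: mean idle-channel access delay ≈ {mean:.0f} µs")

Running the sketch shows the intended ordering of the access classes: voice and video traffic wait, on average, far less than background traffic before seizing the channel.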

Table 13.3 gives the transfer rate, modulation scheme, coding rate and number of bits carried per OFDM symbol.

Transfer rate   Modulation   Coding   Data bits per   Coded bits per
[Mbit/s]        scheme       rate     OFDM symbol     OFDM symbol
3               BPSK         1/2      24              48
4.5             BPSK         3/4      36              48
6               QPSK         1/2      48              96
9               QPSK         3/4      72              96
12              16-QAM       1/2      96              192
18              16-QAM       3/4      144             192
24              64-QAM       2/3      192             288
27              64-QAM       3/4      216             288

Table 13.3. Transfer rate, modulation scheme and coding rate in IEEE 802.11p
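The rates in Table 13.3 can be recovered from the usual 802.11p OFDM assumptions: 48 data subcarriers per OFDM symbol and an 8 µs symbol duration on a 10 MHz channel (half-clocked 802.11a). The short sketch below, under those assumptions, rebuilds the data bits per symbol and the resulting bit rates.

# Rebuild the data rates of Table 13.3 from assumed 802.11p OFDM parameters:
# 48 data subcarriers per OFDM symbol and an 8 µs symbol duration.
DATA_SUBCARRIERS = 48
SYMBOL_DURATION_US = 8.0

# (modulation, coded bits per subcarrier, coding rate)
MODES = [
    ("BPSK", 1, 1 / 2), ("BPSK", 1, 3 / 4),
    ("QPSK", 2, 1 / 2), ("QPSK", 2, 3 / 4),
    ("16-QAM", 4, 1 / 2), ("16-QAM", 4, 3 / 4),
    ("64-QAM", 6, 2 / 3), ("64-QAM", 6, 3 / 4),
]

for modulation, bits, rate in MODES:
    coded_bits = bits * DATA_SUBCARRIERS      # coded bits per OFDM symbol
    data_bits = coded_bits * rate             # data bits per OFDM symbol
    mbps = data_bits / SYMBOL_DURATION_US     # bits per microsecond == Mbit/s
    print(f"{modulation:7s} rate {rate:.2f}: {data_bits:5.0f} data bits/symbol "
          f"-> {mbps:4.1f} Mbit/s")

For example, BPSK at coding rate 1/2 gives 24 data bits every 8 µs, i.e. the 3 Mbit/s of the first row, and 64-QAM at rate 3/4 gives 216 bits every 8 µs, i.e. 27 Mbit/s.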


To conclude this first section, G5 is the most advanced technique of all for interconnecting vehicles and building a vehicular network. Many experiments have been carried out to evaluate its performance in the field. This solution is perfectly able to handle communications between vehicles and is simple to implement, at a rather low cost. Its drawback is that it may be limited to the control part, because the bandwidth of the Wi-Fi channel used is limited. This CSMA/CA solution is also somewhat random, and if the number of vehicles is large, strong interference can degrade performance.

13.2. 5G standardization

5G standardization is almost over. The first phase, represented by Release 15, was completed in 2018, and the second phase ends in 2020. These two phases are important for vehicle control: the first phase standardized the radio interface and the second standardized critical missions. The latency targeted by this second standardization phase is of the order of a millisecond, which is perfectly acceptable for vehicular control. As the reaction time for a driver's foot to brake is over 100 ms, the overall reaction time, taking into account both the network latency and the computation of the controls performed in a server, must remain below 100 ms.

13.2.1. 5G vehicular networks

For more than 15 years, the ITS (Intelligent Transportation System) community has been defining a spectrum band, harmonized throughout the world in the 5.9 GHz frequency band, dedicated to vehicle traffic safety applications. As mentioned above, the IEEE 802.11p standard was chosen as a communication system providing active safety functions. Over those 15 years, authorities, industry and road operators have managed to deploy this common communication infrastructure using this standard, but on a relatively small scale. An alternative comes from a rather similar technique, based on 3GPP standardization for mobile networks, offering direct communication between vehicles. This alternative to traditional IEEE 802.11p communication can combine the assets of cellular connectivity, based on V2N (Vehicle-to-Network) communication, with the assets of direct V2V (Vehicle-to-Vehicle) communication. In the automotive industry, as cellular connectivity is required for other types of control, standardized direct communication can be set up in parallel with the deployment of cellular connectivity. V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) are two appropriate solutions for managing safety in the vehicular field. Many manufacturers have already implemented many use cases of these technologies by
adding smart sensors to the vehicle and collecting information during V2N (Vehicle-to-Network) communications. Some examples in this field are described below. A first example is collision warning, based on information from sensors in vehicles, carried by pedestrians or in the infrastructure, which gives the location and speed of vehicles that might cause a collision. A second example is the detection of a dangerous location reported by a back office, such as the detection of strong deceleration of a vehicle during emergency braking, reported by sensors belonging to different vehicles. If a safety function is based on data coming from other vehicles and must provide active safety, the network latency must be limited to a maximum of 100 ms. Within this upper limit, information must be transmitted directly between vehicles, to or from the road infrastructure, or to or from pedestrians. A third example is given by V2I (Vehicle-to-Infrastructure): for instance, a traffic light transmits to vehicles the moment when the light will turn green. Another example is a construction vehicle transmitting information about the current speed limits around the construction site or about hazards linked to the ongoing construction work.

The new term used in the field of 5G to qualify communications in the vehicular environment is C-V2X (Cellular-V2X). The basic V2X framework includes two communication modes: mode 3 and mode 4. In mode 3, the cellular network allocates and manages the radio resources used by vehicles for their communication; signaling transits through the eNodeB or through the MEC datacenter that virtualizes the eNodeB. In mode 4, vehicles select radio resources autonomously in order to perform direct V2V communications, that is, directly from one vehicle to another. Mode 4 uses a scheduling scheme for congestion control, which lets priority packets through first by giving them the required quality of service. V2X communications in mode 4 can work without cellular coverage, which means that they can go directly from one vehicle to another provided the vehicles are close enough to be within radio range of each other. Mode 4 is considered to be the basic V2V mode, since road safety applications cannot depend on the availability of radio coverage: these control applications must be able to be executed without any coverage other than that of the antennas located in the vehicles. Where a cellular connection cannot be established, direct communication via the PC5 interface (the interface from one vehicle to another) must work in order to maintain communication. This requirement comes from the fact that vehicles may not be covered by a radio cell linked to an eNodeB, or may be using their V2N connectivity over the Uu interface for other services. The ecosystem based on the PC5 interface must be able to function at any time, regardless of the mobile operator network to which the connected vehicle belongs. Beyond the technical reasons, direct communication is also a matter of trust because, contrary to communication via a telecommunications operator, no third party is involved in the implementation of the
communication. If a vehicle cannot trust the information transmitted via the PC5 communication, the system cannot function. Safety systems for direct communication are mostly managed by the C-V2X standard. All data security and privacy rules are defined in the C-V2X direct communication standard, the same way they are specified in the IEEE 802.11p standard. Moreover, these security rules benefit from V2N functions for operational means, such as distribution, computing, storage, renewal of certificates and revocation lists. This is why C-V2X focuses on PC5 communication with different categories of services: – different types of user equipment involved in the C-V2X (vehicles, pedestrian area, roadside infrastructure); – different network suppliers, including the absence of network coverage; – different components of the security management system which set up a level of trust between the system entities. 13.2.2. Technological presentation of C-V2X In 2015, 3GPP specified the V2X functionalities to handle V2X services based on the LTE system in release 14. V2X covers the vehicle-to-vehicle (V2V) specifications and the vehicle-to-infrastructure (V2I) specifications. The first specifications in terms of infrastructure connection occurred in 2016. Additional improvements to manage new operational V2X scenarios were added in release 14, which was completed in June 2017. V2V communications are based on communications between devices (D2D: Device-to-Device) defined in the context of proximity services (ProSe), defined in releases 12 and 13. In this context, a new D2D interface called PC5, which we already studied in the previous section, was introduced to manage inter-vehicle communications, especially to define, the improvements necessary to support V2X requirements regarding communication between vehicles. Improvements have been made to manage vehicles with high speed (Doppler shifts/ frequency shifts, etc.) reaching 500 km/h, synchronization outside of the eNodeB coverage, better resource allocation, management control to handle heavy traffic and traffic management for V2X services. Two modes of operation for V2V of the LTE-V were introduced in release 14: the first is communication via 5G and the second is communication on the PC5 (V2V) interface, peer-to-peer. The Uu interface corresponds to the V2N (Vehicle-to-Network). It is the case when communication is set up via an eNodeB. V2V communication with 5G on the PC5 interface is managed via two modes: the managed mode (PC5 Mode 3) which works when communication between vehicles is planned by the network, and the unmanaged mode (PC5 Mode 4) which works when the vehicles communicate

independently of the network. In Mode 4, the PC5 scheduling and the management of interference are handled by algorithms distributed between the vehicles, whereas in Mode 3, the scheduling and the management of PC5 interference are handled via the base station (eNodeB) by control signaling over the Uu interface. Deployment scenarios managed by one or several operators benefit from several advantages resulting from the use of connectivity based on the Uu interface with broadcasting services. C-V2X combines V2N / V2N2V / V2I / V2N2P connectivity based on the so-called Uu interface with V2V / V2I / V2P connectivity based on PC5 (the reference point of a direct vehicle-to-vehicle connection). We will mainly study the advantages of the PC5 technology for V2X and the way to deploy it. Moreover, this technology takes into account all the aspects of V2N which must also be found in the direct PC5 connection in order to obtain perfectly controlled connectivity. From the operational point of view, the PC5 interface can be established for communications such as:

– vehicle-to-vehicle (V2V);

– vehicle-to-infrastructure (V2I);

– between a vehicle and road users, whether they are pedestrians, bicycle riders or other road users (V2P).

From the point of view of the physical layer, signals are transmitted using LTE technology. LTE defines an elementary resource as a 15 kHz frequency band for a time of 0.5 ms. These basic resources are grouped into elementary blocks of data to transmit, made up of 12 bands of 15 kHz, according to the OFDM technique. An elementary block in LTE therefore uses a spectrum of 12 × 15 kHz = 180 kHz. We call it a Resource Block, or RB, that is, the set of 12 bands of 15 kHz. In LTE-V (LTE-Vehicle), the frame lasts 10 ms. It is cut into 10 sub-frames lasting 1 ms each, and each sub-frame is divided into two slots of 0.5 ms. During one slot, seven OFDM symbols are transmitted per sub-band; as there are 12 sub-bands, a total of 84 symbols is transmitted every 0.5 ms. In the context of V2X, there is no division of the sub-frame into two slots, so a 1 ms slot carries 168 symbols. A symbol can carry from 1 to 6 bits depending on the modulation selected (QPSK to 16-QAM in the context of V2X, but up to 128 QAM in the classic context of LTE). The data is transmitted in Transport Blocks (TB) via the shared channels PSSCH (Physical Sidelink Shared Channel). Control information is transmitted in SCI (Sidelink Control Information) messages, also called SA (Scheduling Assignment), on the control channels PSCCH (Physical Sidelink Control Channel). A TB contains a full packet and can take up one or several sub-channels. This packet can be a beacon, a CAM message (Cooperative Awareness Message) or any other message triggered by
an event. Each TB has an associated SCI (Sidelink Control Information), and the two must always be transmitted in the same sub-frame. The SCI occupies 2 RBs (Resource Blocks) and includes information such as the MCS used to transmit the TB, the RBs taken up by the TB and the resource reservation interval for the semi-persistent scheduling scheme. This interval indicates the periodicity used by vehicles to transmit their packets (in multiples of 100 ms). As the information in the SCI is essential, the SCI must be correctly received in order to receive and decode the TB. Figure 13.4 shows an example of three C-V2X sub-channels and differentiates the RBs used for the TB and SCI transmissions; the resource-grid arithmetic above is illustrated in the sketch that follows the figure.

Figure 13.4. The frame and an example of C-V2X sub-frames
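The resource-grid arithmetic given above can be checked with a few lines of Python. The sketch below assumes the usual LTE sidelink figures (15 kHz subcarriers, 12 subcarriers per RB, 14 OFDM symbols per 1 ms sub-frame with the normal cyclic prefix) and deliberately ignores the SCI, reference signals and channel coding, so the per-RB rates it prints are only loose upper bounds.

# Reproduce the resource-grid arithmetic of the text, under the usual LTE
# sidelink assumptions: 15 kHz subcarriers, 12 subcarriers per resource
# block (RB), 1 ms sub-frames and (with the normal cyclic prefix) 14 OFDM
# symbols per sub-frame.
SUBCARRIER_KHZ = 15
SUBCARRIERS_PER_RB = 12
SYMBOLS_PER_SUBFRAME = 14          # 2 slots x 7 symbols
SUBFRAME_MS = 1.0

rb_bandwidth_khz = SUBCARRIER_KHZ * SUBCARRIERS_PER_RB
print(f"RB bandwidth: {rb_bandwidth_khz} kHz")              # 12 * 15 = 180 kHz

re_per_rb_per_subframe = SUBCARRIERS_PER_RB * SYMBOLS_PER_SUBFRAME
print(f"Resource elements per RB per sub-frame: {re_per_rb_per_subframe}")  # 168

# Very rough upper bound on the raw bit rate of one RB, ignoring the SCI,
# reference signals, guard symbols and channel coding.
for modulation, bits_per_symbol in [("QPSK", 2), ("16-QAM", 4)]:
    raw_kbps = re_per_rb_per_subframe * bits_per_symbol / SUBFRAME_MS
    print(f"{modulation}: at most {raw_kbps:.0f} kbit/s per RB (uncoded)")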

13.3. VLC

Light is another means of communication that is very useful for vehicular networks, since no obstacle blocks the light between two vehicles following one another. However, communication is only possible from vehicle to vehicle, and the vehicular network is built from point-to-point connections, which must be relayed by devices located in the vehicles. In this section, we will look at the characteristics of these networks, named VLC (Visible Light Communication). One of the major products in this area is Li-Fi. Li-Fi, or Light Fidelity, is a wireless communication technology based on the use of visible light, whose frequency ranges from 460 THz to 670 THz. The principle of Li-Fi is based on coding the data and sending it by amplitude modulation of the light, following a standardized protocol. The protocol layers of Li-Fi are adapted to wireless
communications up to a dozen meters, which is an acceptable limit for communication between two vehicles. The data is coded so that it can cross the optical channel, which is far from perfect. The electrical signal is converted into a light signal via an electronic circuit that varies the LED's light intensity according to the data to transmit. The modulation used is amplitude modulation, where logic 0 and logic 1 are transmitted according to Manchester encoding. The transmitted light spreads into the surroundings and is distorted for various reasons, such as weather conditions; this environment and the distortions associated with it are gathered under the name of the optical channel.

Li-Fi is a wireless technology using a protocol that is very similar to those of the 802.11 group, but it communicates through light instead of radio waves. Visible and invisible light offer much more bandwidth than radio frequencies, which of course allows much higher data rates. More precisely, Li-Fi follows the IEEE 802.15.7 standard, which was modified to take into account the most recent breakthroughs in the field of wireless optical communications. Specifically, the standard now integrates modulation methods with optical orthogonal frequency-division multiplexing (O-OFDM). The IEEE 802.15.7 standard defines the physical layer (PHY) and the medium access control (MAC) layer. It defines data rates which allow the carrying of audio, video and multimedia services. It takes into account the mobility of the optical transmission, its compatibility with the artificial lighting present in infrastructures and the interference that can be generated by ambient light. The standard defines three PHY layers with different data rates: PHY 1 was created for outdoor applications and works from 11.67 Kbps to 267.6 Kbps; PHY 2 allows us to reach data rates from 1.25 Mbps to 96 Mbps; PHY 3 is used with many emission sources and a specific modulation method called CSK (Color Shift Keying), and can offer data rates from 12 Mbps to 96 Mbps. The recognized modulation formats for PHY 1 and PHY 2 are On-Off Keying (OOK) and VPPM (Variable Pulse Position Modulation). The Manchester encoding used for the PHY 1 and PHY 2 layers includes the clock in the transmitted data by representing a logical 0 with the OOK symbol "01" and a logical 1 with the OOK symbol "10", both with a DC component. This DC component prevents the light from being extinguished during an extended series of logical 0s.
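A minimal sketch of this Manchester/OOK mapping is given below; it simply applies the 0 → "01" and 1 → "10" rule described above and checks that the duty cycle stays at 50% even for long runs of zeros. It illustrates the line coding only, not the full IEEE 802.15.7 PHY.

# Manchester-style OOK mapping described above: a logical 0 is sent as the
# OOK symbol "01" and a logical 1 as "10", so every data bit contains a
# transition and the average light intensity stays constant (50% duty
# cycle), whatever the bit pattern.
def manchester_encode(bits: str) -> str:
    table = {"0": "01", "1": "10"}
    return "".join(table[b] for b in bits)

def manchester_decode(chips: str) -> str:
    table = {"01": "0", "10": "1"}
    return "".join(table[chips[i:i + 2]] for i in range(0, len(chips), 2))

if __name__ == "__main__":
    data = "0000101100000000"          # long runs of 0 are not a problem
    encoded = manchester_encode(data)
    print(encoded)                      # the LED is on half of the time
    assert manchester_decode(encoded) == data
    # The duty cycle stays at 50% regardless of the payload.
    print("duty cycle:", encoded.count("1") / len(encoded))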


13.4. Conclusion

Vehicular networks are booming. Driving automation at low speed requires coordination that will only be possible through vehicle-to-vehicle connections. The market is huge, and the manufacturers in this sector are getting ready to fight for their standard. The first offerings come from the G5 standard using IEEE 802.11p. Many tests have already been carried out and show the efficiency of this method. However, this solution can also see its data rates drop in bad conditions, especially with interference coming from different antennas. 5G, which is a much more recent technology and not yet fully mature, is arriving with the classical techniques of telecommunications operators. This solution is seen as a potential winner thanks to its more deterministic, less probabilistic behavior. Moreover, the 5G world has standardized a Mode 4, which allows direct vehicle-to-vehicle communication to be set up without an eNodeB cell.

14 Tactile Internet

The tactile internet can be defined as a network environment allowing remote access by humans or machines to handle, execute and control real or virtual objects in real time. An IEEE working group, IEEE P1918.1, was set up to specify and standardize this new environment. One of the major enablers for the introduction of the tactile internet is 5G, thanks to its very short latency time of 1 ms and its high communication reliability. Essential functions like security, trust and identification must also be added to build tactile internet applications.

14.1. Tactile internet applications

The tactile internet must remotely provide not only content, but also skills. For this, it requires very reliable and very reactive connectivity, capable of providing the reliability and latency values usually required for physical, real-time interactions. Technically, the tactile internet requires a communication infrastructure combining low latency, determinism, high availability, resilience and reliability, plus a high level of security. These requirements will not necessarily have to be satisfied simultaneously by all applications: they will have to be met in full for the so-called "critical" applications, and only partially for many non-critical applications. In order to meet these strict requirements, the tactile internet will have to be linked to a computing Cloud. Yet, since the interaction time must be very small, this implies the presence of control and processing datacenters very close to the user, called edge computing; with reference to previous chapters, this is MEC, fog or skin computing. The tactile internet will also have to be linked to virtual or augmented reality for sensory and haptic control, in order to process requests with a reaction time of the order of a millisecond. When these conditions are fulfilled, the tactile internet must be able to simultaneously manage critical applications (as found in real-time
manufacture, transportation control or fault detection in healthcare) and non-critical applications (such as games or educational software).

To go into more detail, we must study the application classes affected by the tactile internet. The following are examples of applications.

– The field of Industry 4.0. The traditional manufacturing industry is developing thanks to the changes triggered by the digital world. The control of industrial equipment, such as robots or smart sensors, falls into this category, where real time and remote control are essential. The upcoming industrial revolution (Industry 4.0) is based on automated systems that can be activated in real time. This revolution implies major changes in manufacturing, engineering, the use of new materials, the supply chain and lifecycle management. These smart factories must be especially flexible and self-organized. Services and applications must be linked to platforms that allow the automation of the environment, which will also be used for communication between people, objects and systems. Latency requirements for the different applications range from several tens of milliseconds for mechanical systems, to a few milliseconds for M2M (Machine-to-Machine) communication, and 1 ms for electrical appliances. The reaction time for the control cycles of fast-moving devices can reach values of less than 1 ms. Finally, some very sensitive subsystems require very short latencies, counted in a few tens of milliseconds.

– The automotive field. Fully autonomous and automated driving should completely change traffic. By detecting the distances between automated vehicles and potentially dangerous situations, the autonomous driving system, represented by a set of machines, should drastically reduce the number of road accidents and traffic jams. The time required to avoid collisions in a vehicle's safety applications is below 10 ms. If a bidirectional data exchange for remote manual driving is considered, a latency of the order of a millisecond will probably be necessary.

– Online healthcare. Thanks to advanced telediagnosis tools, a medical examination can be performed anywhere, at any time, regardless of where the doctor is. This application comes with strict requirements regarding reliability, availability, performance and safety. Specific applications for home care or home hospitalization may have less demanding requirements than, for example, real-time remote surgery. In some cases, latency, image quality, availability and safety are key parameters and require particularly accurate control. Moreover, accurate telemedicine treatment cannot be performed without haptic feedback, which is only possible if the man–machine interaction (MMI) is performed in real time. For example, in eHealth applications for the tactile internet, an end-to-end latency of a few milliseconds is required, as well as very high reliability of the data connection and transmission.


Figure 14.1 describes some complementary applications for the tactile internet according to latency time and data rate. For example, we can find massive real-time games, videoconferences involving several users, real-time virtual reality, remote desktop work, etc.

Figure 14.1. Some 5G-specific applications, positioned by response time (from 1 ms to 1,000 ms) and throughput (from below 1 Mbps to 1 Gbps)

14.2. Functionalities required for the tactile internet

Many functionalities are required for the tactile internet; they were described in the previous section. The main network for the tactile internet should be 5G. However, it is entirely possible to build a tactile internet over a terrestrial network if the performance needs are met. A communication architecture piloted by 5G, with a radio interface to connect the client, a radio access network (RAN) and a core network (CN), meets the requirements of the tactile internet. The end-to-end architecture of the tactile internet is divided into several domains: the master domain, the network domain and the controlled domain. This is shown in Figure 14.2. The master domain is made up of a human or a machine and a system interface at the disposal of the human or automated operator. This interface includes a control system which converts the operator's input into tactile internet input using various coding techniques. If the control device is a haptic device, the human can touch, feel, manipulate and control objects in real or virtual environments. This control device mainly monitors the operations of the controlled domain. In the case of a network control system, the master domain containing the operator owns a controller which monitors the sensors and actuators of the network. The controlled domain consists of a controlled object, like
a robot, a machine-tool or any other machine. This object is directly controlled by the master domain via several control signals so as to act on the remote environment. In the case of remote operation with haptic feedback, energy is exchanged between the operator and the controlled domains, which forms a comprehensive control loop.

Figure 14.2. Architecture of the tactile internet (bridges, a 5G antenna and a router form the network domain between the master and controlled domains)
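To illustrate the control loop formed by the master, network and controlled domains, the toy simulation below adds up assumed one-way network latencies, an assumed processing time in the controlled domain and some jitter, and checks how often the haptic loop stays within a 1 ms budget. All the numeric values are illustrative assumptions, not measurements.

import random

# Toy model of the tactile internet control loop: the master domain issues
# a command, the network domain adds latency in each direction, and the
# controlled domain returns haptic feedback. All figures are illustrative.
NETWORK_LATENCY_MS = 0.3      # assumed one-way network latency
PROCESSING_MS = 0.2           # assumed processing time in the controlled domain
BUDGET_MS = 1.0               # end-to-end budget targeted by the tactile internet

def round_trip_ms(jitter_ms: float = 0.2) -> float:
    """One command/feedback cycle: uplink + processing + downlink."""
    uplink = NETWORK_LATENCY_MS + random.uniform(0, jitter_ms)
    downlink = NETWORK_LATENCY_MS + random.uniform(0, jitter_ms)
    return uplink + PROCESSING_MS + downlink

if __name__ == "__main__":
    random.seed(0)
    samples = [round_trip_ms() for _ in range(10000)]
    within_budget = sum(t <= BUDGET_MS for t in samples) / len(samples)
    print(f"mean loop time: {sum(samples) / len(samples):.2f} ms")
    print(f"cycles within the {BUDGET_MS} ms budget: {within_budget:.1%}")

Even with a mean loop time close to the budget, a significant fraction of cycles miss it once jitter is taken into account, which is why determinism matters as much as raw latency.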

Networks provide the basic support which is necessary for communication between the operator, the man or machine and the real or virtual object which is remotely controlled. This network must have all the required resources to provide quality communication with very short latency times to perform the required real time for the control of the distant object. The carrier grade networks bring these properties, especially 5G networks. The carrier grade indicates networks with very high availability, service quality and systems that allow the verification of this quality. To cover all these functionalities, which are necessary to create a tactile internet, whether 5G or more generally a carrier grade network, the following properties must be verified: – Ultra responsive network connectivity. This property is especially important for systems with haptic interactions or for critical communications, for example a machine-to-machine type of communication allowing real-time control and automation of dynamic processes in domains like industrial automation, manufacturing of complex objects, remote surgery or management of vocal or tactile processes. – Access reliability and availability of carrier grade type. Network connectivity must be very reliable, which is a requirement for tactile internet. However, reliability requirement is very specific to the application process and can vary very much depending on the result expected. – Possibility to manage multi-access. The use of several networks simultaneously is important to increase network reliability and availability. It is also a solution to reduce latency thanks to controllers capable of choosing the optimal

network at any time, optimal either for its latency time or for communication security. The controller must be equipped with enough intelligence to perform real-time choices. This means to direct the flows towards the network with the performances requested for the proper run of the application. Artificial intelligence techniques should play an important role in tactile internet. – Proactive appointment of necessary resources to gain the quality of services required. Resources control directly impacts the data rate, latency, reliability, service quality and performance of the different layers of protocols. To obtain a very strict latency, resources must be provided in priority for strategic applications of the tactile internet. Moreover, when applications must coexist, whether they are haptic, voice or sight-controlled or machine-controlled, networks require a strong intelligence to perform flexible control of resources. Networks have also to be able to provide some functionalities on-demand. Allocation of radio frequencies is an important task, especially for 5G, which is one of the main networks used for tactile internet. The access network, the RAN (Radio Access Network), must also bring the required quality of service. RAN virtualization technologies can go the right course, however, these technologies can also extend the reaction time if the architecture which was set up does not meet the real-time needs. – A control of feedback loops. Feedback loops, to react on systems further away from customized function to an application, are necessary to maintain the required quality of service. Tactile internet requires such feedback loops when signals are exchanged bidirectionally on the network to allow remote control of objects and real or virtual systems. – A core network is very reactive to consider the quality of service required by the tactile internet. The creation of the network can help significantly reduce latency in the core network. SDN techniques, which allow strict control of the resources and allocate them in a centralized way thanks to the controller, are perfectly adapted for tactile internet. Virtualization of functions like the NFV (Network Functions Virtualization) is also a very good way to obtain the reliability, availability and safety required. In the context of the 5G core network, slicing is perfectly adapted to tactile internet by personalizing a slice with the whole set of functionalities necessary to the applications. – An overall end-to-end network creation. Indeed, tactile internet supports applications end-to-end. Thus, the sum of networks which are passed through must always be capable of offering the required quality of service. Each sub-network can have the resources required to perform the quality of service required for tactile internet. But, it is possible that their juxtaposition is not able to create the performance required, so that tactile internet applications work in real-time, have good security or reach the availability of a carrier grade network. It is thus important to sense and size all the technological components implied, such as the local network, the access network, the core network, and to study both the response time,


data rate, security and availability of the whole chain. If the end-to-end network is of the 5G type, the overall study requires proper sizing of the radio access, the RAN and the core network, which can be easier if the whole set is orchestrated, controlled and managed by the same system.

14.3. Technical specifications for 5G

To meet the requirements of tactile internet applications, technological innovations are necessary in many aspects of network design: the physical layer, the protocols and resource management. For 5G in particular, the radio resources, the RAN and the core network must all be managed. The main technical challenge is to provide network connectivity offering the latency and reliability required for real-time physical interaction. While some tactile internet applications, such as Cloud-connected robotics, machine-tool control or telemedicine, require both low latency and high reliability, others, such as entertainment or education, mainly rely on low-latency communication and can tolerate a lower level of security. Several levers can help reduce latency and increase the reliability of mobile networks.

First, latency must be brought down to a few milliseconds. Different means are used to reduce the latency of data transmission, especially in the new generation of 5G networks. One approach gaining ground consists of defining flexible frame structures and numerologies, so that the subcarrier spacings and transmission times can take different values; applications with different requirements can then be accommodated. Recent developments by 3GPP (3rd Generation Partnership Project) go in this direction, agreeing on larger subcarrier spacings and much shorter slot durations – for example, 0.125 ms or even less. With this access scheme and minislots (two OFDM symbols or more), the latency can even be reduced to well below 1 ms. The HARQ (Hybrid Automatic Repeat Request) procedure also offers a way to decrease the transfer time of a slot. Usually, in LTE, several HARQ transmissions are authorized, which allows very efficient use of resources – i.e. high spectral efficiency – thanks to the implicit link adaptation provided by retransmissions. Nonetheless, for tactile internet applications requiring a maximum latency of a few milliseconds, only one retransmission can be allowed if very low latency is to be maintained. A single retransmission is only workable if the TTI (Transmission Time Interval) is shortened and the number of TTIs between retransmissions is very small. Other concepts are being developed to shorten the HARQ round trip, for example by using feedback techniques. In general, for tactile internet applications, it is recommended to rely on other types of time diversity and to aim for transmission without retransmission.
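To make the orders of magnitude concrete, the short Python sketch below computes a rough air-interface latency budget from the numerology, the minislot length and the number of HARQ retransmissions; the processing delay used here is an assumed placeholder, not a 3GPP figure.

# A rough, illustrative latency-budget sketch (not a 3GPP-accurate model).
# The processing delay is an assumed placeholder chosen only to show the accounting.
def slot_duration_ms(numerology: int) -> float:
    """5G NR slot duration: 1 ms divided by 2^numerology (14 OFDM symbols per slot)."""
    return 1.0 / (2 ** numerology)

def one_way_latency_ms(numerology: int, minislot_symbols: int = 2,
                       retransmissions: int = 0, processing_ms: float = 0.2) -> float:
    symbol_ms = slot_duration_ms(numerology) / 14        # symbol time, cyclic prefix included
    tti_ms = minislot_symbols * symbol_ms                # minislot transmission time
    # each retransmission costs one extra minislot plus the (assumed) turnaround time
    return processing_ms + tti_ms + retransmissions * (tti_ms + processing_ms)

for mu in (0, 2, 3):                                     # 15, 60 and 120 kHz subcarrier spacing
    print(mu, round(one_way_latency_ms(mu, retransmissions=1), 3), "ms")

With these (assumed) figures, moving from numerology 0 to numerology 3 shrinks the transmission component by a factor of eight, and a single retransmission roughly doubles the total – which is why short slots and at most one retransmission go hand in hand.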


Access to the uplink channels is another source of delay and can add considerable latency. Here, semi-persistent scheduling can be applied to speed up channel access and avoid unnecessary delays. Furthermore, in addition to handling transmissions with low latency and short processing times, the architecture of the physical layer must also allow high-data-rate transmissions for applications such as eMBB (enhanced Mobile Broadband). Since propagation is limited by the speed of light, the distance between sender and receiver must be as short as possible to guarantee very low-latency communications. For example, if computing is performed in a Fog- or MEC-type Cloud infrastructure directly at the edge of the mobile network, latency can be very low and acceptable for real-time applications. The technologies used in this virtualized environment are SDN combined with NFV-type technologies. Another way to reduce latency is to manage direct communication between peripheral devices, D2D (Device-to-Device), instead of using intermediary devices such as eNodeBs or access points to relay the data transmissions between two mobile devices.

Second, reliability must be improved for applications requiring very low failure rates. High reliability of data transmission is a key factor for some tactile internet applications, where it is necessary to guarantee that certain transmissions complete successfully within a given time frame. In mobile networks, the degree of reliability is determined by all the network components involved. Disruptions in connectivity can stem from different causes, such as power loss, electromagnetic interference, hardware failure and software bugs, among others. The radio access network, the RAN, and the core network must be built to operate with very high availability. The radio channel, especially, is a predominant source of failure. The received power can, for example, be too weak for the signal to be recovered, because of stronger attenuation or larger obstacles than anticipated. Similarly, interference from eNodeBs or from neighboring terminal devices can degrade signal quality. Today, the classic technique for combating interference is diversity. By sending or receiving several versions of the same signal, signal quality can be greatly improved, which reduces the error rate. Different types of diversity can be used: time, space or frequency diversity. As stated earlier, time diversity is used in retransmissions to resend data packets in case of transmission error. Space diversity – microdiversity, in the form of multiple transmitting or receiving antennas – helps combat small-scale fading. Macrodiversity, for its part, is created using several eNodeBs. Existing macrodiversity solutions, such as single-frequency networks (SFN) and coordinated multipoint (CoMP) systems, are mainly used to increase data rate and coverage at the edge of cells. However, for tactile internet applications requiring high reliability, these systems can also help reduce transmission failures. Such macrodiversity configurations involve several


base stations in a multi-access architecture: a terminal device is simultaneously connected to several antennas, and these antennas may be heterogeneous. Frequency diversity is the third case. Transmissions on different carrier frequencies are combined, as in carrier aggregation (CA) or dual connectivity (DC). To increase reliability, it is also possible to duplicate packets over distinct radio interfaces instead of splitting the data across them as CA and DC do. Furthermore, user mobility is another source of degraded quality, because the radio situation evolves dynamically. A handover is performed when a user moves from one cell to another. Even when the handover is performed properly, an interruption of around 50 ms occurs; if the transfer fails, recovery takes far longer. To avoid interruptions, multi-access at the control plane level is used more and more. With multi-access, several paths are available for message delivery, even if one link is broken by an ongoing handover. This multi-access solution can also improve transmission security: the packets of a single message transit over different paths, so an attacker would have to eavesdrop on all the paths and reassemble the packets, which is considered infeasible. Another technique to improve reliability is network coding, in which information is transmitted as multiple coded packets over different paths to the destination, improving both latency and reliability. To sum up, diversity is the key factor for improving reliability; however, we must be able to choose the best solution and the type of diversity to use to meet the needs of tactile internet applications.

14.4. Tactile internet in Industry 4.0

Industry 4.0, or the fourth industrial revolution, is the current trend towards automation and data exchange in manufacturing technologies. Industry 4.0 is also known as the smart factory. In these modular smart factories, smart control systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. On the various Internet of Things platforms, smart control systems communicate and cooperate with one another and with human beings in real time. Radio communications are increasingly used to connect objects and to reach the fog and the Cloud. This radio connection increasingly relies on 4G and 5G; 5G dominates, because its latency is acceptable to machine tools and industrial processes. In Industry 4.0, four design principles help companies identify and implement smart factory scenarios:

– Interoperable communication: the ability of machines, devices, sensors and people to connect and communicate with one another via a tactile internet, ensuring the exchange of information and decisions in near real time, even though network resources are shared between mobile users, network operators and virtual services.

– Information transparency: the capacity of information systems to create a virtual copy of the physical world, also called digital twins. These digital twins are enriched with data coming from sensors: raw sensor data must be aggregated to obtain contextualized information that can be exploited well beyond automated processes.

– Technical support: the capacity of support systems to help human beings by grouping and visualizing information in an understandable way, so that decisions can be made in context and urgent problems solved in real time. It is also the capacity of hardware systems to relieve human beings of many repetitive tasks.

– Decentralized decisions: the ability of smart control systems to make their own decisions and execute their tasks as autonomously as possible. Tasks are escalated to a higher level only in the case of exceptions, interference or conflicting goals.

14.5. Conclusion on tactile internet

The tactile internet is a new generation of networks combining real-time properties and reliability. It is based on smart controls able to react within about a millisecond. Thanks to its mission-critical properties and very high reliability, 5G is the basic network for this new world. The result is an architecture in which objects and machines are connected directly to the antenna, which may be located outside the company. However, skin and fog networking solutions can also address the problems raised by the tactile internet through small datacenters located within the company itself. We find here the same dichotomy as in the automation world: either operation is outsourced to a 5G operator, or control remains in the company, where the necessary equipment is installed in order to keep control over the applications.

15 Security

Security is of prime importance for the networks of today and will become even more crucial in tomorrow’s world. With the Internet, it is necessary to constantly be inventing new procedures to ensure the non-disclosure of numerous elements, such as the location, the names of the parties being authenticated, the semantics of the messages and private information. Of the various possible solutions, we will discuss three in detail: the use of a Cloud of security, the use of secure elements and the emergence of the blockchain. The first solution is a new paradigm which is becoming increasingly widespread. The second solution has been used for a long time now but is renewed with the eSIM. The third solution is growing fast and offers certification and traceability. Security in the world of networking is a paradigm which does not have a simple solution, besides making improvements to the existing algorithms, of which there are already a very great number, with a view to dealing with new attacks. Yet this chapter discusses a new solution in the world of security: the Cloud of security – i.e. a Cloud whose purpose is to secure the data and networks in the world of operators, companies and the general public. An initial diagram of a Cloud of security is shown in Figure 15.1. The Cloud of security contains numerous virtual machines for security such as authentication servers, authorization servers, identity management servers as well as firewalls and even very specific firewalls corresponding to a particular application. We also sometimes find secure-element servers which may contain thousands of SIM cards, eSIM or HSMs (Hardware Security Modules). We will now describe three examples of servers hosted in the Cloud of security, beginning with DPI (Deep Packet Inspection). The objective of DPI is to determine the applications transported in the flows passing through a network. In order to do so, DPI scrutinizes a flow and looks for applicative signatures which can, in general, be found by examining the grammar of an application. Every Internet application has its own signature. DPI examines the series of bits and determines the signatures.


The advantage of this solution, compared with simply examining the port number, is that all the bits passing through are inspected, rather than only the headers of the frames and packets. Indeed, attackers encapsulate their attack in messages of a known type, which can easily pass through conventional firewalls. However, detecting signatures in a flow arriving at high speed is particularly complex and requires top-of-the-range – and therefore expensive – hardware. By offloading to a Cloud the task of examining the bits of a flow in search of signatures, the cost of DPI can be greatly lowered: flow recognition is moved to a powerful datacenter, and the main remaining cost is that of transporting the flow to that datacenter. Various solutions have established themselves on the market, depending on the requirements. For example, only the message headers may be forwarded, which dramatically reduces the volume to be examined but brings the risk of missing encapsulated bits.
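As a purely illustrative sketch of the signature-matching idea (the byte patterns below are simplified examples, not the signature grammar of any real DPI engine), a flow classifier might look like this in Python:

import re

# Simplified, hypothetical application signatures; real DPI engines rely on
# far richer grammars and stateful parsing of the flow.
SIGNATURES = {
    "http": re.compile(rb"^(GET|POST|HEAD|PUT|DELETE) \S+ HTTP/1\.[01]"),
    "tls":  re.compile(rb"^\x16\x03[\x01-\x04]"),   # TLS handshake record header
}

def classify(payload: bytes) -> str:
    """Return the name of the first application whose signature matches the payload."""
    for app, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return app
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))   # -> http

Offloading this matching to a datacenter simply means running such a classifier – at far higher speed and with complete signature bases – on the flows, or only on their headers, forwarded by the network.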


Figure 15.1. A Cloud of security

The second example is that of firewalls. Once again, the world of Cloud computing fundamentally changes these modules, deporting their software to datacenters with numerous advantages. The first advantage is to have specialized virtual firewalls for each application. With DPI, we detect the nature of the application and send the flow in question to the corresponding firewall. The firewall has processing capability to examine all details of the flow. The low-powered firewall dealing with all flows is replaced by a set of very powerful, specialized firewalls. The disadvantage relates to the flows, which need to be sent to the specialized firewall and must then be returned to the company, although there is the possible advantage of being able to hide firewalls in order to prevent denial-of-service attacks on them. The third example is a secure-element server which, as we will see in this chapter, can be used to secure access to sensitive services such as mobile payment. These secure-element servers can contain thousands, or even millions, of secure elements such as smartcards, which may be reached by secure channels requiring high-security services.
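Schematically, and assuming purely hypothetical firewall addresses, the steering performed after DPI can be reduced to a simple dispatch table:

# Hypothetical addresses of specialized virtual firewalls in the Cloud of security.
VIRTUAL_FIREWALLS = {
    "http": "10.0.1.10",   # firewall tuned for web traffic
    "tls":  "10.0.1.11",   # firewall tuned for encrypted sessions
    "voip": "10.0.1.12",   # firewall tuned for real-time media
}
DEFAULT_FIREWALL = "10.0.1.254"

def next_hop(application: str) -> str:
    """Pick the specialized virtual firewall for a flow labeled by DPI, or a default one."""
    return VIRTUAL_FIREWALLS.get(application, DEFAULT_FIREWALL)

print(next_hop("tls"))   # -> 10.0.1.11
print(next_hop("p2p"))   # -> 10.0.1.254

In practice, such a table would typically live in the SDN controller, which steers each flow to the chosen virtual firewall and then back towards the company.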


Also included in the Cloud of security could be numerous other servers, such as authentication servers, identity management servers, encryption servers (although these have the peculiarity of needing to be very near to the user) and intrusion detection servers.

15.1. Secure element

This section examines a new generation of security which has gradually become established over the past few years. This new generation uses secure elements as its basis. Indeed, high security cannot be achieved by software alone, which can always be broken by a memory dump and good knowledge of the position of the keys and other management data. Secure elements come in different forms, but the most commonplace today is the smartcard, so we will begin by describing smartcards as the most classic secure element. Figure 15.2 shows a smartcard containing all the physical elements of an ordinary computer: a microprocessor, ROM, RAM, persistent memory (generally EEPROM), a communication bus and an input/output interface. Until new smartcards embedded in a USB key were released, the communication channel represented a weak point, but this is no longer the case.


Figure 15.2. Hardware architecture of the smartcard

At the heart of most smartcards is an 8-bit microprocessor with a computational power of about 1–3 MIPS (million instructions per second), running at a frequency of 3.5 MHz. This type of microprocessor takes 17 ms to execute the DES (Data Encryption Standard) encryption algorithm, for example. It should be noted that the power of smartcard processors is increasing at such a rate that new security cards will be able to support much more complex algorithms.


The 32-bit architectures based on RISC processors with 1 million transistors constitute the new generation of microprocessors, with a computational power of about 30 MIPS at 33 MHz. Such microprocessors only take about 50 µs to execute a DES and 300 ms for RSA encryption with a 2048-bit key. Besides the processor, the different types of memory are the main elements of the microcontroller. They serve to save programs and data. Smartcard microcontrollers are computers in their own right. They generally have between 1 and 8 Kb of RAM, between 64 and 256 Kb of ROM and between 16 and 128 Kb of EEPROM. The amount of EEPROM available on a smartcard was, for a long time, limited because EEPROM was not designed specifically for smartcards and its physical limits of miniaturization had been achieved. Flash memory has enabled us to overcome this constraint. Thus, we have seen the emergence of smartcards with 1 Mb of persistent flash memory. The protection of the smartcard is mainly taken care of by the operating system. The mode of physical routing for data access is only available after personalization of the card just before it is issued to the user. Data are accessed through a logical structure of files secured by access control mechanisms. The smartcard is widely used in mobile telephone networks (SIM cards), as are public key infrastructures (PKIs). This technology has enabled operators to exploit their network while greatly limiting the number of instances of fraud, thereby also ensuring financial viability. It is also the legal support for electronic signatures, as recognized by numerous countries. The EAP smartcard directly processes the EAP protocol in the smartcard. The main intended applications are EAP-SIM and EAP-TLS. There are many advantages to an EAP-TLS protocol being executed on a smartcard. First, authentication is independent of any given software publisher (e.g. Microsoft). In addition, the security provided is certainly better than with the EAP-TLS protocol carried out in software form by the processor of a personal computer, because it is always possible for spyware that has infected the PC to capture the keys. The advantage of smartcards is that all the computations are carried out in the card itself, and the smartcard only outputs an encrypted flow. The secret keys never leave the smartcard. – multiple-identity management: the card holder can use several wireless networks. Each of those networks requires an authentication triplet: EAP-ID (value delivered in the message EAP-RESPONSE.IDENTITY), EAP-Type (type of authentication protocol supported by the network) and cryptographic credits – i.e. the set of keys or parameters used by a particular protocol (EAP-SIM, EAP-TLS,


MS-CHAP-V2, etc.). Each triplet is identified by a name (the identity), which can have multiple interpretations (SSID, account username, mnemonic, etc.); – assignment of an identity to the card: the card’s identity is contingent upon the host network. Internally, the card may possess several identities and adapt to the network to which the terminal and the smartcard are connected; – processing of EAP messages: as the smartcard possesses a processor and memory, it can execute code and process the EAP messages received and send such messages in response; – calculation of the unicast key: once the authentication session has been completed, the EAP tunnel can be used for the transmission of diverse types of information, such as keys or profiles. It is possible to transmit a session key, for example, and make it available to the terminal wishing to access the resources of the wireless network. Figure 15.3 illustrates an authentication procedure between an authentication server and an EAP smartcard. The flow of commands passes through the software programs on the PC – i.e. first the EAP software entity, which simply transmits the EAP packets to the RADIUS server, on the one hand, and to the smartcard, on the other – followed by the machine’s operating system, which handles the interface with the smartcard and finally the IEEE 802.11 interface of the wireless link. To improve security, it is possible to insert chip cards on the server end as well, so that the EAP-TLS algorithm from the authentication server is also run on the chip card. With new chip cards that can store up to 1 Gb, it is possible to memorize the logs needed for traceability. Clearly, the more clients there are, the more the number of smartcards needs to be increased.

Figure 15.3. Authentication procedure using an EAP smartcard


The eSIM, or embedded SIM, is a secure element which is much smaller than a regular SIM card. This embedded card only holds the client's personal data; the physical card is virtualized, and the algorithms are executed either in the terminal itself or in a datacenter located further away. Of course, eSIMs can be found in smartphones, but their main interest lies in objects, especially small objects that do not have room for a regular SIM card. The eSIM has several advantages for the client, who can easily change operators by reprogramming it; the embedded card can even act as several SIM cards from several operators. The major drawback is that, if the object fails or runs out of power, the eSIM is of no further use: it cannot be removed and placed in another object.

15.2. Secure elements-based solution

15.2.1. Virtual secure elements

Secure elements can themselves be virtualized by the normal virtualization process: taking a physical machine and installing a hypervisor capable of supporting several virtual smartcards. This process is illustrated in Figure 15.4, with three virtual machines.

Figure 15.4. Virtualization of smartcards

However, a hardware element is essential to obtain very high security: – in software form, keys can always be stolen. It may be difficult to hack certain systems, but there is always a risk, which nobody is willing to take in very high security situations; – in a secure element, the keys cannot be stolen, or at least not from the card itself. The danger lies in the possibility of hacking while the keys are being transported to the secure element.


The virtualization solution can also be viewed completely differently – for example, as a set of physical cards whose functions are offloaded to a Cloud of security. These secure elements are grouped together on cards, as illustrated in Figure 15.5; these cards may number in the thousands if necessary.

Figure 15.5. A Cloud of secure elements. For a color version of this figure, www.iste.co.uk/pujolle/software2.zip

One of the paradigms of the new generation of security is to install a secure element associated with each element needing to be defended – be it an individual, an object, a virtual machine or anything else. To connect to the Internet, clients first need to authenticate themselves with their secure element. Symbolically, Figure 15.6 represents what is needed by each individual – and, by extension, each thing – connecting to the Internet.

Figure 15.6. The key for the Internet

Obviously, the symbol could be presented in a completely different form. Before going any further into the access securing processes, let us first examine the different solutions used to provide security to an environment.


15.2.2. The TEE (Trusted Execution Environment) Figure 15.7 shows the three major possible cases with one form of security based on software, one on hardware and an intermediary form of security pertaining to the TEE (Trusted Execution Environment).

Figure 15.7. The different security solutions

As we have seen, software security is not costly, but in general it does not offer a very high level of protection. It is often possible for a skilled attacker to retrieve a copy of the software and recover a certain number of keys. For this reason, it is usual to try to associate a hardware element with the keys. At the other end of the security scale, a hardware element always holds the keys or other important security elements, and may even execute the algorithms within this secure enclosure. The TEE acts as an intermediary between the client and the terminal's processor, and has been on the market for several years. The TEE is a secure zone within the main processor of a smartphone, tablet or any other mobile device, which ensures that sensitive data are stored, processed and protected in a confidential environment. The purpose of the TEE is to provide safe execution of authorized security programs, known as "trusted applications", which means it can provide end-to-end security by enforcing protection, confidentiality, integrity and access rights on the data. Smartphone manufacturers and chip manufacturers have developed versions of this technology and built them into their devices as part of their proprietary solutions.


Thus, application developers need to deal with the complexity of securely creating and evaluating the different versions of each application in order to conform to the specifications and security levels established by each individual proprietary solution. The first way of using the TEE is to attach to it a local secure element, such as a smartcard, which is found on numerous mobile terminals. The smartcard serves to host the secure part of the applications. The difficulty lies in installing several independent applications and being able to modify them, remove them and add new ones with a high level of security. For this purpose, the TSM solution was developed; we will discuss it in the next section.

15.2.3. TSM

A TSM (Trusted Service Manager) is a neutral third party that safeguards the security of the whole procedure of downloading applications (particularly payment accounts) to smartphones with a secure element. Commerce and payment require a certain level of cooperation between mobile operators and financial institutions. The TSM knows the security mechanisms employed by the banks and by mobile telephones, forming the link between multiple financial institutions and operators, while guaranteeing complete security of the consumers' credit card information. The TSM enables a service provider to distribute and remotely manage its contactless applications by allowing access to the secure element via the NFC link. Figure 15.8 shows the relationships between the different actors involved with the TSM.


Figure 15.8. Relationships between the different participants with the TSM


Several TSMs are needed for an issuer – a bank, for example – when it deals with different operators. Secure elements are indispensable for NFC services to ensure protection of critical applications from the point of view of security and to facilitate the use of the same security standards as for debit and credit cards. However, it is not sufficient to simply integrate a secure element into a mobile telephone. Specific functions are responsible for secure memory allocation so that the zones of the different service providers are separated from one another. Also, it must be possible to establish new secure services (applications and associated cards), on demand, without a third party being able to access the PIN code or other sensitive data. This is the role of the TSM (Trusted Service Manager). Figure 15.9 offers an overall view of the economic system behind NFC and TSM.

Figure 15.9. The economic system of NFC and TSM

There are two types of TSM: SEI TSM (Secure Element Issuer TSM) and SP TSM (Service Provider TSM). SEI TSM manages the security domains in the whole device, for the service provider and, thus, the whole of the life cycle of secure elements. SP TSM, for its part, manages the service provider applications available on the secure element. The communication standards and data transfer to secure domains (SD) are defined by the industrial federation “GlobalPlatform”. All types of secure elements are handled. TSMs thus function as sorts of gateways between the different service providers and the end clients.


The TSM infrastructure can also be used to make available and manage critical applications for security which are not based on NFC. Online authentication, as it is used in the context of online and mobile banking services, is an example of this. One of the challenges that needs to be overcome, in the context of the establishment of the NFC economic system, relates to the fact that the number of secure-element providers, and therefore SEI TSM providers, is relatively limited, whereas the number of service providers is growing incessantly. If each service provider had its own TSM, the number of integrations and professional agreements required would simply explode. Hence, there are now open SP TSMs that handle several service providers, interacting with most service provider TSMs on the market in question. Service providers have access to a broad range of secure elements and can thus target most clients using smartphones, while secure-element manufacturers devote less time and money to the integration of projects including SP TSMs. As the name indicates, both types of TSM must be trustworthy and, therefore, must satisfy the strictest security standards. This also underlies the implementation of quality processes, such as security measures integrated into the systems and the solutions – in particular, the storage of secure keys. This is critical for the applications involved in payment operations. Payment systems such as Visa or MasterCard thus require a rigorous TSM certification process. Because of these requirements in terms of security and savoir-faire in mobile technologies and payment technologies, TSMs, and particularly open SP TSMs, are usually run by a trusted intermediary. The general problems of the TSM system are as follows: – neutrality: the TSM must be independent of the issuer and the operator; – scalability: the TSM must be able to handle all types of credentials for all applications (banks, badges, coupons, transport, etc.); – the TSM must be able to support all types of form factor such as SIM cards, “secure microcontrollers”, SD cards and eSEs. The specific problems of the TSM are that: – the credentials must be isolated for each application from the same issuer and from each different issuer. A variety of options have been defined to create security domains (SD); – access to the SD is complex. In addition, there may be several secure elements, and it is complex to ensure access to the right secure element;


– multi-tasking is another fairly complex characteristic. No one application should be allowed to obstruct another for too long, on all the security elements. In particular, we need to manage priorities between different issuers; – the use of SMS or BIP (TCP/IP) for the transport of the applets between the TSM and secure element is problematic. It leads to several problems, including OTA (Over The Air) security, and the creation of SDs in a highly secure manner; – the process of installation of the applets must be common to all operators and all issuers; – there is a DAP (Data Authentication Pattern) mechanism, which enables the issuer to be sure that the installed applet has not been modified by the operator; – the problem of who is authorized to manage the SDs needs to be precisely defined; – the personalization of the applets in the SDs needs to be done with caution; – secure elements have a stringent limitation on the number of SDs. This is a major constraint for the number of NFC applications. Figure 15.10 shows the security domains of the telecom operator, that of the TSM and those of the applications. All of this is complex to manage, and the rights to the SDs of the operator and of the TSM are imperfectly defined.

Figure 15.10. The security domains


15.2.4. Solution without a TSM A solution without a TSM has been developed by some companies, including Google and EtherTrust. This solution entails deporting the secure elements to servers which can be controlled by the service providers (various banks, companies, etc.) or by a Cloud of security provider. With this solution, all the shortcomings of the current implementation of TSM are eliminated. However, a new constraint is introduced, replacing the drawbacks of TSM: the mobile device must constantly be connected. This solution is represented in Figure 15.11.

Figure 15.11. Solution without a TSM

The connection is no longer made with the secure element on the smartphone but rather with secure elements held in card grids. There are as many secure elements as there are applications a user wishes to use. The advantage is that applets and program modifications are made directly in the secure elements, which are hosted either by the NFC service provider or by a secure-element provider. This system is scalable, because a user can have as many secure elements – and therefore SDs – as NFC applications, and all the difficulties caused by sharing a single secure element disappear. This solution is being extended with the emergence of HCE, which we will discuss in the next section.


15.2.5. HCE

HCE (Host Card Emulation) is an exact virtual representation of a smartcard using software only. Before the HCE architecture, NFC transactions were mainly carried out using a local secure element such as a smartcard. HCE makes it possible to offer virtual payment cards, which can be distributed in real time without having to change the software on the smartphone or tablet. HCE technology enables information transfer across an NFC interface between the NFC component and a remote secure element, which can be treated as if it were local. HCE requires the NFC protocol to channel the data to the smartphone's operating system, instead of to a secure element such as a local hardware smartcard configured specifically to respond only as a card, with no other functions. Android 4.4 and later from Google uses HCE, allowing an NFC interface to communicate with the terminal's operating system. Google introduced this platform to carry out secure NFC transactions for payments, loyalty programs, access to external cards and other personalized services. With HCE, any application on an Android 4.4 or later device can emulate an NFC smartcard, enabling users to initiate transactions with an application of their choice. Applications can also use a new reader mode, acting as a reader of HCE cards and other NFC devices.

15.2.6. Securing solutions

Having described the TEE, TSM and HCE, we can now describe the different practical solutions for access security using secure elements. There are two opposing approaches: the local solution, with a secure element in the terminal or connected directly to it, and the delocalized solution, which is based on virtualized secure elements. Figure 15.12 illustrates the first solution, whereby a secure element must be available locally. That secure element could be a SIM card from a telecom operator, the secure element built in by the manufacturer of the terminal, a secure element hosted on an SD card inserted into the mobile terminal's reader or even an external secure element communicating with the mobile via an NFC interface, for example. These solutions developed greatly between 2010 and 2015, because they were practically the only option before the arrival of HCE. Manufacturers who had made enough of a name for themselves in the market of smartphones, tablets and portable electronics naturally chose this solution. The secure element is a smartcard or an eSIM – made by NXP, for example – as is the case in Apple's smartphones.


Figure 15.12. Securing by local secure elements

This solution is easy to implement and has the advantage of allowing communications to be carried out locally, but it suffers from the disadvantages listed in section 15.2.3 – namely, the significant limitation on the number of applications that can be embedded in a secure element and the difficulty of configuring the secure element when a new service is added. The second major solution is the "remote smartcard", which is advantageous if we want high security while still using a genuine secure element. This solution has become possible thanks to the arrival of Android's HCE. Previously, the solution used was always of the first type, as shown in Figure 15.13. With the advent of HCE, the NFC component can be directly connected to an external secure element, as shown in Figure 15.14.

Figure 15.13. Securing using external secure elements before Android 4.4


Figure 15.14. Securing using external secure elements with Android 4.4 and later

Before giving a more detailed description of this solution and presenting examples of its operation with classic applications, let us look at how confidential communication takes place between the mobile terminal and, say, an Internet-based trader. This relationship is illustrated in Figure 15.15.

Figure 15.15. Securing by a Cloud of secure elements


Communication takes place between the mobile terminal and the secure element, which is in the Cloud of security. This communication takes place through a secure tunnel opened with the SSL/TLS protocol. Once this communication is set up, the secure element connects with the trader through a secure channel opened between the secure element and the retailer's site. Purchases can then be made perfectly securely. Once the purchase has been made, control is passed back to the mobile terminal, which is able to retrieve the purchased music or video or, indirectly, any other type of purchase. The secure element may be in a Cloud of secure elements or in an HSM (Hardware Security Module), which plays the same role, but the client still needs to trust that device. It is for this reason that Clouds of secure elements remain the solution most likely to develop, because the secure elements – for example, smartcards – may actually be held by the user, who will therefore have a high level of trust in the system. One initial advantage of this solution using a virtualized or delocalized smartcard is the possibility of assigning to a particular user not just one smartcard but as many as s/he actually needs. The client can have multiple banks, multiple key managers and multiple security service providers. A security service provider can quite easily carve out a niche in this market by setting up its own secure-element server or by hosting secure elements with a Cloud-of-security provider. This property is illustrated in Figure 15.16.

Figure 15.16. Advantages of the external solution
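As an illustration of the first step – the SSL/TLS tunnel between the terminal and the remote secure element – the following Python sketch opens such a tunnel and forwards a command; the host name and the APDU-style exchange are hypothetical placeholders, and a real deployment would add mutual authentication.

import socket, ssl

context = ssl.create_default_context()   # server certificate verification enabled by default

def send_to_secure_element(command: bytes, host: str = "se.cloud.example.com",
                           port: int = 443) -> bytes:
    """Open a TLS tunnel to the (hypothetical) secure-element front end and forward a command."""
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(command)          # command travels only inside the encrypted tunnel
            return tls.recv(4096)         # response from the remote secure element

# Example call (only meaningful against a real server):
# response = send_to_secure_element(bytes.fromhex("00A4040007A0000002471001"))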


To be complete, the environment must have its own secure-element management system. For this purpose, a simple solution is offered by the Global Platform consortium, involving adding an administration center that can, itself, be perfectly well virtualized. Figure 15.17 shows the environment complete with its administration server.

Figure 15.17. Architecture of securing using external secure elements

Numerous security services can be constructed using the above model, drawing on a Cloud of security. We have already seen the examples of virtual firewalls and virtual DPI. With regard to the Cloud of secure elements, we can cite management of the identity of virtual machines and management of hotel, car and house keys and, more generally, any keys we may need to protect something. Figure 15.18 represents the securing of virtual machines. Indeed, today, we are seeing increasing numbers of attacks launched from Clouds, such as the creation of specific virtual machines for attacking private servers, which may be external or internal. In the internal case, we can cite attacks which, in an attacking virtual machine, randomly forge all the frames corresponding to a protocol based, for instance, on the RFC describing that protocol in detail. These random frames are then sent to the datacenter’s internal bus, with a very high probability that one of those frames will be an attack frame and will take down one or more servers, which, as they shut down, will force other servers to do so too. Another application is that of mobile keys which are stored in a Cloud of security, which can open a door when the smartphone is held next to the lock. The communication is established directly between the Cloud of security and the NFC lock. Of course, we must be aware of the possibility of an occasional communication breakdown – for example, when we cannot connect to the Cloud of security at the moment of presenting the smartphone to the lock. In such a case, the key can be stored in the TEE of the mobile devices for a limited period of time.


Figure 15.18. Securing of virtual machines

The applications of purchasing and the relationship with a banking site are also part of the services that can be provided by this virtual chip card solution. One of the types of transactions is illustrated in Figure 15.19. The transaction takes place between the NFC reader of the retailer site and the user’s secure element, which – in this figure – is in a smartcard in a Cloud of security. As in the case of the mobile key, the mobile phone merely serves as a modem to facilitate the transaction between the two endpoints and establish the necessary secure channels. This solution is sufficient for limited-cost purchases. For larger transactions, it is necessary to add further communications with the “issuer” and the bank itself. In conclusion to this section, we have seen a new approach to securing by the use of secure elements. This approach is one of the key security solutions in the world of telecommunications and the Internet. Having introduced secure-element-based techniques, we described the potential for innovations offered by the Cloud of security.

Figure 15.19. Securing of an electronic payment

Service providers and operators have a choice between the two approaches: local, with a secure element in the mobile terminal, or remote, with a secure element


in the Cloud of security. Usually, the choice is dictated by whether or not it is possible to have a secure element built into the mobile terminal. In 2015, Apple opted for local secure elements in light of its extensive range of smartphones, and has been able to launch applications based on this approach, such as Apple Pay. Google, on the other hand, opted for the external secure-element solution after failing to obtain a sufficient range of smartphones by buying Motorola (which it subsequently re-sold). Nevertheless, Google is looking into both solutions – local and delocalized – having re-entered the mobile phone market with HTC and Pixel phones.

15.3. The blockchain

The blockchain allows certification and traceability to be performed, and this technology can readily be used in network environments. Initially developed for cryptocurrencies such as Bitcoin, it has many other applications in networks – for example, certifying the identity of virtual machines in order to prevent virtual machine fraud. The blockchain is a database technology built on an immutable ledger. This ledger contains transactions which, once written, can no longer be modified. Everyone can check this ledger and everything that has ever been written in it, past or present, without any trusted third party; it works directly in a peer-to-peer mode. As it is fully distributed, the system cannot be corrupted, and operations can be verified by the nodes that manage the chain, called miner nodes. The transactions to be recorded in the chain are gathered into blocks, and each new block is linked to the previous one by the hash value of that previous block. All the transactions of a new block are validated by the miner nodes. If the block is valid, it is timestamped and permanently added to the chain. The transactions it contains are then visible on the whole network and can easily be verified by anyone who wishes to do so. While the computation is complex to perform (it is a hash computation), checking it is very simple. Once added to the chain, a block can no longer be modified or deleted, which guarantees the authenticity and security of the whole. As shown in Figure 15.20, each block of the chain is made up of several transactions, a checksum (hash) used as its identifier, the checksum of the previous block (except for the first block of the chain, called the genesis block) and, finally, a measure of the quantity of work necessary to produce the block. This work is defined by the consensus method used within the chain, such as proof of work or proof of stake.


Figure 15.20. Operation of a blockchain
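To make the structure just described concrete, here is a toy Python sketch of a hash-chained block with proof of work; the difficulty (a few leading zero hex digits) is deliberately tiny and purely illustrative, far from the targets and consensus machinery of a real chain.

import hashlib, json, time

def block_hash(block: dict) -> str:
    """Checksum of the block's content, used as its identifier."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(transactions: list, previous_hash: str, difficulty: int = 4) -> dict:
    block = {"timestamp": time.time(), "transactions": transactions,
             "previous_hash": previous_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1                      # proof of work: search for a nonce giving a valid hash
    block["hash"] = block_hash(block)            # store the winning checksum in the block
    return block

genesis = mine(["genesis"], previous_hash="0" * 64)
block1 = mine(["Alice pays Bob 1"], previous_hash=genesis["hash"])
print(block1["previous_hash"] == genesis["hash"], block1["nonce"])

Verifying a block is as simple as recomputing the hash of its content, whereas producing it required the nonce search – the asymmetry on which mining competition rests.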

The blockchain is a system for establishing trust. After more or less complex computation, the system arrives at a consensus which cannot be questioned. The method for reaching this consensus can be proof of work or proof of stake. The first method relies on a complex piece of work, proving that a miner managed to find the right hash value for the block. The second method takes into account the assets held by a node: the greater a node's stake in the system, the less interest it has in undermining it. The protocol linked to proof of work uses cryptographic computation, which requires high computing power, supplied by the miners; breaking this process would take billions of years. The miners are the entities whose role is to supply the network with the computing power needed to update the decentralized ledger. For each update, miners must find a sufficiently hard-to-obtain hash. The first to do so becomes the leader, manages the new block and wins a reward of 12.5 bitcoins; in the Bitcoin network, this happens about every 10 minutes on average. Miners compete to become the leader and take charge of a block's transactions, which allows the computing power available on the network to grow. Anybody can contribute computing power to mine, but the more miners there are, the harder it is to win the consensus resolution. Thus, the protocol becomes almost untouchable as soon as competition between the nodes of the network is high, meaning that no single group of miners can take the lead.

15.4. Conclusion

The world of security is very extensive, and it has only been possible, in this chapter, to offer a very partial view of network security. In particular, we have focused on the new generation, linked to the Cloud, with the Cloud of security. This solution has advantages and disadvantages, but it occupies new ground: easy entry into the market and easy tailoring to the security objective. Only time will tell how much market share it will win. Another emerging solution comes from the blockchain, which offers certification and traceability without any central point of control.

16 Concretization and Morphware Networks

Virtualization requires a software version of the infrastructure. It is a paradigm in which the network environment is made of datacenters of varying sizes that host the virtual machines. This solution is excellent for the agility and flexibility of the network, but very poor from the standpoint of performance and energy consumption. A good answer to these constraints is concretization. Concretization is the reverse of virtualization: the question becomes how to move from software back to hardware while preserving the same degree of agility as software. In other words, the idea of concretization is to replace the software with hardware that is instantaneously reconfigurable, so that the software executed on the reconfigurable processor can instantly be replaced by different software on reconfigured hardware. Figure 16.1 illustrates the relationship between the processes of concretization and virtualization. However, the initial hardware equipment that is virtualized has nothing to do with the equipment obtained after concretization. If the initial equipment is, for example, a router, the post-concretization equipment is routing software executed on a reconfigurable processor. Concretization thus takes the form of equipment that is almost instantaneously reconfigurable. The goal of this solution is to speed up the execution of the software obtained through virtualization and to consume much less energy.


Figure 16.1. The process of concretization

16.1. Accelerators Even before concretization, one initial solution to speed up the computation of software is represented by accelerators. Many different products of performance accelerators are already on the market, including the DPDK (Data Plane Development Kit). The DPDK environment is a collection of data plane libraries and network interface drivers, which accelerate communications. Real migrations of virtual machines – i.e. the actual transport of all the software associated with a virtual machine, from one physical machine to another – can be done in the space of a few milliseconds. The purpose of the DPDK is to create a set of libraries for environments made up of software which are designed to perform specific functions, requiring large amounts of computation power. This type of software is found in the context of virtualization for signal-processing applications or multimedia protocol management. In particular, this solution applies in the networking domain by way of an addition: the EAL (Environment Abstraction Layer). The EAL masks the peculiarities of the environment, by presenting a programming interface to libraries, interfaces, hardware accelerators and operating-system elements such as Linux or FreeBSD. Once the EAL has been created for a specific environment, developers string together the libraries and interfaces to create their own applications. For example, the EAL can provide a standard framework to handle Linux, FreeBSD, Intel IA 32- or 64-bit or IBM Power8. The EAL also provides additional services, such as temporal references, access to the PCIe bus, tracking functions, debugging solutions and alarm functions. Finally, the DPDK environment implements a model with very little overhead,


which helps to obtain excellent performance on the data plane. The environment also gives access to the different nodes while eliminating computation overheads, leading to rather impressive accelerations achieved by way of fairly simple processes. The DPDK environment also includes examples which demonstrate best practices for software architectures, tips for designing data structures and storage and for tuning applications, as well as advice on how to overcome the performance deficits of different network deployment solutions. The DPDK is one of the best examples of an accelerator compensating for the insufficient performance of processors running software that must achieve high throughput. There are other types of accelerators, which can be seen as intermediate between entirely software-based and purely hardware-based solutions. However, the advent of increasingly powerful reconfigurable microprocessors certainly represents a much better way of speeding up processing.

16.2. A reconfigurable microprocessor

A reconfigurable microprocessor is a microprocessor whose hardware can be rewired dynamically. This enables the chip to adapt effectively to the programming tasks required by a particular piece of software. For example, the reconfigurable processor can be transformed from a video processor into a graphics processor, each configuration being optimized to enable the applications to run at the highest possible speed. We can say that they are bespoke processors. In practical terms, this ability results in a high degree of flexibility in the chip's functions. For example, a single chip could be used simultaneously for a camera, a signal processor, a router and a firewall. We only need to load the desired software, and the processor is reconfigured automatically to deliver the performance necessary for the programmed function. Several types of reconfigurable processors are available on the market. First of all, there are DSPs (Digital Signal Processors), which exhibit excellent performance. DSPs are programmable chips used in cell phones, automobiles and many music and video players. Another version of reconfigurable microprocessor has programmable memory matrices which perform hardware functions using software tools. These microprocessors are more flexible than specialized DSPs but are also slower and more expensive. Hardwired chips are the oldest, least expensive and fastest, but unfortunately the least flexible. A DSP is illustrated in Figure 16.2.


Figure 16.2. A DSP

Reconfigurable microprocessors have different names depending on exactly what is reconfigured. The best-known examples are FPGAs (Field-Programmable Gate Arrays), which are arrays of gates that can be programmed using RAM technology. Figure 16.3 shows an example of an FPGA.

Figure 16.3. Example of an FPGA

EPLDs (Erasable Programmable Logic Devices) are also programmable logic circuits, but they use FLASH technology.


Figure 16.4. An EPLD component

We can represent the different types of solutions as described in the form shown in Figure 16.5.

Figure 16.5. The different types of microprocessors

Reconfigurable microprocessors form a new generation of systems which, in the long term, should come to replace all software elements requiring a little computing power. These reconfigurable microprocessors comprise a layer of hardware elements and a layer of memory elements. The hardware element has a varying degree of complexity, characterized by its "granularity", which ranges from fine-grained to coarse-grained. For


a fine grain, we work at bit level, so we can program absolutely any logical function. However, this solution is costly in terms of performance and is not able to deliver all the desired orders of magnitude of performance, for certain computational components in networks. In addition, the reconfiguration time is too long to handle several real-time processes on the same reconfigurable processor. In the case of coarse grains, it is no longer possible to perform all functions directly. The elements form operators, which can directly be used for the necessary operations in signal processing or multimedia protocols. These operators can be reconfigured much more quickly, because they are limited in terms of the number of functions that the component itself can perform. More specifically, the granularity of the reconfigurable element is defined as the size of the smallest basic block – the CLB (Configurable Logic Block) – which can be included in the string of functions to be performed. A high degree of granularity – i.e. fine granularity – means great flexibility to implement the algorithms using hardware. We can implement almost any type of function. We use fine grains to carry out particular functions or test new algorithms, before moving on to coarser grains. The difficulties facing fine-grained circuits include the higher power requirement and the lower execution speed, due to the path to be followed, which is generally quite long. Reconfiguration may also require a lot of time in comparison with the time periods necessary to maintain a real-time process. However, coarse grains have much shorter chain paths and lend themselves more easily to real-time applications. Of course, it is important that the computation of a function correspond as closely as possible to the path followed. If the granularity is too coarse, there is a risk that the component will take more time than is necessary – in other words, poor use coupled with higher consumption. For example, an addition on four bits performed on a component with a granularity of 16 bits degrades performance, as significantly more resources are consumed. The idea to find the best compromise is to make matrices of coarse-grained elements mixed with fine-grained elements. We can obtain such an arrangement on a single chip with rDPAs (reconfigurable Datapath Arrays) and FPGAs. Matrix architectures combining rDPA and FPGA must be optimized so that the path taken between the different elements is as short as possible. If the architecture is not appropriate, we immediately see a significant loss of efficiency, but this is not necessarily a major problem, if the overall circuit serves numerous, very different algorithms. In fact, the best scenario is to know, in advance, which algorithms will be executed on the circuit, with a view to being able to optimize the logic elements, the memory and the path that is followed. FPGAs are generally too fine-grained to

Concretization and Morphware Networks

265

be highly efficient in terms of performance, and they need to be substituted with coarser-grained elements that are better suited to the purposes of the reconfigurable microprocessor. Reconfiguration can take place when the circuit is engaged, between two phases of execution, or even during the execution of a process. Circuits which work at the bit level, such as FPGAs, require a lot of time for reconfiguration, which is carried out by way of a bitstream that is fairly complex in relation to coarse-grained architectures, which require a much shorter binary stream, with much quicker establishment. In fact, most reprogrammable cards have partial reconfigurations that facilitate the execution of a new function while another part of the microprocessor continues to execute a previous function. Reconfigurable element matrices seem to represent the best compromise between fine- and coarse-grained architectures, bringing together enough circuits to find optimal chain paths. To control this collection of circuits, a host processor is needed. The host needs to be powerful to determine which algorithms to use to compute the chain. Overall, the flexibility of the architecture hinges on the path taken to interconnect the gates. Thus, the path to be determined is of prime importance in carrying out complex functions, such as those found in certain protocol- or signal-processing operations.

Figure 16.6. Reconfigurable element matrix
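To make the granularity trade-off concrete, the following Python sketch models, under purely illustrative assumptions (the element widths, cycle counts and reconfiguration times are invented), the cost of mapping an operation onto fine-grained and coarse-grained elements: the fine grain wastes no datapath width but pays a long chain path and a slow bitstream load, while the coarse grain loads quickly but wastes resources when the operand width is smaller than its granularity (for instance, a 4-bit addition on a 16-bit operator).

```python
from dataclasses import dataclass

@dataclass
class ReconfigurableElement:
    """Illustrative model of a reconfigurable element (all values are assumptions)."""
    name: str
    granularity_bits: int      # width of the smallest configurable block
    cycles_per_block: float    # execution cost per block in the chain
    reconfig_time_us: float    # time to load the (partial) bitstream

def mapping_cost(elem: ReconfigurableElement, operand_bits: int) -> dict:
    """Estimate execution cost and wasted resources for one operation."""
    # Number of basic blocks chained to cover the operand width (ceiling division).
    blocks = -(-operand_bits // elem.granularity_bits)
    wasted_bits = blocks * elem.granularity_bits - operand_bits
    # Fine grains build long chains (many blocks); coarse grains build short ones.
    exec_cycles = blocks * elem.cycles_per_block
    return {
        "element": elem.name,
        "blocks_used": blocks,
        "wasted_bits": wasted_bits,
        "exec_cycles": exec_cycles,
        "reconfig_time_us": elem.reconfig_time_us,
    }

if __name__ == "__main__":
    fine = ReconfigurableElement("fine-grained (bit-level CLB)", 1, 1.0, 500.0)
    coarse = ReconfigurableElement("coarse-grained (16-bit operator)", 16, 2.0, 20.0)
    for width in (4, 16, 64):
        for elem in (fine, coarse):
            print(width, mapping_cost(elem, width))
```

With these assumed figures, a 4-bit operand shows 12 wasted bits on the 16-bit operator but a reconfiguration that is 25 times faster, which is precisely the compromise that mixed matrices try to balance.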

In fact, we need to be able to have fine-grained and coarse-grained reconfigurable components in a single system and to combine them in a programmable environment such as that illustrated in Figure 16.7.


Figure 16.7. Reconfigurable microprocessor using fine- and coarse-grained elements
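As a complement to Figure 16.7, here is a minimal, hypothetical sketch of the role of the host processor: it keeps a map of fine- and coarse-grained regions and, using partial reconfiguration, loads a new function into an idle region while the other regions keep executing. All names (FabricRegion, load_function, etc.) and the selection rule are invented for illustration; real toolchains expose this through vendor-specific interfaces.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FabricRegion:
    """One partially reconfigurable region of the matrix (illustrative)."""
    name: str
    grain: str                       # "fine" or "coarse"
    busy_with: Optional[str] = None  # function currently executing, if any

@dataclass
class HostController:
    """Toy host processor that maps functions onto idle regions."""
    regions: list = field(default_factory=list)

    def load_function(self, func_name: str, preferred_grain: str) -> Optional[FabricRegion]:
        # Prefer an idle region with the requested granularity; otherwise any idle region.
        candidates = [r for r in self.regions if r.busy_with is None]
        candidates.sort(key=lambda r: r.grain != preferred_grain)
        if not candidates:
            return None  # every region is busy: reconfiguration must wait
        region = candidates[0]
        # Partial reconfiguration: only this region is rewritten,
        # while the other regions keep running their current functions.
        region.busy_with = func_name
        return region

    def release(self, region: FabricRegion) -> None:
        region.busy_with = None

if __name__ == "__main__":
    host = HostController([
        FabricRegion("R0", "coarse"), FabricRegion("R1", "coarse"), FabricRegion("R2", "fine"),
    ])
    host.load_function("fft_operator", preferred_grain="coarse")
    host.load_function("bit_permutation", preferred_grain="fine")
    print([(r.name, r.busy_with) for r in host.regions])
```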

The datacenter environments of the future should be built around the same types of reconfigurable memories and microprocessors that we have just studied.

16.3. Morphware networks

Looking at the history of networks, the story began with hardware networks – i.e. networks whose nodes are formed of hardware, as illustrated in Figure 16.8. The nodes are not in the least bit agile: they are unmodifiable, and if we wish to increase their power, we need to upgrade the hardware. If we want to change technology – for example, to go from using routers to using switches – again we need to change the hardware. Thus, there is no flexibility in this solution. Yet, we have been using this technology since the beginning of networking.

Figure 16.8. A hardware network

The virtualization of network equipment has facilitated the introduction of software networks. These networks, as we have seen throughout this book, offer a very high degree of flexibility. The size of the devices can be adapted to the traffic on the network, and the technologies used can very easily be changed. Several virtual devices can be placed in the same hardware node; their number depends on numerous parameters, such as the power of the node's processor, the available memory space and the input/output ports. A set of software networks can be put in place to serve specific applications. This category of networks is illustrated in Figure 16.9.

Figure 16.9. Software networks. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
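The paragraph above notes that the number of virtual devices a hardware node can host depends on its processor power, memory and input/output ports, and the next paragraph observes that virtualized machines can move when a node breaks down. The sketch below illustrates that idea with an entirely hypothetical capacity model (the resource figures, class names and first-fit rule are invented): it checks whether a virtual network function fits on a node and, if the node fails, re-places its functions on the remaining nodes.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    name: str
    cpu: float
    memory_gb: float
    ports: int

@dataclass
class HardwareNode:
    """A physical node hosting virtual network functions (illustrative capacities)."""
    name: str
    cpu: float          # available vCPUs
    memory_gb: float    # available RAM
    ports: int          # available I/O ports
    hosted: list = field(default_factory=list)

    def fits(self, vnf: VirtualFunction) -> bool:
        return vnf.cpu <= self.cpu and vnf.memory_gb <= self.memory_gb and vnf.ports <= self.ports

    def place(self, vnf: VirtualFunction) -> bool:
        if not self.fits(vnf):
            return False
        self.cpu -= vnf.cpu
        self.memory_gb -= vnf.memory_gb
        self.ports -= vnf.ports
        self.hosted.append(vnf)
        return True

def migrate_on_failure(failed: HardwareNode, survivors: list) -> list:
    """Re-place the functions of a failed node on the surviving nodes (first fit)."""
    stranded = []
    for vnf in failed.hosted:
        if not any(node.place(vnf) for node in survivors):
            stranded.append(vnf)  # nowhere to go: the infrastructure is undersized
    failed.hosted = []
    return stranded

if __name__ == "__main__":
    n1 = HardwareNode("node-1", cpu=16, memory_gb=64, ports=8)
    n2 = HardwareNode("node-2", cpu=16, memory_gb=64, ports=8)
    n1.place(VirtualFunction("virtual-router", 4, 8, 2))
    n1.place(VirtualFunction("virtual-firewall", 2, 4, 1))
    print("stranded:", migrate_on_failure(n1, [n2]))
    print("node-2 hosts:", [v.name for v in n2.hosted])
```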

In software networks, the virtualized networking machines can move following the breakdown or malfunction of their original hardware node, or migrate towards a better-proportioned hardware node. The advantage of such software networks is the agility of the nodes and the great simplicity of installation and change. There are also numerous disadvantages, such as the management of each software network, the sizing of the software networks and security.

Yet, an important point which is rarely discussed relates to the performance of software networks. Moving from hardware to software decreases performance by orders of magnitude. Of course, the power of the datacenters is such that a loss of a factor of 10 to 100 can be made up for quite easily, so it is possible to entirely compensate for the loss in performance. However, it is more difficult to compensate for a greater loss than this, because we then need software accelerators, which are not always easy to manage.

For this reason, we are witnessing the rise of so-called "morphware networks" – i.e. networks which change depending on the client and adapt their performance to what is necessary for the network to operate in the best possible conditions. In fact, each client is assigned an individual network, and the network's performance is adapted to that client. The idea is to use concretization to obtain the requisite performance. The physical nodes of the network are datacenters of varying dimensions, but these datacenters contain microprocessors of the three different types encountered at the start of this chapter: classic processors as found in datacenters today, capable of handling bespoke software; ASICs, capable of handling very particular real-time processes, such as signal processing; and a fairly large number of reconfigurable processors, so that the necessary reconfigurations do not interfere with the processes running in the node.

These new morphware networks are illustrated in Figure 16.10. They represent a new generation, primarily based on reconfigurable processors, which positions them as intermediaries between physical networks and software networks.

Figure 16.10. Morphware network. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
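Since a morphware node mixes general-purpose processors, ASICs and reconfigurable processors, assigning each per-client function to the right kind of hardware is the central decision. The following Python sketch is a hypothetical selection rule, not the author's algorithm: it picks the processor type from a function's latency target and its need for flexibility, falling back to software on a general-purpose processor when nothing stricter is required. The class, thresholds and example functions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FunctionProfile:
    """Requirements of one network function in a per-client morphware network."""
    name: str
    max_latency_us: float    # latency target (illustrative)
    needs_flexibility: bool  # will the function be updated or reprogrammed often?

def choose_processor(profile: FunctionProfile) -> str:
    """Hypothetical mapping of a function onto one of the three processor types."""
    if profile.max_latency_us < 10 and not profile.needs_flexibility:
        # Hard real-time, fixed function: a dedicated ASIC is the natural target.
        return "ASIC"
    if profile.max_latency_us < 100:
        # Tight latency but still evolving: a reconfigurable processor keeps agility.
        return "reconfigurable processor"
    # Everything else stays in software on a general-purpose processor.
    return "general-purpose processor"

if __name__ == "__main__":
    for f in (
        FunctionProfile("signal processing", 5, False),
        FunctionProfile("per-client encryption", 50, True),
        FunctionProfile("billing and statistics", 10_000, True),
    ):
        print(f.name, "->", choose_processor(f))
```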

In conclusion to this chapter, we have examined the question of how to improve the performance of software networks, whose power is lower than that of hardware networks. With a view to preserving the flexibility and agility of software networks while achieving the power of hardware networks, concretization is becoming a hugely important process. Through concretization, we are able to obtain networks which are perfectly suited to the services and applications that need to be served.

16.4. Conclusion

This chapter has looked at the likely developments in the longer term, with concretization and networks that mutate to suit the clients' needs. There are numerous problems that still need to be solved, and new ideas have been put forward, including the creation of a software network every time a connection request is made. While such networks cannot be implemented at a true scale today, there is no reason to think that the same will be true in a few years' time. Hence, rapidly reconfigurable processors would represent an excellent solution.

Conclusion

The networks of the future will be completely virtualized on physical infrastructures – essentially datacenters of varying sizes. Very small datacenters will be located at the periphery of the network, near to the user. This physical infrastructure will be used to support software networks that are tailored to the clients' needs and to those of the applications for which they were generated. The agility of these networks is the main difference from previous-generation networks: it is possible to replace a network in a few minutes, or even a few seconds, and, by using automation, in a few milliseconds.

However, it is important to note the difficulties that could arise from this new generation of networks: the complexity of managing the different networks and the security of the whole system. Indeed, for reasons of isolation, management cannot be shared, for fear that one network will become intermingled with another. Security is also a major issue, because of the increased complexity of the architecture and the diversity of the networks. The Cloud of security is a promising new paradigm, but it does not solve all the problems.

Overall, this new generation of technologies is based on the Cloud and on virtualization, as indicated by Figure C.1, which also shows the overlap with migration, NFV and the Cloud of security. Great hopes are attached to the NFV standard, thanks to the major simplification offered by virtual machines. The Open Platform for NFV (OPNFV) represents another promising avenue: the OPNFV solution is intended to be Carrier-Grade and integrated, and should give rise to an open source platform developed by the Linux Foundation.


Figure C.1. The fundamental elements of new generation networks (NGNs). For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip

Another significant advance would be concretization – i.e. hardware virtualization replacing software virtualization. Indeed, with software networks, we tend to see a drop in performance in comparison with physical networks, even though the power of Clouds limits that reduction. Companies such as Radisys and Intel are beginning to explore the option of hardware that behaves like software.

5G is also greatly affected by virtualization, which offers numerous functions that are indispensable for the simple connection of billions of things. Virtualization is even responsible for massive revolutions, such as the use of C-RAN techniques, which is spreading across the globe. Among these revolutions, the local loop could become very simple indeed, with a potential return to the use of analog signals, which would be transported directly to the Cloud and processed there. Certainly, far more information can be conveyed by an analog signal than by a digital one.

Finally, security presents a new paradigm, with secure elements that may or may not be virtualized, and a massive emergence of artificial intelligence-based techniques such as Big Data analytics, machine learning, etc.


Index

3GPP, 162 5G, 161, 185, 270 vehicle, 220 5G core network, 192 5G radio, 189 6LowPAN, 198 A, B, C accelerators, 260 ad hoc networks, 199 AIFS (Arbitration Inter-Frame Space), 218 AODV, 199 applications tactile, 227 ASIC, 268 availability, 126 Backhaul, 182 BAN (Body Area Network), 208 Bare Metal, 7, 53 BaseBand Unit, 49 BBU, 49 BGP, 147 BLE, 198 blockchain, 256 Bluetooth LE, 198 Brahmaputra, 56 C-ITS, 217

C-RAN, 49, 193 C-V2X, 222 C-V2X (Cellular-V2X), 221 CAM message, (Cooperative Awareness Message), 224 Carrier-Grade Ethernet, 126 Cellular IP, 146 CLB (Configurable Logic Block), 264 Cloud, 269 Cloud networking, 34 Cloud of security, 237 Cloud RAN, 49 Cloud-Radio Access Network, 193 CMT (Concurrent Multipath Transfer), 157 cognitive radio, 184 CoMP, 233 concretization, 259, 270 configuration, 16 controller, 116 critical missions, 220 CSK (Color Shift Keying), 225 CSMA (Carrier Sense Multiple Access), 218 D, E, F D2D (Device to Device), 186, 216, 222 Danube, 58


DAP (Data Authentication Pattern), 248 DFC, 218 DiffServ, 125 digital twins, 235 diversity, 233 DPDK (Data Plane Development Kit), 56, 260 DPI (Deep Packet Inspection), 119, 237 DSP (Digital Signal Processor), 261 DyMO-Low, 199 EAL (Environment Abstraction Layer), 260 EAP Protocol, 240 EAP-SIM, 240 EAP-TLS, 240 ECMP (Equal Cost MultiPath), 130 EDCA (Enhanced Distributed Channel Access), 168, 218 EGAN (Enhanced GAN), 173 eHealth, 228 EHT (Extremely High Throughput), 163, 164 eMBB (enhanced Mobile Broadband), 233 embedded SIM, 242 eNodeB vehicle, 221 EPCglobal, 202 EPLD (Erasable Programmable Logic Device), 262 ERB (Egress RBridge), 130 eSIM, 242 Ethernet, 125 ETSI, 54 Euphrates, 58 F-OFDM, 189 fast path, 56 femtocell, 171 Filtered-OFDM, 190 firewalls, 238 flow table, 120

FMIPv6 (Fast MIPv6), 146 FPGA (Field-Programmable Gate Array), 262, 264 FSS (Fixed Subcarrier Spacing), 168 FTTA (Fiber to the Antenna), 194 G, H, I GAN (Generic Access Network), 173 GlobalPlatform, 246, 254 Granularity, 263 Halow, 163, 168 handover, 143 HARQ (Hybrid Automatic Repeat Request), 232 HAWAII (Handoff-Aware Wireless Access Internet Infrastructure), 146 HBC (Human Body Communications), 209 HCE (Host Card Emulation), 250 healthcare, 228 HeNB, 163 HIP (Host Identity Protocol), 131, 147–149, 207 HMIPv6 (Hierarchical MIPv6), 146 Home NodeB (HNB), 173, 180 Host Identities, 207 Hotspot, 174 HSM (Hardware Security Module), 237, 253 identity, 241 IDMP (Intra-Domain Mobility Management Protocol), 146 IEEE, 162 IEEE 1609, 218 IEEE 802.11ac, 165 IEEE 802.11af, 167 IEEE 802.11ah, 168 IEEE 802.11ax, 166 IEEE 802.11ay, 169, 170 IEEE 802.11p, 218 IEEE 802.11s, 199


IEEE 802.15.6, 208 IEEE 802.15.7, 225 IEEE 802.16, 161 IEEE 802.1ah, 127, 128 IEEE 802.ad, 166 IEEE P1900, 167 IEEE P1918.1, 227 IETF, 147, 153, 159, 207 Industry 4.0, 228, 234 intelligence, 98 Internet of Things, 197 IP Mobile Care-of-Address, 145 IPv4 Mobile IP, 144 IPv6, 199 Mobile IP, 144 IRB (Ingress RBridge), 130 ITR (Ingress Tunnel Router), 131 ITS (Intelligent Transportation System), 217, 220 K, L, M Kubernetes, 4, 9, 58 Li-Fi (Light Fidelity), 216, 224 LISP (Locator/Identifier Separation Protocol), 116, 131 LOAD, 199 Long Term Evolution, 162 LowPAN Neighbor Discovery Extension, 199 LS-SCTP (Load Sharing SCTP), 156 LTE, 162 LTE-A, 162 LTE-Advanced, 162 LTE-U, 172 LTE-V (LTE-Vehicle), 223 M2M (Machine-to-Machine), 228 macromobility, 144 Management and Network Orchestration, 55 MANO, 54, 55, 58


Massive MIMO, 191 mCoA (Multiple Care-of-Addresses), 148, 151 MEF (Metropolitan Ethernet Forum), 125 Metrocells, 170, 174 microdiversity, 233 micromobility, 144–146 Mifare, 203 MIMO, 165 MIP6 (Mobile IPv6). See Mobile IPv6, 150–153 Mobile IP, 144 tunneling, 145 Mobile IPv6, 151 mCoA, 151 mobile network, 144 mobile payments, 206, 207 morphware network, 266 MPTCP (Multipath TCP), 147, 159 MTBF, 126 MU-MIMO, 165 multi-access, 234 multihoming, 144, 146, 152–155, 159 SCTP, 154 multistreaming, 153 N, O, P network coding, 234 Network Functions Virtualization, 54 NFC, 204, 245, 250 NFV, 42, 43, 49, 54, 269, 270 NFV Manager, 55 NFVI, 54 NOX, 122 NVGRE, 124 O-OFDM, 225 OCP, 48, 53 ODM, 53 ODP, 56 ONAP, 58 ONF (Open Network Foundation), 120 ONOS, 56, 122


Open Network Automation Platform, 58 Open Platform NFV, 54 Open vSwitch, 54, 65 OpenContrail, 56, 59, 94 OpenDaylight, 56, 122 OpenFlow, 117 Open-O, 58, 61 open-Orchestrator, 58 OPNFV, 54, 58, 270 Original Design Manufacturers, 53 OTA (Over The Air) Security, 248 Packet Data Node Gateway, 49 Passpoint, 162, 170, 174–176 PB (Provider Bridge), 128 PBB (Provider Backbone Bridge), 128 PBT (Provider Backbone Transport), 129 PC5, 222 PC5 interface, 221 PC5 Mode 3, 222 PC5 Mode 4, 223 PCRF, 50 PDNG, 49 PKI, 240 Polar Code, 189, 191 PON (Passive Optical Network), 188 POX, 122 PseudoWire, 127 PU2RC, 165 R, S, T RADIUS, 241 RAW (Restricted Access Window), 169 RBridges, 129 rDPAs (reconfigurable Datapath Arrays), 264 reconfigurable microprocessor, 261 reliability, 126 Remote Radio Head, 49 RFID (Radio-Frequency Identification), 200 RLOC (Routing-Locator), 131

RRH, 49 SCI value (Sidelink Control Information), 223, 224 SCMA (Sparse Code Multiple Access), 189, 191 SCTP (Stream Control Transmission Protocol), 147, 153–155 SD (Security Domain), 247 SDMA (Space Division Multiple Access), 164, 165, 168 SDN, 116 SDR (Software-Defined Radio), 184 secure element, 239 Security, 237 SEI TSM, 246 sensor network, 198, 199, 208 sensors, 211 Service Function Chaining, 57 session key, 241 SFC, 57 SHIM6 (Level 3 Multihoming Shim Protocol for IPv6), 131, 147, 148, 150 SIM (Subscriber Identity Module), 176 SLA, 127 Small cells, 170 smart city, 210 smart dust, 197–199 smart edge, 32 smartcard, 238, 239 software network, 267 Software radio, 184 software security, 244 SP TSM, 246 STP, 129 super WiGig, 169, 170 Tactile Internet, 227 TCP Multipath TCP, 153, 159 TEE (Trusted Execution Environment), 244 TIM (Traffic Indication Map), 168 TinyOS, 199 TLV, 129


TRILL, 129 TRILL protocol, 181 TSM (Trusted Service Manager), 245 Tungsten Fabric, 122 TVWS, 163 TWT (Target Wake Time), 169 TXOP (Transmission Opportunity), 169 U, V, W uCPE, 48 UMA (Unlicensed Mobile Access), 173 Uu interface, 221, 222 UWB (Ultra Wide Band), 186 V2I (Vehicle-to-Infrastructure), 218, 220 V2N (Vehicle-to-Network), 220, 221 V2V (Vehicle-to-Vehicle), 220 V2X 5G, 222 vehicular networks, 215–218 vEPC, 49 VIM, 54, 55 virtual Evolved Packet Core, 49 virtual firewalls, 238


Virtualization Wi-Fi, 179 Virtualized Network Functions, 54 virtualized secure elements, 250 VLAN, 127 VLC (Visible Light Communication), 216, 224 VNF, 48, 54 VNFM, 55 VoIP (Voice over IP), 182 VoLTE, 162 vRAN, 48 VXLAN (Virtual eXtensible LAN), 123 WAVE, 218 WGA (Wireless Gigabit Alliance), 166 White-Fi, 163, 167 WiBree, 198 Wi-Fi, 162, 163, 165, 166, 169 Wi-Fi Alliance, 175 WiGig, 163, 166, 174 WiGig 2, 192 WiMAX, 161 wireless optical communications, 225

Other titles from ISTE in Networks and Telecommunications

2018 ANDIA Gianfranco, DURO Yvan, TEDJINI Smail Non-linearities in Passive RFID Systems: Third Harmonic Concept and Applications BOUILLARD Anne, BOYER Marc, LE CORRONC Euriell Deterministic Network Calculus: From Theory to Practical Implementation LAUNAY Frédéric, PEREZ André LTE Advanced Pro: Towards the 5G Mobile Network PEREZ André Wi-Fi Integration to the 4G Mobile Network TOUNSI Wiem Cyber-Vigilance and Digital Trust: Cyber Security in the Era of Cloud Computing and IoT

2017 BENSLAMA Malek, BENSLAMA Achour, ARIS Skander Quantum Communications in New Telecommunications Systems

HILT Benoit, BERBINEAU Marion, VINEL Alexey, PIROVANO Alain Networking Simulation for Intelligent Transportation Systems: High Mobile Wireless Nodes LESAS Anne-Marie, MIRANDA Serge The Art and Science of NFC Programming (Intellectual Technologies Set – Volume 3)

2016 AL AGHA Khaldoun, PUJOLLE Guy, ALI-YAHIYA Tara Mobile and Wireless Networks (Advanced Network Set – Volume 2) BATTU Daniel Communication Networks Economy BENSLAMA Malek, BATATIA Hadj, MESSAI Abderraouf Transitions from Digital Communications to Quantum Communications: Concepts and Prospects CHIASSERINI Carla Fabiana, GRIBAUDO Marco, MANINI Daniele Analytical Modeling of Wireless Communication Systems (Stochastic Models in Computer Science and Telecommunication Networks Set – Volume 1) EL FALLAH SEGHROUCHNI Amal, ISHIKAWA Fuyuki, HÉRAULT Laurent, TOKUDA Hideyuki Enablers for Smart Cities PEREZ André VoLTE and ViLTE

2015 BENSLAMA Malek, BATATIA Hadj, BOUCENNA Mohamed Lamine Ad Hoc Networks Telecommunications and Game Theory BENSLAMA Malek, KIAMOUCHE Wassila, BATATIA Hadj Connections Management Strategies in Satellite Cellular Networks

BERTHOU Pascal, BAUDOIN Cédric, GAYRAUD Thierry, GINESTE Matthieu Satellite and Terrestrial Hybrid Networks CUADRA-SANCHEZ Antonio, ARACIL Javier Traffic Anomaly Detection LE RUYET Didier, PISCHELLA Mylène Digital Communications 1: Source and Channel Coding PEREZ André LTE and LTE Advanced: 4G Network Radio Interface PISCHELLA Mylène, LE RUYET Didier Digital Communications 2: Digital Modulations PUJOLLE Guy Software Networks (Advanced Network Set – Volume 1)

2014 ANJUM Bushra, PERROS Harry Bandwidth Allocation for Video under Quality of Service Constraints BATTU Daniel New Telecom Networks: Enterprises and Security BEN MAHMOUD Mohamed Slim, GUERBER Christophe, LARRIEU Nicolas, PIROVANO Alain, RADZIK José Aeronautical Air−Ground Data Link Communications BITAM Salim, MELLOUK Abdelhamid Bio-inspired Routing Protocols for Vehicular Ad-Hoc Networks CAMPISTA Miguel Elias Mitre, RUBINSTEIN Marcelo Gonçalves Advanced Routing Protocols for Wireless Networks CHETTO Maryline Real-time Systems Scheduling 1: Fundamentals Real-time Systems Scheduling 2: Focuses

EXPOSITO Ernesto, DIOP Codé Smart SOA Platforms in Cloud Computing Architectures MELLOUK Abdelhamid, CUADRA-SANCHEZ Antonio Quality of Experience Engineering for Customer Added Value Services OTEAFY Sharief M.A., HASSANEIN Hossam S. Dynamic Wireless Sensor Networks PEREZ André Network Security PERRET Etienne Radio Frequency Identification and Sensors: From RFID to Chipless RFID REMY Jean-Gabriel, LETAMENDIA Charlotte LTE Standards LTE Services TANWIR Savera, PERROS Harry VBR Video Traffic Models VAN METER Rodney Quantum Networking XIONG Kaiqi Resource Optimization and Security for Cloud Services

2013 ASSING Dominique, CALÉ Stéphane Mobile Access Safety: Beyond BYOD BEN MAHMOUD Mohamed Slim, LARRIEU Nicolas, PIROVANO Alain Risk Propagation Assessment for Network Security: Application to Airport Communication Network Design BERTIN Emmanuel, CRESPI Noël Architecture and Governance for Communication Services BEYLOT André-Luc, LABIOD Houda Vehicular Networks: Models and Algorithms

BRITO Gabriel M., VELLOSO Pedro Braconnot, MORAES Igor M. Information-Centric Networks: A New Paradigm for the Internet DEUFF Dominique, COSQUER Mathilde User-Centered Agile Method DUARTE Otto Carlos, PUJOLLE Guy Virtual Networks: Pluralistic Approach for the Next Generation of Internet FOWLER Scott A., MELLOUK Abdelhamid, YAMADA Naomi LTE-Advanced DRX Mechanism for Power Saving JOBERT Sébastien et al. Synchronous Ethernet and IEEE 1588 in Telecoms: Next Generation Synchronization Networks MELLOUK Abdelhamid, HOCEINI Said, TRAN Hai Anh Quality-of-Experience for Multimedia: Application to Content Delivery Network Architecture NAIT-SIDI-MOH Ahmed, BAKHOUYA Mohamed, GABER Jaafar, WACK Maxime Geopositioning and Mobility PEREZ André Voice over LTE: EPS and IMS Networks

2012 AL AGHA Khaldoun Network Coding BOUCHET Olivier Wireless Optical Communications DECREUSEFOND Laurent, MOYAL Pascal Stochastic Modeling and Analysis of Telecoms Networks DUFOUR Jean-Yves Intelligent Video Surveillance Systems

EXPOSITO Ernesto Advanced Transport Protocols: Designing the Next Generation JUMIRA Oswald, ZEADALLY Sherali Energy Efficiency in Wireless Networks KRIEF Francine Green Networking PEREZ André Mobile Networks Architecture

2011 BONALD Thomas, FEUILLET Mathieu Network Performance Analysis CARBOU Romain, DIAZ Michel, EXPOSITO Ernesto, ROMAN Rodrigo Digital Home Networking CHABANNE Hervé, URIEN Pascal, SUSINI Jean-Ferdinand RFID and the Internet of Things GARDUNO David, DIAZ Michel Communicating Systems with UML 2: Modeling and Analysis of Network Protocols LAHEURTE Jean-Marc Compact Antennas for Wireless Communications and Terminals: Theory and Design PALICOT Jacques Radio Engineering: From Software Radio to Cognitive Radio PEREZ André IP, Ethernet and MPLS Networks: Resource and Fault Management RÉMY Jean-Gabriel, LETAMENDIA Charlotte Home Area Networks and IPTV TOUTAIN Laurent, MINABURO Ana Local Networks and the Internet: From Protocols to Interconnection

2010 CHAOUCHI Hakima The Internet of Things FRIKHA Mounir Ad Hoc Networks: Routing, QoS and Optimization KRIEF Francine Communicating Embedded Systems / Network Applications

2009 CHAOUCHI Hakima, MAKNAVICIUS Maryline Wireless and Mobile Network Security VIVIER Emmanuelle Radio Resources Management in WiMAX

2008 CHADUC Jean-Marc, POGOREL Gérard The Radio Spectrum GAÏTI Dominique Autonomic Networks LABIOD Houda Wireless Ad Hoc and Sensor Networks LECOY Pierre Fiber-optic Communications MELLOUK Abdelhamid End-to-End Quality of Service Engineering in Next Generation Heterogeneous Networks PAGANI Pascal et al. Ultra-wideband Radio Propagation Channel

2007 BENSLIMANE Abderrahim Multimedia Multicast on the Internet

PUJOLLE Guy Management, Control and Evolution of IP Networks SANCHEZ Javier, THIOUNE Mamadou UMTS VIVIER Guillaume Reconfigurable Mobile Radio Systems