Implementing Cisco HyperFlex Solutions (Networking Technology) [1 ed.] 013660191X, 9780136601913

Cisco HyperFlex solutions enhance data center efficiency, agility, and resiliency by tightly integrating core infrastructure (compute, storage, networking, and system management).


Table of contents:
About This eBook
Title Page
Copyright Page
About the Authors
About the Technical Reviewers
Dedications
Acknowledgments
Contents at a Glance
Reader Services
Contents
Command Syntax Conventions
Preface
Who Should Read This Book?
How This Book Is Organized
Introduction
Introduction to HCI
Cisco’s HCI Solution: HyperFlex Overview
Deployment Options of Cisco HyperFlex
Cisco HyperFlex Advantages
Chapter 1. Overview of Data Center Products
Cisco Unified Computing System
Cisco Intersight Cloud-Based Management (SaaS Solution)
HyperFlex Data Platform
Cisco MDS 9000 Multilayer Director SAN Switches
Network Switches
Hypervisors
Summary
Chapter 2. HyperFlex Architecture
HyperFlex Architecture Components
HyperFlex HX-Series Node Overview
Physical Installation
Physical Topology
Summary
Chapter 3. Installing HyperFlex
Installation Prerequisites
Installing Cisco HyperFlex System Servers
Installing HyperFlex Edge/Robo Cluster (ESXi, 3 Node)
Installing a HyperFlex Stretch Cluster (ESXi)
Summary
Chapter 4. Managing HyperFlex
Logging In to HyperFlex Connect
Replication
Encryption
Summary
Chapter 5. Maintaining HyperFlex
HyperFlex Licensing
Virtual Machine Management
Scaling HyperFlex Clusters
Hardware (Disk) Replacement
Upgrading HyperFlex Software
Summary
Chapter 6. Advanced Features
Data Protection
Replication Overview
Replication Networking Overview
Replication Pair Overview
Disaster Recovery Overview
HyperFlex Backup
Cohesity
Commvault
Summary
Chapter 7. Deploying HyperFlex Anywhere with Cisco Intersight
Cloud-Based Management
Cisco HyperFlex Edge Cluster Deployment
Standard UCSM Managed HyperFlex Cluster Deployment
Summary
Index
Code Snippets


About This eBook ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many features varies across reading devices and applications. Use your device or app settings to customize the presentation to your liking. Settings that you can customize often include font, font size, single or double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For additional information about the settings and features on your reading device or app, visit the device manufacturer’s Web site. Many titles include programming code or configuration examples. To optimize the presentation of these elements, view the eBook in single-column, landscape mode and adjust the font size to the smallest setting. In addition to presenting code and configurations in the reflowable text format, we have included images of the code that mimic the presentation found in the print book; therefore, where the reflowable format may compromise the presentation of the code listing, you will see a “Click here to view code image” link. Click the link to view the print-fidelity code image. To return to the previous page viewed, click the Back

button on your device or app.

Implementing Cisco HyperFlex Solutions A Complete Configuration Guide for Cisco Data Center HCI Solution

Jalpa Patel, Manuel Velasco, and Avinash Shukla

Cisco Press

Implementing Cisco HyperFlex Solutions Jalpa Patel, Manuel Velasco, Avinash Shukla Copyright © 2021 Cisco Systems, Inc. Published by Cisco Press, Pearson Education, Inc. All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review. Library of Congress Control Number: 2020937224 ISBN-13: 978-0-13-660191-3 ISBN-10: 0-13-660191-X

Warning and Disclaimer This book is designed to provide a reference guide to independently implement HyperFlex solutions with UCS and upstream data center devices. Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is

implied. The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc. shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the discs or programs that may accompany it. The opinions expressed in this book belong to the author and are not necessarily those of Cisco Systems, Inc.

Trademark Acknowledgments All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Special Sales For information about buying this title in bulk quantities, or for special sales opportunities (which may include electronic versions; custom cover designs; and content particular to your business, training goals, marketing focus, or branding interests), please contact

our corporate sales department at [email protected] or (800) 382-3419. For government sales inquiries, please contact [email protected]. For questions about sales outside the U.S., please contact [email protected].

Feedback Information At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community. Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us through email at [email protected]. Please make sure to include the book title and ISBN in your message. We greatly appreciate your assistance. Editor-in-Chief: Mark Taub Alliances Manager, Cisco Press: Ron Fligge

Product Line Manager: Brett Bartow Acquisitions Editor: James Manly Managing Editor: Sandra Schroeder Development Editor: Christopher A. Cleveland Senior Project Editor: Lori Lyons Copy Editor: Kitty Wilson Technical Editors: Carlos Lopez, Atul Khanna Cover Designer: Chuti Prasertsith Production Manager: Vaishnavi Venkatesan Composition: codeMantra Indexer: Ken Johnson Proofreader: Betty Pessagno

Figure Credits Figure I-4a: Chip icon, microtic/Shutterstock Figure I-4b: SSD icon, microtic/Shutterstock Figure I-8: Icon: VMware ESXi, VMware, Inc. Figure I-17, Figures 6-24a, 6-24b: Veeam logo, Veeam Software Group GmbH Figure 5-22: Screenshot of Create Datastore using

HXDP plugin © 2020 VMware, Inc Figure 5-48: Screenshot of DRS settings – Enabled and Fully Automated © 2020 VMware, Inc Figure 5-49: Screenshot of Advanced Configuration – Net.TeamPolicyUpDelay © 2020 VMware, Inc Figure 5-50: Screenshot of EAM Status © 2020 VMware, Inc

Americas Headquarters Cisco Systems, Inc. San Jose, CA Asia Pacific Headquarters Cisco Systems (USA) Pte. Ltd. Singapore Europe Headquarters Cisco Systems International BV Amsterdam, The Netherlands Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

About the Authors Jalpa Patel (CCIE No. 42465) is a senior technical leader in the Customer Experience group at Cisco Systems. Her areas of interest include Cisco Unified Computing System and virtualization, hyperconverged infrastructure solutions, application-centric SDN for application agility and data center automation, and Nexus switches, which are the foundation for a next-generation unified fabric data center. She is currently focusing on building SaaS solutions for all data center technologies. Jalpa has over 18 years of experience in networking and system management. She is leading innovation within Cisco to change support models. She has authored and published IEEE papers and regularly presents at industry conferences. Jalpa holds an M.S. degree in telecommunication networks from New York University and a B.S. degree in electronics and communications from a Gujarat university in India. She has also earned an advanced program management certificate from Stanford University. Manuel Velasco (CCIE No. 49401) is a customer
support specialist in the Customer Experience group at Cisco Systems. In his previous role, he worked as a TAC engineer at Cisco, supporting multiple data center technologies, including Cisco Unified Computing System and virtualization, Cisco Application Centric Infrastructure (ACI), and Cisco HyperFlex. He has over 8 years of experience in data center technologies. Manuel holds a B.S. degree in computer engineering from Cal Poly San Luis Obispo. Avinash Shukla (CCIE No. 28418) is a technical leader at Cisco Systems. He has 11 years of experience in Cisco CX roles, spanning professional and technical services, and holds a dual CCIE in Collaboration and Data Center. His current focus areas include Cisco HyperFlex, Cisco UCS and virtualization, Cisco Intersight, data center automation, and Kubernetes. He enjoys customer interaction and thrives on solving new challenges to improve customer experience every day. Avinash holds a B.Tech. in electronics and communication from IIIT in Hyderabad, India. He has co-authored IEEE publications and presents regularly at Cisco Live. He has won numerous Cisco awards for innovation, technical excellence, and customer focus, and he has presented many training sessions to Cisco partners and customers. He has also authored several tech notes and guides on data center technology to
simplify day-to-day operations for customers.

About the Technical Reviewers Carlos E. Lopez has more than 15 years of experience in the IT networking field. He previously worked at IBM as a Network Designer and currently works at Cisco Systems as a Data Center Technical Leader, providing TAC escalation support for SAN, HyperFlex, and UCS technologies. Carlos holds a variety of IT certifications, including a double CCIE (No. 21063) in Storage Area Networking and Data Center, as well as Red Hat and VMware certifications, among others. Atul Khanna (CCIE No. 35540) is a data center networking manager with Cisco Customer Experience Centers Americas. He has extensive experience in directing and leading strategies to provide optimal technical services to Cisco customers. He has more than 10 years of experience at Cisco in enterprise support, network operations, managed/cloud services, data center networking, compute, and virtualization. Atul was a senior technical consulting engineer supporting HyperFlex solutions in Richardson, Texas. He supported Advanced Services (AS) team members in successful
new customer deployments and upgrades, and he cultivated relationships with Cisco partners and customers to meet organizational demands. He also presented a technical webinar for Cloud Services Platform 2100. He attended Cisco Live in 2015 and 2018, interacting with Cisco customers and partners at the TAC booth. Atul lives with his wife in Milpitas, California.

Dedications I would like to dedicate this book to my parents, Minaxi and Babubhai Patel, for their blessings and faith in me; and to Jigisha, Falguni, and Harish, for their guidance and encouragement. I also would like to dedicate this book to my brother, Hardik, and his wife, Dharmistha, who have been a great support for me throughout the process of writing this book. Finally, thank you to Raj and Samaira for their love and inspiration. —Jalpa Patel I would like to dedicate this book to my wonderful wife, Bery Velasco, and my son, Niklaus Velasco, and to my parents, Juan and Martha Velasco, for all their support and especially for the love and affection they have given me. —Manuel Velasco I would like to dedicate this book to my family, especially my lovely wife, Neelima, and son, Aryav. Without their support, none of this would have been possible. Their companionship and love are key to my success. I would also like to dedicate this book to my
sister, Anubha, for always believing in me, and my parents, Kanak and Anil Shukla, whose guidance and upbringing made me who I am today and taught me the importance of hard work and humility. Lastly, I would like to thank my friends and colleagues (within and outside Cisco): All of you have inspired me in many ways and helped me in my professional endeavors. —Avinash Shukla

Acknowledgments We would like to thank and acknowledge several people who have helped us directly or indirectly with the necessary skills that enabled us to write this book. A thank you goes out to James Manly, Chris Cleveland, and everybody else at Cisco Press for believing in us and supporting us throughout this journey. A very big and warm thank you also goes out to all our Cisco colleagues. We are honored to be part of the CX community of experts on data center technologies. Finally, we would like to thank our technical reviewers, Carlos Lopez and Atul Khanna, for their patience, commitment, and support in the adventure of writing our first book.

Contents at a Glance Preface Introduction Chapter 1 Overview of Data Center Products Chapter 2 HyperFlex Architecture Chapter 3 Installing HyperFlex Chapter 4 Managing HyperFlex Chapter 5 Maintaining HyperFlex Chapter 6 Advanced Features Chapter 7 Deploying HyperFlex Anywhere with Cisco Intersight Index

Reader Services Register your book for convenient access to downloads, updates, and corrections as they become available. To start the registration process, go to www.ciscopress.com/register and log in or create an account.* Enter the product ISBN 9780136601913 and click Submit. When the process is complete, you will find any available bonus content under Registered Products. *Be sure to check the box indicating that you would like to hear from us to receive exclusive discounts on future editions of this product.

Contents Preface Introduction Introduction to HCI Cisco’s HCI Solution: HyperFlex Overview Deployment Options of Cisco HyperFlex Cisco HyperFlex Advantages Chapter 1 Overview of Data Center Products Cisco Unified Computing System UCS-B Series Products UCS 6454 Fabric Interconnect UCS 6300 Series Fabric Interconnects UCS 6200 Series Fabric Interconnects UCS 5108 Chassis Cisco UCS B-Series Blade Servers Cisco UCS Manager (UCSM)

UCS-C Series Products UCS-S Series Products Cisco Intersight Cloud-Based Management (SaaS Solution) HyperFlex Data Platform HyperFlex Hybrid Nodes HyperFlex All Flash Nodes HyperFlex Edge for Remote and Branch Offices Cisco HyperFlex HX-Series System Management Components Cisco MDS 9000 Multilayer Director SAN Switches Cisco MDS 9700 Series Multilayer Director Cisco MDS 9500 Series Multilayer Director Network Switches Nexus 5000 Series Products Nexus 9000 Series Products Cisco Nexus 9000 NX-OS Mode Cisco Nexus 9000 ACI Mode Hypervisors

VMWare vSphere Hypervisor Microsoft Hyper-V Summary Chapter 2 HyperFlex Architecture HyperFlex Architecture Components Storage Controller Virtual Machine HX Services IO Visor Log Structure File System Data Optimization HyperFlex Cleaner Cluster Resource Manager (CRM) Zookeeper HyperFlex HX-Series Node Overview HX220 M5/M4 Series HXAF220c-M5SX All Flash HX220c-M5SX Hybrid HXAF220c-M4S All Flash Cisco HyperFlex HX220c-M4S Hybrid Node HX240 M5/M4 Series HXAF240c-M5SX All Flash

HX240c-M5SX Hybrid HX240c-M5L Hybrid HXAF240c-M4SX All Flash HX240c-M4SX Hybrid HXAF220c-M5N All-NVMe All Flash, Hybrid, and All-NVMe Cisco HyperFlex Compute-Only Nodes Physical Installation Supported Media Physical Topology HyperFlex Standard Cluster/HyperFlex Extended Cluster Topology Fabric Interconnect Connectivity HX-Series Rack-Mount Servers Connectivity Cisco UCS B-Series Blade Servers Connectivity HyperFlex Stretch Cluster Topology HyperFlex Edge Cluster Topology Edge Network Connectivity Summary Chapter 3 Installing HyperFlex

Installation Prerequisites Cisco HyperFlex HX-Series System Components Cisco UCS Fabric Interconnects (FIs) Cisco Nexus Switches HyperFlex Software Versions Host Requirements Disk Requirements Converged Nodes Compute-Only Nodes Browser Recommendations Port Requirements HyperFlex External Connections UCS/Fabric Interconnect Network Provisioning Configuration for HyperFlex 3.5 Network Settings VLAN and vSwitch Requirements Cisco UCS Requirements Hypervisor Requirements Storage Cluster Requirements vCenter Configuration Requirements

System Services Requirements CPU Resource Reservation for Controller VMs Memory Resource Reservation for Controller Virtual Machines Controller VM Locations Auto-Support Requirements Single Sign-On Requirements Installing Cisco HyperFlex System Servers Physical Connectivity Illustrations for Direct Connect Mode Cluster Setup Installation Workflow Deploy the HX Data Platform Installer OVA by Using the vSphere Web Client Configuring and Deploying a Standard HyperFlex Cluster HX Data Platform Installer Navigation Aid Buttons Running the Postinstallation Script Sample Post Installation Script Installing HyperFlex Edge/Robo Cluster (ESXi, 3 Node)

HyperFlex Edge Deployment Options HyperFlex On-Premises OVA Installer Installation Overview Configuring and Deploying a HyperFlex Edge Cluster (Gigabit Ethernet Only) Configuring and Deploying a HyperFlex Edge Cluster (10 Gigabit Ethernet Only) HyperFlex Edge Postinstallation Tasks Installing a HyperFlex Stretch Cluster (ESXi) Preinstallation Checklist Network Requirements and Network Topology Witness Requirements Network Latency Requirements for the Witness VM Fabric Interconnect and Node Requirements VMware Requirements ACI Requirements UCS Manager Requirements

VLAN Requirements Stretch Witness Deploying the Witness Node HyperFlex Stretch Cluster Network IP Addressing HyperFlex Stretch Cluster Guidelines HyperFlex Stretch Cluster Limitations HyperFlex Stretch Cluster Installation HyperFlex Stretch Cluster Postinstallation Summary Chapter 4 Managing HyperFlex Logging in to HyperFlex Connect Local Access Role-Based Access Control Dashboard Page Settings Page Auto-Support Settings Support Bundle Settings Notifications Settings Kubernetes Device Connector

Alarms Page Events Page Activity Page Performance Page System Information Page Controller VM Access Nodes Page Disks Page Datastores Page Virtual Machines Page Upgrade Page UCS Server Firmware HX Data Platform Upgrade ESXi Upgrade Progress Page Web CLI Page Replication Replication Pairing Encryption Summary Chapter 5 Maintaining HyperFlex

HyperFlex Licensing Registering a Cluster with Smart Licensing Creating a Registration Token Registering a Cluster with Smart Software Licensing Through a Controller VM Virtual Machine Management HX Data Platform Native Snapshots Overview Benefits of HX Data Platform Native Snapshots Native Snapshot Considerations Native Snapshot Best Practices Understanding SENTINEL Snapshots Native Snapshot Timezones Creating Snapshots Scheduling Snapshots Reverting to a Snapshot Deleting Snapshots ReadyClones Benefits of HX Data Platform

ReadyClones Supported Base VMs ReadyClones Requirements ReadyClones Best Practices Creating ReadyClones Using HX Connect Creating ReadyClones Using the HX Data Platform Plug-in Datastores Adding Datastores Scaling HyperFlex Clusters Node Expansion Adding a Converged Node Adding a Compute Node Expanding a Stretch Cluster Configuring Sites for Expanding a Cluster Expanding a Cluster Removing Nodes Removing Converged Nodes Removing a Compute Node Increasing Storage Capacity by Adding

Drives Hardware (Disk) Replacement Replacing SSDs Replacing NVMe SSDs Replacing Housekeeping SSDs Replacing or Adding HDDs Upgrading HyperFlex Software Upgrading HyperFlex Pre-Upgrade Workflow Downloading UCS Infra, B-Series, C-Series, and Storfs Bundles Verifying the Pre-Upgrade UCS Server Firmware (C-Bundle) Version Using UCS Manager Using HX Connect Pre-Upgrade Validation Viewing the HyperFlex Cluster Health Checking the Cluster Storage Capacity Verifying That DRS Is Enabled Verifying and Configuring the Net.TeamPolicyUpDelay

Default Value Viewing ESX Agent Manager Verifying the Health of a HyperFlex Cluster in Cisco UCS Manager Verifying vMotion Interfaces Verifying Upstream Network Connectivity Configuring the Cluster Access Policy in Lenient Mode Verifying That No Major Alarms Are Reported for the HyperFlex Cluster in HyperFlex Connect Hypercheck Utility Storage Controller VM and ESXi Node Checks Installing and Running Hypercheck Upgrading UCS Infrastructure Firmware Upgrade Procedure Recommended Upgrade Method Online Upgrade Process Offline Upgrade Process

Post-Upgrade Check Summary Chapter 6 Advanced Features Data Protection Replication Overview Port Requirements for Replication Replication Considerations Recovery Considerations Replication Networking Overview Replication Network Considerations M*N Connectivity Between Clusters Configuring a Replication Network in HX Connect Replication Pair Overview Procedure for Creating Replication Pair Protecting Virtual Machines Creating Protection Groups Protecting Virtual Machines Disaster Recovery Overview Compatibility Matrix for Disaster Recovery Operations Testing Virtual Machine Recovery

Recovering Virtual Machines Plan Migration HyperFlex Backup Veeam Availability Suite Changed Block Tracking Cohesity Cohesity Protection Cohesity Recovery Commvault Summary Chapter 7 Deploying HyperFlex Anywhere with Cisco Intersight Cloud-Based Management HyperFlex Cluster Policies in Intersight Cisco HyperFlex Edge Cluster Deployment Intersight Arbitrator Overview for 2-Node Physical Topology Cisco HyperFlex Edge Single-Switch Configuration Cisco HyperFlex Edge Dual-Switch Configuration

Logical Topology Installing Cisco HyperFlex Edge with the Cisco Intersight Platform Intersight Installation Prerequisites Cisco Integrated Management Controller Configuration Claiming Devices in Cisco Intersight Cisco HyperFlex Edge Clusters Installation Wizard Postinstallation Configuration Standard UCSM Managed HyperFlex Cluster Deployment Summary Index

Command Syntax Conventions The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows: Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command). Italic indicates arguments for which you supply actual values. Vertical bars (|) separate alternative, mutually exclusive elements. Square brackets ([ ]) indicate an optional element. Braces ({ }) indicate a required choice. Braces within brackets ([{ }]) indicate a required choice within an optional element.
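For illustration only, here is a hypothetical syntax entry (not an actual command documented in this book) written using these conventions:

show cluster {summary | detail} [node node-id]

In this example, show, cluster, summary, detail, and node are keywords entered literally (boldface in the notation described above), node-id is an argument for which you supply a value (italic in that notation), the braces indicate a required choice between summary and detail, and the bracketed node node-id element is optional.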

Preface Decisions made within IT departments have never been more important to the broader business than they are today. IT departments must react quickly to new business initiatives that are designed to drive bottom-line improvements and generate new revenue streams. It should be no surprise that IT departments are increasingly looking for infrastructure that improves productivity and agility within the data center. Organizations around the world are turning to hyperconverged infrastructure (HCI) to achieve such goals. Indeed, HCI offerings such as Cisco’s HyperFlex have become critical platforms for modernizing data center infrastructure, thanks to their ability to: Collapse silos of storage, compute, and data management services down to a cluster of x86 servers that can be deployed, managed, and supported as a single system Support IT organizational transformation through consolidation of roles that are focused on virtualization, compute, and storage at a generalist level Reduce the need to deploy different types of siloed infrastructure within the data center, including data efficiency and data protection solutions

Today, Cisco’s HyperFlex HCI solutions enable very high levels of efficiency, agility, and resiliency within the data center by enabling tight integration of core infrastructure (compute, storage, networking, and system management), increased levels of automation, and simplified life cycle management. HyperFlex solutions are fully engineered appliances built on UCS servers (x86) that provide an abstracted pool of capacity, memory, and CPU cores that are used as the foundation for server-centric workloads (for example, the hypervisor, VMs, and applications) as well as storage-centric workloads (for example, data persistence, data access, and data management).

WHO SHOULD READ THIS BOOK? This book is intended for sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying Cisco HyperFlex. External references are provided wherever applicable, but readers are expected to be familiar with hypervisor-specific technologies, infrastructure concepts, networking connectivity, and security policies related to customer installations. As we see an increasing trend in deployment of HyperFlex with UCS servers, the book will be useful to both small-scale customers and large-scale data centers. It can be considered as one book for all who deal with HyperFlex data center solutions on a
daily basis.

HOW THIS BOOK IS ORGANIZED This book covers a diverse set of topics related to the successful deployment of HyperFlex. The chapters are as follows: Chapter 1, “Overview of Data Center Products”: This chapter provides an overview of the data center products required for networking, compute, and storage. This chapter does not provide details on each product but does provide an overview to help you understand how HyperFlex can connect with other data center devices. Chapter 2, “HyperFlex Architecture”: This chapter describes the architecture of HyperFlex Data Platform, including the components that make this solution possible. This chapter also reviews the different cluster topology configurations that are supported by HyperFlex HX Data Platform. It also discusses both the logical and physical topologies and the supported hardware for these topologies. Chapter 3, “Installing HyperFlex”: This chapter covers how to install and deploy the various types of Cisco HyperFlex clusters. It includes information on preparation, prerequisites, and the various components required to install a HyperFlex cluster. Chapter 4, “Managing HyperFlex”: With Cisco HyperFlex, you get simplified hyperconverged infrastructure management with a choice of tools to best fit your needs. You can manage HyperFlex through HyperFlex Connect, hypervisor management plug-ins, or the Cisco Intersight cloud-based platform. This chapter covers management through HyperFlex Connect, a user interface that provides a view of the HX storage cluster status, components, and features such as encryption and replication.
Chapter 5, “Maintaining HyperFlex”: This chapter provides details on managing a HyperFlex system and how to perform Day 2 operations on a HyperFlex cluster. The chapter explains HyperFlex licensing and virtual machine management, including native snapshots and ReadyClones. It also includes information on how to scale the various types of HyperFlex clusters and perform hardware replacements in HyperFlex systems. Chapter 6, “Advanced Features”: This chapter provides an overview of the HyperFlex Data Platform disaster recovery feature and goes over the configuration steps to properly enable the feature. Finally, it presents an overview of supported third-party solutions you can use as a second layer of protection for HyperFlex Data Platform. Chapter 7, “Deploying HyperFlex Anywhere with Cisco Intersight”: This chapter provides details about what Cisco Intersight is and how you can use it to deploy edge clusters and standard clusters.

Introduction It is important to understand the various changes in data center technology and the evolution from traditional to converged and to hyperconverged infrastructure. As illustrated in Figure I-1, the traditional data center comprises the three layers of infrastructure—compute, storage, and networking (provided by individual companies as standalone layers)—where the customers are responsible for integrating these layers, basically building the complete infrastructure product themselves. Such environments are expensive to build and difficult to support, as there are a lot of different components (both hardware and software) involved. Maintaining traditional data centers involves many challenges, as upgrades, compatibility, and interoperability are time-consuming and complicated. Implementation can also be time-consuming and fraught with issues related to adding new resources in the existing environment.

Figure I-1 Traditional (Non-Converged) Data Center The next evolution of the data center was converged platforms (see Figure I-2), which brought advantages such as single-vendor support, faster deployment, and a

validated building block approach. The converged architecture is an approach to data center management that packages compute, networking, storage, and virtualization tools together to simplify data center operations. The goal of converged infrastructure is to reduce complexity in data center management. The principal design factor is to eliminate issues of hardware incompatibility and provide ease of deployment and management to deliver a multivendor-validated design with predictable performance. Industry leaders such as Cisco, VMware, EMC, and NetApp came together to build partnerships (such as VCE and FlexPod) to deliver data center solutions predesigned and tested to reduce complexity for customers. This phase also saw a technological evolution in the form of unified fabric, with a lossless data center transport layer helping to enable the convergence of LANs and SANs onto a single unified fabric switch.

Figure I-2 Converged Data Center The latest data center evolution is the hyperconverged infrastructure (HCI; see Figure I-3). HCI brings together the compute, memory, hypervisor, and storage devices of servers into a single platform for virtual servers. There is no longer a separate storage system, as the

servers running the hypervisors also provide the software-defined storage resources to store the virtual servers, effectively storing the virtual machines on themselves. HCI removes nearly all the silos and is almost completely self-contained; it is also simpler to use, faster to deploy, and easier to consume than earlier evolutions, yet it is still flexible and offers high performance. Most hyperconverged systems rely on standard networking components, such as on-board network cards in the x86 servers and top-of-rack switches. Some key advantages of HCI are the simplification of deployment, easier day-to-day management operations, and increased agility, all of which reduce operational costs.

Figure I-3 Hyperconverged Infrastructure

Introduction to HCI Hyperconverged infrastructure is a software-defined IT infrastructure that uses virtualized components of “hardware-defined” systems. Some of the important features of HCI are virtual computing, virtual storage, virtual networking, and a centralized management system (see Figure I-4).

Figure I-4 HCI Fundamentals HCI solutions consist of the following fundamental components: A tight integration of servers (x86) for compute, storage, networking, and virtualization in all-in-one appliances. A simple automated deployment model that provides ease of use and reduces implementation time. A central management system that allows easy administration of all HCI resources with an intuitive UI that helps eliminate separate

management solutions for managing the infrastructure. A storage solution that is a standard distributed file system spanning all nodes in a cluster to provide storage, networking, and virtualization capability to applications. Ease of scalability and the ability to scale up or scale down one unit or appliance at a time.

Cisco’s HCI Solution: HyperFlex Overview When you are looking to simplify your infrastructure, you really need a complete, enterprise-grade, hyperconverged platform that provides a foundation for your multi-cloud/hybrid cloud future and delivers the speed, simplicity, and scale necessary for your business to succeed today. You want a cloud-like experience within your own data center. This is where HyperFlex bridges the industry gap. HyperFlex is the only complete HCI solution on the market, with compute, storage, and networking all integrated into a single solution provided by a single vendor. Most will agree that the network is the most critical component in a data center and in our ability to connect to and effectively use public or other private cloud solutions. With HyperFlex, the entire stack—compute, HCI software, storage, and networking—is provided and supported by Cisco. This means better support for you and no vendor finger-pointing. Cisco HyperFlex combines the convergence of computing and networking provided by Cisco UCS and next-generation hyperconverged storage software to uniquely provide the compute resources, network connectivity, storage, and hypervisor platform to run an entire virtual environment—all contained in a single uniform system. Figure I-5 shows the Cisco HyperFlex evolution into a complete hyperconverged solution.

Figure I-5 Cisco HCI: HyperFlex Cisco HyperFlex includes a purpose-built, high-performance, low-latency hyperconverged platform that adapts to support any application, in any cloud, anywhere (see Figure I-6). The platform supports multiple hypervisors and virtualized environments (including Microsoft Windows Server 2019 Hyper-V and
VMware vSphere), Docker containers with Kubernetes, multi-cloud services, and edge deployments to efficiently and economically deploy, monitor, and manage applications. Cisco HyperFlex supports data center core, multi-cloud, and edge use cases.

Figure I-6 Cisco HCI: HyperFlex for the Data Center and the Cloud

Overview of HyperFlex Software Components Cisco HyperFlex consists of the following main software components (see Figure I-7): Storage controller A storage controller is a virtual machine running Cisco HyperFlex Data Platform software with a dedicated number of processor cores and amount of memory, allowing it to deliver consistent performance without affecting the performance of the other virtual machines on the cluster. The controller can access all storage without hypervisor intervention with the VMware VM_DIRECT_PATH feature. It pools the local storage for distributed capacity. It uses the node’s memory and SSDs as part of a distributed caching layer and uses the node’s HDDs for distributed-capacity storage. Hypervisor integration VIB (vSphere Installation Bundle) on the hypervisor (ESXi) VAAI IO Visor (scvmclient) stHypervisorSvc vmware-esx-STFSNasPlugin Hypervisor hosts presented with an NFS datastore

Figure I-7 HyperFlex Components
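As a quick way to relate these components to a running system, the hypervisor-side pieces listed above (VAAI, IO Visor/scvmclient, stHypervisorSvc, and the STFS NAS plug-in) are delivered as VIBs on each ESXi host. The following check is a minimal sketch only; it assumes SSH or shell access to an ESXi node in the HX cluster, and the exact package names and versions vary by HX Data Platform release:

esxcli software vib list | grep -i scvmclient
esxcli software vib list | grep -i stHypervisorSvc
esxcli software vib list | grep -i stfs

If these VIBs are not present on a host, the hypervisor integration described above is incomplete for that node.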

Deployment Options of Cisco HyperFlex Cisco HyperFlex is an adaptive system that can power any application anywhere with the simplicity of hyperconvergence. It delivers flexibility to support multiple hypervisors, containerized environments, multi-cloud services, and edge deployment. Key deployment options of Cisco HyperFlex are as follows: Available on hypervisors such as VMware ESXi and Microsoft Hyper-V HyperFlex container appliance All application workloads, including VDI, VSI, databases, mission-critical workloads, and so on Cloud-based management using Cisco Intersight Types of HyperFlex deployment, including:
Edge deployments to remote offices and branch offices (ROBOs) Standard clusters (up to 64 nodes) Stretch clusters Figure I-8 shows the deployment options of Cisco HyperFlex.

Figure I-8 HyperFlex Deployment Options

Cisco HyperFlex Advantages Cisco HyperFlex offers a number of advantages compared to other HCI solutions. The following are some of the most important of these advantages, although this is not an exhaustive list: Single point of management and connectivity: Cisco HyperFlex is engineered on the industry-recognized Cisco UCS. Cisco UCS, the foundation for Cisco HyperFlex systems, is built with a single point of management and connectivity for the entire system. The system is designed as a single virtual blade server chassis that can span multiple chassis and racks of blade-and-rack server-based nodes. Cisco is in the unique position of being able to deliver a hyperconverged solution that can incorporate blade and rack systems in its architecture, offering greater flexibility than any other solution (see Figure I-9). Scalability: Cisco HyperFlex provides the ability to scale seamlessly and easily (see Figure I-10). HyperFlex clusters can scale to up to 64 nodes, with protection against multiple node and component failures. HyperFlex can also scale across clouds and across data center locations. Also, an already deployed HyperFlex cluster is easy to scale up or scale down. For example, you can start with as few as three nodes and add more nodes to the cluster with ease. You can also scale down and remove a node if required. Figures I-11 through I-14 demonstrate this ease of scaling. Adaptability: Cisco HyperFlex is built on an adaptive infrastructure that supports external storage, ease of connectivity to legacy setups, and easy data transfer and VM migrations. Figure I-15 provides an overview of these advantages, including the ability to use existing UCS FI and support for external storage, which enables easier migration and data transfer.

Figure I-9 HyperFlex on UCS

Figure I-10 HyperFlex’s Easy Scaling

Figure I-11 Starting with the Minimum Number of Nodes

Figure I-12 Adding More Nodes

Figure I-13 Scaling Compute and Storage as Needed

Figure I-14 Scaling Down/Retiring Servers

Figure I-15 Adaptive Infrastructure Next generation management supported by Cisco Intersight: Cisco Intersight is a cloud-based management platform that provides end-to-end life cycle management and lets

you install, configure, manage, and monitor HyperFlex from anywhere (see Figure I-16). Intersight also provides access to advanced features such as a recommendation engine and compatibility checker. Replication and stretch clusters: The HyperFlex Data Platform disaster recovery feature allows you to protect virtual machines from disaster by setting up replication of running VMs between a pair of network-connected clusters. In addition, HyperFlex stretch clusters enable you to deploy an active/active disaster-avoidance solution for mission-critical workloads requiring high uptime (near zero recovery time objective) and no data loss (zero recovery point objective). See Figure I-17.

Figure I-16 Next-Gen Management

Figure I-17 Replication and Stretched Cluster This book discusses each of these advantages in detail and describes how to leverage the features of HyperFlex to remove complexities and operate a complete data center using a HyperFlex solution. It also covers HyperFlex architecture, implementation, configuration, operations, and management.

Chapter 1 Overview of Data Center Products A data center is designed to host critical computing resources in a centralized place. The network plays an important role in the availability of data center resources. Cisco HyperFlex offers a variety of flexible hyperconverged infrastructure configurations engineered on the Cisco UCS platform. This chapter provides an overview of the main Cisco network, compute, and storage data center products.

CISCO UNIFIED COMPUTING SYSTEM The Cisco Unified Computing System (Cisco UCS) unites compute, network, and storage access. The UCS platform, optimized for virtual environments, is designed using open industry-standard technologies and

aims to reduce total cost of ownership and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet or 40 Gigabit Ethernet unified network fabric with enterprise-class x86-architecture servers. It is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain. The main components of Cisco Unified Computing System are: Computing: The system is based on an entirely new computing system class that incorporates rack-mount and blade servers based on Intel processors. Network: The system is integrated onto a low-latency, lossless, 10 Gbps or 40 Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are often separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables and by decreasing the power and cooling requirements. Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements. Storage access: The system provides consolidated access to both SAN storage and network-attached storage (NAS) over the unified fabric. By unifying storage access, Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. Customers therefore
have a choice of storage protocol and physical architecture, and they enjoy enhanced investment protection. In addition, server administrators can pre-assign storage access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity. Management: The system uniquely integrates all system components, so the entire solution can be managed as a single entity by the Cisco UCS Manager (UCSM). UCSM has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configurations and operations.

UCS-B Series Products The Cisco UCS fabric interconnect (FI) is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. Depending on the model chosen, the Cisco UCS fabric interconnect offers line-rate, low-latency, lossless 10 Gigabit or 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) and Fibre Channel connectivity. Cisco UCS fabric interconnects provide the management and communication backbone for the Cisco UCS C-Series, S-Series, HX-Series rack-mount servers, B-Series blade servers, and Cisco UCS 5100 Series blade server chassis. All servers and chassis, and therefore all blades, attached to the Cisco UCS fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabrics, the Cisco UCS

fabric interconnects provide both LAN and SAN connectivity for all servers within the domain.

UCS 6454 Fabric Interconnect The Cisco UCS 6454 uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10/25/40/100 Gigabit Ethernet ports, a switching capacity of 3.82 Tbps, and 320 Gbps bandwidth between FI 6454 and IOM 2208 per 5108 blade chassis, independent of packet size and enabled services. The product family supports Cisco low-latency, lossless 10/25/40/100 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Figure 1-1 shows a Cisco UCS 6454 fabric interconnect.

Figure 1-1 Cisco UCS 6454 Fabric Interconnect The specifications of the Cisco UCS 6454 fabric interconnect are as follows:
48 10/25 Gbps and 6 40/100 Gbps ports 8 unified ports supporting 10/25 GE or 8/16/32 Gbps Fibre Channel 3.82 Tbps throughput

UCS 6300 Series Fabric Interconnects The Cisco UCS 6300 Series offers high-performance, low-latency, lossless, line-rate 40 Gigabit Ethernet, with up to 2.56 Tbps of switching capacity. Backward compatibility and scalability are assured, with the ability to configure 40 Gbps quad SFP (QSFP) ports as breakout ports using 4x10GbE breakout cables. Existing Cisco UCS servers with 10GbE interfaces can be connected in this manner, although Cisco HyperFlex nodes must use a 40GbE VIC adapter to connect to a Cisco UCS 6300 Series fabric interconnect. Figure 1-2 shows a Cisco UCS 6300 series fabric interconnect.

Figure 1-2 Cisco UCS 6300 Series Fabric Interconnect

The specifications of the Cisco UCS 6300 Series fabric interconnect are as follows: For top of rack or UCS Mini Up to 32 40 Gbps ports Up to 2.56 Tbps throughput Support for Ethernet, FCoE, or Fibre Channel (varies by model)

Figure 1-3 shows a Cisco UCS 2304 fabric extender.

Figure 1-3 Cisco UCS 2304 Fabric Extender The specifications of the Cisco UCS 2304 fabric extender are as follows: 4 40 Gbps QSFP+ uplink ports Up to 320 Gbps of I/O to the chassis 4 40 Gbps Ethernet ports to the chassis midplane

UCS 6200 Series Fabric Interconnects From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, up to 1.92 Tbps switching capacity, and 160

Gbps bandwidth per chassis, independent of packet size and enabled services. Figure 1-4 shows a Cisco UCS 6248 fabric interconnect.

Figure 1-4 Cisco UCS 6248 Fabric Interconnect The specifications of the Cisco UCS 6248 fabric interconnect are as follows: 32 to 96 unified ports 1/10 Gbps Ethernet and FCoE 1/2/4/8 Gbps Fibre Channel 2 microsecond port-to-port latency

Figure 1-5 shows a Cisco UCS 2208XP fabric extender.

Figure 1-5 Cisco UCS 2208XP Fabric Extender The specifications of the Cisco UCS 2208 fabric extender are as follows: 8 10 Gbps SFP+ uplink ports

Up to 160 Gbps of I/O to the chassis 32 10 Gbps Ethernet ports to the chassis midplane

Figure 1-6 shows a Cisco UCS 2204XP fabric extender.

Figure 1-6 Cisco UCS 2204XP Fabric Extender The specifications of the Cisco UCS 2204 fabric extender are as follows: 4 10 Gbps SFP+ uplink ports Up to 80 Gbps of I/O to the chassis 16 10 Gbps Ethernet ports to the chassis midplane

UCS 5108 Chassis The UCS 5108 chassis is a 6RU chassis that can accommodate up to eight half-width blades, up to four full-width blades, or any combination of blade form factors (M1–M5 generation) that fits in the available number of blade slots. The chassis has two I/O bays for fabric extenders or fabric interconnects. Figure 1-7 shows a Cisco UCS 5108 blade server chassis (front view).

Figure 1-7 Cisco UCS 5108 Blade Server Chassis: Front View

Figure 1-8 shows a Cisco UCS 5108 blade server chassis (rear view).

Figure 1-8 Cisco UCS 5108 Blade Server Chassis: Rear View


Cisco UCS B-Series Blade Servers

Cisco UCS B-Series blade servers provide massive amounts of computing power in a compact form factor to increase density in computing-intensive and enterprise application environments. Available in full- and half-width form factors, Cisco UCS blade servers offer dedicated storage and high-capacity memory configurations to support workloads ranging from web infrastructure to distributed databases and business intelligence applications. The Cisco UCS B200 M5 blade server offers exceptional levels of performance, flexibility, and I/O throughput to run applications with Intel Xeon Scalable or second-generation Intel Xeon Scalable processors; up to 3 TB of memory or 6 TB of Intel Optane DC persistent memory; up to two SAS, SATA, and NVMe drives plus M.2 storage; up to two GPUs; and up to 80 Gigabit Ethernet. Figure 1-9 shows a Cisco UCS B200 M5 blade server.


Figure 1-9 Cisco UCS B200 M5 Blade Server

For more information about all Cisco UCS servers, go to https://www.cisco.com/go/ucs.

Cisco UCS Manager (UCSM)

Cisco UCS Manager (UCSM) provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System across multiple chassis and rack servers and thousands of virtual machines. It supports all Cisco UCS product models, including Cisco UCS B-Series blade servers, Cisco UCS C-Series rack-mount servers, and Cisco UCS Mini, as well as the associated storage resources and networks. Cisco UCS Manager is embedded on a pair of Cisco UCS fabric interconnects using a clustered, active/standby configuration for high availability. UCS Manager participates in server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection. Figure 1-10 shows Cisco UCS Manager for managing UCS operations.


Figure 1-10 Cisco UCS Manager

An instance of Cisco UCS Manager with all Cisco UCS components managed by it forms a Cisco UCS domain, and a domain can include up to 160 servers. In addition to provisioning Cisco UCS resources, this infrastructure management software provides a model-based foundation for streamlining the day-to-day processes of updating, monitoring, and managing computing resources, local storage, storage connections, and network connections.


By enabling better automation of processes, Cisco UCS Manager allows IT organizations to achieve greater agility and scale in their infrastructure operations while reducing complexity and risk. UCS Manager provides flexible role- and policy-based management using service profiles and templates. Cisco UCS Manager manages Cisco UCS systems through an intuitive HTML5 or Java user interface and a CLI. It can register with Cisco UCS Central Software in a multi-domain Cisco UCS environment, enabling centralized management of distributed systems scaling to thousands of servers. Cisco UCS Manager can be integrated with Cisco UCS Director to facilitate orchestration and to provide support for converged infrastructure and infrastructure as a service (IaaS). The Cisco UCS XML API provides comprehensive access to all Cisco UCS Manager functions. The API provides Cisco UCS system visibility to higher-level systems management tools from independent software vendors (ISVs) such as VMware, Microsoft, and Splunk, as well as tools from BMC, CA, HP, IBM, and others. ISVs and in-house developers can use the XML API to enhance the value of the Cisco UCS platform according to their unique requirements. Cisco UCS PowerTool for Cisco UCS Manager and the Python Software Development Kit (SDK) help automate and manage configurations in Cisco UCS Manager.
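As an illustration of that automation, the following is a minimal sketch using the open-source Cisco UCS Python SDK (ucsmsdk) to log in to UCS Manager and list blade inventory. The address, credentials, and printed attributes are placeholders to adapt to your environment.

```python
# Minimal sketch using the Cisco UCS Python SDK (pip install ucsmsdk).
# The address and credentials below are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.0.2.10", "admin", "password")  # UCS Manager VIP
handle.login()

# Query all blade server objects in the UCS domain and print basic inventory.
for blade in handle.query_classid("ComputeBlade"):
    print(blade.dn, blade.model, blade.serial, blade.oper_state)

handle.logout()
```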

UCS-C Series Products

Capable of operating in standalone deployments or as part of Cisco UCS, Cisco UCS C-Series rack servers offer expansion capabilities to help an organization address fluctuating workload challenges. With a wide range of I/O, memory, internal disk, and solid-state disk (SSD) options, you can balance processing power and other resources to meet the requirements of your cloud infrastructure workloads. Every Cisco UCS C-Series rack server includes the Cisco Integrated Management Controller (IMC), a management interface that provides the tools you need to easily and quickly control and manage a standalone server. Embedded in Cisco UCS C-Series servers, the IMC defines the core object model and provides an easy way to provision and deploy a standalone server. Your IT administrators can control and manage servers, including management of network configuration; remote keyboard, video, and mouse (KVM) devices; power states; and firmware revisions. Out-of-band server management and virtual media support for CD and DVD drives for remote operating system and application software installation are accessible through standard protocols and command-line interfaces (CLIs).
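For a sense of how the IMC can be driven programmatically, the following is a hedged sketch using the open-source Cisco IMC Python SDK (imcsdk), whose handle interface mirrors the UCS Manager SDK. The address, credentials, and queried class are illustrative placeholders.

```python
# Hedged sketch using the Cisco IMC Python SDK (pip install imcsdk) against
# a standalone C-Series server; address and credentials are placeholders.
from imcsdk.imchandle import ImcHandle

handle = ImcHandle("192.0.2.50", "admin", "password")
handle.login()

# ComputeRackUnit is the top-level server object in the IMC object model.
for unit in handle.query_classid("ComputeRackUnit"):
    print(unit.dn, unit.model, unit.serial)

handle.logout()
```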


The UCS C240 M5 is a 2-rack-unit (2RU) rack server featuring up to 2 Intel Xeon Scalable processors and 24 DDR4 DIMM slots; 6 PCIe 3.0 slots plus 1 dedicated 12 Gbps RAID controller slot and 1 dedicated mLOM slot; dual 10GBASE-T Intel x550 Ethernet ports; hot-pluggable PSUs; dual internal Cisco FlexFlash SD cards; and dual M.2 SATA SSD or NVMe drives. Figure 1-11 shows a Cisco UCS C240 M5 rack server.

Figure 1-11 Cisco UCS C240 M5 Rack Server

For more information about all Cisco UCS servers, go to https://www.cisco.com/go/ucs.

UCS-S Series Products

The Cisco UCS S-Series storage server has a modular server architecture that, using Cisco's blade technology expertise, allows you to upgrade the computing or network nodes in the system without the need to migrate data from one system to another. The Cisco UCS S3260 storage server is a modular, high-density, high-availability dual-node rack server that is well suited for service providers, enterprises, and industry-specific environments. It addresses the need for dense, cost-effective storage for ever-growing data needs. Designed for a new class of cloud-scale applications, it is simple to deploy and excellent for big data applications, software-defined storage environments, and other unstructured data repositories, media streaming, and content distribution. With dual-node capability that is based on the Intel Xeon Scalable processors, it features up to 720 TB of local storage in a compact 4-rack-unit (4RU) form factor. All hard-disk drives can be asymmetrically split between the dual nodes and are individually hot-swappable. Figure 1-12 shows a Cisco UCS S3260 storage server.


Figure 1-12 Cisco UCS S3260 Storage Server

CISCO INTERSIGHT CLOUD-BASED MANAGEMENT (SAAS SOLUTION)

Cisco Intersight builds on the Cisco UCS and Cisco HyperFlex platforms to provide an intuitive infrastructure with cloud-powered intelligence. Instead of using an onsite system management console, you can use the cloud-based management platform at Intersight.com or Cisco Intersight Virtual Appliance to focus on managing your systems rather than your management tools. In addition, Cisco Intersight allows you to manage systems both in your data center and at remote edge and branch office locations.


Cisco Intersight provides a model-based deployment for Cisco UCS and HyperFlex platforms, building on the deployment methodologies of UCS. HyperFlex platforms can be sent to their destination location and then set up remotely through the cloud-based HyperFlex installer. You can set up a Cisco UCS C-Series server quickly and easily by replicating an existing server profile for rapid configuration. The model-based deployment works for a single system in a remote location or hundreds of systems in a data center and enables rapid, standardized configuration and deployment. Figure 1-13 illustrates the Cisco Intersight Solution.

Figure 1-13 Cisco Intersight

HYPERFLEX DATA PLATFORM

Cisco HyperFlex systems deliver complete hyperconvergence. These systems combine the software-defined networking and computing power of Cisco UCS with the Cisco HyperFlex HX Data Platform. Designed for simplicity, Cisco HyperFlex systems bring increased operational efficiency and adaptability to more workloads in your data center.

A HyperFlex cluster requires a minimum of three HX-Series "converged" nodes (with disk storage). Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single node failure. Each node that has disk storage is equipped with at least one high-performance SSD for data caching and rapid acknowledgment of write requests. Each node also is equipped with additional disks, up to the platform's physical limit, for long-term storage and capacity.

The Cisco HyperFlex HX Data Platform combines the cluster's storage devices into a single distributed, multitier, object-based datastore. It makes this data available through the file system protocols and mechanisms needed by the higher-level hypervisors, virtual machines, and containers. With HyperFlex, performance scales linearly as you scale a cluster because all components contribute both processing and storage capacity to the cluster. The data platform optimizes storage tiers for an excellent balance between price and performance. For example, hybrid nodes use solid-state drives (SSDs) for caching and hard-disk drives (HDDs) for capacity; all-flash nodes use fast SSD drives or Nonvolatile Memory Express (NVMe) storage for caching and SSDs for capacity; and all-NVMe nodes deliver the highest performance for the most demanding workloads, with caching further accelerated by Intel Optane SSDs.

HyperFlex Hybrid Nodes

The initial HyperFlex product release featured hybrid converged nodes, which use a combination of SSDs for the short-term storage caching layer and HDDs for the long-term storage capacity layer. A HyperFlex Hybrid system is an excellent choice for entry-level or midrange storage solutions. A HyperFlex Hybrid configuration is a good option for customers who want the simplicity of the Cisco HyperFlex solution but whose needs focus on capacity-sensitive solutions, lower budgets, and fewer performance-sensitive applications. Figure 1-14 shows Cisco HyperFlex Hybrid M5 configurations.


Figure 1-14 Cisco HyperFlex Hybrid M5 Configurations

Figure 1-15 shows Cisco HyperFlex Hybrid M4 configurations.

Figure 1-15 Cisco HyperFlex Hybrid M4 Configurations


HyperFlex All Flash Nodes

The primary challenge for a HyperFlex Hybrid system comes from highly performance-sensitive applications, which are especially sensitive to high storage latency. Because of the characteristics of spinning hard disks, their higher latency unavoidably becomes the bottleneck in a HyperFlex Hybrid system. Ideally, if all the storage operations were to occur in the caching SSD layer, the hybrid system's performance would be excellent. But in several scenarios, the amount of data being written and read exceeds the caching layer capacity, placing larger loads on the HDD capacity layer, and the subsequent increases in latency naturally result in reduced performance.

Cisco HyperFlex All Flash systems are an excellent option for customers with a requirement to support high-performance, latency-sensitive workloads. With a purpose-built, flash-optimized, high-performance log-based file system, the Cisco HyperFlex All Flash system provides:
- Predictable high performance across all the virtual machines on HyperFlex All Flash and compute-only nodes in the cluster
- Highly consistent and low latency, which benefits data-intensive applications and databases such as Microsoft SQL and Oracle
- Future-ready architecture that is well suited for flash memory configuration, including the following:
  - Cluster-wide SSD pooling, which maximizes performance and balances SSD usage to spread the wear
  - A fully distributed log-structured file system that optimizes the data path to help reduce write amplification
  - Large sequential writes, which reduce flash wear and increase component longevity
  - Inline space optimization (for example, deduplication and compression), which minimizes data operations and reduces wear
- Lower operating cost with higher-density drives for increased capacity of the system
- Cloud-scale solution with easy scale-out and distributed infrastructure and the flexibility of scaling out independent resources separately

Figure 1-16 shows Cisco HyperFlex All Flash M5 configurations.

Figure 1-16 Cisco HyperFlex All Flash M5 Configurations


Figure 1-17 shows Cisco HyperFlex All Flash M4 configurations.

Figure 1-17 Cisco HyperFlex All Flash M4 Configurations

Cisco HyperFlex support for Hybrid and All Flash models allows customers to choose the right platform configuration based on capacity, applications, performance, and budget requirements. All Flash configurations offer repeatable and sustainable high performance, especially for scenarios with a larger working set of data (in other words, a large amount of data in motion).

HyperFlex Edge for Remote and Branch Offices


Cisco HyperFlex Edge is a new version of the Cisco HyperFlex system that is optimized for remote sites, branch offices, and edge environments. It offers a low-cost, small-footprint option with automated management and the same full feature set that you expect from Cisco HyperFlex systems. Cisco HyperFlex Edge offers the full power of a next-generation hyperconverged platform without the need to connect to Cisco UCS fabric interconnects.

A Cisco HyperFlex Edge system has a fixed cluster size of three converged HX220c Hybrid or All Flash nodes. Unlike with Cisco HyperFlex systems in the data center, Cisco HyperFlex Edge clusters do not support Cisco UCS computing-only nodes, and a cluster cannot be expanded with additional converged nodes. However, storage capacity can be added online at any time to open drive bays.

The following are the components of a Cisco HyperFlex Edge system:
- Cisco HyperFlex HX-Series rack-mount servers (choose three of the same model from the following list):
  - Cisco HyperFlex HX220c M5SX Node
  - Cisco HyperFlex HX220c M5SX All Flash Node
  - Cisco HyperFlex HX220c M4S Node
  - Cisco HyperFlex HX220c M4S All Flash Node
- Cisco HyperFlex HX Data Platform software
- VMware vSphere ESXi hypervisor
- VMware vCenter Server (supplied by the end user)

The HX220c Hybrid and All Flash nodes used for building Cisco HyperFlex Edge systems are the same nodes used for building standard Cisco HyperFlex clusters. There are no separate product identifiers for Cisco HyperFlex Edge nodes. However, the choice of hardware components for the nodes, such as CPU, disks, and network adapters, differs. A solution can be configured, deployed, managed, and monitored with existing separate tools for Cisco UCS, Cisco HyperFlex systems, and VMware solutions. You can manage a Cisco HyperFlex cluster through a VMware vSphere web client plug-in or through the HTML5-based native Cisco HyperFlex Connect management tool. But with the Cisco Intersight platform, you can deploy and manage a solution from the cloud, bringing the low cost, deployment ease, and remote-management features of Cisco HyperFlex systems to your edge environments without the need for experienced IT staff at the deployment site.

Cisco HyperFlex HX-Series System Management Components

A Cisco HyperFlex HX-Series system is managed using the following Cisco software components:
- Cisco UCS Manager: Cisco UCS Manager is embedded software that resides on a pair of fabric interconnects, providing complete configuration and management capabilities for Cisco HX-Series servers. The most common way to access UCS Manager is to use a web browser to open the GUI. UCS Manager supports role-based access control. The configuration information is replicated between the two Cisco UCS fabric interconnects to provide a high-availability solution. If one FI becomes unavailable, the other takes over. A key benefit of UCS Manager is the concept of stateless computing. Each node in an HX cluster has no set configuration. MAC addresses, UUIDs, firmware, and BIOS settings, for example, are all configured on UCS Manager in a service profile and applied uniformly to all the HX-Series servers. This enables consistent configuration and ease of reuse. A new service profile can be applied within a matter of minutes.
- Cisco HX Data Platform: Cisco HX Data Platform is a hyperconverged software appliance that transforms Cisco servers into a single pool of compute and storage resources. It eliminates the need for network storage and tightly integrates with VMware vSphere and its existing management application to provide a seamless data management experience. In addition, native compression and deduplication reduce the storage space occupied by the VMs. HX Data Platform is installed on a virtualized platform, such as vSphere. It manages the storage for virtual machines, applications, and data. During installation, you specify the Cisco HyperFlex HX cluster name, and HX Data Platform creates a hyperconverged storage cluster on each of the nodes. As your storage needs increase and you add nodes to the HX cluster, HX Data Platform balances the storage across the additional resources.
- VMware vCenter management: A Cisco HyperFlex system has VMware vCenter-based management. vCenter Server is a data center management server application developed to monitor virtualized environments. HX Data Platform is also accessed from the preconfigured vCenter Server to perform all storage tasks. vCenter supports key shared storage features such as VMware vMotion, DRS, HA, and vSphere replication. Native HX Data Platform snapshots and clones are scalable and replace VMware snapshots and cloning capability. You must have vCenter installed on a separate server to access HX Data Platform. vCenter is accessed through vSphere Client, which is installed on an administrator's laptop or PC.
- Cisco HyperFlex Connect: Cisco HyperFlex Connect (HX Connect) provides a user interface to Cisco HyperFlex. It provides interfaces for monitoring, performance charts, and tasks for upgrade, encryption, replication, datastores, nodes, disks, and VM ReadyClones. (A scripted sketch of the REST API behind HX Connect follows this list.)
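The following is a rough sketch of driving that management plane programmatically through the REST API behind HX Connect, using the Python requests library. The endpoint paths, client_id/client_secret values, and response fields are assumptions drawn from common HX Data Platform releases and should be verified against the API Explorer on your own cluster; the hostname and credentials are placeholders.

```python
# Hedged sketch of the HX Connect REST API; endpoints and auth parameters
# are assumptions to verify against your cluster's API Explorer.
import requests

HX = "https://hx-cluster-mgmt.example.com"   # placeholder cluster management address

# Obtain a bearer token (assumed endpoint: /aaa/v1/auth).
auth = requests.post(
    f"{HX}/aaa/v1/auth",
    params={"grant_type": "password"},
    json={"username": "admin", "password": "password",
          "client_id": "HxGuiClient", "client_secret": "Sunnyvale",
          "redirect_uri": "http://localhost:8080/aaa/redirect"},
    verify=False,            # many lab clusters use self-signed certificates
).json()

headers = {"Authorization": f"{auth['token_type']} {auth['access_token']}"}

# Read basic cluster information (assumed endpoint: /coreapi/v1/clusters).
clusters = requests.get(f"{HX}/coreapi/v1/clusters", headers=headers, verify=False)
print(clusters.json())
```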

CISCO MDS 9000 MULTILAYER DIRECTOR SAN SWITCHES

Cisco MDS 9000 multilayer SAN switches can help lower the total cost of ownership (TCO) of storage environments. They combine a robust, flexible hardware architecture with multiple layers of network and storage-management intelligence. With them, you can build highly available, scalable storage networks with advanced security and unified management. This section provides details on MDS 9700 and MDS 9500 Series switches.

Cisco MDS 9700 Series Multilayer Director


The Cisco MDS 9700 Series multilayer directors are the newest directors in the Cisco storage networking portfolio. The Cisco MDS 9710 multilayer director supports up to 384 line-rate 16 Gbps Fibre Channel or 10 Gbps Fibre Channel over Ethernet (FCoE) ports, and the Cisco MDS 9706 multilayer director supports up to 192 line-rate 16 Gbps Fibre Channel or 10 Gbps FCoE ports. They each provide up to 1.5 Tbps of per-slot throughput when populated with six fabric modules. Both directors also provide redundant supervisors, power supplies, and fan modules. Cisco MDS 9700 Series multilayer directors offer:
- Outstanding SAN performance
- High availability with fully redundant components, including fabric cards, supervisors, and power supplies
- Industry-leading scalability
- Intelligent network features such as virtual SAN technology, ACLs, intelligent frame processing, and fabric-wide QoS
- Multiprotocol support, including Fibre Channel, Fibre Channel over Ethernet (FCoE), Fibre Channel over IP (FCIP), and NVMe over fabric

Table 1-1 lists the capabilities of various MDS 9700 switches.

Table 1-1 Capabilities of MDS 9700 Switches

Feature | MDS 9706 | MDS 9710 | MDS 9718
Maximum ports per chassis | 192 2/4/8/10/16/32 Gbps FC; 192 10 Gbps FCoE or 96 40 Gbps FCoE | 384 2/4/8/10/16/32 Gbps FC; 384 10 Gbps FCoE or 192 40 Gbps FCoE | 768 2/4/8/10/16/32 Gbps FC; 768 10 Gbps FCoE or 384 40 Gbps FCoE
Line card slots | 4 | 8 | 16
Supervisor slots | 2 | 2 | 2
Power supply bays | 4 | 8 | 16
Switching fabric slots | 6 | 6 | 6
Switching fabric | 256 Gbps FC per fabric | 256 Gbps FC per fabric | 256 Gbps FC per fabric
FC switching bandwidth | 12 Tbps | 24 Tbps | 48 Tbps


Cisco MDS 9500 Series Multilayer Director

Cisco MDS 9500 Series multilayer directors are high-performance, protocol-independent, director-class SAN switches. Cisco MDS directors are designed to meet stringent requirements of enterprise data center storage environments. Table 1-2 lists the capabilities of various MDS 9500 switches.

Table 1-2 Capabilities of MDS 9500 Switches

Feature | MDS 9506 | MDS 9509 | MDS 9513
Maximum ports per chassis | 192 1/2/4/8 Gbps FC; 16 10 Gbps FC; 32 10 Gbps FCoE; 64 1 Gbps Ethernet | 336 1/2/4/8 Gbps FC; 28 10 Gbps FC; 56 10 Gbps FCoE; 112 1 Gbps Ethernet | 528 1/2/4/8 Gbps FC; 263 10 Gbps FC; 88 10 Gbps FCoE; 176 1 Gbps Ethernet
Line card slots | 4 | 7 | 11
Supervisor slots | 2 | 2 | 2
Switching fabric slots | In supervisor | In supervisor | 2
Switching bandwidth | 96 Gbps per slot | 96 Gbps per slot | 256 Gbps per slot
Physical dimensions (H × W × D) | 12.25 × 17.37 × 21.75 in. | 24.5 × 17.37 × 28.0 in. | 24.5 × 17.37 × 28.0 in.
Physical dimensions (RU) | 7 RU | 14 RU | 14 RU

NETWORK SWITCHES

The Cisco Nexus Series switches are modular and fixed-port network switches designed for data centers. All switches in the Nexus range run the modular NX-OS firmware/operating system on the fabric. NX-OS has high-availability features, and this platform is optimized for high-density 10 Gigabit Ethernet. This section provides details on Nexus 5000 and 9000 Series switches.

Nexus 5000 Series Products

Cisco Nexus 5000 Series switches are designed to deliver high-density top-of-rack (ToR) Layer 2 and Layer 3, 10/40 Gigabit Ethernet with unified ports in compact 1-, 2-, and 4-rack-unit form factors. The Cisco Nexus 5000 Series includes the Cisco Nexus 5500 and 5600 platforms as part of the Cisco Unified Fabric portfolio.


The Cisco 5600 platform is designed to meet the scaling demands of virtualized and cloud deployments:
- These switches support up to 2304 ports in a single management domain with Cisco FEX architecture.
- These switches support large buffers for congestion management.
- These switches provide hardware-based VXLAN (Layer 2, Layer 3, and gateway) and are capable of Network Virtualization Generic Routing Encapsulation (NVGRE).
- These switches deliver integrated Layer 3 services with large table sizes and buffers with 1-microsecond latency.

Broad connectivity support includes Gigabit Ethernet, 10 Gigabit Ethernet, 40 Gigabit Ethernet, and 100 Gigabit Ethernet; native Fibre Channel; and Fibre Channel over Ethernet. Cisco Dynamic Fabric Automation (DFA) is supported for simplified virtualized and cloud deployments. Cisco FabricPath is supported for scalable Layer 2 networks. Figure 1-18 shows a Cisco Nexus 5672UP switch.

Figure 1-18 Cisco Nexus 5672UP Switch


The Cisco 5500 switches are designed to meet the scalability demands of today's data centers. They offer up to 1152 ports in a single management domain using Cisco FEX architecture or up to 96 unified ports. On Cisco 5500 switches, operations are simplified for two main reasons:
- The Cisco FEX architecture offers a single point of management.
- Programmers can use Python, Tcl, and OpenFlow.

Cisco 5500 switches deliver deployment flexibility to meet varying customer needs:
- Broad connectivity support includes Gigabit Ethernet, 10 Gigabit Ethernet, 10GBASE-T, 40 Gigabit Ethernet uplinks, native Fibre Channel, and Fibre Channel over Ethernet (FCoE).
- Cisco Dynamic Fabric Automation (DFA) is supported for simplified virtualized and cloud deployments.
- Cisco FabricPath is supported for scalable Layer 2 networks.

Figure 1-19 shows two Cisco Nexus 5500 Series switches.

Figure 1-19 Cisco Nexus 5500 Series Switches


Nexus 9000 Series Products

The Cisco Nexus 9000 Series switches are the next generation of data center switching infrastructure. In Cisco NX-OS Software mode, the Cisco Nexus 9000 Series addresses the new challenges of current infrastructure designs while building on first-generation software-defined networking (SDN) solutions and providing a path to Cisco Application Centric Infrastructure (ACI). In Cisco ACI mode, the Cisco Nexus 9000 Series provides the industry's most advanced answer to SDN, with a powerful combination of hardware and software custom developed to provide a more robust and comprehensive solution than anything else available on the market. Figure 1-20 shows Cisco Nexus 9000 Series switches.

Figure 1-20 Cisco Nexus 9000 Series Switches


Cisco Nexus 9000 NX-OS Mode

The Cisco NX-OS mode of the Cisco Nexus 9000 Series provides better performance, port density, programmability, and power-efficient solutions (a short programmability sketch appears at the end of this section). In addition to these benefits, the Cisco Nexus 9000 Series in Cisco NX-OS mode offers the best platform for migration from 1 or 10 Gbps to 10 or 40 Gbps. The Cisco Nexus 9000 Series portfolio consists of a broad set of hardware components, including the following modular chassis options:
- Cisco Nexus 9504 switch
  - 4 payload slots
  - Up to 576 full line-rate 1 and 10 Gbps ports
  - Up to 144 full line-rate 40 Gbps ports
- Cisco Nexus 9508 switch
  - 8 payload slots
  - Up to 1152 full line-rate 1 and 10 Gbps ports
  - Up to 288 full line-rate 40 Gbps ports
- Cisco Nexus 9516 switch
  - 16 payload slots
  - Up to 2304 full line-rate 1 and 10 Gbps ports
  - Up to 576 full line-rate 40 Gbps ports


The portfolio also includes these fixed-switch options:
- Cisco Nexus 9396PX switch
  - 48 ports of 1 and 10 Gbps fiber Enhanced Small Form-Factor Pluggable (SFP+)
  - 12 ports of 40 Gbps fiber Quad SFP (QSFP)
- Cisco Nexus 9396TX switch
  - 48 ports of 1 and 10 Gbps copper RJ-45 (10GBASE-T)
  - 12 ports of 40 Gbps fiber QSFP
- Cisco Nexus 93128TX switch
  - 96 ports of 1 and 10 Gbps copper RJ-45
  - 8 ports of 40 Gbps fiber QSFP

All these benefits are enhanced by Cisco’s innovative 40 Gbps bidirectional (BiDi) optics. These standards-based QSFP optics provide 40 Gbps speeds on existing 10 Gbps infrastructure at approximately the same cost as current 10 Gbps optics. These optics are an exclusive Cisco option to help increase adoption of 40 Gbps solutions.
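As a brief illustration of the programmability mentioned above, Nexus 9000 switches running in NX-OS mode can be automated over NX-API, assuming the feature has been enabled on the switch (feature nxapi). The hostname and credentials in this sketch are placeholders, and the structured reply should be inspected for your NX-OS release.

```python
# Hedged sketch of NX-API's JSON-RPC interface on a Nexus 9000 in NX-OS mode.
# Assumes "feature nxapi" is enabled; hostname and credentials are placeholders.
import requests

url = "https://n9k-switch.example.com/ins"
payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}]

resp = requests.post(
    url,
    json=payload,
    headers={"content-type": "application/json-rpc"},
    auth=("admin", "password"),
    verify=False,            # lab switches often use self-signed certificates
)
# A single command returns one JSON-RPC object; the structured output of the
# command is carried under its "result" member.
print(resp.json())
```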

Cisco Nexus 9000 ACI Mode

Cisco ACI mode enables the next generation of SDN. Rather than focusing solely on technical problems, such as accelerating the deployment of today's complex networks, Cisco ACI also focuses on the business problem of how to rapidly deploy applications to meet changing demands. Cisco ACI mode provides a balanced approach of hardware and software acceleration for network infrastructure, including Layer 4 through 7 services, through the language of applications. The baseline capabilities of automation, programmability, and centralized provisioning that are the focus of first-generation SDN solutions are built into Cisco ACI at the foundation. Cisco ACI enhances these capabilities by allowing the network to be automated and programmed based on business-level application requirements. Figure 1-21 illustrates Cisco ACI mode.


Figure 1-21 Cisco ACI Mode

To enable Cisco ACI mode, the Cisco Nexus 9000 Series hardware portfolio is coupled with the Cisco Application Policy Infrastructure Controller (APIC) and new operating system software for the switching platform. In addition, a broad ecosystem of third-party integration includes the following:
- Virtual or physical Layer 4 through 7 services
- Management systems
- Orchestration systems
- Monitoring systems


The Cisco ACI fabric is a leaf-and-spine architecture in which every leaf connects to every spine using high-speed 40/100 Gbps Ethernet links, with no direct connections between the spine nodes or between the leaf nodes. The ACI fabric is based on a routed underlay network with a switched VXLAN overlay, where every leaf in the fabric is a VXLAN Tunnel Endpoint (VTEP). The Cisco ACI fabric is built on a network of individual components that are provisioned and managed as a single entity. The Cisco ACI fabric supports both Layer 2 (L2) and Layer 3 (L3) forwarding across this fabric infrastructure. Figure 1-22 illustrates the Cisco ACI high-level architecture.


Figure 1-22 Cisco ACI High-Level Architecture
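Because the APIC exposes the whole fabric through a single REST API, policy and inventory can be queried programmatically. The following hedged sketch logs in to an APIC with the Python requests library and lists the fabric nodes; the hostname, credentials, and printed attributes are placeholders that assume standard ACI REST conventions.

```python
# Hedged sketch of the Cisco APIC REST API using the requests library.
# Hostname and credentials are placeholders.
import requests

apic = "https://apic.example.com"
session = requests.Session()

# Authenticate; the APIC returns a token that the session keeps as a cookie.
session.post(
    f"{apic}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,
)

# Read the fabric node inventory (leaf and spine switches plus controllers).
nodes = session.get(f"{apic}/api/node/class/fabricNode.json", verify=False).json()
for item in nodes.get("imdata", []):
    attrs = item["fabricNode"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["serial"])
```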

HYPERVISORS

Hypervisors can help you do the following:
- Establish or expand a private cloud environment: A hypervisor provides flexibility and on-demand IT services by expanding or moving your use of shared resources and adjusting utilization to meet demand.
- Use your hardware more effectively: A hypervisor allows more powerful physical computers to use less power and physical space by consolidating servers and workloads.
- Improve business continuity: A hypervisor provides redundancy, minimizing the impact of both scheduled and unscheduled downtime of workloads.
- Establish or expand a virtual desktop infrastructure (VDI): Using a centralized desktop strategy with VDI can help you increase business agility and data security, as well as simplify regulatory compliance and manage desktop operating systems and applications. You can, for example, deploy Hyper-V and Remote Desktop Virtualization Host (RD Virtualization Host) on the same server to make personal virtual desktops or virtual desktop pools available to users.
- Make development and test more efficient: A hypervisor can reproduce different computing environments without requiring you to buy or maintain all the hardware you'd need if you only used physical systems.

VMware vSphere Hypervisor

The core of the vSphere product suite is the hypervisor called ESXi. A hypervisor is a piece of software that creates and runs virtual machines. Hypervisors are divided into two groups:
- Type 1 hypervisors: Also called bare-metal hypervisors, Type 1 hypervisors run directly on the system hardware. A guest operating system runs on another level above the hypervisor. VMware ESXi is a Type 1 hypervisor that runs on the host server hardware without an underlying operating system.
- Type 2 hypervisors: These hypervisors run in a conventional operating system environment, and the host operating system provides I/O device support and memory management. Examples of Type 2 hypervisors are VMware Workstation and Oracle VirtualBox.

ESXi provides a virtualization layer that abstracts the CPU, storage, memory, and networking resources of the physical host into multiple virtual machines. This means that applications running in virtual machines can access these resources without direct access to the underlying hardware. VMware refers to the hypervisor used by VMware ESXi as vmkernel. vmkernel receives requests from virtual machines for resources and presents the requests to the physical hardware. VMware vSphere is a software suite that includes components such as ESXi, vCenter Server, vSphere Client, vCenter Orchestrator, and vSphere Update Manager. vSphere components provide virtualization, management, resource optimization, and many other features that are useful for a virtual environment. vSphere is used to virtualize and aggregate the underlying physical hardware resources and to provide a pool of virtual resources to the data center. It also supports some advanced virtualization features, such as disaster recovery, high availability, fault tolerance, and dynamic resource allocation.


People new to VMware's virtualization platform sometimes get confused in dealing with vSphere and its components. Remember that vSphere is a suite of products, just like Microsoft Office (which includes Word, Excel, Access, and other products); it is not a single product that you can install in your environment. These are the most important components included in the vSphere product suite:
- ESXi: A Type 1 hypervisor, which is a piece of software that creates and runs virtual machines. In vSphere, virtual machines are installed on ESXi servers.
- vCenter Server: A centralized management platform and framework that lets you manage virtual machines and ESXi hosts centrally (see the sketch following this list).
- vSphere Update Manager: An add-on package for vCenter Server that helps you keep your ESXi hosts and VMs patched with the latest updates.
- vSphere Web Client: A web-based user interface used for managing a virtual infrastructure.
- vSphere Client: A locally installed Windows application with a GUI for all day-to-day management tasks and for the advanced configuration of a virtual infrastructure.
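As a minimal sketch of how vCenter Server is typically driven from code, the following uses the pyVmomi library to connect to vCenter and list the virtual machines it manages; the hostname and credentials are placeholders.

```python
# Minimal sketch using pyVmomi (pip install pyvmomi) against vCenter Server.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()        # lab vCenter with self-signed cert
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

# Walk the inventory and print each VM's name and power state.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

Disconnect(si)
```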

Microsoft Hyper-V

Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer, called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. When you need computing resources, virtual machines give you more flexibility, help save time and money, and provide a more efficient way to use hardware than just running one operating system on physical hardware.

Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time. You might want to do this to avoid problems such as a crash affecting the other workloads or to give different people, groups, or services access to different systems.

Hyper-V has required parts that work together so you can create and run virtual machines. Together, these parts are called the virtualization platform. They're installed as a set when you install the Hyper-V role. The required parts include the Windows hypervisor, the Hyper-V Virtual Machine Management service, the virtualization WMI provider, the Virtual Machine Bus (VMbus), the Virtualization Service Provider (VSP), and the Virtual Infrastructure Driver (VID).

Hyper-V also has the following tools for management and connectivity:
- Hyper-V Manager
- Hyper-V module for Windows PowerShell
- Virtual Machine Connection
- Windows PowerShell Direct

You can install these tools on the same computer the Hyper-V role is installed on or on computers without the Hyper-V role installed. The Hyper-V features are as follows:
- Computing environment: A Hyper-V virtual machine includes the same basic parts as a physical computer, such as memory, processor, storage, and networking components. All these parts have features and options that you can configure in different ways to meet different needs. Storage and networking can each be considered a category of its own because of the many ways you can configure them.
- Disaster recovery and backup: For disaster recovery, Hyper-V Replica creates copies of virtual machines, intended to be stored in another physical location, so you can restore the virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states, and the other uses Volume Shadow Copy Service (VSS) so you can make application-consistent backups for programs that support VSS.
- Optimization: Each supported guest operating system has a customized set of services and drivers, called integration services, that make it easier to use the operating system in a Hyper-V virtual machine.
- Portability: Features such as live migration, storage migration, and import/export make it easier to move or distribute a virtual machine.
- Remote connectivity: Hyper-V includes Virtual Machine Connection, a remote connection tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you console access, so you can see what's happening in the guest even when the operating system isn't booted yet.
- Security: Secure Boot and shielded virtual machines help protect against malware and other unauthorized access to a virtual machine and its data.

SUMMARY

Cisco Unified Computing System (Cisco UCS) is an integrated computing infrastructure with intent-based management to automate and accelerate deployment of all your applications. HyperFlex systems bring increased operational efficiency and adaptability to more workloads in a data center. Cisco MDS switches address the stringent requirements of large virtualized data center storage environments: uncompromising high availability, security, scalability, ease of management, and transparent integration of new technologies for extremely flexible data center SAN solutions. The Cisco Nexus switch product line provides a series of solutions that attempt to make it easier to connect and manage disparate data center resources with software-defined networking (SDN). Leveraging the Cisco Unified Fabric, which unifies storage, data, and networking (Ethernet/IP) services, the Nexus switches create an open, programmable network foundation built to support a virtualized data center environment. Chapter 2, "HyperFlex Architecture," discusses HyperFlex architecture along with supported topologies.


Chapter 2. HyperFlex Architecture

The Cisco HyperFlex HX Data Platform is a high-performance, distributed file system that supports multiple hypervisors with a wide range of enterprise-grade data management and optimization services. The Cisco HyperFlex HX Data Platform combines the cluster's storage devices into a single distributed, multitier, object-based datastore. It makes this data available through the file system protocols and mechanisms needed by the higher-level hypervisors, virtual machines, and containers. Performance scales linearly as a cluster scales because all components contribute both processing and storage capacity to the cluster.

A Cisco HyperFlex cluster can be deployed in three different ways:
- HyperFlex standard cluster: This type of cluster is composed of a pair of Cisco UCS fabric interconnects and at least three HX hyperconverged nodes.
- HyperFlex stretch cluster: With a stretch cluster, half of the cluster nodes are physically located in one place, and the remaining half are located in a distant secondary location.
- HyperFlex edge cluster: This type of cluster is composed of at least three HX hyperconverged nodes, with no UCS fabric interconnect required. Starting with HyperFlex version 4.0, it is possible to deploy a cluster with only two HX hyperconverged nodes.

Figure 2-1 illustrates a Cisco HyperFlex solution.

Figure 2-1 Cisco HyperFlex Solution

HYPERFLEX ARCHITECTURE COMPONENTS

This section covers the components that comprise the HyperFlex architecture.

Storage Controller Virtual Machine

The HyperFlex architecture requires a storage controller virtual machine (SCVM) with a dedicated number of processor cores and a dedicated amount of memory, enough to allow the SCVM to deliver consistent performance and not affect the performance of the other virtual machines on the cluster. The controller can access all storage without hypervisor intervention, thanks to the VMware VM_DIRECT_PATH feature. It uses the node's memory and SSDs as part of a distributed caching layer, and it uses the node's HDDs for distributed capacity storage. The controller integrates the data platform into the VMware vSphere cluster through the use of three preinstalled VMware ESXi vSphere Installation Bundles (VIBs) on each node:
- IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor's perspective, it is simply attached to a network file system. The IO Visor intercepts guest VM I/O traffic and intelligently redirects it to the HyperFlex SCVMs.
- VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning. The controller implements these operations via manipulation of the file system metadata rather than actual data copying, providing rapid response and thus rapid deployment of new environments.
- stHypervisorSvc: This VIB adds enhancements and features needed for HyperFlex data protection and VM replication.

A HyperFlex cluster requires a minimum of three HX-Series "converged" nodes (with disk storage). Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single node failure. Each node that has disk storage is equipped with at least one high-performance SSD for data caching and rapid acknowledgment of write requests. Each node also is equipped with additional disks, up to the platform's physical limit, for long-term storage and capacity.

HX Services

Each SCVM running on the hyper-converged nodes runs an instance of the Linux services used by the HyperFlex system. Table 2-1 lists these services and their purposes.

Table 2-1 HyperFlex Controller Services

HX Service Name | Linux Service Name | What the Service Does
Springpath file system | storfs | Core of the Springpath file system. Owns disks and the read/write process.
SCVM client | scvmclient | Proxy NFS server running on ESXi that intercepts the file I/O operations and sends them to CRM.
System management service | stMgr | Overall system management and inter-process communication.
HyperFlex Connect server | hxmanager | HX Connect UI Manager Service.
Replication services | replsvc | Responsible for replication.
Cluster IP Monitor | cip-monitor | Responsible for assigning cluster management and data IP addresses.
Replication Cluster IP Monitor | replcipmonitor | Responsible for assigning cluster replication IP addresses.
Stats Cache Service | carbon-cache | Accepts metrics over various protocols and writes them to disk as efficiently as possible; requires caching metric values in RAM as they are received and flushing them to disk on an interval using the underlying Whisper library.
Stats Aggregator Service | carbon-aggregator | Can be run in front of carbon-cache to buffer metrics over time before reporting them to Whisper.
Stats Listener Service | statsd | A network daemon that runs on the Node.js platform and listens for statistics, such as counters and timers, sent over UDP or TCP, and sends aggregates to one or more pluggable back-end services (such as Graphite).
Cluster Manager Service | exhibitor | Controls stop/start and configuration of Zookeeper.
HX Device Connector | hx_device_connector | Responsible for connecting to Intersight.
Web server | tomcat | Web server using Java servlets.
Reverse proxy server | nginx | HTTP and reverse proxy server, as well as a mail proxy server.
Job scheduler | cron | Daemon to execute scheduled commands.
DNS and name server service | resolvconf | A simple database for runtime nameserver information and a simple framework for notifying applications of changes in that information.
Stats web server | graphite | A web application that uses the Twisted framework to provide real-time visualization and storage of numeric time-series data, with a daemon that processes the incoming time-series data and saves it as Whisper database files.
Cluster Management IP | mgmtip | The cluster management IP service in pre-3.0 clusters.
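Several of these services (carbon-cache, carbon-aggregator, statsd, and graphite) form a standard Graphite/statsd metrics pipeline. Purely as an illustration of the plain-text statsd wire format such a listener accepts (not a supported HyperFlex interface; the address, port, and metric names are only examples), a counter or timer is sent as a small UDP datagram:

```python
# Illustration of the statsd line protocol: "<metric>:<value>|<type>".
# Destination address, port, and metric names are placeholder examples.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"example.requests:1|c", ("127.0.0.1", 8125))        # counter increment
sock.sendto(b"example.latency_ms:320|ms", ("127.0.0.1", 8125))   # timer sample
sock.close()
```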

IO Visor

IO Visor, which is the SCVM client, lives as a process in user space inside ESXi and can be thought of as a simple NFS proxy. It behaves as a server for the VMware NFS client, while looking like a client to the controller VMs (stCtlVM/SCVM server). It is installed into ESXi as a vSphere Installation Bundle (VIB) that is auto-deployed during cluster installation.


Log-Structured File System

The data platform implements a log-structured file system that uses a caching layer in SSDs to accelerate read requests and write replies, and a persistence layer implemented with HDD-based data distribution. Incoming data is distributed across all nodes in the cluster to optimize performance using the caching tier. Efficient data distribution is achieved by mapping received data to stripe units that are stored evenly across all nodes, with the number of data replicas determined by the policies you set. Figure 2-2 illustrates a logical representation of the log-structured file system.

Figure 2-2 Log File System
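As a purely conceptual sketch (not the actual HX Data Platform algorithm), the following shows the general idea behind that distribution: each incoming block key is hashed to a stripe unit, and the stripe unit is placed on a set of nodes according to the configured replication factor. Node names, the stripe count, and the hash choice are illustrative only.

```python
# Conceptual illustration only: hash a block key to a stripe unit, then place
# copies on N distinct nodes according to a replication-factor policy.
import hashlib

NODES = ["node1", "node2", "node3", "node4"]
STRIPE_UNITS = 16
REPLICATION_FACTOR = 3      # policies commonly use RF2 or RF3

def placement(block_key: str):
    digest = int(hashlib.md5(block_key.encode()).hexdigest(), 16)
    stripe = digest % STRIPE_UNITS
    first = stripe % len(NODES)
    replicas = [NODES[(first + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]
    return stripe, replicas

print(placement("vmdk-42/offset-1048576"))   # e.g. (stripe_id, ['node3', 'node4', 'node1'])
```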

Data Optimization

Overall space consumption in the HyperFlex clustered file system is optimized by the default deduplication and compression features. Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in guest virtual machines result in large amounts of replicated data. Compression further reduces storage requirements, reducing costs. In addition, the log-structured file system is designed to store variable-sized blocks, reducing internal fragmentation. Figure 2-3 provides a logical representation of the inline compression and deduplication of the HyperFlex system.

Figure 2-3 Inline Compression and Deduplication
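As a conceptual sketch of how inline deduplication and compression shrink what actually lands on disk (again, an illustration rather than the HX implementation), consider fingerprinting each block and compressing only previously unseen content:

```python
# Conceptual illustration only: deduplicate blocks by content fingerprint,
# then compress unique blocks before they reach the capacity tier.
import hashlib
import zlib

store = {}   # fingerprint -> compressed block

def write_block(data: bytes) -> str:
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in store:          # only new content consumes space
        store[fingerprint] = zlib.compress(data)
    return fingerprint                    # metadata records just the fingerprint

blocks = [b"A" * 4096, b"A" * 4096, b"B" * 4096]   # two identical blocks
refs = [write_block(block) for block in blocks]

logical = sum(len(block) for block in blocks)
physical = sum(len(value) for value in store.values())
print(f"logical {logical} bytes -> physical {physical} bytes "
      f"({len(store)} unique blocks)")
```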

HyperFlex Cleaner


When existing data gets updated, it is appended to the log, and the metadata tree also gets updated. When this happens, the original blocks of data are left behind with no references, and they need to be garbage collected. Likewise, when a file is deleted, the segments that make up the file are still persisted on the disk as dead keys until the cleaner runs and compacts the segments to free up space. Figure 2-4 provides a logical representation of the data blocks when they get updated or deleted.

Figure 2-4 HyperFlex Updated and Deleted Data

Cluster Resource Manager (CRM)

Cluster Resource Manager (CRM) has many responsibilities as it coordinates functions across a cluster. It is responsible for understanding all resources available in the cluster. In addition, CRM is responsible for the timely detection of failures and conducting the appropriate remediation. CRM maintains a Zookeeper (ZK) database that is responsible for mapping data to the mirror set.

Cleaner removes stale data and releases storage. It typically runs continuously in the background. Cleaner goes into sleep mode when it is not needed and wakes when policy-defined conditions are met.

A cluster shutdown stops storage cluster operations and powers down the storage cluster. A shutdown cluster has all storage cluster processes, including the working VMs, powered down. This does not include powering down the nodes in the cluster or shutting down the vCenter or FI cluster. The Cisco HX cluster shutdown procedure does not shut down the ESXi hosts.

Rebalancing occurs when a CRM event is triggered, such as when resources are added (either a new node, a new disk, or both), resources are deleted (retiring a node or a failure condition), monitoring timers expire, or users initiate requests.

Zookeeper

Zookeeper is a service for maintaining configuration information, naming information, distributed synchronization, and group services for distributed applications within a cluster. It allows distributed processes to coordinate with each other via a shared hierarchical namespace, organized much like a file system, where each node in the tree is known as a znode. Zookeeper also provides high availability and reliability, with no single point of failure, for large distributed environments. It is replicated over a number of nodes called an ensemble. The nodes in the ensemble must know about each other in order to maintain an image of state and transactional logs.
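To make the znode hierarchy concrete, the following minimal sketch uses the kazoo Python client against a generic ZooKeeper ensemble; the connection string and paths are placeholders, and a production HyperFlex cluster does not expect direct ZooKeeper manipulation.

```python
# Minimal sketch using the kazoo client (pip install kazoo) against a generic
# ZooKeeper ensemble; connection string and znode paths are placeholders.
from kazoo.client import KazooClient

zk = KazooClient(hosts="192.0.2.21:2181,192.0.2.22:2181,192.0.2.23:2181")
zk.start()

# Create a znode and read it back, much like working with a small file tree.
zk.ensure_path("/demo/config")
zk.set("/demo/config", b"replication_factor=3")
print(zk.get_children("/demo"))           # ['config']
value, stat = zk.get("/demo/config")
print(value, stat.version)

zk.stop()
```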

HYPERFLEX HX-SERIES NODE OVERVIEW

The Cisco HyperFlex solution is composed of three different types of nodes:
- HyperFlex hybrid nodes: Hybrid converged nodes use a combination of SSDs for the short-term storage caching layer and HDDs for capacity.
- HyperFlex all flash nodes: An all flash converged node uses only SSDs.
- HyperFlex edge for remote and branch offices: Hybrid or all flash converged nodes based on the HX220c.


Figure 2-5 illustrates the different HyperFlex node models.

Figure 2-5 Cisco HyperFlex Models

HX220 M5/M4 Series

The HX220c M5/M4 servers extend the capabilities of the Cisco HyperFlex portfolio in a 1U form factor with the addition of the Intel Xeon Scalable processor family and 24 DIMM slots.

HXAF220c-M5SX All Flash

HXAF220c-M5SX All Flash is a small-footprint Cisco HyperFlex All Flash model that contains a 240 GB M.2 form factor SSD that acts as the boot drive, a 240 GB housekeeping SSD, either a single 375 GB Optane NVMe SSD or a 1.6 TB NVMe SSD or a 400 GB SAS SSD write-log drive, and six to eight 960 GB or 3.8 TB SATA SSDs for storage capacity. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are also replaced with either 800 GB, 960 GB, or 3.8 TB SED SSDs. Figure 2-6 shows the front view of an HXAF220c-M5SX All Flash node.

Figure 2-6 HXAF220c-M5SX All Flash

HX220c-M5SX Hybrid

HX220c-M5SX Hybrid is a small-footprint Cisco HyperFlex hybrid model that contains a minimum of six and up to eight 1.8 TB or 1.2 TB SAS HDDs that contribute to cluster storage capacity, a 240 GB SSD housekeeping drive, a 480 GB or 800 GB SSD caching drive, and a 240 GB M.2 form factor SSD that acts as the boot drive. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are replaced with 1.2 TB SAS SED HDDs. Figure 2-7 shows the front view of an HX220c-M5SX Hybrid node.


Figure 2-7 HX220c-M5SX Hybrid

HXAF220c-M4S All Flash

HXAF220c-M4S All Flash is a small-footprint Cisco HyperFlex all flash model that contains two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as the boot drives, a single 120 GB or 240 GB SSD data-logging drive, a single 400 GB NVMe or a 400 GB or 800 GB SAS SSD write-log drive, and six 960 GB or 3.8 TB SATA SSDs for storage capacity. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are also replaced with either 800 GB, 960 GB, or 3.8 TB SED SSDs. Figure 2-8 shows the front view of an HXAF220c-M4S All Flash node.

Figure 2-8 HXAF220c-M4S All Flash

Cisco HyperFlex HX220c-M4S Hybrid Node

Cisco HyperFlex HX220c-M4S Hybrid Node is a small-footprint Cisco HyperFlex hybrid model that contains six 1.8 TB or 1.2 TB SAS HDDs that contribute to cluster storage capacity, a 120 GB or 240 GB SSD housekeeping drive, a 480 GB SAS SSD caching drive, and two Cisco Flexible Flash (FlexFlash) SD cards that act as boot drives. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are replaced with 1.2 TB SAS SED HDDs. Figure 2-9 shows the front view of an HX220c-M4S Hybrid node.

Figure 2-9 HX220c-M4S Hybrid

HX240 M5/M4 Series

The HX240c M5/M4 servers extend the capabilities of Cisco's HyperFlex portfolio in a 2U form factor with the addition of the Intel Xeon Scalable processor family and 24 DIMM slots.

HXAF240c-M5SX All Flash

HXAF240c-M5SX All Flash is a capacity-optimized Cisco HyperFlex all flash model that contains a 240 GB M.2 form factor SSD that acts as the boot drive, a 240 GB housekeeping SSD, either a single 375 GB Optane NVMe SSD or a 1.6 TB NVMe SSD or a 400 GB SAS SSD write-log drive installed in a rear hot-swappable slot, and 6 to 23 960 GB or 3.8 TB SATA SSDs for storage capacity. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are also replaced with either 800 GB, 960 GB, or 3.8 TB SED SSDs. Figure 2-10 shows the front view of an HXAF240c-M5SX All Flash node.

Figure 2-10 HXAF240c-M5SX All Flash

HX240c-M5SX Hybrid

HX240c-M5SX Hybrid is a capacity-optimized Cisco HyperFlex hybrid model that contains a minimum of 6 and up to 23 1.8 TB or 1.2 TB SAS small form factor (SFF) HDDs that contribute to cluster storage, a 240 GB SSD housekeeping drive, a single 1.6 TB SSD caching drive installed in a rear hot-swappable slot, and a 240 GB M.2 form factor SSD that acts as the boot drive. For configurations requiring self-encrypting drives, the caching SSD is replaced with a 1.6 TB SAS SED SSD, and the capacity disks are replaced with 1.2 TB SAS SED HDDs.


Figure 2-11 shows the front view of an HX240c-M5SX Hybrid node.

Figure 2-11 HX240c-M5SX Hybrid

HX240c-M5L Hybrid

HX240c-M5L Hybrid is a density-optimized Cisco HyperFlex hybrid model that contains a minimum of 6 and up to 12 large form factor (LFF) 6 TB or 8 TB SAS HDDs that contribute to cluster storage, a 240 GB SSD housekeeping drive and a single 3.2 TB SSD caching drive (both installed in the rear hot-swappable slots), and a 240 GB M.2 form factor SSD that acts as the boot drive. Large form factor nodes cannot be configured with self-encrypting disks and are limited to a maximum of eight nodes in a cluster in the initial release of HyperFlex 3.0. Figure 2-12 shows the front view of an HX240c-M5L Hybrid node.


Figure 2-12 HX240c-M5L Hybrid

HXAF240c-M4SX All Flash

HXAF240c-M4SX All Flash is a capacity-optimized Cisco HyperFlex All Flash model that contains two FlexFlash SD cards that act as boot drives, a single 120 GB or 240 GB SSD data-logging drive, a single 400 GB NVMe or a 400 GB or 800 GB SAS SSD write-log drive, and 6 to 23 960 GB or 3.8 TB SATA SSDs for storage capacity. For configurations requiring self-encrypting drives, the caching SSD is replaced with an 800 GB SAS SED SSD, and the capacity disks are also replaced with either 800 GB, 960 GB, or 3.8 TB SED SSDs. Figure 2-13 shows the front view of an HXAF240c-M4SX All Flash node.

Figure 2-13 HXAF240c-M4SX All Flash

HX240c-M4SX Hybrid

HX240c-M4SX Hybrid is a capacity-optimized Cisco HyperFlex hybrid model that contains a minimum of 6 and up to 23 1.8 TB or 1.2 TB SAS HDDs that contribute to cluster storage, a single 120 GB or 240 GB SSD housekeeping drive, a single 1.6 TB SAS SSD caching drive, and two FlexFlash SD cards that act as the boot drives. For configurations requiring self-encrypting drives, the caching SSD is replaced with a 1.6 TB SAS SED SSD, and the capacity disks are replaced with 1.2 TB SAS SED HDDs. Figure 2-14 shows the front view of an HX240c-M4SX Hybrid node.

Figure 2-14 HX240c-M4SX Hybrid

HXAF220c-M5N All-NVMe

HXAF220c-M5N All-NVMe is a small-footprint Cisco HyperFlex all-NVMe model that contains a 240 GB M.2 form factor SSD that acts as the boot drive, a 1 TB housekeeping NVMe SSD, a single 375 GB Intel Optane NVMe SSD write-log drive, and six to eight 1 TB or 4 TB NVMe SSDs for storage capacity. Optionally, the Cisco HyperFlex Acceleration Engine card can be added to improve write performance and compression. Self-encrypting drives are not available as an option for the all-NVMe nodes.


Figure 2-15 shows the front view of an HXAF220c-M5N All-NVMe node.

Figure 2-15 HXAF220c-M5N All-NVMe

All Flash, Hybrid, and All-NVMe

Hybrid converged nodes use a combination of SSDs for the short-term storage caching layer and HDDs for the long-term storage capacity layer. All Flash nodes make use of only SSDs, and All-NVMe nodes use only Non-Volatile Memory Express (NVMe) drives.

Cisco HyperFlex Compute-Only Nodes

All current model Cisco UCS M4 and M5 generation servers, except the C880 M4 and C880 M5, may be used as compute-only nodes connected to a Cisco HyperFlex cluster, along with a limited number of previous M3 generation servers. Any valid CPU and memory configuration is allowed in the compute-only nodes, and the servers can be configured to boot from SANs, local disks, or internal SD cards. The following servers may be used as compute-only nodes:

Cisco UCS B200 M3 blade servers
Cisco UCS B200 M4 blade servers
Cisco UCS B200 M5 blade servers
Cisco UCS B260 M4 blade servers
Cisco UCS B420 M4 blade servers
Cisco UCS B460 M4 blade servers
Cisco UCS B480 M5 blade servers
Cisco UCS C220 M3 rack-mount servers
Cisco UCS C220 M4 rack-mount servers
Cisco UCS C220 M5 rack-mount servers
Cisco UCS C240 M3 rack-mount servers
Cisco UCS C240 M4 rack-mount servers
Cisco UCS C240 M5 rack-mount servers
Cisco UCS C460 M4 rack-mount servers
Cisco UCS C480 M5 rack-mount servers

PHYSICAL INSTALLATION

The servers connect to the fabric interconnects via the Cisco VIC 1227 mLOM card, the Cisco VIC 1387 mLOM card, or the Cisco VIC 1457 mLOM card. The VIC 1227 card is used in conjunction with the Cisco UCS 6248UP or 6296UP model fabric interconnects, the VIC 1387 is used in conjunction with the Cisco UCS 6332 or 6332-16UP model fabric interconnects, and the Cisco UCS VIC 1457 is used in conjunction with the Cisco UCS 6454 model fabric interconnect. The Cisco UCS Virtual Interface Card (VIC) 1227 is a


dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10 Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-motherboard (mLOM) adapter installed in the Cisco UCS HX-Series rack servers. Figure 2-16 shows a Cisco VIC 1227 mLOM card.

Figure 2-16 Cisco VIC 1227 mLOM Card

The Cisco UCS VIC 1387 card is a dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP+) 40 Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) mLOM adapter installed in the Cisco UCS HX-Series rack servers. Figure 2-17 shows a Cisco VIC 1387 mLOM card.


Figure 2-17 Cisco VIC 1387 mLOM Card

The Cisco UCS VIC 1457 is a quad-port Small Form-Factor Pluggable (SFP28) mLOM card designed for the M5 generation of Cisco UCS C-Series rack servers. The card supports 10/25 Gbps Ethernet or FCoE. The card can present PCIe standards-compliant interfaces to the host, and these can be dynamically configured as either NICs or HBAs. Figure 2-18 shows a Cisco VIC 1457 mLOM card.


Figure 2-18 Cisco VIC 1457 mLOM Card

Supported Media

Table 2-2 through Table 2-9 list the Small Form-Factor Pluggable (SFP) transceivers, Quad Small Form-Factor Pluggable (QSFP) transceivers, cables with integrated transceivers, and cable specifications that are supported by the Cisco VIC 1227 mLOM card, Cisco VIC 1387 mLOM card, and Cisco VIC 1457 mLOM card.

Table 2-2 Cisco VIC 1227 mLOM Card SFP and Twinax Support

Connector (Media) | Cable Type
SFP-10G-USR | Ultra-short-range MMF
SFP-10G-SR | Short-range MMF
SFP-10G-LR | Long-range SMF
10GBASE-CU | SFP+ cable, 1 meter
10GBASE-CU | SFP+ cable, 3 meter
10GBASE-CU | SFP+ cable, 5 meter
10GBASE-CU | SFP+ cable, 7 meter
SFP-10GB-ACU | Cable, 7 meter

Table 2-3 Cisco VIC 1227 mLOM Card Cable Specifications

Connector (Media) | Cable | Distance | Power (Each Side) | Transceiver Latency (Link) | Standard
SFP+ CU copper | Twinax | 5 meters | Approximately 0.1 W | Approximately 0.1 microseconds | SFF 8431
SFP+ ACU copper | Active Twinax | 7 meters/10 meters | Approximately 0.5 W | Approximately 6.8 nanoseconds | SFF 8461
SFP+ SR and MMF | MM OM2, MM OM3 | 82 meters/300 meters | 1 W | Approximately 0 microseconds | IEEE 802.3ae

Table 2-4 Cisco VIC 1387 QSFP and QSA Support

Connector (Media) | Cable Type
QSFP-40G-SR4 | 40GBASE-SR4 QSFP+ transceiver module for MMF, 4 lanes, 850 nm wavelength, 12-fiber MPO/MTP connector
QSFP-40G-SR4-S | 40GBASE-SR4 QSFP+ transceiver module for MMF, 4 lanes, 850 nm wavelength, 12-fiber MPO/MTP connector, S-class
QSFP-40G-CSR4 | 40GBASE-CSR4 QSFP+ transceiver module for MMF, 4 lanes, 850 nm wavelength, 12-fiber MPO/MTP connector
QSFP-40G-SR-BD | 40 Gbps QSFP BiDi optical transceiver
QSFP-40G-LR4 | 40GBASE-LR4 QSFP+ transceiver module for SMF, duplex LC, 10-kilometer reach
QSFP-40G-LR4-S | 40GBASE-LR4 QSFP+ transceiver module for SMF, duplex LC, 10-kilometer reach, S-class
QSFP-4SFP10G-CU1M | 40GBASE-CR4 QSFP+ to 4 10GBASE-CU SFP+ direct attach breakout cable assembly, 1 meter passive
QSFP-4SFP10G-CU3M | 40GBASE-CR4 QSFP+ to 4 10GBASE-CU SFP+ direct attach breakout cable assembly, 3 meter passive
QSFP-H40G-CU1M | Direct attach copper, 1 meter
QSFP-H40G-CU3M | Direct attach copper, 3 meter
QSFP-H40G-CU5M | Direct attach copper, 5 meter
QSFP-H40G-ACU7M | 40GBASE-CR4 QSFP+ direct attach copper cable, 7 meter active
QSFP-H40G-ACU10M | 40GBASE-CR4 QSFP+ direct attach copper cable, 10 meter active
QSFP-4X10G-AC7M | 40GBASE-CR4 QSFP+ to 4 10GBASE-CU SFP+ direct attach breakout cable assembly, 7 meter active
QSFP-4X10G-AC10M | 40GBASE-CR4 QSFP+ to 4 10GBASE-CU SFP+ direct attach breakout cable assembly, 10 meter active
QSFP-H40G-AOC1M | 40 Gbps QSFP active optical cable, 1 meter
QSFP-H40G-AOC2M | 40 Gbps QSFP active optical cable, 2 meter
QSFP-H40G-AOC3M | 40 Gbps QSFP active optical cable, 3 meter
QSFP-H40G-AOC5M | 40 Gbps QSFP active optical cable, 5 meter
QSFP-H40G-AOC7M | 40 Gbps QSFP active optical cable, 7 meter
QSFP-H40G-AOC10M | 40 Gbps QSFP active optical cable, 10 meter
QSFP-4X10G-AOC1M | QSFP to 4 SFP 10 Gbps active optical cable, 1 meter
QSFP-4X10G-AOC2M | QSFP to 4 SFP 10 Gbps active optical cable, 2 meter
QSFP-4X10G-AOC3M | QSFP to 4 SFP 10 Gbps active optical cable, 3 meter
QSFP-4X10G-AOC5M | QSFP to 4 SFP 10 Gbps active optical cable, 5 meter
QSFP-4X10G-AOC7M | QSFP to 4 SFP 10 Gbps active optical cable, 7 meter
QSFP-4X10G-AOC10M | QSFP to 4 SFP 10 Gbps active optical cable, 10 meter
CVR-QSFP-SFP10G | QSFP to SFP+ adapter (QSA)

Table 2-5 Cisco VIC 1387 mLOM SFP and Twinax Support with QSA

Connector (Media) | Cable Type
SFP-10G-USR | Ultra-short-range MMF
SFP-10G-SR | Short-range MMF
SFP-10G-LR | Long-range SMF
10GBASE-CU | SFP+ cable, 1 meter
10GBASE-CU | SFP+ cable, 3 meter
10GBASE-CU | SFP+ cable, 5 meter
10GBASE-CU | SFP+ cable, 7 meter
SFP-10GB-ACU | Cable, 7 meter

Table 2-6 40 Gigabit Ethernet Cabling Specifications

Connector (Media) | Cable | Distance | Power (Each Side) | Transceiver Latency (Link) | Standard
SFP+ copper (CU) | Twinax | 1, 3, and 5 meter | Approximately 0.1 W | Approximately 0.1 microsecond | SFF 8431
SFP+ ACU copper | Active Twinax | 7 meter | Approximately 0.5 W | Approximately 0.1 microsecond | SFF 8461
SFP+ FET | MM OM2, MM OM3, MM OM4 | 25 and 100 meter | 1 W | Approximately 0 microseconds | IEEE 802.3ae
SFP+ short reach (SR) and MMF | MM OM2, MM OM3, MM OM4 | 82 and 300 meter | 1 W | Approximately 0 microseconds | IEEE 802.3ae
SFP+ long reach (LR) | SMF | 10 kilometer | 1 W | Approximately 0 microseconds | IEEE 802.3ae
SFP+ long reach (ER) | SMF | 40 kilometer | 1.5 W | Approximately 0 microseconds | IEEE 802.3ae
SFP+ long reach (ZR) | SMF | 80 kilometer | 1.5 W | Approximately 0 microseconds | IEEE 802.3ae

Table 2-7 Cisco VIC 1457 SFP Support: SFP+ 10 Gbps Transceivers

Connector (Media) | Cable Type
SFP-10G-SR | 10GBASE-SR, 850 nm, MMF, 300 meter
SFP-10G-SR-S | 10GBASE-SR, 850 nm, MMF, 300 meter, S-class
SFP-10G-LR | 10GBASE-LR, 1310 nm, SMF, 10 kilometer
SFP-10G-LR-S | 10GBASE-LR, 1310 nm, SMF, 10 kilometer, S-class

Table 2-8 Cisco VIC 1457 SFP28 Support: SFP28 25 Gbps Transceivers

Connector (Media) | Cable Type
SFP-25G-SR-S | 25GBASE SFP+, 850 nm, MMF, 400 meter, S-class
SFP-10/25G-CSR-S | 10/25GBASE-CSR SFP+, 850 nm, MMF, 400 meter, S-class

Table 2-9 Cisco VIC 1457 Twinax Support

Connector (Media) | Cable Type
SFP-H10GB-CU1M | 10GBASE-CU SFP+ direct attach copper cable, 1 meter
SFP-H10GB-CU1-5M | 10GBASE-CU SFP+ direct attach copper cable, 1.5 meter
SFP-H10GB-CU2M | 10GBASE-CU SFP+ direct attach copper cable, 2 meter
SFP-H10GB-CU2-5M | 10GBASE-CU SFP+ direct attach copper cable, 2.5 meter
SFP-H10GB-CU3M | 10GBASE-CU SFP+ direct attach copper cable, 3 meter
SFP-H10GB-CU5M | 10GBASE-CU SFP+ direct attach copper cable, 5 meter
SFP-H10GB-ACU7M | 10GBASE-CU SFP+ direct attach active copper cable, 7 meter
SFP-H10GB-ACU10M | 10GBASE-CU SFP+ direct attach active copper cable, 10 meter
SFP-10G-AOC1M | 10GBASE active optical SFP+ cable, 1 meter
SFP-10G-AOC2M | 10GBASE active optical SFP+ cable, 2 meter
SFP-10G-AOC3M | 10GBASE active optical SFP+ cable, 3 meter
SFP-10G-AOC5M | 10GBASE active optical SFP+ cable, 5 meter
SFP-10G-AOC7M | 10GBASE active optical SFP+ cable, 7 meter
SFP-10G-AOC10M | 10GBASE active optical SFP+ cable, 10 meter
SFP-H25G-CU1M | 25GBASE-CU SFP28 direct attach copper cable, 1 meter
SFP-H25G-CU2M | 25GBASE-CU SFP28 direct attach copper cable, 2 meter
SFP-H25G-CU3M | 25GBASE-CU SFP28 direct attach copper cable, 3 meter
SFP-25G-AOC1M | 25GBASE-AOC SFP28 active optical cable, 1 meter
SFP-25G-AOC2M | 25GBASE-AOC SFP28 active optical cable, 2 meter
SFP-25G-AOC3M | 25GBASE-AOC SFP28 active optical cable, 3 meter
SFP-25G-AOC5M | 25GBASE-AOC SFP28 active optical cable, 5 meter
SFP-25G-AOC7M | 25GBASE-AOC SFP28 active optical cable, 7 meter
SFP-25G-AOC10M | 25GBASE-AOC SFP28 active optical cable, 10 meter

PHYSICAL TOPOLOGY

This section describes the supported topologies in which the HyperFlex HX Data Platform can be deployed. These topologies include the following:

HyperFlex standard cluster
HyperFlex stretch cluster
HyperFlex edge cluster

HyperFlex Standard Cluster/HyperFlex Extended Cluster Topology

The Cisco HyperFlex system is composed of a pair of Cisco UCS fabric interconnects along with up to 32 HX-Series rack-mount servers per cluster. Up to 32 compute-only servers can also be added per HyperFlex cluster. Adding Cisco UCS rack-mount servers and/or Cisco UCS 5108 blade chassis, which house Cisco UCS blade servers, allows for additional compute resources in an extended cluster design. Up to 8 separate HX clusters can be installed under a single pair of fabric interconnects. The two fabric interconnects both connect to every HX-Series rack-mount server, and both connect to every Cisco UCS 5108 blade chassis and Cisco UCS rack-mount server. Upstream network connections, also referred to as northbound network connections, are made from the fabric interconnects to the customer data center network at the time of installation. Figure 2-19 illustrates the topology for a standard HyperFlex cluster.

Figure 2-19 HyperFlex Standard Cluster Topology Figure 2-20 illustrates the topology for an extended HyperFlex cluster.


Figure 2-20 HyperFlex Extended Cluster Topology

Fabric Interconnect Connectivity Fabric interconnects (FIs) are deployed in pairs, and the two units operate as a management cluster while forming two separate network fabrics, referred to as the A side and B side fabrics. Therefore, many design elements refer to FI A or FI B, alternatively called fabric A or fabric B. Both fabric interconnects are active at all times, passing data on both network fabrics for a


redundant and highly available configuration. Management services, including Cisco UCS Manager, are also provided by the two FIs, but in a clustered manner, where one FI is the primary and one is secondary, with a roaming clustered IP address. This primary/secondary relationship is only for the management cluster and has no effect on data transmission.

Fabric interconnects have the following ports, which must be connected for proper management of the Cisco UCS domain:

Mgmt: A 10/100/1000 Mbps port for managing the fabric interconnect and the Cisco UCS domain via GUI and CLI tools. This port is also used by remote KVM, IPMI, and SoL sessions to the managed servers within the domain. This is typically connected to the customer management network.

L1: A cross-connect port for forming the Cisco UCS management cluster. This port is connected directly to the L1 port of the paired fabric interconnect using a standard CAT5 or CAT6 Ethernet cable with RJ-45 plugs. It is not necessary to connect this to a switch or hub.

L2: A cross-connect port for forming the Cisco UCS management cluster. This port is connected directly to the L2 port of the paired fabric interconnect using a standard CAT5 or CAT6 Ethernet cable with RJ-45 plugs. It is not necessary to connect this to a switch or hub.

Console: An RJ-45 serial port for direct console access to the fabric interconnect. This port is typically used during the initial FI setup process with the included serial-to-RJ-45 adapter cable. It can also be plugged into a terminal aggregator or remote console server device.

HX-Series Rack-Mount Servers Connectivity

The HX-Series converged servers are connected directly to the Cisco UCS fabric interconnects in Direct Connect mode. This option enables Cisco UCS Manager to manage the HX-Series rack-mount servers using a single cable for both management traffic and data traffic. All the Cisco HyperFlex M4 generation servers are configured with the Cisco VIC 1227 or Cisco VIC 1387 network interface card (NIC) installed in the mLOM slot, which has dual 10 Gigabit Ethernet or 40 Gigabit Ethernet ports. Cisco HyperFlex M5 generation servers can be configured only with the Cisco VIC 1387 card or the Cisco VIC 1457. The standard and redundant connection practice is to connect port 1 of the VIC (the right-hand port) to a port on FI A and port 2 of the VIC (the left-hand port) to a port on FI B. An optional configuration method for servers containing the Cisco VIC 1457 card is to cable the servers with two links to each FI, using ports 1 and 2 to FI A and ports 3 and 4 to FI B. The HyperFlex installer checks for this configuration and ensures that all servers' cabling matches. Failure to follow this cabling practice can lead to errors, discovery failures, and loss of redundant connectivity.


All nodes within a Cisco HyperFlex cluster must be connected at the same communication speed; for example, mixing 10 Gb with 25 Gb interfaces is not allowed. In addition, for clusters that contain only M5 generation nodes, all the nodes within a cluster must contain the same model of Cisco VIC cards. Various combinations of physical connectivity between the Cisco HX-Series servers and the fabric interconnects are possible, but only specific combinations are supported. For example, use of the Cisco QSA module to convert a 40 Gigabit Ethernet QSFP+ port into a 10 Gigabit Ethernet SFP+ port is not allowed with M4 generation servers but is allowed with M5 generation servers in order to configure a mixed cluster of M4 and M5 generation servers along with model 6248 or 6296 fabric interconnects. Table 2-10 lists the possible connections and indicates which of these methods are supported.

Table 2-10 Supported Physical Connectivity

Fabric Interconnect Model

6248

6296

6332

6332-16UP

6454


Port Type

10 Gigabit Ethernet

10 Gigabit Ethernet

40 Gigabit Ethernet

10 Gigabit Ethernet Breakout

40 Gigabit Ethernet

10 Gigabit Ethernet Breakout

10 Gigabit Ethernet Onboard

10 Gigabit Ethernet

25 Gigabit Ethernet

M4 with VIC 1227



















M4 with VIC 1387



















M4 with VIC 1387 + QSA



















M5 with VIC 1387



















M5 with VIC 1387 + QSA




















M5 with VIC 1457 or 1455



















Figure 2-21 illustrates the connectivity of an HX-Series server.

Figure 2-21 HX-Series Node Connectivity
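The homogeneity rules stated before Table 2-10 (one link speed across the cluster and, for M5-only clusters, one VIC model) can be sanity-checked before cabling begins. The following is a minimal Python sketch; the node inventory is a hand-built, hypothetical example rather than data pulled from UCS Manager.

# Minimal pre-install sanity check for HX cluster homogeneity.
# The inventory list below is a hypothetical example, not live data.
nodes = [
    {"name": "hx-node-1", "generation": "M5", "vic": "1387", "link_speed_gbps": 40},
    {"name": "hx-node-2", "generation": "M5", "vic": "1387", "link_speed_gbps": 40},
    {"name": "hx-node-3", "generation": "M5", "vic": "1387", "link_speed_gbps": 40},
]

speeds = {n["link_speed_gbps"] for n in nodes}
if len(speeds) != 1:
    raise ValueError(f"Mixed link speeds are not allowed: {sorted(speeds)}")

if all(n["generation"] == "M5" for n in nodes):
    vics = {n["vic"] for n in nodes}
    if len(vics) != 1:
        raise ValueError(f"M5-only clusters must use one VIC model: {sorted(vics)}")

print("Inventory passes the basic homogeneity checks.")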

Cisco UCS B-Series Blade Servers Connectivity

HyperFlex extended clusters also incorporate 1 to 16 Cisco UCS blade servers for additional compute capacity. A blade chassis comes populated with 1 to 4 power supplies and 8 modular cooling fans. In the rear of the chassis are two bays for installation of Cisco fabric extenders. The fabric extenders (also commonly called IO modules, or IOMs) connect the chassis to the fabric interconnects. Internally, the fabric extenders connect to the Cisco VIC installed in each blade server across the chassis backplane. The standard practice is to connect one to eight 10 Gigabit Ethernet links, or one to four 40 Gigabit Ethernet links (depending on the IOMs and FIs purchased), from the left-side IOM, or IOM 1, to FI A, and to connect the same number of 10 Gigabit Ethernet or 40 Gigabit Ethernet links from the right-side IOM, or IOM 2, to FI B. All other cabling configurations are invalid and can lead to errors, discovery failures, and loss of redundant connectivity. Figure 2-22 illustrates the connectivity for the Cisco UCS 5108 chassis.


Figure 2-22 HyperFlex Cisco UCS 5108 Chassis Connectivity

HyperFlex Stretch Cluster Topology

A stretch cluster is geographically redundant, meaning that part of the cluster resides in one physical location and another part resides in a second location. The cluster also requires a "tie breaker" or "witness" component, which should reside in a third, separate location. The goal of this design is to help ensure that the virtual infrastructure remains available even in the event of the complete loss of one site. Figure 2-23 shows a logical representation of the components that are part of a HyperFlex stretch cluster.

Figure 2-23 Components of a Stretch Cluster

Characteristics of a stretch cluster include the following:

A stretch cluster is a single cluster with nodes geographically distributed at different locations.

Storage is mirrored locally and across each site (but not to the tiebreaker witness).

Sites need to be connected over a low-latency network to meet the write requirements for applications and to provide a good end-user experience.

Geographic failover (using a virtual machine) is like failover in a regular cluster.

Node failure in a site is like node failure in a regular cluster.

Split brain is a condition in which nodes at either site cannot see each other. This condition can lead to problems if a node quorum cannot be determined (so that virtual machines know where to run). Split brain is caused by network failure or site failure.

Stretch clusters have a witness: an entity hosted on a third site that is responsible for deciding which site becomes primary after a split-brain condition.

A stretch cluster does have some physical limitations to keep in mind. Some applications, specifically databases, require write latency of less than 20 milliseconds (ms). Many other applications require latency of less than 10 ms to avoid problems with the application. To meet these requirements, the round-trip time (RTT) network latency on the stretch link between sites in a stretch cluster should be less than 5 ms. The speed of light (approximately 3 × 10^8 m/s in a vacuum, and roughly two-thirds of that in optical fiber) at the maximum recommended stretch cluster site distance of 100 km (approximately 62 miles) introduces about 1 ms of latency by itself. In addition, time is needed for the code path and link hops (from node to fabric interconnect to switch), which also plays a role in determining the maximum recommended site-to-site distance. Figure 2-24 shows a logical representation of a HyperFlex stretch cluster.


Figure 2-24 Stretch Cluster Network

Figure 2-25 illustrates the topology for a HyperFlex stretch cluster.

Figure 2-25 HyperFlex Stretch Cluster Topology
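As a back-of-the-envelope check on the latency budget discussed above, a short calculation shows why 100 km is the practical ceiling for a stretch link. This is a rough sketch: the fiber propagation speed used here is a common assumption (about two-thirds of the speed of light), not a value taken from the HyperFlex documentation.

# Rough propagation-delay estimate for a stretch cluster link.
# Assumes light travels at roughly 2/3 of c in single-mode fiber (assumption).
FIBER_PROPAGATION_SPEED = 2.0e8    # m/s

def stretch_link_rtt_ms(distance_km: float) -> float:
    """Return the round-trip propagation delay in milliseconds."""
    one_way_s = (distance_km * 1000) / FIBER_PROPAGATION_SPEED
    return 2 * one_way_s * 1000

for km in (25, 50, 100):
    print(f"{km:>3} km -> ~{stretch_link_rtt_ms(km):.2f} ms RTT (propagation only)")

# At 100 km the fiber alone consumes roughly 1 ms of the 5 ms RTT budget,
# before any switch, fabric interconnect, or software forwarding delay is added.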


HyperFlex Edge Cluster Topology

A HyperFlex edge cluster is composed of two, three, or four Cisco UCS HX-Series converged nodes (with disk storage). Cluster node expansion is not supported but is planned for a future software release with 10 Gigabit Ethernet topologies. Each node is equipped with at least one high-performance SSD for data caching and rapid acknowledgment of write requests. Each node is also equipped with additional disks, up to the platform's physical limit, for long-term storage capacity. The following are the components of a Cisco HyperFlex Edge system:

Cisco HyperFlex HX-Series rack-mount servers, including the following models:

Cisco HyperFlex HX220c-M5SX rack-mount servers (2-, 3-, or 4-node configuration)
Cisco HyperFlex HXAF220c-M5SX All Flash rack-mount servers (2-, 3-, or 4-node configuration)
Cisco HyperFlex HX220c-M4S rack-mount servers (3-node configuration only)
Cisco HyperFlex HXAF220c-M4S All Flash rack-mount servers (3-node configuration only)

Cisco HyperFlex Data Platform software

VMware vSphere ESXi hypervisor

VMware vCenter Server (end user supplied)


In this solution, Cisco Intersight is the management platform that performs the deployment and administration of the HyperFlex Edge systems across multiple sites. A Cisco HyperFlex edge cluster is built using Cisco HX-Series rack-mount servers that are not connected to Cisco UCS fabric interconnects. Upstream network connections, also referred to as northbound network connections, are made directly from the servers to the customer-chosen data center top-of-rack (ToR) switches at the time of installation. Figure 2-26 shows a logical topology for a HyperFlex edge cluster.


Figure 2-26 HyperFlex Edge Cluster Logical Topology

The following section describes the physical connectivity of the 3-node cluster. Chapter 7, "Deploying HyperFlex Anywhere with Cisco Intersight," reviews the 2-node edge cluster.

Edge Network Connectivity

The Cisco HyperFlex Edge product supports three networking topologies: single Gigabit Ethernet switch, dual Gigabit Ethernet switch, and 10 Gigabit Ethernet switch (either single or dual) configurations, depending on requirements and the available switching hardware. This section describes specific requirements for each topology, as well as common network requirements that apply to all three topologies.

Single-Switch Configuration A single-switch configuration provides a simple topology requiring only a single switch and two Gigabit Ethernet ports per server. Link or switch redundancy is not provided. Access ports and trunk ports are the two supported network port configurations. Figure 2-27 illustrates the logical network topology of an edge cluster with a single upstream switch.


Figure 2-27 Edge Single-Switch Network Connectivity

The upstream network requirements are as follows:

A managed switch with VLAN capability

Six physical Gigabit Ethernet ports for three HyperFlex nodes

(Optional) Jumbo frames

PortFast or PortFast trunk configured on each port to ensure uninterrupted access

Two Gigabit Ethernet ports are required per server:

Port 1: For management (ESXi and Cisco Integrated Management Controller [CIMC]), vMotion traffic, and VM guest traffic

Port 2: For HyperFlex storage traffic

There are two supported network port configurations: access ports or trunk ports. Spanning tree PortFast (access ports) or PortFast trunk (trunk ports) must be enabled for all network ports connected to HyperFlex servers. Failure to configure PortFast causes intermittent CIMC disconnects during ESXi bootup and longer-than-necessary network reconvergence during physical link failure. Figure 2-28 illustrates the physical topology for a single switch from a single edge node.


Figure 2-28 Edge Single-Switch Physical Topology

Some things to consider with this topology are as follows:

Cable both integrated LOM ports to the same ToR switch.

If desired, cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

Do not use the 10 Gigabit Ethernet ports on the VIC.

Ethernet interfaces can be configured either as access ports or trunk ports, as follows:

An access port can have only one VLAN configured on the interface; it can carry traffic for only one VLAN.

A trunk port can have one or more VLANs configured on the interface; it can carry traffic for several VLANs simultaneously.

Table 2-11 summarizes the differences between access ports and trunk ports. You can use the details described in this table to determine which ports to use for a deployment; a configuration sketch follows the table.

Table 2-11 Trunk and Access Port Definitions

Trunk Ports | Access Ports
Requires more setup and definition of VLAN tags within CIMC, ESXi, and the HX Data Platform installer. | Provides a simpler deployment process than trunk ports.
Provides the ability to logically separate management, vMotion, and VM guest traffic on separate subnets. | Requires that management, vMotion, and VM guest traffic share a single subnet.
Provides flexibility to bring in additional L2 networks to ESXi. | Requires a managed switch to configure ports 1 and 2 on discrete VLANs; storage traffic must use a dedicated VLAN, with no exceptions.
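To make the trunk-versus-access distinction and the PortFast requirement concrete, the following sketch pushes a HyperFlex-Edge-style trunk-port configuration to an IOS-based upstream switch. It assumes the third-party netmiko library, an SSH-reachable Catalyst-class switch, and placeholder interface names, VLAN IDs, and credentials; it is an illustration, not the installer's own workflow, so adapt it to the switch platform actually in use.

# Sketch: configure two upstream trunk ports for a HyperFlex Edge node.
# Interface names, VLAN IDs, address, and credentials are placeholders.
from netmiko import ConnectHandler

TRUNK_PORT_CONFIG = [
    "interface GigabitEthernet1/0/1",
    " description HX-Edge node port 1 (mgmt/vMotion/VM guest)",
    " switchport mode trunk",
    " switchport trunk allowed vlan 10,20,30",
    " spanning-tree portfast trunk",
    "interface GigabitEthernet1/0/2",
    " description HX-Edge node port 2 (HX storage data)",
    " switchport mode trunk",
    " switchport trunk allowed vlan 40",
    " spanning-tree portfast trunk",
]

switch = ConnectHandler(
    device_type="cisco_ios",
    host="192.0.2.10",          # placeholder switch management IP
    username="admin",
    password="example-password",
)
print(switch.send_config_set(TRUNK_PORT_CONFIG))
switch.disconnect()

An access-port deployment would instead set switchport mode access with a single VLAN per port (storage on its own dedicated VLAN) and plain spanning-tree portfast on each interface.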

Dual-Switch Configuration

A dual-switch configuration provides a slightly more complex topology with full redundancy that protects against switch failure, link and port failure, and LOM/PCIe NIC hardware failures. It requires two switches, which may be standalone or stacked, and four Gigabit Ethernet ports and one additional PCIe NIC per server. Trunk ports are the only supported network port configuration. Figure 2-29 illustrates the logical network topology of an edge cluster with dual upstream switch connectivity.


Figure 2-29 Edge Dual-Switch Network Connectivity

The upstream network requirements are as follows:

Two managed switches with VLAN capability (standalone or stacked)

12 physical Gigabit Ethernet ports for three HyperFlex nodes (all 12 ports must be configured as trunk ports and allow all applicable VLANs)

(Optional) Jumbo frames

PortFast trunk configured on all ports to ensure uninterrupted access to CIMC

Four Gigabit Ethernet ports are required per server:

Port 1: For management (ESXi, HyperFlex controller, and CIMC) and VM guest traffic

Port 2: For HyperFlex storage traffic (and vMotion standby)

Port 3: For VM guest traffic (and management standby)

Port 4: For vMotion traffic (and storage standby)

Two ports use the LOM, and two ports are from a PCIe add-in NIC:

One LOM port and one PCIe port serve management and VM guest traffic in a redundant configuration.

One LOM port and one PCIe port serve storage data and vMotion traffic in a redundant and load-balanced configuration.

The Intel i350 quad-port NIC (UCSC-PCIE-IRJ45) must be installed for this topology:

The NIC may be selected at ordering time and shipped preinstalled from the factory.

The NIC may also be field-installed if ordered separately. Either riser 1 or riser 2 may be used, although riser 1 is recommended.


Only trunk ports are supported in the dual-switch configuration. A spanning tree PortFast trunk must be enabled for every network port connected to HyperFlex servers. Failure to configure PortFast causes intermittent CIMC disconnects during ESXi bootup and longer-than-necessary network reconvergence during physical link failure. Figure 2-30 illustrates the physical topology for a dual-switch configuration from a single edge node.


Figure 2-30 Edge Dual-Switch Physical Topology

Some things to consider with this topology are as follows:

Cable both integrated LOM ports to the same ToR switch.

Cable any two out of four PCIe NIC ports to the same ToR switch. Do not connect more than two PCIe NIC ports prior to installation. After cluster installation, you may freely use the remaining ports.

Redundancy occurs at the vSwitch level and includes one uplink port from the onboard LOM and one uplink port from the PCIe NIC for each vSwitch.

If desired, cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

Do not use the 10 Gigabit Ethernet ports on the VIC.

10 Gigabit Ethernet Switch Configuration

The 10 Gigabit Ethernet switch configuration provides a fully redundant topology that protects against switch failure (if using dual or stacked switches) as well as link and port failures. The 10 Gigabit Ethernet switch may be standalone or stacked. In addition, this configuration requires the following:

Two 10 Gigabit Ethernet ports and a VIC 1387 with two QSAs per server

Use of trunk mode

Deployment using the on-premises OVA installer and not Intersight

Figure 2-31 illustrates the logical network topology of an edge cluster with 10 Gigabit Ethernet upstream switch connectivity.


Figure 2-31 Edge 10 Gigabit Ethernet Dual-Switch Physical Connectivity

The upstream network requirements are as follows:

Two 10 Gbps ports are required per server using a VIC 1387:

Each physical VIC port is logically divided into four vNICs, as seen by the hypervisor.

Only 10 Gbps speeds are supported (no 40 Gbps).

M5 servers require a VIC 1387 and two QSAs to reach 10 Gbps speeds.

M4 servers require a VIC 1227 to reach 10 Gbps speeds.

Additional NICs: Additional third-party NICs may be installed in the HX edge nodes, as needed. All non-VIC interfaces must be shut down until the installation is complete. Only a single VIC is supported per HX edge node.

Only trunk ports are supported in 10 Gigabit Ethernet switch configurations. Spanning tree PortFast trunk should be enabled for all network ports connected to HX ports.

Figure 2-32 illustrates the physical topology for a 10 Gigabit Ethernet dual-switch configuration from a single edge node.


Figure 2-32 Edge 10 Gigabit Ethernet Dual-Switch Physical Topology

Some things to consider with this topology are as follows:

For M5 servers, ensure that a Cisco 40 Gigabit Ethernet-to-10 Gigabit Ethernet QSA is installed in both VIC ports.

If using a single 10 Gigabit Ethernet switch, cable both 10 Gigabit Ethernet ports to the same switch.

If using dual 10 Gigabit Ethernet switches or stacked switches, cable one 10 Gigabit Ethernet port to each switch, ensuring that all port 1s from all nodes go to the same switch and all port 2s from all nodes are connected to the other switch.

Cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

SUMMARY This chapter presents the architecture of the HyperFlex HX Data Platform, including the components that make this solution possible. This chapter also reviews the different cluster topology configurations that are supported by the HyperFlex HX Data Platform. It discusses both the logical and physical topologies and the supported hardware for these topologies. Chapter 3, “Installing HyperFlex,” discusses deploying various types of HyperFlex clusters.


Chapter 3 Installing HyperFlex

This chapter covers the details of installing the various types of HyperFlex clusters. It includes information about prerequisites and the various components needed to install a Cisco HyperFlex cluster. A Cisco HyperFlex HX-Series system provides a fully contained virtual server platform that combines all three layers of compute, storage, and network with the powerful Cisco HX Data Platform software tool, resulting in a single point of connectivity for simplified management. As discussed in Chapter 2, “HyperFlex Architecture,” there are three types of HyperFlex clusters:

Cisco HyperFlex standard cluster: A Cisco HyperFlex standard cluster is deployed under a single UCS management domain, with a minimum of three HX nodes forming a cluster. The HyperFlex nodes have configurable processor, memory, drive, power, interconnect, and accessory options.


Cisco HyperFlex edge cluster or ROBO (remote office/branch office) cluster: Cisco HyperFlex Edge solutions are deployed as fixed sets of two, three, or four nodes that use existing Gigabit Ethernet and 10 Gigabit Ethernet switches, offering the utmost in flexibility for deployment in ROBO environments.

Cisco HyperFlex stretch cluster: A HyperFlex cluster can reside in a single location, or it can be stretched across short geographic distances. An active/active stretch cluster synchronously replicates data between the two sites and has a very short recovery time objective (RTO) and zero data loss. Across longer geographic distances, native replication can synchronize data from a primary site to a secondary site to support more traditional disaster recovery strategies. In a stretch cluster, one cluster spans multiple locations, and each location requires a pair of fabric interconnects.

Installation can be performed using the on-premises installer VM or Cisco Intersight. Note At the time of writing this book, stretch clusters cannot be deployed from Intersight.

Intersight is a platform that provides IT infrastructure management as a service. It is based on a microservices architecture and is delivered as a cloud-native or on-premises solution. Refer to Chapter 7, "Deploying HyperFlex Anywhere with Cisco Intersight," for information on HyperFlex installation using Intersight. This chapter covers the installation of all three types of


HyperFlex clusters for the ESXi hypervisor. Note All screenshots in this chapter show HyperFlex version 3.5.

The following section describes the installation and configuration requirements for the Cisco HyperFlex systems.

INSTALLATION PREREQUISITES

Before a HyperFlex installation, it is important to prepare and configure the following components:

Cisco UCS (servers, fabric interconnects, and so on)
Network
Hypervisor
DNS server
NTP server
SMTP server
vCenter

The topology in Figure 3-1 shows how all these components come together in a HyperFlex implementation.


Figure 3-1 Network Topology for a HyperFlex Implementation The following sections provide details on the prerequisites and configuration of the various components.

Cisco HyperFlex HX-Series System Components

The HX system components comprise the Cisco HX-Series server and the Cisco HX Data Platform.

Cisco HX-Series server: You can use any of the following servers to configure Cisco HyperFlex:

Converged nodes—All Flash: Cisco HyperFlex HXAF240c M5, HXAF220c M5, HXAF240c M4, and HXAF220c M4.

Converged nodes—Hybrid: Cisco HyperFlex HX240c M5, HX220c M5, HX240c M4, and HX220c M4.

Compute-only: Cisco B200 M3/M4, B260 M4, B420 M4, B460 M4, B480 M5, C240 M3/M4, C220 M3/M4, C480 M5, C460 M4, B200 M5, C220 M5, and C240 M5.

Note A converged node provides storage and compute resources to the cluster, whereas a compute-only node provides compute resources to the cluster only.

Cisco HX Data Platform: The HX Data Platform consists of the following components:

Cisco HX Data Platform installer: Download this installer to a server connected to the storage cluster. The HX Data Platform installer configures the service profiles and policies within Cisco UCS Manager, deploys the controller VMs, installs the software, creates the storage cluster, and updates the VMware vCenter plug-in.

Storage controller VM: Use the HX Data Platform installer to install the storage controller VM on each converged node in the managed storage cluster.

Cisco HX Data Platform plug-in: This integrated VMware vSphere interface monitors and manages the storage in a storage cluster.

Cisco UCS Fabric Interconnects (FIs)

Fabric interconnects provide both network connectivity and management capabilities to any attached Cisco HX-Series server. FIs that were purchased and deployed as part of the Cisco HyperFlex system are also referred to as the HX FI domain in this chapter. The following fabric interconnects are supported:

Cisco UCS 6200 Series fabric interconnects
Cisco UCS 6300 Series fabric interconnects
Cisco UCS 6400 Series fabric interconnects

Note UCS FIs are required for standard and stretch clusters. Edge clusters do not need UCS FIs.

Cisco Nexus Switches Cisco Nexus switches deliver high-density, configurable ports for flexible access deployment and migration.

HyperFlex Software Versions

The HX components—Cisco HX Data Platform installer, Cisco HX Data Platform, and Cisco UCS firmware—are installed on different servers. Verify that the components on the servers used with and within an HX storage cluster are compatible. For more information, visit https://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-data-platform-software/products-release-notes-list.html.

Host Requirements

A Cisco HyperFlex cluster contains a minimum of three converged HyperFlex nodes. There is an option of adding compute-only nodes to provide additional compute power if there is no need for extra storage. Each server in a HyperFlex cluster is also referred to as a HyperFlex node. You need to ensure that each node has the correct settings installed and configured before you deploy the storage cluster. For further information, see https://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-series/products-installation-guides-list.html.

Ensure that the following host requirements are met:

Use the same VLAN IDs for all the servers (nodes or hosts) in a cluster.

Use the same administrator login credentials for all the ESXi servers across a storage cluster.

Keep SSH enabled on all ESXi hosts.

Configure DNS and NTP on all servers.

Install and configure VMware vSphere.

Note VMware vSphere comes preinstalled from the factory on HX servers.

Use a single VIC only for converged nodes or compute-only nodes. Additional VICs or PCIe NICs are not supported.

Disk Requirements The disk requirements vary between converged nodes and compute-only nodes. To increase the available CPU and memory capacity, you can expand an existing cluster with compute-only nodes as needed. These compute-only nodes provide no increase in storage performance or storage capacity. Alternatively, adding converged nodes increases the storage performance and storage capacity alongside CPU and memory resources.

Converged Nodes

Servers with only solid-state drives (SSDs) are All Flash servers. Servers with both SSDs and hard-disk drives (HDDs) are hybrid servers. The following applies to all the disks in a HyperFlex cluster:

All the disks in the storage cluster must have the same amount of storage capacity.

All the nodes in the storage cluster must have the same number of disks.

All SSDs must support TRIM and must have TRIM enabled. (TRIM is an Advanced Technology Attachment [ATA] command that enables an operating system to inform a NAND flash SSD which data blocks it can erase because they are no longer in use. The use of TRIM can improve performance when writing data to SSDs and can increase the life span of an SSD.)

All HDDs can be either SATA or SAS type. All SAS disks in a storage cluster must be in pass-through mode.

Disk partitions must be removed from SSDs and HDDs. Disks with partitions are ignored and are not added to an HX storage cluster.

Optionally, you can remove or back up existing data on disks. All existing data on a provided disk is overwritten.

Note New factory servers are shipped with appropriate disk partition settings. Do not remove disk partitions from new factory servers.

Only disks ordered directly from Cisco are supported. On servers with self-encrypting drives (SEDs), both the cache and persistent storage (capacity) drives must be SED capable. These servers support Data at Rest Encryption (DARE).

All M4 converged nodes have two 64 GB SD FlexFlash cards in a mirrored configuration with ESXi installed. All M5 converged nodes have an M.2 SATA SSD with ESXi installed.

Note Do not mix storage disk types or storage sizes on a server or across a storage cluster. Mixing storage disk types is not supported:

When replacing cache or persistent disks, always use the same type and size as the original disk.

Do not mix any of the persistent drives. Use all HDDs or all SSDs and the same size drives in a server.

Do not mix hybrid and All Flash cache drive types. Use a hybrid cache device on a hybrid server and an All Flash cache device on an All Flash server.

Do not mix encrypted and non-encrypted drive types. Use SED hybrid or SED All Flash drives. On SED servers, both the cache and persistent drives must be of SED type.

All nodes must use the same size and quantity of SSDs. Do not mix SSD types.

Compute-Only Nodes

Storage on compute-only nodes is not included in the cache or capacity of storage clusters.

Note When adding compute nodes to a HyperFlex cluster, the compute-only service profile template automatically configures them for booting from an SD card. If you are using another form of boot media, update the local disk configuration policy.

Browser Recommendations

Use one of the browsers listed in Table 3-1 to run the listed HyperFlex components. These browsers have been tested and approved. Other browsers might work, but full functionality has not been tested and confirmed.

Table 3-1 Browser Recommendations for HyperFlex

Browser | Cisco UCS Manager | HX Data Platform Installer | HX Connect
Microsoft Internet Explorer | 9 or higher | 11 or higher | 11 or higher
Google Chrome | 14 or higher | 56 or higher | 56 or higher
Mozilla Firefox | 7 or higher | 52 or higher | 52 or higher

Port Requirements

If your network is behind a firewall, in addition to the standard port requirements, VMware recommends ports for VMware ESXi and VMware vCenter. Table 3-2 lists the open port requirements in a network between the various components (HX Data Platform installer, ESXi, SCVM, vCenter, and so on) for HyperFlex installation and operations:

CIP-M is the cluster management IP address.

SCVM is the management IP address for the controller VM.

ESXi is the management IP address for the hypervisor.

Table 3-2 Open Port Network Requirements for HyperFlex Installation and Operation

Port Number | Service/Protocol | Source | Port Destinations | Essential Information

Time Server-Related Ports

12 3

NTP/ UDP

Each ESXi node

Time serve r

Bidirectional

Each SCVM node

UCSM

HX Data Platform Installer-Related Ports

22

SSH/ TCP

HX Data Platform installer

Each ESXi node

Management addresses

Each SCV M node

Management addresses


80

4 43

HTTP S/TC P

HTTP S/TC P

HX Data Platform installer

HX Data Platform installer

CIPM

Cluster management

UCS M

UCSM management addresses

Each ESXi Nod e

Management addresses

Each SCV M Nod e

Management addresses

CIPM

Cluster management

UCS M

UCSM management addresses

Each ESXi node

Management addresses

Each SCV M node

Management addresses


CIPM

Cluster management

Management addresses

80 89

vSphe re SDK/ TCP

HX Data Platform installer

Each ESXi node

90 2

Heart beat/ UDP/ TCP

HX Data Platform installer

vCen ter

Ping/ ICMP

HX Data Platform installer

N on e

Each ESXi node

ESXi IP addr esses

Management addresses

CVM IP addr esses

93 33

UDP/ TCP

HX Data Platform installer

CIPM

Cluster management

Mail Server-Related Ports (Optional for Email Subscription to Cluster Events)


25

SMTP /TCP

Each SCVM node

Mail serve r

Optional

CIP-M

UCSM

Monitoring-Related Ports (Optional for Monitoring UCS Infrastructure)

16 1

SNM P poll/ UDP

Monitoring server

UCS M

Optional

16 2

SNM P trap/ UDP

UCSM

Mon itori ng serve r

Optional

Each ESXi node

Nam e serve r

Management addresses

Each SCVM node

Nam e serve

Management addresses

Name Server-Related Ports

53 (e xt er na l lo ok up

DNS/ TCP/ UDP


s)

r

CIP-M

Nam e serve r

UCSM

Nam e serve r

Cluster management

vCenter-Related Ports

80

HTTP /TCP

vCenter

Each SCV M node

Bidirectional

CIPM

4 43

HTTP S (Plugin)/T CP

vCenter

Each ESXi node

Bidirectional

Each SCV M node

CIP-


M

74 4 4

HTTP S (VC SSO)/ TCP

vCenter

Each ESXi node

Bidirectional

Each SCV M node

CIPM

94 43

HTTP S (Plugin)/T CP

vCenter

Each ESXi node

Bidirectional

Each SCV M node

CIPM

59 89

CIM Server /TCP

vCenter

Each ESXi node

90

CIM

vCenter

Each

Introduced in ESXi Release


80

Server /TCP

90 2

Heart beat/ TCP/ UDP

vCenter

ESXi node

6.5

Each ESXi node

This port must be accessible from each host. Installation results in errors if the port is not open from the HX installer to the ESXi hosts.

Each ESXi node

Management addresses

Each SCV M node

Management addresses

CIPM

Cluster management

User-Related Ports

22

SSH/ TCP

User

HX Data Platf orm insta ller

UCS M

UCSM management addresses


vCen ter

SSO serve r

80

HTTP /TCP

User

Each SCV M node

Management addresses

CIPM

Cluster management

UCS M

HX Data Platf orm insta ller

vCen ter

4 43

HTTP S/TC P

User

Each SCV M


node

CIPM

UCS M

UCSM management addresses

HX Data Platf orm insta ller

vCen ter

74 4 4

94 43

HTTP S (SSO) /TCP

User

HTTP S (Plugin)/T CP

User

vCen ter

SSO serve r

vCen ter

SSO Server Port


74 4 4

HTTP S (SSO) /TCP

SSO server

Each ESXi node

Bidirectional

Each SCV M node

CIPM

Stretch Witness (Required only when deploying HyperFlex Stretch Cluster)

21 81

Zooke eper/ TCP

Witness

Each CVM node

Bidirectional, management addresses

Exhib itor (Zook eeper life cycle)

Witness

Each CVM node

Bidirectional, management addresses

28 88

38 88

81 80


/TCP

80

HTTP /TCP

Witness

Each CVM node

Potential future requirement

4 43

HTTP S/TC P

Witness

Each CVM node

Potential future requirement

Replication (Required Only When Configuring Native HX Asynchronous Cluster to Cluster Replication)

93 38

Data servic es mana ger peer/ TCP

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

30 49

Replic ation for CVM/ TCP

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

4 0 49

Cluste r map/ TCP

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses


4 05 9

NR NFS/ TCP

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

90 98

Replic ation servic e

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

88 89

NR maste r for coordi natio n/TC P

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

93 50

Hyper visor servic e/TCP

Each CVM node

Each CVM node

Bidirectional, include cluster management IP addresses

SED Cluster Ports

4 43

HTTP S

Each SCVM managemen t IP (including cluster managemen t IP)

UCS M (fabr ic A, fabri c B, VIP)

Policy configuration

56

TLS

CIMC from

KVM

Key exchange


96

each node

serve r

UCSM-Related Ports

4 43

Encry ption and so on/T CP

Each CVM node

CIM C OOB

Bidirectional for each UCS node

81

KVM/ HTTP

User

UCS M

OOB KVM

74 3

KVM/ HTTP

User

UCS M

OOB KVM encrypted

Miscellaneous Ports

93 50

Hyper visor servic e/TCP

Each CVM node

Each CVM node

Bidirectional, including cluster management IP addresses

90 97

CIPM failov er/TC P

Each CVM node

Each CVM node

Bidirectional for each CVM to other CVMs


11 1

RPC bind/ TCP

Each SCVM node

Each SCV M node

CVM outbound to the installer

80 02

Instal ler/T CP

Each SCVM node

Inst aller

Service Location Protocol

80 80

Apach e Tomc at/TC P

Each SCVM node

Each SCV M node

stDeploy makes a connection request with stdeploy

80 82

Auth servic e/TCP

Each SCVM node

Each SCV M node

Any request with uri /auth/

93 35

hxRo boCo ntrol/ TCP

Each SCVM node

Each SCV M node

Robo deployments

4 43

HTTP S/TC P

Each CVM Mgmt IP including CIP-M

UCS M A/B and VIP

Policy configuration

56 96

TLS/T CP

CIMC from each node

KMS serve

Key exchange


r

81 25

UDP

Each SCVM node

Each SCV M node

Graphite

42 7

UDP

Each SCVM node

Each SCV M node

Service Location Protocol

32 76 8 to 65 53 5

UDP

Each SCVM node

Each SCV M node

SCVM outbound communication

Verify that the firewall ports listed in Table 3-2 are open.
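A quick way to confirm that the essential TCP ports are actually reachable before kicking off the installer is a simple socket probe from the installer VM. The following is a minimal sketch; the target addresses and the small subset of ports drawn from Table 3-2 are illustrative placeholders, not an exhaustive or authoritative list.

# Sketch: probe a few of the TCP ports from Table 3-2 before installation.
# Target addresses and the port subset are placeholders for this example.
import socket

TARGETS = {
    "esxi-node-1": ("192.0.2.21", [22, 80, 443, 902, 8089]),
    "vcenter":     ("192.0.2.50", [80, 443, 7444, 9443]),
    "ucsm-vip":    ("192.0.2.5",  [22, 443]),
}

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (ip, ports) in TARGETS.items():
    for port in ports:
        state = "open" if tcp_port_open(ip, port) else "BLOCKED/unreachable"
        print(f"{name} ({ip}) tcp/{port}: {state}")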

HyperFlex External Connections

Table 3-3 covers the details and requirements for HyperFlex external connections to Intersight, Auto-Support, and the post-installation script.

Table 3-3 HyperFlex External Connections

External Connection | Description | IP Address/FQDN/Ports/Version | Essential Information
Intersight device connector | Supported HX systems are connected to Cisco Intersight through a device connector that is embedded in the management controller of each system. | HTTPS port number: 443; connector version 1.0.5-2084 or later (auto-upgraded by Cisco Intersight) | All device connectors must properly resolve svc.ucs-connect.com and allow outbound-initiated HTTPS connections on port 443. The current HX installer supports the use of an HTTP proxy. The IP addresses of ESXi management must be reachable from Cisco UCS Manager over all the ports that are listed as being needed from the installer to ESXi management to ensure deployment of ESXi management from Cisco Intersight.
Auto-Support | Auto-Support (ASUP) is the alert notification service provided through HX Data Platform. | SMTP port number: 25 | Enabling Auto-Support is strongly recommended because ASUP provides historical hardware counters that are valuable in diagnosing future hardware issues, such as a drive failure for a node.
Post-installation script | To complete post-installation tasks, you can run a post-installation script on the installer VM. The script pings across all network interfaces (management, vMotion, and storage network) to ensure full fabric availability. The script also validates the correct tagging of VLANs and jumbo frame configurations on the northbound switch. | HTTP port number: 80 | The post-installation script requires name resolution to http://cs.co/hx-scripts via port 80 (HTTP).
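The external dependencies in Table 3-3 can be verified from the installer VM with a short script. The hostnames checked below come straight from the table; everything else (timeouts, the assumption of direct Internet access with no proxy) is an assumption made for this sketch.

# Sketch: verify the outbound dependencies listed in Table 3-3.
# Assumes direct Internet access (no proxy) from the installer VM.
import socket
import ssl

CHECKS = [
    ("svc.ucs-connect.com", 443),  # Intersight device connector
    ("cs.co", 80),                 # host behind the post-installation script URL
]

for host, port in CHECKS:
    try:
        ip = socket.gethostbyname(host)                      # DNS resolution
        with socket.create_connection((ip, port), timeout=5) as sock:
            if port == 443:
                ctx = ssl.create_default_context()
                with ctx.wrap_socket(sock, server_hostname=host):
                    pass                                     # TLS handshake succeeded
        print(f"{host}:{port} reachable (resolves to {ip})")
    except (OSError, ssl.SSLError) as err:
        print(f"{host}:{port} FAILED: {err}")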

UCS/Fabric Interconnect Network Provisioning

Prior to setting up a HyperFlex cluster, you need to plan the upstream bandwidth capacity for optimal network traffic management. This ensures that the flow stays in a steady state, even if there is a component failure or a partial network outage. By default, the hx-vm-network vSwitch is configured as active/active. All other vSwitches are configured as active/standby.

Note For clusters running Catalyst switches upstream to the FIs, set the best-effort quality of service (QoS) MTU to 9216 (under LAN > LAN Cloud > QoS System Class), or failover will fail.

Configuration for HyperFlex 3.5

Figure 3-2 provides an overview of the mapping of UCS service profiles to VMware vSwitches and VMNICs. This mapping is configured automatically during HyperFlex installation.

||||||||||||||||||||

||||||||||||||||||||

Figure 3-2 HyperFlex Data Platform Connectivity for a Single Host The default vSwitch NIC teaming policy and failover policy is set to yes to ensure that all management, vMotion, and storage traffic are locally forwarded to the


fabric interconnects to keep the flow in steady state. When vNIC-a fails, ESXi computes the load balancing, and all the virtual ports are repinned to vNIC-b. When vNIC-a comes back online, repinning applies, and virtual ports are rebalanced across vNIC-a and vNIC-b. This reduces the latency and bandwidth utilization upstream of the Cisco UCS fabric interconnects. Figure 3-3 shows traffic flow in a normal state when traffic is limited to within the fabric interconnects.

Figure 3-3 Traffic Flow in Steady State


If one or more server links fail—for instance, if Host 1 loses connectivity to fabric A and Host 2 loses connectivity to fabric B—the traffic must go through the upstream switches. Therefore, the uplink network bandwidth usage increases. Figure 3-4 shows traffic flow when an active link failure occurs, resulting in traffic flows through the upstream switch.

Figure 3-4 Traffic Flow During Link Failure


Note When you have uplinks from a fabric interconnect to two different upstream switches, you encounter a condition called Disjoint Layer 2 (DJL2) on the FI. This condition is known to occur when the FI is in End Host mode and DJL2 is not configured properly.

Network Settings

The following are best practices for UCS and ESXi network configuration when deploying HyperFlex:

Use different subnets and VLANs for each network.

Directly attach each host to a Cisco UCS fabric interconnect using a 10/25/40 Gbps cable.

Do not use VLAN 1, which is the default VLAN, as doing so can cause networking issues, especially if a disjoint Layer 2 configuration is used.

For VLANs set as non-native by default, configure the upstream switches to accommodate the non-native VLANs.

Each ESXi host needs the following networks:

Management traffic network: Handles the hypervisor (ESXi server) management and storage cluster management.

Data traffic network: Handles the hypervisor and storage data traffic.

vMotion network: Used for virtual machine vMotion.

VM network: Handles virtual machine network traffic.

There are four vSwitches, each carrying a different network:

vswitch-hx-inband-mgmt: Used for ESXi management and storage controller management.

vswitch-hx-storage-data: Used for ESXi storage data and HX Data Platform replication.

vswitch-hx-vmotion: Used for VM and storage vMotion.

vswitch-hx-vm-network: Used for VM data traffic.

Note The Cisco HX Data Platform installer automatically creates the vSwitches. The following services in vSphere must be enabled after the HyperFlex storage cluster is created: DRS (optional, if licensed), vMotion, and high availability.

VLAN and vSwitch Requirements

When installing HyperFlex, it is important to provide at least three VLAN IDs. All VLANs must be configured on the fabric interconnects during the installation. Table 3-4 outlines these requirements.

Table 3-4 VLAN and vSwitch Requirements

VLAN Type | Description
VLAN for ESXi and HyperFlex management traffic* | VLAN name: hx-inband-mgmt; VLAN ID
VLAN for HyperFlex storage traffic* | VLAN name: hx-storage-data; VLAN ID
VLAN for VM vMotion* | VLAN name: hx-vmotion; VLAN ID
VLAN for VM data* | User defined
IP blocks | KVM IP pool; one IP address per host
Subnet mask | Example: 255.255.0.0
Default gateway | Example: 10.193.0.1

* Must use different subnets and VLANs

External Switch Tagging (EST) and vSwitch settings are applied using UCS Manager profiles. The HX Data Platform installer automates and simplifies this process.

Cisco UCS Requirements

When installing HyperFlex, you need to provide information for the UCS fabric interconnects and UCS Manager when prompted, as outlined in Table 3-5 and Table 3-6.

Table 3-5 Cisco UCS Fabric Interconnect Requirements

UI Element | Essential Information
Uplink switch model | Provide the switch type and connection type (SFP + twinax or optic)
Fabric interconnect cluster IP address | IP address
FI-A IP address | IP address
FI-B IP address | IP address
MAC address pool | Check 00:00:00 MAC address pool
IP blocks | KVM IP pool; a minimum of four IP addresses
Subnet mask | Example: 255.255.0.0
Default gateway | Example: 10.193.0.1

Table 3-6 Cisco UCS Manager Requirements

UI Element | Essential Information
UCS Manager hostname | Hostname or IP address
Username | Admin username
Password | Admin password

Hypervisor Requirements

When installing HyperFlex, you need to enter the IP addresses from the range of addresses that are available to the ESXi servers on the storage management network or storage data network through vCenter, as shown in Table 3-7. Provide static IP addresses for all network addresses.

Table 3-7 HyperFlex IP Address Requirements (Example: 4-Node HX Cluster)*

Management Network IP Addresses (Hypervisor | Storage Controller) | Data Network IP Addresses (Hypervisor | Storage Controller)
IP address | IP address | IP address | IP address
IP address | IP address | IP address | IP address
IP address | IP address | IP address | IP address
IP address | IP address | IP address | IP address
VLAN tag: VLAN_ID | VLAN tag: VLAN_ID
Subnet mask | Subnet mask
Default gateway | Default gateway
Installer appliance IP addresses: IP address | IP address | IP address | IP address

* Two more IP addresses for virtual IP—one for the management network and one for the data network—are required in addition to the ones listed in this table.

Note When assigning IP addresses to HyperFlex systems, consider the following:

Data and management networks must be on different subnets.
IP addresses cannot be changed after a storage cluster is created. Contact Cisco TAC for assistance if an IP address change is required after cluster creation.
If you are specifying DNS names, you should enable forward and reverse DNS lookup for the IP addresses.
The installer IP address must be reachable from the management subnet used by the hypervisor and the storage controller VMs.
The installer appliance must run on an ESXi host or on a VMware workstation that is not a part of the cluster to be installed.
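If you plan to use hostnames, a quick way to confirm that forward and reverse DNS lookups are in place is shown below. This is a sketch only; the hostname, host IP address, and DNS server address are placeholders, not values from this chapter:

# Forward lookup: hostname to IP address, queried against a specific DNS server
nslookup hx-esxi-01.example.com 10.10.10.10

# Reverse lookup: IP address back to hostname
nslookup 10.10.10.2 10.10.10.10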

Storage Cluster Requirements

A storage cluster is a component of the Cisco HX Data Platform that reduces storage complexity by providing a single datastore that is easily provisioned in vSphere Web Client. Data is fully distributed across disks in all the servers that are in the storage cluster to leverage controller resources and provide high availability. A storage cluster is independent of the associated vCenter cluster. You can create a storage cluster by using ESXi hosts that are in the vCenter cluster. To define a storage cluster, provide the parameters outlined in Table 3-8.

Table 3-8 Definition of Storage Cluster Parameters

Field | Description
Name | Enter a name for the storage cluster.
Management IP address | This is the address of the storage management network for access on each ESXi host. The IP address must be on the same subnet as the management IP addresses for the nodes. Do not allow cluster management IP addresses to share the last octet with another cluster on the same subnet. These IP addresses are in addition to the four IP addresses assigned to the nodes in the previous section.
Storage cluster data IP address | This is the address of the storage data network and storage controller VM network for access on each ESXi host. The same IP address must be applied to all ESXi nodes in the cluster.
Data replication factor | This is the number of redundant replicas of the data across the storage cluster. This is set during HX Data Platform installation and cannot be changed. The choices are: 3: A replication factor of 3 is highly recommended for all environments except HyperFlex Edge; this is the recommended option. 2: This replication level specifies keeping two redundant replicas of the data. It consumes less storage resources than level 3 but has a lower level of availability and resiliency, reducing your data protection in the event of simultaneous node or disk failures. The risk of outage due to component or node failures should be mitigated by having active and regular backups.

If nodes or disks in the storage cluster fail, the cluster's ability to function is affected. Failure of more than one node or of one node and a disk(s) on a different node is called a simultaneous failure.

vCenter Configuration Requirements

Configuration of vCenter requires an administrator-level account and password. Ensure that you have an existing vCenter server, and also ensure that the following vSphere services are operational:

Enable Distributed Resource Scheduler (DRS), if licensed (optional).
Enable vMotion.
Enable high availability (required to define failover capacity and for expanding the datastore heartbeat).
Ensure that user VMs are version 9 or later (required to use HX Data Platform, native snapshots, and ReadyClones).

Table 3-9 outlines the vCenter information required to complete the HyperFlex installation.

Table 3-9 vCenter Requirements for HyperFlex Installation

Field | Description
vCenter Server | Enter your current vCenter server web address (for example, http://<vCenter FQDN or IP address>).
Username | Enter the vCenter administrator username.
Password | Enter the vCenter administrator password.
Data center name* | Enter the required name for the vCenter data center.
Cluster name | Enter the required name for the vCenter cluster. The cluster must contain a minimum of three ESXi servers.

* An existing data center object can be used. If the data center doesn't exist in vCenter, it will be created.

System Services Requirements

Before installing Cisco HX Data Platform, ensure that the following network connections and services are operational. Table 3-10 outlines the system services requirements.

Table 3-10 System Services Requirements

Field | Essential Information
DNS server(s) | The DNS server address is required if you are using hostnames while installing the HyperFlex Data Platform. If you do not have a DNS server, do not enter a hostname under System Services in the Cluster Configuration page of the HX Data Platform installer; use only IP addresses. To provide more than one DNS server address, separate the addresses with commas. Check carefully to ensure that DNS server addresses are entered correctly.
NTP server(s) (which must be a reliable NTP server) | Set the IP address. The NTP server is used for clock synchronization between the storage controller VM, the ESXi hosts, and the vCenter server. A static IP address for an NTP server is required to ensure clock synchronization between the storage controller VM, ESXi hosts, and vCenter server. During installation, this information is propagated to all the storage controller VMs and corresponding hosts. The servers are automatically synchronized on storage cluster startup.
Time zone | Select a time zone for the storage controller VMs. The time zone is used to determine when to take scheduled snapshots. Note that all the VMs must be in the same time zone.

Note Before configuring the storage cluster, manually verify that the NTP server is working and providing a reliable source for the time. Use the same NTP server for all nodes (both converged and compute) and all storage controller VMs. The NTP server must be stable, continuous (for the lifetime of the cluster), and reachable through a static IP address. If you are using Active Directory as an NTP server, please make sure that the NTP server is set up according to Microsoft best practices. For more information, see https://docs.microsoft.com/en-us/windows-server/networking/windowstime-service/windows-timeservice-tools-and-settings. Note that if the NTP server is not set correctly, time sync may not work, and you may need to fix the time sync on the client side. For more information, see https://kb.vmware.com/s/article/1035833.
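One way to spot-check the NTP server before cluster creation is to query it from a Linux host on the management network (for example, the installer VM). This is a sketch that assumes the ntpdate utility is installed and uses a placeholder NTP server address:

# Query the NTP server without adjusting the local clock (-q) and report the offset
ntpdate -q 10.10.10.20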

CPU Resource Reservation for Controller VMs

As the storage controller VMs provide critical functionality for the HyperFlex Data Platform, the HX Data Platform installer configures CPU resource reservations for the controller VMs. This reservation guarantees that the controller VMs have the minimum required CPU resources. This is useful in situations where the physical CPU resources of the ESXi hypervisor host are heavily consumed by the guest VMs. Table 3-11 details the CPU resource reservation for storage controller VMs.

Table 3-11 CPU Resource Reservation for Storage Controller VMs

Number of VM CPUs | Shares | Reservation | Limit
8 | Low | 10,800 MHz | Unlimited

Memory Resource Reservation for Controller Virtual Machines

Table 3-12 details the memory resource reservations for the storage controller VMs.

Table 3-12 Memory Resource Reservations for Storage Controller VMs

Server Model | Amount of Guest Memory | Reserve All Guest Memory?
HX220c-M4/M5s, HXAF220C-M4/M5s | 48 GB | Yes
HX240c-M4/M5SX, HXAF240C-M4/M5SX | 72 GB | Yes
HX240C-M5L | 78 GB | Yes

The compute-only nodes, for example, the B200 blades, have a lightweight storage controller VM that is configured with only 1 vCPU and 512 MB of memory reservation.

Controller VM Locations

The physical storage locations of the controller VMs differ among the Cisco HX-Series rack servers due to differences in the physical disk locations and connections on those server models. The storage controller VM is operationally no different from any other typical virtual machine in an ESXi environment. The VM must have a virtual disk with the bootable root file system available in a location separate from the SAS HBA that the VM is controlling via VMDirectPath I/O. The configuration details of the models are as follows:

HX220c M5, HXAF220c M5, HX240c M5, and HXAF240c M5: The server boots the ESXi hypervisor from the internal M.2 form factor SSD. The M.2 SSD is partitioned by the ESXi installer, and the remaining 216 GB of space is used as a VMFS datastore. The controller VM's root file system is stored on a 2.5 GB virtual disk, /dev/sda, which is placed on this VMFS datastore. The controller VM has full control of all the front- and rear-facing hot-swappable disks via PCI passthrough control of the SAS HBA. The controller VM operating system sees the 240 GB SSD (also commonly called the "housekeeping" disk) as /dev/sdb and places HyperFlex binaries and logs on this disk. The remaining disks seen by the controller VM OS are used by the HX distributed file system for caching and capacity layers. Figure 3-5 and Figure 3-6 detail the storage controller VM placement on ESXi hypervisor hosts for these server types.
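The disk layout described above can be confirmed from the storage controller VM shell. This is a minimal sketch and assumes SSH access to the controller VM:

# List block devices as seen by the controller VM OS; /dev/sda carries the
# root file system and /dev/sdb is the housekeeping disk
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT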

Figure 3-5 HX220c M5 Controller VM Placement

Figure 3-6 HX240c M5 Controller VM Placement

HX220c M4 and HXAF220c M4: The server boots the ESXi hypervisor from the internal mirrored SD cards. The SD card is partitioned by the ESXi installer, and the remaining space is used as a VMFS datastore. The controller VM's root file system is stored on a 2.5 GB virtual disk, /dev/sda, which is placed on this VMFS datastore. The controller VM has full control of all the front-facing hot-swappable disks via PCI passthrough control of the SAS HBA. The controller VM operating system sees the 120 GB or 240 GB SSD (also commonly called the "housekeeping" disk) as /dev/sdb and places HyperFlex binaries and logs on this disk. The remaining disks seen by the controller VM OS are used by the HX distributed file system for caching and capacity layers. Figure 3-7 details the storage controller VM placement on ESXi hypervisor hosts for these server types.

Figure 3-7 HX220c M4 Controller VM Placement

HX240c M4 and HXAF240c M4: The server boots the ESXi hypervisor from the internal mirrored SD cards. The HX240c-M4SX and HXAF240c-M4SX servers have a built-in SATA controller provided by the Intel Wellsburg Platform Controller Hub (PCH) chip, and the 120 GB or 240 GB housekeeping disk is connected to it, placed in an internal drive carrier. Since this model does not connect the housekeeping disk to the SAS HBA, the ESXi hypervisor remains in control of this disk, and a VMFS datastore is provisioned there, using the entire disk. On this VMFS datastore, a 2.2 GB virtual disk is created and used by the controller VM as /dev/sda for the root file system, and an 87 GB virtual disk is created and used by the controller VM as /dev/sdb, placing the HyperFlex binaries and logs on this disk. The front-facing hot-swappable disks, seen by the controller VM OS via PCI passthrough control of the SAS HBA, are used by the HX distributed file system for caching and capacity layers. Figure 3-8 details the storage controller VM placement on ESXi hypervisor hosts for these server types.

Figure 3-8 HX240c M4 Controller VM Placement

Auto-Support Requirements

Auto-Support (ASUP) is the alert notification service provided through HX Data Platform. If you enable Auto-Support, notifications are sent from HX Data Platform to designated email addresses or email aliases that you want to receive the notifications. To configure Auto-Support, use the information in Table 3-13.

Table 3-13 Required Information for ASUP Configuration

Setting | Description
Enable Auto-Support | Check this box during HX storage cluster creation.
Mail server | Set the IP address. The SMTP mail server must be configured in your network to enable Auto-Support. It is used for handling email sent from all the storage controller VM IP addresses. Note that only unauthenticated SMTP is supported for ASUP.
Mail sender | Provide the email address to use for sending Auto-Support notifications.
ASUP recipient | Specify the list of email addresses or email aliases to receive Auto-Support notifications.

Note Enabling Auto-Support is strongly recommended because it provides historical hardware counters that are valuable in diagnosing future hardware issues, such as drive failure for a node.

Single Sign-On Requirements

The single sign-on (SSO) URL is provided by vCenter. If it is not directly reachable from the controller VM, you can configure the location explicitly by using Installer Advanced Settings. The SSO URL can be found in vCenter at vCenter Server > Manage > Advanced Settings, key config.vpxd.sso.sts.uri.
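If there is any doubt about reachability, the SSO URL can be tested from the controller VM's management network with a simple HTTPS request. This is a sketch; the URL below is a placeholder and should be replaced with the value read from config.vpxd.sso.sts.uri:

# Print only the HTTP status code; certificate validation is skipped (-k)
curl -k -s -o /dev/null -w "%{http_code}\n" "https://vcenter.example.com/sts/STSService/vsphere.local"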

INSTALLING CISCO HYPERFLEX SYSTEM SERVERS

To set up and install the Cisco HyperFlex nodes, refer to the appropriate installation guides. Also set up the fabric interconnects and integrate the HX-Series servers with the Cisco fabric interconnects. If necessary, refer to Chapter 2, which covers physical installation and topology, before proceeding with the following information.

Physical Connectivity Illustrations for Direct Connect Mode Cluster Setup

Only direct connect mode is supported for HX servers with fabric interconnects. For example, the topologies in Figure 3-9, Figure 3-10, and Figure 3-11 are considered supported. These topologies show a sample of direct connect mode physical connectivity for C-Series rack-mount servers with Cisco UCS Domain and Cisco UCS Manager, release 3.1 or later. These images show the cabling configuration for Cisco UCS Manager integration with a C-Series rack-mount server. The paths shown in these figures carry both management traffic and data traffic.

Figure 3-9 Direct Connect Cabling Configuration

Figure 3-10 Direct Connect Cabling Configuration with Cisco VIC 1455 (Four-Port Linking)

Figure 3-11 Direct Connect Cabling Configuration with Cisco VIC 1455 (Two-Port Linking)

The numbered labels in the preceding figures indicate the following:

1. Cisco UCS 6454 fabric interconnect or Cisco UCS 6200 Series or 6300 FI (fabric A)
2. Cisco UCS 6454 fabric interconnect or Cisco UCS 6200 Series or 6300 FI (fabric B)
3. C-Series rack-mount server
4. Cisco UCS VIC in supported PCIe slot

In Figures 3-9 and 3-10, XGb represents a 40 Gigabit Ethernet connection or a 10 Gigabit Ethernet connection. For 10 Gigabit Ethernet, the following cables are used:

4x10 breakout Small Form-Factor Pluggable (SFP) cables
4x10 active optical cables (AOC)
10G SFP cable that uses the QSFP to SFP+ adapter (QSA) module

Note For VIC 1455, the following restrictions apply:

Ports 1 and 2 must connect to the same fabric interconnect (that is, fabric A).
Ports 3 and 4 must connect to the same fabric interconnect (that is, fabric B). This is due to the internal port-channeling architecture inside the card.
Ports 1 and 3 are used because the connections between ports 1 and 2 (and also 3 and 4) form an internal port channel.

Note The above images show a sample of direct connect mode physical connectivity for C-Series Rack-Mount Server with Cisco UCS VIC 1455. The port connections remain the same for Cisco UCS VIC 1457.

Caution Do not connect port 1 to Fabric Interconnect A and port 2 to Fabric Interconnect B. Use ports 1 and 3 only. Using ports 1 and 2 results in discovery and configuration failures.

Installation Workflow

The installation workflow illustrated in Figure 3-12 summarizes the steps involved in creating a standard cluster using the HX Data Platform installer:

Figure 3-12 HyperFlex Installation Workflow

Step 1. Deploy the HX Data Platform installer OVA by using the vSphere Web Client.
Step 2. Enter UCS Manager, vCenter, and hypervisor credentials.
Step 3. Configure server ports and associate HyperFlex servers.
Step 4. Configure UCSM (VLANs, MAC pool, hx-ext-mgmt IP pool for out-of-band CIMC, iSCSI storage, FC storage, and so on).
Step 5. Configure the hypervisor.
Step 6. Configure the IP addresses.
Step 7. Configure and deploy the HyperFlex cluster.

The sections that follow cover the steps of this workflow in detail.

Deploy the HX Data Platform Installer OVA by Using the vSphere Web Client

You need to connect to vCenter to deploy the OVA file and provide the IP address properties. Deploying directly from an ESXi host does not allow you to set the values correctly. Do not deploy the HX Data Platform installer to an ESXi server that is going to be a node in the Cisco HX storage cluster. The procedure for deploying the HX Data Platform installer OVA using vSphere Web Client is as follows:

Step 1. Locate and download the HX Data Platform installer OVA from cisco.com. Download the HX Data Platform installer to a node that is on the storage management network (for example, Cisco-HX-Data-Platform-Installer-v3.5.2e31762-esx.ova); this node will be used for the HX Data Platform storage cluster.
Step 2. Deploy the HX Data Platform installer by using the VMware hypervisor to create an HX Data Platform installer virtual machine.

Note Use a version of the virtualization platform that supports virtual hardware version 10.0 or greater. vSphere is a system requirement. You can use the vSphere thick client, the vSphere thin client, or vSphere Web Client. To deploy the HX Data Platform installer, you can also use VMware Workstation, VMware Fusion, or VirtualBox.

In any case, follow these steps:

1. Open a virtual machine hypervisor, such as vSphere, VirtualBox, Workstation, or Fusion.
2. Select the node where you want to deploy the HX Data Platform installer:
Using the vSphere thick client: Expand Inventory list > Host > File > Deploy OVA.
Using vSphere Web Client: Expand vCenter Inventory list > Hosts > Host > Deploy OVA. Make sure you provide user credentials while deploying the HX installer OVA using vSphere Web Client.

Refer to Figure 3-13 for details.

Figure 3-13 Deploying the HX Installer OVA Template

Step 3. Select where the HX Data Platform installer is located. Accept the defaults and select the appropriate network. Then review the details, as shown in Figure 3-14.

Figure 3-14 Reviewing the Template Details

Step 4. Enter a static IP address for use by the HX Data Platform installer VM, as shown in Figure 3-15. Click Next.

Note A static IP address is necessary even if DHCP is configured for the network. You need the static IP address to run the HX Data Platform installer, to install the HX Data Platform, and to create the HX Data Platform storage cluster. DNS must be reachable from the installer VM.

Figure 3-15 Customizing the Template Properties

Step 5. In the screen shown in Figure 3-16, verify that the options listed are correct. Click Finish and wait for the HX Data Platform installer VM to be added to the vSphere infrastructure.

Figure 3-16 Verifying Details and Deploying the HX Installer VM

To power on the HX Data Platform installer manually, navigate to the virtual machine list and power on the installer VM.

Note The preferred settings for the HX Data Platform installer virtual machine are 3 vCPU and 4 GB of memory. Reducing these settings can result in 100% CPU usage and spikes for the host.

Step 6. Open the HX Data Platform installer virtual machine console and notice that the initial console display lists the HX Data Platform installer virtual machine IP address:

Data Platform Installer.
*******************************************
You can start the installation by visiting the following URL:
http://192.168.202.20
*******************************************
Cisco-HX-Data-Platform-Installer login:

Step 7. Use the URL listed in the output (for example, http://192.168.202.20) to log in to the HX Data Platform installer.
Step 8. Accept the self-signed certificate.
Step 9. Log in using the username root and the password you provided as part of the OVA deployment.

Note The default password Cisco123 is no longer valid with HX Data Platform 3.5(1a) and later. When you deploy the OVA, you must specify a password. If you try logging in with Cisco123 and it doesn't work, a password was set when the OVA was deployed.

Configuring and Deploying a Standard HyperFlex Cluster

When you log in to the HX Data Platform installer with root user credentials, you see three cluster installation options under Create Cluster. For this example, select Standard Cluster, as shown in Figure 3-17.

Figure 3-17 Selecting a Standard Cluster Installation

Note Select Advanced Option at the bottom of the screen shown in Figure 3-17 only if you fully understand the advanced configuration and customization functionality.

Under Credentials, enter the UCS Manager and vCenter username (for example, admin) and password, as shown in Figure 3-18. Click Continue to configure the server selection.

Figure 3-18 Entering UCS Manager and vCenter Credentials

On the Server Selection page, the UCS Manager credentials you provided are used to discover the UCSM configuration and list the already discovered HyperFlex servers, as shown in Figure 3-19.

Figure 3-19 Discovering HyperFlex Servers: UCSM

There is also an option to configure server ports on UCS Manager from the installer. Select the unassociated servers that will be used for the HyperFlex installation. The example in Figure 3-20 shows three servers selected to deploy a three-node HyperFlex cluster. Click Continue on the Server Selection page to continue the UCSM configuration.

Figure 3-20 Selecting Servers to Deploy as a Three-Node HyperFlex Cluster

On the UCSM Configuration page, configure VLAN, MAC pool, and Cisco IMC access (out-of-band or in-band CIMC access), as shown in Figure 3-21.

Figure 3-21 UCSM Configuration Details

Note Use separate subnets and VLANs for each of the networks. Select a prefix that is not used with any other MAC address pool across all UCS domains.

If you want to add external storage, configure iSCSI storage and FC storage details toward the bottom of the UCSM Configuration page. In the Advanced section at the bottom of this page, configure the UCS server firmware version, HyperFlex cluster name, and organization name, as shown in Figure 3-22:

HyperFlex cluster name: The HyperFlex cluster name is applied to a group of HX servers in a given cluster. The HyperFlex cluster name adds a label to service profiles for easier identification.
Organization name: Specify a unique org name to ensure isolation of the HyperFlex environment from the rest of the UCS domain.

Click Continue to apply the UCSM Configuration settings.

Figure 3-22 UCSM Configuration Advanced Section

On the Hypervisor Configuration page (see Figure 3-23), configure the subnet mask, gateway, and IP address for the hypervisor settings and hypervisor credentials. In the Hypervisor Settings section, select Make IP Addresses and Hostnames Sequential to make the IP addresses sequential. Click Continue to configure the IP addresses.

Figure 3-23 Hypervisor Configuration

On the IP Addresses page, shown in Figure 3-24, configure the management and data IP addresses for the hypervisor and storage controller. Select Make IP Addresses Sequential to make the IP addresses sequential. For each HX node, enter the hypervisor, storage controller, management, and data IP addresses. For the IP addresses, specify whether the network belongs to the data network or the management network.

Figure 3-24 Configuring IP Addresses

Also configure the cluster IP address for management and data. If a node with a cluster IP address becomes unavailable, the cluster IP address is moved to another node in the storage cluster. Click Continue to apply the IP Addresses page settings.

Note Compute-only nodes can be added only after a storage cluster is created.

On the Cluster Configuration page, shown in Figure 3-25, for the Cisco HX storage cluster, complete the following fields to begin deploying the HyperFlex cluster:

In the Replication Factor drop-down menu, specify the number of redundant replicas of your data across the storage cluster. Set the replication factor to either 2 or 3 redundant replicas:
For hybrid servers (servers that contain SSDs and HDDs), the default value is 3.
For all flash servers (servers that contain only SSDs), select either 2 or 3.

Figure 3-25 Cisco HX Storage Cluster Configuration

A replication factor of 3 is highly recommended for all environments except HyperFlex Edge. A replication factor of 2 has a lower level of availability and resiliency. The risk of outage due to component or node failures should be mitigated by having active and regular backups.

In the Controller VM section, create a new password for the administrative user of the HyperFlex cluster. A default administrator username and password are applied to the controller VMs. The VMs are installed on all converged and compute-only nodes.

In the vCenter Configuration section, complete the following fields:

vCenter Datacenter Name: Enter the vCenter data center name for the Cisco HyperFlex cluster.
vCenter Cluster Name: Enter the vCenter cluster name.

In the System Services section, configure the DNS server, NTP server, DNS domain name, and time zone. In the Connected Services section of the Cluster Configuration page, shown in Figure 3-26, select Enable Connected Services to enable Auto-Support and Intersight management.

Figure 3-26 Enabling Connected Services

In Send service ticket notifications to, enter the email address for sending SR notifications when triggered by Auto-Support.

In the Advanced Configuration section of the Cluster Configuration page, shown in Figure 3-27, configure the following:

Jumbo Frames: By selecting Enable Jumbo Frames on Data Network, you can set the MTU size for the storage data network on the host vSwitch switches and vNICs, as well as each storage controller VM. The default value is 9000.
Disk Partitions: If you select Clean up disk partitions, you can remove all existing data and partitions from all nodes added to the storage cluster for manually prepared servers. Select this option to delete existing data and partitions. You must back up any data that should be retained.
(Optional) vCenter Single-Sign-On Server: This information is required only if the SSO URL is not reachable.
Virtual Desktop (VDI): Check this option for VDI-only environments.

Figure 3-27 Cluster Configuration: Advanced Configuration

Click Start to begin deploying the HyperFlex cluster.

The Progress page displays the progress of various configuration tasks, as shown in Figure 3-28.

Figure 3-28 HyperFlex Installation Progress

Caution Do not skip any validation warnings that appear during the installation process.

When the installation finishes, you see the Cluster Creation Successful message shown in Figure 3-29.

Figure 3-29 HyperFlex Installation Success Browse to the Summary page, shown in Figure 3-30, to review the configuration information.

Figure 3-30 HyperFlex Installer Summary

HX Data Platform Installer Navigation Aid Buttons

On the HyperFlex installer, you have the following options:

Export Configuration: Click the down arrow icon to download a JSON configuration file.
Workflow Info: Hover over the information icon to view the current workflow. For HyperFlex cluster creation, the workflow info is Create Workflow = Esx.
Tech Support: Click the question mark icon to view details related to the HyperFlex Data Platform software version. Click Create New Bundle to create a tech support bundle for Cisco TAC.
Save Changes: Click the circle icon to save changes made to the HyperFlex cluster configuration parameters.
Settings: Click the gear icon to start over or log out.

Running the Postinstallation Script

To complete the postinstallation tasks, you can run a postinstallation script on the installer VM. It is important to run post_install to confirm network operation immediately following the deployment of the HyperFlex system. Follow these steps to do so:

Step 1. Use an SSH client to connect to the shell on the installer VM.
Step 2. Log in with installer VM root credentials.
Step 3. Type post_install and press Enter.
Step 4. Set the postinstallation script parameters as specified in Table 3-14.

Table 3-14 Postinstallation Script Parameter Settings

Setting | Description
Enable high availability/DRS on cluster | Enable the vSphere high availability feature, according to best practice.
Disable SSH warning | Suppress the SSH and shell warnings in vCenter.
Add vMotion interfaces | Configure vMotion interfaces according to best practice. Requires IP address and VLAN ID input.
Add VM network VLANs | Add additional guest VLANs to Cisco UCS Manager and within ESXi on all cluster hosts.

Note If you run into any postinstallation script issues, set the postinstallation script parameters manually.

Sample Postinstallation Script

You can log in to the installer VM and run the following post_install script to complete the installation steps explained above:

root@Hyperflex-Installer:~# post_install
Select post_install workflow
1. New/Existing Cluster
2. Expanded Cluster (for non-edge clusters)
3. Generate Certificate
Selection: 1
Logging in to controller hx-02-cmip.rchs.local
HX CVM admin password:
Getting ESX hosts from HX cluster...
vCenter URL: 192.168.202.35
Enter vCenter username (user@domain): [email protected]
vCenter Password:
Found datacenter RCH-HX-02
Found cluster RCH-HX-02
post_install to be run for the following hosts:
hx-02-esxi-01.rchs.local
hx-02-esxi-02.rchs.local
hx-02-esxi-03.rchs.local
Enter ESX root password:
Enter vSphere license key? (y/n) n
Enable HA/DRS on cluster? (y/n) y
Successfully completed configuring cluster HA.
Disable SSH warning? (y/n) y
Add vmotion interfaces? (y/n) y
Netmask for vMotion: 255.255.254.0
VLAN ID: (0-4096) 208
vMotion MTU is set to use jumbo frames (9000 bytes). Do you want to change to 1500 bytes? (y/n) y
vMotion IP for hx-02-esxi-01.rchs.local: 192.168.208.17
Adding vmotion-208 to hx-02-esxi-01.rchs.local
Adding vmkernel to hx-02-esxi-01.rchs.local
vMotion IP for hx-02-esxi-02.rchs.local: 192.168.208.18
Adding vmotion-208 to hx-02-esxi-02.rchs.local
Adding vmkernel to hx-02-esxi-02.rchs.local
vMotion IP for hx-02-esxi-03.rchs.local: 192.168.208.19
Adding vmotion-208 to hx-02-esxi-03.rchs.local
Adding vmkernel to hx-02-esxi-03.rchs.local
Add VM network VLANs? (y/n) n
Run health check? (y/n) y
Validating cluster health and configuration...
Cluster Summary:
Version - 3.5(2e)
Model - HXAF240C-M5SX
Health - HEALTHY
ASUP enabled - True
root@Hyperflex-Installer:~#

Refer to the following sample, in which the post_install script is run to generate a certificate:

root@Cisco-HX-Data-Platform-Installer:~# post_install
Select post_install workflow
1. New/Existing Cluster
2. Expanded Cluster
3. Generate Certificate
Selection: 3
Certificate generation workflow selected
Logging in to controller 192.168.200.23
HX CVM admin password:
Getting ESX hosts from HX cluster...
Select Certificate Generation Workflow
1. With vCenter
2. Without vCenter
Selection: 1
vCenter URL: 192.168.202.33
Enter vCenter username (user@domain): [email protected]
vCenter Password:
Starting certificate generation and re-registration.
Trying to retrieve vCenterDatacenter information ....
Trying to retrieve vCenterCluster information ....
Certificate generated successfully.
Cluster re-registration in progress ....
Cluster re-registered successfully.
root@HyperFlex-Installer:~#

Note Workflow 3 is mandatory to have a unique SSL certificate in the cluster. When you generate this certificate, it replaces your current certificate. If you’re performing cluster expansion, this option is not required.

Upon successful completion of the post_install script, the summary of the configuration executed based on the chosen option is displayed under Cluster Summary.

INSTALLING HYPERFLEX EDGE/ROBO CLUSTER (ESXI, 3 NODE)

Cisco HyperFlex Edge brings the simplicity of hyperconvergence to ROBO and edge environments. The following sections describe the deployment of HyperFlex Edge. Table 3-15 provides a supportability summary for HyperFlex Edge clusters.

Table 3-15 Supportability Summary for Edge Clusters

Limitation | Support
Cluster type | HX220c M4 Hybrid cluster; HX220c M5 Hybrid cluster; HXAF220c M4 All Flash cluster; HXAF220c M5 All Flash cluster
Cluster size | Two, three, or four nodes; cluster expansion workflow is not supported in HyperFlex Edge configuration
Replication factor | Two
Networking | 1 GB or 10 GB networking without Cisco UCS fabric interconnects; HX Edge systems do not implement QoS
HX clusters per vCenter | Up to 100

HyperFlex Edge Deployment Options

HyperFlex Edge can be deployed using Cisco Intersight from the cloud or by using the on-premises installer, depending on the requirements:

HyperFlex on-premises OVA installer: Use this option for on-premises Edge deployments for three- and four-node clusters. This type of deployment supports all three network topologies and requires download and installation of the installer along with local network access. Note that use of the on-premises installer is not supported for two-node HyperFlex Edge clusters.
Intersight installer: Use this option to deploy HyperFlex Edge from the cloud. This deployment option supports all Edge cluster sizes and network topologies. See Chapter 7 for more details on this option.

HyperFlex On-Premises OVA Installer

Cisco HyperFlex Edge supports three networking topologies:

Single Gigabit Ethernet switch
Dual Gigabit Ethernet switches
10 Gigabit Ethernet switch (either single or dual)

The choices of configuration depend on requirements and the available switching hardware. The topology section in Chapter 2 describes specific requirements for each topology, as well as common network requirements that apply to all three topologies. The rest of the pre-installation requirements (for example, IP address, port requirements, vCenter requirements) are similar to those for the HyperFlex standard cluster installation. Please see the section “Configuring and Deploying a Standard HyperFlex Cluster,” earlier in this chapter, for more details.

Installation Overview

Table 3-16 summarizes the installation workflow for Edge deployments. As you can see, steps 1–3 are the same for Gigabit Ethernet and 10 Gigabit Ethernet deployments. However, step 4 is applicable to Gigabit Ethernet deployments, and steps 5–6 are for 10 Gigabit Ethernet deployments.

Table 3-16 HyperFlex Edge Installation Steps

Step | Description | Reference | Applicability
1 | Complete the preinstallation checklist. | Make one of the following selections, based on your switch configuration: single-switch configuration; dual-switch configuration; 10 Gigabit Ethernet switch configuration; common network requirements | Gigabit Ethernet and 10 Gigabit Ethernet
2 | Complete the installation prerequisites. | Rack Cisco HyperFlex Nodes; Configure Networking for Cisco Integrated Management Controller (CIMC); Verifying Firmware Versions | Gigabit Ethernet and 10 Gigabit Ethernet
3 | Download and deploy the Cisco HX Data Platform installer. | Deploying the Cisco HX Data Platform Installer | Gigabit Ethernet and 10 Gigabit Ethernet
4 | Deploy the Cisco HyperFlex Edge cluster. | Complete the following steps to configure your edge cluster and verify successful installation: Configure Your HyperFlex Cluster; Verify Cisco HX Data Platform Software Installation | Gigabit Ethernet only
5 | SSH to the installer VM and run a script to complete specific configuration tasks. | (10 Gigabit Ethernet Only) Run the Configuration Script | 10 Gigabit Ethernet only
6 | Deploy the Cisco HyperFlex Edge cluster. | (10 Gigabit Ethernet Only) Configuring a HyperFlex Cluster | 10 Gigabit Ethernet only

Configuring and Deploying a HyperFlex Edge Cluster (Gigabit Ethernet Only)

After completing the preinstallation checklist, in a web browser, enter the IP address of the installer VM and click Accept or Continue to bypass any SSL certificate errors.

Note A two-node edge cluster can be deployed only from Intersight. See Chapter 7 for more details.

Log in to the Cisco HX Data Platform installer using the username root and the password provided as part of the OVA deployment. Read the end-user license agreement, check I accept the terms and conditions, and click Login. On the Select a Workflow page, click Create Cluster > Edge Cluster, as shown in Figure 3-31.

Figure 3-31 Selecting a Workflow

On the Credentials page, complete the fields shown in Figure 3-32 and click Continue.

Figure 3-32 Configuring Credentials

On the IP Addresses page, enter the assigned address for each server, as shown in Figure 3-33, and click Continue.

Figure 3-33 Configuring IP Addresses

On the Cluster Configuration page, complete the fields as shown in Figure 3-34 and click Start.

Figure 3-34 Cluster Configuration

In the Connected Services section of the Cluster Configuration page (see Figure 3-35), select Enable Connected Services to enable Auto-Support and Intersight management. In the Send service ticket notifications to box, enter the email address for sending SR notifications when triggered by Auto-Support.

Figure 3-35 Connected Services

In the Advanced Configuration section of the Cluster Configuration page, shown in Figure 3-36, configure the following:

Jumbo frames: By selecting Enable Jumbo Frames on Data Network, you can set the MTU size for the storage data network on the host vSwitch switches and vNICs, as well as each storage controller VM. The default value is 9000.
Disk Partitions: If you select Clean up disk partitions, you can remove all existing data and partitions from all nodes added to the storage cluster for manually prepared servers. Select this option to delete existing data and partitions. You must back up any data that should be retained.
(Optional) vCenter Single-Sign-On Server: This information is required only if the SSO URL is not reachable.
Virtual Desktop (VDI): Check this option for VDI-only environments.

Figure 3-36 Advanced Configuration

When you click Start, the installation process starts, as shown in Figure 3-37.

Figure 3-37 Installation Progress

When the installation is complete (see Figure 3-38), the installer takes you to the summary page (see Figure 3-39), on which you can click Launch HyperFlex Connect to view the cluster information.

Figure 3-38 Successful Cluster Creation

Figure 3-39 HyperFlex Cluster Installation Summary

Configuring and Deploying a HyperFlex Edge Cluster (10 Gigabit Ethernet Only)

To deploy a 10 Gigabit Ethernet HyperFlex Edge cluster, you need to perform the additional step of running the configuration script (hx_edge) along with all the same steps as for deploying HyperFlex Edge (Gigabit Ethernet).

Follow these steps:

Note For Gigabit Ethernet installation, you do not have to run the configuration script.

Step 1. SSH to the installer VM.
Step 2. Run the hx_edge command to start the configuration script.
Step 3. Enter y to continue in interactive mode.
Step 4. Follow the onscreen prompts to complete the installation.
Step 5. When all the inputs have been entered and confirmed, enter y to begin the first phase of the 10 Gigabit Ethernet HX Edge deployment.
Step 6. The configuration task may take several minutes to complete. Check the summary view to view the completion status of each of the various tasks.

Figure 3-40 shows the results from a sample script run.

Figure 3-40 10 Gigabit Ethernet HyperFlex Edge Cluster hx_edge Script

After you run the script and complete the steps, deploy the installer and run the installation tasks as described in the section “Configuring and Deploying a HyperFlex Edge Cluster (Gigabit Ethernet Only),” earlier in this chapter.

HyperFlex Edge Postinstallation Tasks

To run the postinstallation script (which is the same for both HX Edge Gigabit Ethernet and HX Edge 10 Gigabit Ethernet), do the following:

Step 1. In a web browser, navigate to http://<installer VM IP address>/mssh and log in using the username root and your password.
Step 2. Run post_install.py to start the postinstallation tasks in the web-based SSH window. On execution of the postinstallation script, the following options appear:

Select post_install workflow
1. New/Existing Cluster
2. Expanded Cluster
3. Generate Certificate

Note Workflow 3 is mandatory to have a unique SSL certificate in the cluster. When you generate this certificate, it replaces your current certificate. If you’re performing cluster expansion, this option is not required.

Choose one of the following options, as per the requirement:

1: Choose this option to run the postinstallation script on a newly created cluster or on an existing cluster. When you select this option, the script runs the postinstallation operations on all the nodes in the cluster.
2: Choose this option to run the postinstallation script on expanded nodes or on newly added nodes after executing the expansion workflow. When you select this option, the script fetches the list of expanded nodes and runs the postinstallation operations on the expanded nodes.
3: Choose this option to have a unique SSL certificate in the cluster. When you select this option, the current certificate is replaced with the newly created SSL certificate. This option is not required for cluster expansion.

Step 3. Follow the onscreen prompts to complete the installation. The post_install script completes the following:

Licenses the vCenter host.
Enables high availability/DRS on the cluster per best practices.
Suppresses SSH/shell warnings in vCenter.
Configures vMotion according to best practices.
Adds additional guest VLANs/port groups.
Performs a HyperFlex Edge configuration check.

On successful completion of the post_install workflow, the summary of the configuration executed based on the chosen option is displayed under Cluster Summary.

INSTALLING A HYPERFLEX STRETCH CLUSTER (ESXI)

A HyperFlex stretch cluster enables you to deploy an active/active disaster-avoidance solution for mission-critical workloads requiring high uptime (a near-zero recovery time objective) and no data loss (a zero recovery point objective). As illustrated in Figure 3-41, a stretch cluster is a single cluster with geographically distributed nodes. Both sides of the cluster act as primary for certain user VMs. The data for these VMs is replicated synchronously on the other site. Stretch clusters enable you to access an entire cluster even if one of the sites completely goes down. Typically, these sites are connected with a low-latency, dedicated, high-speed link between them.

Figure 3-41 Stretch Cluster Topology

Preinstallation Checklist

This section covers prerequisites for successfully deploying a HyperFlex stretch cluster. It describes the guidelines to follow when deploying a stretch cluster.

Network Requirements and Network Topology

Before you deploy a stretch cluster, ensure that you meet the following requirements (a quick latency spot check is sketched after this list):

10 Gbps dedicated bandwidth and 5 ms round-trip time (RTT) latency between the two active sites are required.
100 Mbps bandwidth and 200 ms RTT worst-case latency for 16 KB packets between the active sites and the witness site are required.
Existing fabric interconnects are supported, provided that the fabric interconnects support M5 servers.
User VMs should be capable of vMotion to any site, without impacting external network connectivity to these VMs.
As with regular HX clusters, two separate IP subnets are required, both of them over stretch L2: one subnet for data traffic and one for management traffic, with the management subnet reachable from vCenter and the witness node.
The static IP address assigned to the witness VM can be changed only with cluster redeployment. If a DHCP server is used to define the network configuration, the IP address needs to be dedicated to the witness VM.
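A rough spot check of the inter-site RTT can be taken from a controller VM or any host on the stretched management subnet before installation; the addresses below are placeholders for a host at the remote site and the witness site:

# 20 pings to a remote-site host; the average in the rtt summary should stay well under 5 ms
ping -c 20 10.10.10.64

# The witness site can tolerate higher latency (up to 200 ms RTT)
ping -c 20 10.20.30.40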

Figure 3-42 illustrates the stretch cluster network topology.

Figure 3-42 Stretch Cluster Network Topology

Witness Requirements

Consider the following guidelines when deploying a witness VM for a HyperFlex stretch cluster:

Configure and enable NTP on all servers.
An independent third witness site is required.
Both of the main sites must have connectivity to the third witness site, with a minimum bandwidth of 100 Mbps and 200 ms RTT worst-case latency for 16 KB packets.
The witness site must have the capability to deploy and run an Open Virtualization Format (OVF) image.

Network Latency Requirements for the Witness VM

Consider the following network latency requirement guidelines when creating a HyperFlex stretch cluster:

The HyperFlex stretch cluster solution requires that the witness VM be located in a third site to ensure that a storage site failure does not affect the witness VM.
The solution can support a witness VM bandwidth as low as 100 Mbps and 200 ms RTT worst-case latency for 16 KB packets.
Latency to the witness VM impacts site failure times, and it is recommended, for larger clusters with significant load and data, to have RTT times on the order of 10 ms or lower.

Fabric Interconnect and Node Requirements

A HyperFlex stretch cluster requires redundant fabric interconnects and HyperFlex nodes on each site. Consider the following requirements for fabric interconnects and nodes:

Symmetric configuration is required across both sites.
There must be a minimum of two converged nodes on each site.
A maximum of 16 converged nodes on each site is supported. Ensure that both sites have the same number of converged nodes.
Converged nodes have to be M5 nodes.
Ensure that both parts of the fabric interconnect pair are of the same model in the same domain.
Compute-only nodes are supported.

VMware Requirements

VMware Enterprise Plus edition with a HyperFlex stretch cluster is highly recommended to ensure proper failover behavior and guarantee high performance during normal operations. While it is possible to run a stretch cluster without VMware Enterprise Plus edition, the advanced DRS features such as site affinity will not be available, negating some of the intended operational functions of a stretch cluster.

Consider the following requirements for vCenter configuration:

Use a single vCenter for both sites. The vCenter can be a VM running at the same site as the witness.
Nested vCenter is not supported in a stretch cluster.
vCenter must be configured independently for high availability, as required.

ACI Requirements

The supported ACI configurations are as follows:

1 POD and 2 POD (in a multi-POD configuration)
ESXi version 6.5u3
HyperFlex versions 3.5(2f), 3.5(2g), and later
ACI version 4.1(1k)

UCS Manager Requirements

Two separate, manually synchronized fabric interconnect domains are required.

VLAN Requirements

Consider the following VLAN requirements when deploying a HyperFlex stretch cluster:

IP addresses for nodes on both sites are required.
Stretch VLANs across both sites are required.

Stretch Witness

A HyperFlex witness node is mandatory in a stretch cluster environment to achieve quorum in case of total failure in any of the sites or when the network link between the sites encounters a failure. In case of accidental deletion or loss of the witness VM, you can replace the witness VM by contacting Cisco TAC. The witness VM requires a static IP address that cannot be changed without cluster redeployment. If a DHCP server is used to define the network configuration, the IP address needs to be dedicated to the witness VM. Table 3-17 outlines the stretch cluster witness VM network requirements.

Table 3-17 Witness VM Network Requirements

Port Number | Service/Protocol | Source | Port Destinations | Essential Information
2181, 2888, 3888 | Zookeeper/TCP | Witness | Each CVM node | Bidirectional management addresses
8180 | Exhibitor (Zookeeper lifecycle)/TCP | Witness | Each CVM node | Bidirectional management addresses
80 | HTTP/TCP | Witness | Each CVM node | Potential future requirement
443 | HTTP/TCP | Witness | Each CVM node | Potential future requirement
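Reachability of these ports from a controller VM to the witness can be verified with a quick TCP check. This is a sketch that assumes the netcat (nc) utility is available on the controller VM and uses a placeholder witness IP address:

# Test the Zookeeper and Exhibitor ports on the witness VM
for port in 2181 2888 3888 8180; do
  nc -z -v -w 3 10.20.30.40 $port
done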


Deploying the Witness Node

A witness VM must be deployed on an ESXi server that has sufficient hosting capability. The witness VM requires four vCPUs, 8 GB of memory, and 40 GB of disk space. Ensure that the virtual network on this ESXi host is reachable from both of the stretch cluster sites. Download the HyperFlex witness node onto your desktop or a host that runs vSphere Web Client from cisco.com (for example, HyperFlex-Witness-1.0.2.ova). HyperFlex witness node version 1.0.2 is supported in HX Data Platform 3.5(1a) and later. Follow these steps:

Step 1. Log in to vSphere Web Client. Choose the ESXi server where the witness should be deployed. Right-click the ESXi host and select Deploy OVF Template.
Step 2. Browse and select the HyperFlex-Witness.ova file, as shown in Figure 3-43. Click Next.

||||||||||||||||||||

||||||||||||||||||||

Figure 3-43 Deploying a Witness VM OVA

Step 3. Specify a unique name for the witness node in the Virtual Machine Name field. Select a location for the virtual machine from the drop-down list. Click Next.
Step 4. From the Select a compute resource drop-down list, choose the ESXi host where you want to deploy the witness node. Click Next.
Step 5. In the Review details pane, as shown in Figure 3-44, verify the template details and then click Next.

Figure 3-44 Reviewing Details

Step 6. In the Select Storage pane, select the datastore where the virtual machine will reside (and ensure that this datastore has at least 40 GB of available free space). Click Next.
Step 7. In the Select Networks pane, select a destination network port group to which the witness VM must connect. Click Next.
Step 8. On the Customize Template page, complete the fields that are relevant for your configuration (for example, IP address, netmask, default gateway, DNS, NTP). Click Next.
Step 9. On the Ready to complete page (see Figure 3-45), verify all the details entered. Click Finish.
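As an alternative to the vSphere Web Client wizard, the witness OVA can also be deployed with VMware's ovftool. This is a sketch under assumed names (VM name, datastore, port group, vCenter address, and inventory path), not the procedure documented here, and the template properties from Step 8 still need to be supplied (for example, through ovftool --prop: options, whose key names are defined inside the OVA, or via the VM console afterward):

# Deploy the witness OVA to a cluster; everything after vi:// is an assumption
ovftool --acceptAllEulas --name=HX-Witness \
  --datastore=datastore1 --network="VM Network" \
  HyperFlex-Witness-1.0.2.ova \
  "vi://administrator%40vsphere.local@vcenter.example.com/DC1/host/Cluster1"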

Figure 3-45 Customizing the Template and Deploying the OVA

HyperFlex Stretch Cluster Network IP Addressing

IP addresses for a HyperFlex stretch cluster need to be allocated from the appropriate subnets and VLANs. Ensure that the data and management networks are on different subnets for a successful installation. In addition, ensure that the IP addressing is in the same subnet for the same components (for example, management network IP addresses, data network IP addresses). Table 3-18 lists the IP address requirements for deploying a HyperFlex stretch cluster. Tables 3-19 and 3-20 provide examples of stretch cluster IP addressing for two sites.

Table 3-18 Example: Stretch Cluster IP Addressing

Storage cluster management IP address | 10.10.10.128
Subnet mask IP address | 255.255.255.0
Default gateway IP address | 10.10.10.1
Storage cluster data IP address | 192.168.10.160
Subnet mask IP address | 255.255.255.0
Default gateway IP address | 192.168.10.1

Table 3-19 Example: Stretch Cluster IP Addressing for Site A
(Management network IP addresses must be routable; data network IP addresses do not have to be routable.)

ESXi Hostname* | Hypervisor Management Network | Storage Controller Management Network | Hypervisor Data Network (Not Required for Cisco Intersight) | Storage Controller Data Network (Not Required for Cisco Intersight)
Server 1 | 10.10.10.2 | 10.10.10.32 | 192.168.10.2 | 192.168.10.32
Server 2 | 10.10.10.3 | 10.10.10.33 | 192.168.10.3 | 192.168.10.33
Server 3 | 10.10.10.4 | 10.10.10.34 | 192.168.10.4 | 192.168.10.34
Server 4 | 10.10.10.5 | 10.10.10.35 | 192.168.10.5 | 192.168.10.35
Server 5 | 10.10.10.6 | 10.10.10.36 | 192.168.10.6 | 192.168.10.36

Table 3-20 Example: Stretch Cluster IP Addressing for Site B
(Management network IP addresses must be routable; data network IP addresses do not have to be routable.)

ESXi Hostname* | Hypervisor Management Network | Storage Controller Management Network | Hypervisor Data Network (Not Required for Cisco Intersight) | Storage Controller Data Network (Not Required for Cisco Intersight)
Server 1 | 10.10.10.64 | 10.10.10.96 | 192.168.10.64 | 192.168.10.96
Server 2 | 10.10.10.65 | 10.10.10.97 | 192.168.10.65 | 192.168.10.97
Server 3 | 10.10.10.66 | 10.10.10.98 | 192.168.10.66 | 192.168.10.98
Server 4 | 10.10.10.67 | 10.10.10.99 | 192.168.10.67 | 192.168.10.99
Server 5 | 10.10.10.68 | 10.10.10.100 | 192.168.10.68 | 192.168.10.100

HyperFlex Stretch Cluster Guidelines
Consider the following guidelines when creating a HyperFlex stretch cluster:
- HX Data Platform Enterprise Edition licensing is required to use a HyperFlex stretch cluster.
- vSphere Enterprise Plus licensing is required for full-featured HyperFlex stretch cluster functionalities such as VM load balancing and VM migration. Ensure that DRS is enabled.
- Stretch cluster scale: There must be a minimum of two nodes on each site, and the minimum overall cluster size across both sites is 4. For a small form factor (SFF), the maximum cluster size across both sites is 64. The maximum converged node count is 16 per site. The compute-to-converged node ratio can be 2:1, and the maximum limit of nodes per site must not exceed 32. For example, you can have 11 converged and 21 compute nodes per site. For a large form factor (LFF), the maximum cluster size is 48. The maximum converged node count is 8 per site. The compute-to-converged node ratio can be 2:1, and the maximum limit of nodes per site must not exceed 24. For example, you can have 8 converged and 16 compute nodes per site.
- There must be a redundant fabric interconnect configuration on each site.
- There must be a symmetric cluster configuration across both sites. The number of nodes and the model of HX nodes should be the same on both sites.
- VMs are placed correctly within site affinity only if VMware high availability and DRS are enabled before VM creation. Otherwise, the correct placement of VMs for proper affinity is not guaranteed.
- HyperFlex Native Replication is supported between stretch clusters and between stretch clusters and standard clusters.
- It is a best practice to have two datastores, one per site, with the appropriate affinity.

HyperFlex Stretch Cluster Limitations
Consider the following limitations when creating a HyperFlex stretch cluster:
- Self-encrypting drives (SEDs) are not supported. However, VM-based third-party software encryption is supported.
- Overlay networking and L3 protocols are not supported. L2 adjacency is required for the data and management networks.
- Auto failback of VMs after site failure is not supported. You must manually migrate the failover VMs to the site.
- A stretch cluster is not supported on the Hyper-V platform.
- Online rolling upgrades are supported only for the HX Data Platform. Cisco UCS Manager upgrades and VMware ESXi upgrades must be performed manually on one node at a time or performed offline.
- Upgrading from a standalone cluster to a stretch cluster configuration is not supported.
- Stretch clusters are supported only on M5 nodes. Mixed M4/M5 clusters are not supported.
- NVMe converged nodes are not supported.
- The Hardware Acceleration Engine (HX-PCIE-OFFLOAD-1) is not supported.
- A shared witness VM is not supported for stretch cluster deployments.

HyperFlex Stretch Cluster Installation
This section summarizes the steps that are involved in creating a stretch cluster using the HX Data Platform installer. All the configuration for a stretch cluster in this section will be done from a single HX installer. Figure 3-46 gives an overview of the workflow required to deploy a stretch cluster.


Figure 3-46 Stretch Cluster Deployment Workflow Overview
Figure 3-47 summarizes the installation workflow steps that are involved in creating a stretch cluster using the HX Data Platform installer.


Figure 3-47 Stretch Cluster Deployment Workflow Using the HXDP Installer
Follow the workflow outlined in Table 3-21 during installation.

Table 3-21 Stretch Cluster Installation Workflow

Step | Description | Reference
Create the stretch cluster sites: Run the configure site workflow for Site A and Site B. | Log in to the HX Data Platform installer. Enter UCS Manager credentials and hypervisor credentials for both of the sites. | Enter Credentials
 | Configure the server ports and associate HyperFlex servers. | Associate HyperFlex Servers
 | Configure VLAN, MAC pool, and the hx-ext-mgmt IP pool for out-of-band CIMC, in-band CIMC, iSCSI storage, and FC storage. | Configure UCS Manager
 | Configure the hypervisor. | Configure the Hypervisor
 | Download and deploy the witness VM. Note that a witness VM is mandatory in a HyperFlex stretch cluster environment. | Deploying the Witness Node
Create your HyperFlex stretch cluster: Run the create stretch cluster workflow. | Enter UCS Manager credentials for Site A and Site B, as well as vCenter credentials. | Enter Credentials
 | Configure the server ports and associate HyperFlex servers. | Associate HyperFlex Servers
 | Configure IP addresses. | Configure IP Addresses
 | Deploy the HyperFlex stretch cluster. | Deploy a HyperFlex Stretch Cluster

Follow these steps to install a stretch cluster: Step 1. Log on to the HX installer to start the cluster configuration. If the installer is still showing the previous installation’s status, click on the sprocket in the bar above and select Start Over to begin a fresh install. On the Select a Workflow page, select Create Cluster > Stretch Cluster, as shown in Figure 3-48.


Figure 3-48 Selecting the Stretch Cluster Workflow
Step 2. Select Configure Site and enter the UCSM credentials and the site name (see Figure 3-49). Then click Continue.


Figure 3-49 Configuring Site Credentials
Step 3. On the Server Selection page, as shown in Figure 3-50, select the source servers and click Continue. The Configuration pane on the right displays a detailed list of the credentials used. The Server Selection page also displays a list of unassociated HX servers under the Unassociated tab and a list of discovered servers under the Associated tab.


Figure 3-50 Server Selection
Step 4. On the UCSM Configuration page, enter the VLAN ID, VLAN names, MAC pool, the IP pool for in-band or out-of-band CIMC, iSCSI storage, and FC storage (see Figure 3-51). Click Continue.


Figure 3-51 UCSM Configuration
Step 5. In the Hypervisor Settings section of the UCSM Configuration page, provide all the requested information (see Figure 3-52). Then click Configure Site to begin site configuration. The Progress page displays the progress of various configuration tasks.

Figure 3-52 Hypervisor Configuration
Step 6. Confirm that the Site A hypervisor configuration is successful, as illustrated in Figure 3-53.


Figure 3-53 Configuration Success for Site A
Step 7. As illustrated in Figure 3-54, click on the sprocket and select Configure Site to begin the Site B configuration.


Figure 3-54 Configuring Site B
Step 8. Repeat steps 2 through 6 for Site B.
Step 9. Ensure that the witness VM has been deployed successfully. (Refer to the section "Deploying the Witness Node," earlier in this chapter.)
Step 10. To begin configuring the stretch cluster, click the sprocket on the installer and select Create Stretch Cluster.
Step 11. On the credentials screen (see Figure 3-55), provide the source (Site A) and target (Site B) UCSM credentials, the site names, the UCSM org name, and the vCenter and hypervisor credentials. Click Continue to proceed to the Server Selection screen.


Figure 3-55 Configuring Credentials for a Stretch Cluster
Step 12. On the Server Selection page (see Figure 3-56), configure the server ports and associate the HyperFlex servers. Servers from both sites, Site A and Site B (named DC1 and DC2), are shown under Associated Servers.

Figure 3-56 Server Selection: Site A and Site B
Step 13. On the IP Addresses page (see Figure 3-57), configure the IP addresses to be applied to the storage cluster. For each HX node, enter the hypervisor and storage controller IP addresses. For the IP addresses, specify the IP address belonging to the data network, the management network, the cluster IP address for the management and data networks, the subnet mask, and the IP address of the witness VM installed earlier. Click Continue.


Figure 3-57 Configuring IP Address Information
Step 14. On the Cluster Configuration page (see Figure 3-58), configure and deploy the HyperFlex stretch cluster. On this page, you can't change the Replication Factor setting. For a stretch cluster, it's 2+2.

Figure 3-58 Cluster Configuration


Step 15. In the Connected Services section of the Cluster Configuration page (see Figure 3-59), select Enable Connected Services to enable Auto-Support and Intersight management. In Send service ticket notifications to, enter the email address for sending SR notifications when triggered by Auto-Support.

Figure 3-59 Connected Services
Step 16. In the Advanced Networking section of the Cluster Configuration page (see Figure 3-60), complete the management and data VLAN tags for Site A and Site B and enter the management and data vSwitch names.


Figure 3-60 Advanced Configurations
Step 17. In the Advanced Configuration section of the Cluster Configuration page (refer to Figure 3-60), configure the following:
- Jumbo frames: By selecting Enable Jumbo Frames on Data Network, you can set the MTU size for the storage data network on the host vSwitch switches and vNICs, as well as each storage controller VM. The default value is 9000.
- Disk Partitions: If you select Clean up disk partitions, you can remove all existing data and partitions from all nodes added to the storage cluster for manually prepared servers. Select this option to delete existing data and partitions. You must back up any data that should be retained.
- (Optional) vCenter Single-Sign-On Server: This information is required only if the SSO URL is not reachable.
- Virtual Desktop (VDI): Check this option for VDI-only environments.


Step 18. Click Start to begin deploying the stretch cluster. The Progress page displays the progress of the various configuration tasks, as shown in Figure 3-61. When the process is complete, the HyperFlex servers are installed and configured, and you see the message Cluster Creation Successful.

Figure 3-61 Stretch Cluster Creation Progress

HyperFlex Stretch Cluster Postinstallation
After the installation of a stretch cluster using the HX Data Platform installer, run the postinstallation script to


finalize the configuration and set up the vMotion network. You can also run this script at a future time if needed. If you wish to run the post_install.py script later, you can run it directly from any controller VM (CVM) by using the following procedure:
Step 1. Log in to the cluster IP (CIP) over SSH.
Step 2. Run the post_install script, which is available at /usr/share/springpath/storfs-misc/hx-scripts/post_install.py.
Step 3. Follow the prompts and enter the required information.
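As a minimal sketch of that procedure, assuming the example cluster management IP used earlier in this chapter (substitute your own CIP and admin credentials):

# Open an SSH session to the HyperFlex cluster management IP (CIP).
ssh admin@10.10.10.128

# From the controller VM shell, launch the postinstallation script and answer
# the interactive prompts (vMotion VLAN, vCenter details, and so on).
/usr/share/springpath/storfs-misc/hx-scripts/post_install.py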

SUMMARY
This chapter covers all types of installations for HyperFlex clusters, including the following:
- HyperFlex installation prerequisites
- HyperFlex standard cluster deployment
- HyperFlex edge cluster deployment
- HyperFlex stretch cluster deployment
- Postinstallation tasks

This chapter covers all the deployment steps using the


HyperFlex Data Platform on-premises installer. HyperFlex deployment from Intersight is covered in Chapter 7. Chapter 4, "Managing HyperFlex," discusses HyperFlex Connect and how it simplifies Day 2 operations.


Chapter 4
Managing HyperFlex
With Cisco HyperFlex, you get simplified hyperconverged infrastructure management and a choice of tools to best fit your needs. You can manage HyperFlex through HyperFlex Connect, through hypervisor management plug-ins, or with the Cisco Intersight cloud-based platform. This chapter covers management through HyperFlex Connect. The HX Connect user interface provides a view of the HX storage cluster status, components, and features, such as encryption and replication. HyperFlex Connect is an HTML5 web-based GUI tool that runs on all the HX nodes and is accessible via the cluster management IP address.

LOGGING IN TO HYPERFLEX CONNECT
Each login to HX Connect is a session, which is the


period of activity between the time when you log in to HX Connect and when you log out.

Local Access
Predefined local accounts can log in to HyperFlex Connect. The password for the default account is set during cluster creation as the cluster password. Using local access is recommended only when vCenter or SSO credentials are not available. With HyperFlex 3.5 and higher, the HX Connect GUI does not allow root user access; you can log in as an admin user only.

Role-Based Access Control
HyperFlex Connect provides role-based access control (RBAC) via integrated authentication with the vCenter server managing the HyperFlex cluster. Cisco HyperFlex Connect supports two types of RBAC login:
- Administrator: Users with administrator rights in the managing vCenter server have read and modify rights within HyperFlex Connect. These users can make changes to the cluster settings and configuration.
- Read-Only: Users with read-only rights in the managing vCenter server have read rights within HyperFlex Connect. These users cannot make changes to the cluster settings and configuration.

Users can log in to HyperFlex Connect by using vCenter credentials, such as administrator@vsphere.local, or by using vCenter single sign-on (SSO) credentials, such as


domain\user for an Active Directory user. Creation and management of RBAC users and rights must be done via vCenter Web Client or vCenter 6.5 HTML5 vSphere Client. Figure 4-1 shows the Cisco HyperFlex Connect login page.

Figure 4-1 Cisco HyperFlex Connect Login Page
To manage a HyperFlex cluster using HyperFlex Connect, follow these steps:
Step 1. Enter the HX storage cluster management IP


address or FQDN in a browser. Step 2. Enter local credentials, such as admin or vCenter RBAC credentials. Step 3. Click the eye icon to view or hide the password field text. Step 4. To log out of HX Connect and properly close the session, select User > Logout. After a successful login, you see a Dashboard page. You must have administrative privileges to see all the options. Also, it is important to sync the time between vCenter and the controller VMs to avoid AAA authentication failure. Figure 4-2 shows a summary of the pages in Cisco HyperFlex Connect.


Figure 4-2 Summary of Pages in Cisco HyperFlex Connect
Key monitoring pages include information about the local HX storage cluster:
- Dashboard: Overall HX storage cluster status.
- Alarms, Events, Activity: See the Cisco HyperFlex Systems Troubleshooting Guide for details.
- Performance: Charts for IOPS, throughput, latency, and replication network bandwidth.
- System Information: System overview, plus status and tasks for nodes and disks.
- Datastores: Status and tasks related to datastores.
- Virtual Machines: Status and tasks related to protecting virtual machines.

Additional HX Connect pages provide management access:
- Encryption: For data at rest disk and node encryption tasks.
- Replication: For disaster recovery VM protection tasks.
- Upgrade: Provides access to HX Data Platform and Cisco UCS Manager firmware upgrade tasks.

The following sections provide detailed information about these pages.

Dashboard Page


The Dashboard page, which is the first page that you see when you log in to Cisco HyperFlex Connect, shows a status summary of the HX storage cluster. Figure 4-3 shows the Dashboard page.

Figure 4-3 Cisco HyperFlex Connect Dashboard Page
The key parts of this page to understand are as follows:
- Operational status: Provides the functional status of the HX storage cluster and application performance.
- Resiliency health: Provides the data health status and ability of the HX storage cluster to tolerate failures.
- Capacity: Displays a breakdown of the total storage compared to how much storage is used or free, along with storage optimization, compression savings, and deduplication percentages, based on the data stored in the cluster.
- Nodes: Displays the number of nodes in the HX storage cluster and the division of converged and compute nodes. Hovering over a node icon displays that node's name, IP address, node type, and an interactive display of disks with access to capacity, usage, serial number, and disk type data.
- Virtual machines: Displays the total number of virtual machines on this HX storage cluster. This does not include the virtual machines that reside on a remote HX storage cluster.
- Performance: Displays an HX storage cluster performance snapshot for a configurable amount of time, showing IOPS, throughput, and latency data. For full details, see the section "Performance Page," later in this chapter.

Settings Page
The Settings page allows users to configure support, notification, integration, and connection with Cisco Intersight. Figure 4-4 shows the available settings, and the following sections describe the various options.


Figure 4-4 Cisco HyperFlex Connect Settings

Auto-Support Settings
Auto-Support is the alert notification service provided through HX Data Platform. If you enable Auto-Support, notifications are sent from HX Data Platform to designated email addresses or email aliases that you want to receive the notifications. Figure 4-5 shows the Auto-Support Settings page.


Figure 4-5 Cisco HyperFlex Connect Auto-Support Settings Page
The key options to understand are as follows:
- Enable Auto-Support (Recommended): Allows you to enable Auto-Support if it is not enabled during initial configuration.
- Send service ticket notifications to: Allows you to enter the email address to which notifications should be sent.
- Enable Remote Support: Allows Cisco Technical Assistance Center (TAC) to collect information from this cluster to accelerate troubleshooting on reported anomalies.
- Use Proxy Server: Allows you to provide a web proxy server URL, port, username, and password.

Figure 4-6 shows the Web Proxy Settings section of the Auto-Support Settings page.


Figure 4-6 Cisco HyperFlex Connect Web Proxy Settings

Support Bundle Settings
The Support Bundle Settings page lists the already generated Cisco HX Data Platform support bundles and contains an option to generate a new support bundle. A support bundle is a collection of logs from components within and used by HX Data Platform. (The VMware vCenter logs are not collected through HX Connect.) After you generate a support bundle, you can


upload it to the HX Data Platform FTP server for use by Cisco TAC. Figure 4-7 shows the Support Bundle page.

Figure 4-7 Cisco HyperFlex Connect Support Bundle Page
On this page, the Generate section enables you to create a support bundle from the logs of every controller VM and ESXi host in the HX storage cluster. You can download an already-generated support bundle up to midnight local controller VM time. After midnight, the system deletes the support bundle. Details include the filename, size, and date the bundle was generated.

Notifications Settings
The Notification Settings page shows the settings needed to send email notifications for HX storage cluster and ESXi alarms from the Cisco HX Data Platform to Auto-Support. Figure 4-8 shows the


Notifications Settings page.

Figure 4-8 Cisco HyperFlex Connect Notifications Settings Page
To send email notifications for alarms, you need to provide the following information on this page:
- Mail server address
- From address (that is, an email address used to identify your HX storage cluster in support service tickets and as the sender for Auto-Support notifications)
- Recipient list (comma separated)

Kubernetes


The Kubernetes page displays data about Cisco Container Platform Kubernetes clusters. The Cisco HyperFlex Kubernetes storage integration allows HyperFlex to dynamically provide persistent storage to Kubernetes pods running on HyperFlex. The integration enables orchestration of the entire persistent volume object life cycle to be offloaded and managed by HyperFlex, while being driven (initiated) by developers and users through standard Kubernetes persistent volume claim objects. Figure 4-9 shows the Kubernetes settings.

Figure 4-9 Cisco HyperFlex Connect Kubernetes Settings
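To make the claim-driven model concrete, the following is a rough sketch of what a developer-side request could look like once the integration is enabled. The storage class name (hx-csi-sc), claim name, and size are placeholders rather than values from this chapter; the actual class name comes from your Cisco Container Platform/HX CSI setup.

# Create a standard Kubernetes persistent volume claim; the HyperFlex CSI
# integration is expected to satisfy the claim by provisioning a volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hx-csi-sc   # placeholder storage class name
EOF

# Verify that the claim was bound to a dynamically provisioned volume.
kubectl get pvc demo-claim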


Kubernetes storage provisioning and management has three status options:
- Enabled: The HyperFlex cluster can provision and manage Kubernetes storage.
- Partially Enabled: In certain circumstances, a previously enabled cluster may show the status Partially Enabled. This status typically appears after an expansion of the HyperFlex cluster or a change to the required ESXi networking for iSCSI.
- Not Enabled: The HyperFlex cluster cannot provision Kubernetes storage.

Device Connector
Device Connector connects Cisco HyperFlex Connect to Cisco Intersight, a cloud-hosted server management system. It enables Cisco HyperFlex systems to be managed and monitored through Cisco Intersight. Device Connector does not mandate the format of the login credentials; rather, they are passed as is to the configured HTTP proxy server. Whether the username must be qualified with a domain name depends on the configuration of the HTTP proxy server. Figure 4-10 shows the Device Connector page.


Figure 4-10 Cisco HyperFlex Connect Device Connector Settings
To register a device with Cisco Intersight in the cloud, you must connect Cisco HyperFlex Connect with Cisco Intersight by configuring the device connector proxy settings, if they are required. You can then use the device ID and claim code to validate your access to the device from Cisco Intersight and claim the device. By default, Intersight management is enabled.


By clicking the HTTPS Proxy Settings button on the Device Connector page, you reach the Settings page shown in Figure 4-11.

Figure 4-11 Cisco HyperFlex Connect Proxy Settings for Device Connector
To connect with Intersight, HyperFlex needs Internet connectivity. If your devices don't have a direct connection, you might need to provide proxy details. In the Settings page shown in Figure 4-11, you enter HTTPS proxy details manually:
- Off: Select this option if you want to disable the HTTPS proxy settings configuration. This is the default HTTPS proxy setting.
- Manual: Select this option if you want to enable the HTTPS proxy settings configuration.
- Proxy Hostname/IP: Enter the proxy hostname or IP address.
- Proxy Port: Enter the proxy port number.
- Authentication: Enable this option to authenticate access to the proxy server. Enter the username and password to authenticate access.

Alarms Page
The Alarms page displays a list of HX storage cluster and ESXi alarms raised by HyperFlex Data Platform. You can acknowledge an alarm and reset it to green. Figure 4-12 shows the Alarms page, which includes the following information:


Figure 4-12 Cisco HyperFlex Connect Alarms
- Severity: The options in this column are Warning, Error, Information, and Critical.
- Source: The component where the alarm is triggered.
- Description: Details about the alarm.
- Time: The time when the alarm was triggered.
- Acknowledged: The time when the alarm was acknowledged.
- Acknowledged By: The username of the person who acknowledged the alarm.


When you select an alarm from the Alarms page, the following options at the top of the list are enabled:
- Acknowledge: If you want to take ownership of an issue and work on it, you can select the appropriate alarm and click Acknowledge. Alarms are neither cleared nor reset when acknowledged.
- Reset to green: An alarm triggered by an event might not reset to a normal state if vCenter Server does not retrieve the event that identifies the normal condition. In such a case, you can click Reset to green to manually return it to a normal state.

Events Page
The Events page displays information about state changes, such as user actions and system actions, that occur on the HX storage cluster, hosts, or datastores. Examples of state changes include adding a node to the HX storage cluster, removing a node from the HX storage cluster, and reconfiguring a VM resource.
Note: You can export only one page, or 100 events, at a time. Click Newer or Older to display more events and then export again.

Figure 4-13 shows the events and the actions you can take on this page:


Figure 4-13 Cisco HyperFlex Connect Events
- Events: Provides the type, description, timestamp when the event occurred, name of the target, user of the resource for the event, link to vSphere storage cluster events, and event details. The following target and event types are possible:
  - Target types: Storage Cluster, Host, Datastore, Disk
  - Event types: Warning, Error, Information, Critical
- Download: Downloads the events report in the format you select: csv or doc.
- Filter: Allows you to filter the displayed events. Available filters include the following:
  - All: Shows all events
  - Info: Shows events with the severity Info
  - Warning: Shows events with the severity Warning
  - Error: Shows events with the severity Error
  - Critical: Shows events with the severity Critical

Activity Page
The Activity page displays a list of recent activities on the HX storage cluster, gathered from vSphere. You can monitor the progress of recent tasks on the HX storage cluster (for example, ReadyClones, VM Power On/Off, and Enter/Exit Maintenance Mode). Figure 4-14 shows the Activity page.


Figure 4-14 Cisco HyperFlex Connect Activity Page
The list of recent tasks on the Activity page includes the following details:
- ID
- Description
- VM power on/off/suspend status
- Task status, with possible values In Progress, Success, and Failed

For failed VM power operations, the Existing State and Required State fields are also included, with a date and timestamp and a progress bar.

Performance Page
The Monitor Performance tab of the Performance page


shows the read and write performance of the storage cluster, hosts, and datastores (see Figure 4-15).

Figure 4-15 Cisco HyperFlex Connect Performance Page
You can select an object from the menu to display associated performance charts:
- Cluster: Displays the combined performance of all hosts in the cluster. This is the default.
- Nodes > node_name: Displays the performance of a selected node.
- Datastores > datastore_name: Displays the performance of a selected datastore.

Figure 4-16 shows examples of performance charts.

Figure 4-16 Cisco HyperFlex Connect Performance Charts
The screen in Figure 4-16 includes the following parts:
- IOPS chart: The number of input and output operations per second performed at a point in time.
- Throughput (MBps) chart: The rate of data transfer in the storage cluster, measured in megabytes per second (MBps).
- Latency (msec) chart: How long it takes for a single I/O request to complete. This is the time between issuing a request and receiving a response, measured in milliseconds (msec).

This page displays the past one hour of performance data, plotted in 20-second intervals. You can hover your mouse over individual data points to view peak performance information and timestamps. Light blue indicates write operations, and dark blue indicates read operations.

System Information Page
The System Information page displays HX storage cluster system-related information, including node and disk data, and provides access to HX maintenance mode (see Figure 4-17).


Figure 4-17 Cisco HyperFlex Connect System Information Page
This page displays basic configuration information for a HyperFlex storage cluster:
- vCenter (link): Provides a link to remotely access vSphere Web Client.
- Hypervisor: Indicates the hypervisor version installed on this HX storage cluster.
- HXDP Version: Indicates the installer package version installed on this HX storage cluster.
- Data Replication Factor: Indicates the number of the redundant data replicas stored on this HX storage cluster. For more detail, see Chapter 6, "Disaster Recovery."
- Uptime: Shows how long this cluster has been up and online.
- Total Capacity: Indicates the overall storage size of this cluster.
- Available Capacity: Indicates the amount of free storage in this cluster.
- DNS Server(s): Indicates the IP address(es) for the DNS server(s) for this HX storage cluster.
- NTP Server(s): Indicates the IP address(es) for the NTP server(s) for this HX storage cluster.
- HX storage cluster status: Provides the functional status of the HX storage cluster, which can be any of the following:
  - Online: Cluster is ready.
  - Offline: Cluster is not ready.
  - Read Only: Cluster is out of space.
  - Unknown: Transitional state while the cluster is coming online.

Controller VM Access
You can access the controller VM by using SSH as an administrator. To enable SSH access, click Actions > Enable Controller Access over SSH, as shown in Figure 4-18.


Figure 4-18 Cisco HyperFlex Connect System Information: Enabling Controller VM Access

Nodes Page
The Nodes page displays data about all the nodes in the HX storage cluster (see Figure 4-19). It shows the node name, hypervisor, and controller address, along with the status, hardware model, HyperFlex Data Platform installer package version, and number of disks for each node. You can sort the data by column.

Figure 4-19 Cisco HyperFlex Connect System Information: Nodes
In some cases, the HX nodes may need to go through a maintenance window. This may be the case, for example, when replacing disks or making modifications to the UCS service profiles that require reboot. The HX maintenance can be performed with no disruption to the HyperFlex workloads. The maintenance operation can be performed on a single node, on multiple nodes, or on all nodes in the cluster. When you apply changes to all nodes in the cluster, the changes are applied sequentially, through a rolling maintenance operation. You can select a node and click on Enter HX Maintenance Mode to open the Confirm HX Maintenance Mode dialog box.
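The same operation can also be driven from the storage controller VM CLI. The following is a hedged sketch: the node IP is a placeholder, and the exact option names can differ slightly between HX Data Platform releases, so verify them with stcli node maintenanceMode --help on your cluster.

# Place a single node into HX maintenance mode before disruptive work (placeholder IP).
stcli node maintenanceMode --ip 10.10.10.2 --mode enter

# ...perform the disk replacement, firmware change, or reboot...

# Return the node to service when the work is complete.
stcli node maintenanceMode --ip 10.10.10.2 --mode exit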

Disks Page
The Disks page displays data about all the disks in the HX storage cluster (see Figure 4-20). The seven-column table on this page allows you to sort the data by column.


Figure 4-20 Cisco HyperFlex Connect System Information: Disks
The important information to focus on in this screen is as follows:
- Node: The name of the node where the disk resides.
- Slot: The location of the drive. This identifies the drive for maintenance procedures.
- Capacity: The total disk size.
- Status: The following status options are available:
  - Available: The initial state for a newly added, data-at-rest–capable disk. Also, a transitional state when disks move into one of the other states.
  - Blacklisted: The state when a disk is not being consumed by the cluster due to either a software error or an I/O error. This could be a transitional state while the cluster attempts to repair the disk, if the disk is still available, before the state transitions to Repairing.
  - Repairing: The state when a blacklisted disk is being repaired.
  - Claimed: The state when a disk is recognized and in use.
  - Ignored: The state when a disk is not being consumed by the cluster (for example, the HX controller VM system disk, a disk with other data [valid file system partitions], or a disk where I/O is failing).
  - Ok To Remove: The state when a self-encrypting drive (SED) has been securely erased using the Secure Erase option and can be removed safely.
  - To Be Removed: The state when a disk is scheduled for RMA.
  - Removed: The state when an SED is removed after the Secure Erase option has been used.
- Encrypted status: Whether the disk is encrypted. The status is enabled if encryption is configured for this data-at-rest–capable disk; otherwise, it is disabled. It may also be Unknown when the SED is removed.
- Type: Whether the disk is Rotational (a hybrid drive) or Solid State (an SSD).
- Usage: How the disk is used. Options are Cache and Persistent. Hybrid nodes use solid-state drives (SSDs) for caching and hard-disk drives (HDDs) for capacity; all flash nodes use fast SSDs or Nonvolatile Memory Express (NVMe) storage for caching and SSDs for capacity.
- Turn on Locator LED and Turn Off Locator LED: This option allows activation or deactivation of a beacon LED on the host to help locate the disk.
- Secure erase: (Optional) This option is only for a HyperFlex storage cluster that is encrypted using local key encryption.

Datastores Page
Datastores are logical containers, like file systems, that hide specifics of physical storage and provide a uniform model for storing VM files. You can also use datastores to store ISO images and VM templates. Figure 4-21 shows the Datastores page.

Figure 4-21 Cisco HyperFlex Connect Datastores Page
The Datastores page enables users to create, edit, delete, mount, and unmount datastores. The important information to focus on in this screen is as follows:
- Name: The datastore name, which is unique for a given HX storage cluster.
- Mount Summary: Basic information about the mount, which can be Mounted when the datastore is successfully mounted on all the nodes in the HX storage cluster or Unmounted if it is not mounted.
- Pairing Status: The pairing status of the datastore, which is Remote_datastore when it is paired and Unpaired otherwise.
- Status: The status of the datastore, which is Normal when the datastore is mounted on all nodes in this HX storage cluster, Partial when the datastore is available but not mounted on this HX storage cluster, or Warning for any error condition.
- Size: The size of the datastore, which is thin provisioned. The minimum size of the datastore is 1 GB. Ensure that the size is sufficient to support the virtual machines in the HX storage cluster.
- Used: The portion of the allocated datastore that is consumed by virtual machines and related files.
- Free: The amount of the allocated datastore that is available for use by this HX storage cluster.
- Create Datastore: This option allows you to create a datastore, for which you will need to provide the name, size, units of the datastore, and block size (see Figure 4-22; a CLI alternative is sketched after that figure).


Figure 4-22 Creating a Datastore
You can also edit or delete an existing datastore from the Datastores page.
Note: If you want to rename a datastore, do it only from the HX Connect GUI.
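As a hedged illustration of that CLI alternative, a datastore can also be created from the storage controller VM command line. The datastore name and size below are placeholders, and the exact flags should be confirmed with stcli datastore create --help on your HX Data Platform version:

# Create a thin-provisioned datastore named DS1 of 1 TB (name and size are examples).
stcli datastore create --name DS1 --size 1 --unit tb

# List datastores to confirm that DS1 was created and mounted.
stcli datastore list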

Virtual Machines Page
From the Virtual Machines page, you can perform virtual machine–related tasks such as creating ReadyClones, changing virtual machine power settings, and configuring virtual machine protection through replication (see Figure 4-23).


Figure 4-23 Cisco HyperFlex Connect Virtual Machines Page
With the replication feature you can protect virtual machines by taking replication snapshots at each configured interval. The snapshots are replicated to or from the remote cluster configured in the replication pair. This allows the virtual machines to remain active during replication. The important information to focus on in this screen is as follows:
- Name: The virtual machine name. Click anywhere in the row to select the virtual machine and enable tasks on that virtual machine. Multiple selection enables group tasks for the virtual machines.
- Status: The power state of the virtual machine, which can be Powered On, Suspended, or Powered Off.
- IP Address: If configured, the IP address for network access by the virtual machine.
- Guest OS: If installed, the operating system version on the virtual machine.
- Protection Status: Whether replication is applied to the virtual machine. These are the options:
  - N/A: Protection cannot be applied to any virtual machine or protection group because the replication network and replication pair are not configured.
  - Protected: A replication schedule has been applied to this virtual machine. If protected through a protection group, the protection group name is shown in parentheses. If protected individually, only the word Protected is listed.
  - Unprotected: No replication schedule has been configured for this virtual machine.
- Storage Provisioned: The amount of storage configured for use by the virtual machine. The total provisioned on all the virtual machines combined can exceed the amount available through the HX storage cluster.
- Storage Used: The amount of storage currently consumed by the virtual machine. The total consumed storage on all the virtual machines combined does not exceed the amount available through the HX storage cluster.
- Ready Clones: This option allows you to rapidly create and customize multiple cloned VMs from a host VM. It lets you create multiple copies of VMs that can then be used as standalone VMs. Clones are useful when you deploy many identical VMs to a group.


Upgrade Page
Cisco HyperFlex systems have several components that can be upgraded, depending on the environment. The core components in a HyperFlex system are as follows:
- Cisco UCS server firmware (the UCS C-bundle, which consists of UCS server, BIOS, CIMC, NIC, and so on)
- Cisco HX Data Platform software
- VMware ESXi software

Cisco HyperFlex upgrade deployment can be performed in either online mode or offline mode. You can choose the type of upgrade you want to perform based on your environment. Figure 4-24 shows the upgrade options.


Figure 4-24 Options for Upgrading HyperFlex Clusters

UCS Server Firmware
To upgrade the UCS server firmware, you need to provide the UCS Manager details and click Discover. When you do, HyperFlex Connect populates the M5 Desired Version drop-down, and you can select the version you want to upgrade to. Figure 4-25 shows a UCS Server Firmware upgrade page.

Figure 4-25 Options for Upgrading HyperFlex Clusters

HX Data Platform Upgrade
When you select the HX Data Platform option, you see the current version and can provide an HX file bundle for the version that you want to upgrade to (see Figure 4-26).

Figure 4-26 HX Data Platform Upgrade

ESXi Upgrade
When you select the ESXi option, you see the current version of ESXi and can specify the ESXi bundle that you want to upgrade to (see Figure 4-27).

Figure 4-27 ESXi Upgrade


Progress Page
The Validation screen shows the progress of the checks performed. If validation fails, you need to fix the validation errors. If there are no errors, you can confirm to complete the upgrade. For detailed upgrade information, see Chapter 5, "Maintaining HyperFlex."

Web CLI Page
You can monitor and manage a storage cluster by using the Cisco HX Data Platform command-line interface (CLI). The web CLI allows you to run stcli (the HX Data Platform storage controller VM command line) through HX Connect. Figure 4-28 shows the screen from which to run commands through the web CLI.


Figure 4-28 Cisco HyperFlex Connect: Running Commands Through the Web CLI
You can use the command line to run direct commands that complete in a single pass and do not require responses (for example, stcli cluster info). Only direct commands are supported through HX Connect. To run interactive commands, you need to log in to the storage controller VM command line.
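A couple of read-only commands of this kind are shown below as a brief sketch (output is omitted, and command availability can vary slightly by HX Data Platform release):

# Show cluster configuration and membership details.
stcli cluster info

# Show a capacity and health summary for the storage cluster.
stcli cluster storage-summary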


REPLICATION
HyperFlex Connect can be used to monitor the status of the protected VMs and to initiate VM migrations, test recoveries, or failovers. It provides native replication and disaster recovery between VMs on a pair of connected HyperFlex clusters that reside in two different vCenter clusters, usually at two separate data centers connected via WAN, VPN, L2 over L3, or dark fiber. Each cluster acts as a disaster recovery site for the VMs running on the other cluster and protects virtual machines by replicating them to the peer cluster for disaster recovery. The Replication page is available under Protect. You can configure the replication network and a cluster pairing from this page (see Figure 4-29).


Figure 4-29 Cisco HyperFlex Connect: Configuring a Network for Replication
When configuring a replication network, you need to provide the VLAN ID for the replication network, the replication subnet in CIDR notation (that is, a.b.c.d/n), the gateway IP address for the subnet, starting and ending IP addresses for the range that will be added to the pool assigned to the SCVMs, outbound bandwidth limits between 10 and 100,000 Mbps, and the MTU. Figure 4-30 shows the screen from which you can configure IP addresses and bandwidth.


Figure 4-30 Cisco HyperFlex Connect: Configure IP & Bandwidth Page

Replication Pairing
Two HyperFlex clusters that will be able to replicate VMs to each other must first be paired before the replication can begin, and a datastore on each cluster must also be paired. Prior to pairing, the replication networking on both clusters must be configured, and datastores must have been created on both clusters. Each HyperFlex cluster can be paired with only one


other cluster, and each datastore can be paired with only one other datastore. The snapshots and VMs on the source cluster, in their source datastore, will be replicated to the paired cluster and stored in the paired datastore. You must know the administrative login credentials of the remote cluster and the remote cluster's management IP address in order to proceed. When configuring a replication pair, you need to provide the name, the cluster management IP address or FQDN for the remote cluster, the username, and the password. The username and password must have admin rights in the vCenter server managing the remote cluster. Once pairing is done, you can map a local datastore with a remote datastore. Figure 4-31 shows the screen you can use to configure remote cluster information for a replication pair.


Figure 4-31 Cisco HyperFlex Connect: Configuring a Remote Cluster
Once a replication pair is established and datastores are mapped to each other across two HX clusters, VM protection can be configured. VMs can be protected individually, or they can be added to a new or existing protection group. For more detail, refer to Chapter 6.

ENCRYPTION
HX data-at-rest security is accomplished through SEDs and is managed through Cisco HX Connect in conjunction with UCSM and local or remote key stores using Key Management Interoperability Protocol (KMIP).


SEDs provide native data-at-rest encryption. All qualified disks are FIPS 140-2 Level 2 validated components for data-at-rest encryption. The hardware encryption is built in; a media encryption key controls this encryption and decryption. The performance is comparable to that of a non-SED system and is transparent to data optimization functions (such as deduplication and compression). If a HyperFlex cluster has SEDs and is encryption capable, Data-at-Rest Encryption Capable appears at the top of the Summary tab. Figure 4-32 shows the screen you use to configure encryption.


Figure 4-32 Cisco HyperFlex Connect: Configuring Encryption
When you click the Configure encryption button, you are asked to provide the UCSM details. Figure 4-33 shows the UCSM credentials required to start encryption.


Figure 4-33 UCSM Credentials to Start the Configuration Process
The encryption keys associated with an SED implementation are as follows:
- Media encryption key: Data on an SED is always stored in encrypted form, and a media encryption key controls this encryption and decryption. This key is never stored in the processor or memory.
- Key-encryption key: This key, also known as a security key, is used to encrypt the media encryption key. To enable an SED, you must provide a security key.

Cisco HyperFlex supports both local and remote security keys. When you configure a key locally, you must


remember the key. If you forget the key, you cannot retrieve it, and you will lose the data if the drive power cycles. You can configure the key remotely by using a key management server (also known as a KMIP server). This method addresses the issues related to safekeeping and retrieval of the keys in the local manager. You can configure a server with SEDs in a cluster to use one of the following certificate options:
- Use certificate authority signed certificates: Generate certificate signing requests (CSRs) signed by an external certificate authority.
- Use self-signed certificates: Generate self-signed certificates.

Figure 4-34 shows the options for encrypting a cluster.


Figure 4-34 Selecting a Cluster Encryption Method
If you are self-signing, you need to specify a local certificate authority in the configuration and get a root certificate. If you are using a trusted third-party CA, you need to specify that in the configuration and use that CA's root certificate. Figure 4-35 shows the screen for CSR configuration for encryption. You need to provide an email address, an organization name and unit name, a locality, a state, and a country to generate a certificate. You can download


certificates to get them signed by a certificate authority.

Figure 4-35 Certificate Attributes to Generate Certificates
For a local key, you need to provide a passphrase to generate certificates. Figure 4-36 shows the local key configuration for encryption.


Figure 4-36 Specifying a Passphrase to Enable Encryption
If you need to regenerate the authentication keys in a cluster for security purposes, you can do so at any time by using the rekey operation; this operation does not destroy the existing data.

SUMMARY
Cisco HyperFlex Connect provides access to a HyperFlex storage cluster. The Dashboard page allows you to view overall cluster health and operational status, cluster


storage–related configuration, and IOPS. You can also manage system information, datastores, and VMs from HX Connect. HyperFlex Connect provides additional monitoring capabilities, including alarms, event logs, and activity logs. You can analyze the historical and current performance of a HyperFlex cluster by using the built-in performance charts. You can upgrade HX DP software and Cisco UCS firmware as well. Cisco HyperFlex Connect allows you to configure data protection features, including VM replication and data-at-rest encryption. Chapter 5, "Maintaining HyperFlex," discusses HyperFlex licensing, virtual machine management, scaling HyperFlex clusters, hardware replacement, and HyperFlex upgrades.


Chapter 5
Maintaining HyperFlex
Cisco HyperFlex Data Platform (HX Data Platform) is a hyperconverged software appliance that transforms Cisco servers into a single pool of compute and storage resources. It eliminates the need for network storage and enables seamless interoperability between computing and storage in virtual environments. Cisco HX Data Platform provides a highly fault-tolerant distributed storage system that preserves data integrity and optimizes performance for virtual machine (VM) storage workloads. In addition, native compression and deduplication reduce the storage space occupied by the VMs and the VM workloads. Cisco HX Data Platform has many integrated components, including Cisco fabric interconnects (FIs), Cisco UCS Manager, Cisco HX-specific servers, and


Cisco compute-only servers; VMware vSphere, ESXi servers, and vCenter; and the Cisco HX Data Platform installer, controller VMs, HX Connect, vSphere HX Data Platform plug-in, and stcli commands. This chapter provides details on managing a HyperFlex system and how to perform Day 2 operations on a HyperFlex cluster. It covers the following:
- HyperFlex licensing
- Virtual machine management
  - Snapshots
  - ReadyClones
  - Datastores
- Scaling HyperFlex clusters
  - Node expansion (converged node and compute node)
  - Node removal (converged node and compute node)
  - Increasing storage capacity (by adding drives)
- Hardware (disk) replacement
  - Replacing SSDs
  - Replacing NVMe SSDs
  - Replacing housekeeping SSDs
  - Replacing or adding HDDs
- HyperFlex software upgrades
  - Pre-upgrade tasks
  - Upgrading UCS Server, ESXi, and HX Data Platform

HYPERFLEX LICENSING
This section describes Smart Licensing in HyperFlex. Cisco Smart Software Licensing (Smart Licensing) is a cloud-based software license management solution that automates time-consuming manual licensing tasks, such as procuring, deploying, and managing licenses across an entire organization. The software allows for easy tracking of the status of license and software usage trends and simplifies the three core licensing functions: purchasing, management, and reporting. It provides visibility into your license ownership and consumption so you know what you own and how you are using it. The Smart Licensing feature integrates with Cisco HyperFlex and is automatically enabled as soon as you create an HX storage cluster. For an HX storage cluster to start reporting license consumption, you must register it with Cisco Smart Software Manager (SSM) through your Cisco Smart Account. A Smart Account is a cloud-based repository that provides full visibility and access control to Cisco software licenses and product instances across your company. Registration is valid for one year.


Smart Account registration enables HyperFlex to be identified to a Smart Account and allows license usage to be reported to Cisco Smart Software Manager or a Smart Software Manager satellite. After registration, HyperFlex reports license usage to Cisco Smart Software Manager or a Smart Software Manager satellite with the current license status.

Registering a Cluster with Smart Licensing
Smart Licensing automatically integrates with your HX storage cluster and is enabled by default. Your HX storage cluster is initially unregistered with Smart Licensing and in a 90-day EVAL MODE. Within the 90 days, you need to register your HX storage cluster to use full functionality. Figure 5-1 shows the Smart Licensing user workflow.


Figure 5-1 Smart Licensing User Workflow

Note: In order to begin using Smart Licensing, you need to have a Cisco Smart Account. You can create (or select) a Smart Account while placing an order, or you can create a Smart Account outside of placing an order and add new or existing licenses over time. To create a Smart Account, go to Cisco Software Central (https://software.cisco.com/) and click Get a Smart Account.

Creating a Registration Token
A registration token is used to register and consume a product for Smart Licensing. You must create a token to register the product and add the product instance to a specified virtual account. Follow these steps:
Step 1. Log in to the software manager at https://software.cisco.com/.
Step 2. In the License section, click Smart Software Licensing, as shown in Figure 5-2.


Figure 5-2 Cisco Software Central: License Section
Step 3. Under Smart Software Licensing, click Inventory.
Step 4. From the virtual account where you want to register your HX storage cluster, click the General tab and then click New Token (see Figure 5-3). The Create Registration Token dialog box appears.


Figure 5-3 Smart Software Licensing: Inventory Page
Step 5. In the Create Registration Token dialog box (see Figure 5-4), do the following:
a. Add a short description for the token.
b. Enter the number of days you want the token to be active and available to use on other products. The maximum is 365 days.
c. Check Allow export-controlled functionality on the products registered with this token.


d. Click Create Token.

Figure 5-4 Creating a Registration Token

Note In this case, I set the max number of uses to 1, so that I am the only one who can use this token.

As shown in Figure 5-5, the new token shows up under the list of tokens with the expiration date, number of uses, and user who created it.


Figure 5-5 New Token Step 6. Select the token and copy it to the clipboard (see Figure 5-6).

Figure 5-6 Copying a Token to the Clipboard

Registering a Cluster with Smart Software Licensing Through a Controller VM

This section covers an alternative method of registering a cluster with Smart Software Licensing through a controller VM. Follow these steps:

Step 1. Log in to a controller VM.

Step 2. Confirm that your HX storage cluster is in Smart Licensing mode by entering the following command: # stcli license show status

As shown in Figure 5-7, the output should show that Smart Licensing is ENABLED, that the status is UNREGISTERED, and the amount of time left in the 90-day evaluation period (in days, hours, minutes, and seconds). The Smart Licensing evaluation period starts when the HX storage cluster begins using the licensing feature and is not renewable. When the evaluation period expires, the Smart Agent sends a notification.

Figure 5-7 License Status Before Registration Step 3. Register your HX storage cluster by using the command stcli license register --idtoken idtoken-string, where idtoken-string is the new

ID token from Cisco Smart Software Manager or a Smart Software Manager satellite (see Figure 5-8). For more information on how to create a token for product instance registration, see the section “Creating a Registration Token,” earlier in this chapter.

Figure 5-8 Getting a Registration License Step 4. Confirm that your HX storage cluster is registered by using the stcli license show summary command, as demonstrated in Figure 5-9.

Figure 5-9 Confirming the Registration License
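The registration workflow shown in Figures 5-7 through 5-9 reduces to a short CLI session on any controller VM. The following is a minimal sketch that uses only the commands covered in this section; the token string is a placeholder for the registration token you created in Cisco Smart Software Manager:

# stcli license show status
(Verify that Smart Licensing is ENABLED and the status is UNREGISTERED.)
# stcli license register --idtoken idtoken-string
(Paste the token copied from your Smart Software Manager virtual account in place of idtoken-string.)
# stcli license show summary
(Confirm that the cluster now reports as registered and that license usage is being reported.)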

VIRTUAL MACHINE MANAGEMENT

Cisco HyperFlex provides native virtual machine management features such as HX snapshots, ReadyClones, and datastore management. This section discusses the concepts, functionality, best practices, and configuration of these features in detail.

HX Data Platform Native Snapshots Overview

HX Data Platform Native Snapshots is a backup feature that saves versions (states) of working VMs. A native snapshot is a reproduction of a VM that includes the state of the data on all VM disks and the VM power state (on, off, or suspended) at the time the native snapshot is taken. You can take a native snapshot to save the current state of a VM so that you can later revert to the saved state. You can use the HX Data Platform plug-in to take native snapshots of your VMs. The HX Data Platform Native Snapshot options include creating a native snapshot, reverting to any native snapshot, and deleting a native snapshot. Timing options include hourly, daily, and weekly, all in 15-minute increments.

Benefits of HX Data Platform Native Snapshots

HX Data Platform native snapshots provide the following benefits:

Reverting registered VMs: If a VM is registered, whether powered on or powered off, native snapshots, just like VM snapshots, can be used to revert to an earlier point in time (that is, the time when the snapshot was created).

High performance: The HX Data Platform native snapshot process is fast because it does not incur I/O overhead.

VM performance: HX Data Platform native snapshots do not degrade VM performance.

Crash consistent: HX Data Platform native snapshots are crash consistent by default; this means the correct order of write operations is preserved, to enable an application to restart properly from a crash.

Application consistent: You can select the quiesce option of the stcli vm snapshot command through the HX Data Platform CLI to enable HX Data Platform native snapshots to be application consistent. The applications in the guest VM run transparently, exactly as they do in the host VM. Quiescing a file system involves bringing the on-disk data of a physical or virtual computer into a state suitable for backups. This process might include operations such as flushing dirty buffers from the operating system's in-memory cache to disk, as well as other higher-level application-specific tasks.

Scheduled snapshots are tolerant to node failures: Scheduled snapshots are tolerant to administrative operations that require a node shutdown, such as HX maintenance mode and HX online upgrades.

Unified interface: You can manage native snapshots created through the HX Data Platform plug-in by using the VMware snapshot manager.

Individual or grouped: You can take native snapshots on a VM level, VM folder level, or resource pool level.

Granular progress and error reporting: These monitoring tasks can be performed at task level for the resource pool, folder level, and VM level.

Instantaneous snapshot delete: Deletion of a snapshot and consolidation always occur instantaneously.

Parallel batch snapshots: HX supports up to 255 VMs in a resource pool or folder for parallel batched snapshots.

VDI deployment support: HX scheduled snapshots are supported for desktop VMs on VDI deployments using VMware native technology.

Recoverable VM: The VM is always recoverable when there are snapshot failures.

Datastore access: Snapshots work on partially mounted/accessible datastores as long as the VM being snapshotted is on an accessible mountpoint.

Native Snapshot Considerations

Some snapshot parameters to consider are as follows:

Native snapshots: After you create the first native snapshot using the HX Data Platform plug-in, if you create more snapshots in vSphere Web Client, these are considered to be native as well. However, if you create the first snapshot using vSphere Web Client and not the HX Data Platform plug-in, the vSphere Web Client snapshots are considered to be non-native.

Maximum number of stored snapshots: Currently VMware has a limitation of 31 snapshots per VM. This maximum total includes VMware-created snapshots, the HX Data Platform SENTINEL snapshot, and HX Data Platform native snapshots.

Scheduled snapshots: Do not schedule overlapping snapshots on VMs and their resource pools.

Deleted VMs: The life cycle of native snapshots, as with VM snapshots, is tied to the virtual machine. If a VM is deleted, accidentally or intentionally, all associated snapshots are also deleted. Snapshots do not provide a mechanism to recover from a deleted VM. Use a backup solution to protect against VM deletion.

HX Data Platform storage controller VMs: You cannot schedule snapshots for storage controller VMs.

Non-HX Data Platform VMs: Snapshots fail for any VM that is not on an HX Data Platform datastore. This applies to snapshots on a VM level, VM folder level, or resource pool level. To make a snapshot, the VM must reside on an HX Data Platform datastore in an HX Data Platform storage cluster.

Suspended VMs: Creating the first native snapshot, the SENTINEL snapshot, from VMs in suspended state is not supported.

VM size: The maximum size of a VM that a HyperFlex snapshot can take depends on the maximum size of the individual virtual machine disk (VMDK), the maximum number of attached disks, and the overall size of the VM.

VM name: The VM name must be unique per vCenter for taking a snapshot.

Ready storage cluster: To allow a native snapshot, the storage cluster must be healthy, including sufficient space, and online. The datastores must be accessible. The VMs must be valid and not in a transient state, such as in process of vMotion.

vMotion: vMotion is supported on VMs with native snapshots.

Storage vMotion: Storage vMotion is not supported on VMs with native snapshots. If a VM needs to be moved to a different datastore, delete the snapshots before running Storage vMotion.

VM datastores: Ensure that all the VM (VMDK) disks are on the same datastore prior to creating native snapshots. This applies to snapshots created with HX Snapshot Now and snapshots created with HX Scheduled Snapshots.

Thick disks: If the source disk is thick, then the snapshot of the VM's disk will also be thick. Increase the datastore size to accommodate the snapshot.

Virtual disk types: VMware supports a variety of virtual disk backing types. The most common is the FlatVer2 format. Native snapshots are supported for this format. There are other virtual disk formats, such as Raw Device Mapping (RDM), SeSparse, and VmfsSparse (Redlog format). VMs containing virtual disks of these formats are not supported for native snapshots.

Native Snapshot Best Practices

Always use the HX Data Platform Snapshot feature to create your first snapshot of a VM. This ensures that all subsequent snapshots are in native format. Here are some additional recommended best practices:

Do not use the VMware Snapshot feature to create your first snapshot. VMware snapshots use redo log technology that results in degraded performance of the original VM. This performance degrades further with each additional snapshot. Native format snapshots do not impact VM performance after the initial native snapshot is created.

If you have any redo log snapshots, on the ESXi hosts where the redo log snapshots reside, edit the /etc/vmware/config file and set snapshot.asyncConsolidate="TRUE".

Add all the VMDKs to the VM prior to creating the first snapshot. When VMDKs are added to the VM, additional SENTINEL snapshots are taken. Each additional SENTINEL consumes space for additional snapshots. For example, if you have an existing VM and you add two new VMDKs, at the next scheduled snapshot, one new SENTINEL is created. Check the snapshot schedule retention number to be sure you have sufficient snapshot slots available: one for the new SENTINEL and one for the snapshot.

When creating large numbers of snapshots, consider the following: Schedule the snapshots at a time when you expect data traffic might be low. Use multiple resource pools or VM folders to group VMs rather than using a single resource pool or VM folder. Then stagger the snapshot schedule by group. For example, for resourcePool1 schedule snapshots at :00, for resourcePool2 schedule snapshots at :15, and for resourcePool3 schedule snapshots at :30.

If you have vCenter running on a VM in the storage cluster, do not take a native snapshot of the vCenter VM.

Understanding SENTINEL Snapshots

When you create the first snapshot of a VM, through either Snapshot Now or Scheduled Snapshot, the HX Data Platform plug-in creates a base snapshot called a SENTINEL snapshot. The SENTINEL snapshot ensures that follow-on snapshots are all native snapshots. SENTINEL snapshots prevent reverted VMs from having VMware redo log-based virtual disks. Redo log-based virtual disks occur when an original snapshot is deleted and the VM is reverted to the second-oldest snapshot. SENTINEL snapshots are in addition to the revertible native snapshot. The SENTINEL snapshot consumes 1 snapshot of the total 31 available per the VMware limitation.

Keep in mind two important considerations when using SENTINEL snapshots:

Do not delete the SENTINEL snapshot.

Do not revert your VM to the SENTINEL snapshot.

Native Snapshot Timezones

Three objects display and affect the timestamps and schedule of snapshots:

vSphere and vCenter use UTC time.

vSphere Web Client uses the browser time zone.

HyperFlex Data Platform components such as the HX Data Platform plug-in, storage cluster, and storage controller VM use the same configurable time zone; the default is UTC.

The storage controller VM time is used to set the schedule. The vSphere UTC time is used to create the snapshots. The logs and timestamps vary depending on the method used to view them.
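If snapshot timestamps look inconsistent, it can help to compare the clocks involved. The following is a minimal sketch using standard Linux commands on the storage controller VM (the output naturally depends on the time zone you have configured):

# date
(Shows the controller VM time in its configured time zone; this clock drives the snapshot schedule.)
# date -u
(Shows UTC, which is what vSphere and vCenter use when they stamp the snapshots themselves.)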

Creating Snapshots

Redo log snapshots are snapshots that are created through the VMware Snapshot feature and not through the HX Data Platform Snapshot feature. If you have any redo log snapshots for VMs in an HX storage cluster, edit the ESXi host configuration where the redo log snapshots reside. If this step is not completed, VMs might be stunned during snapshot consolidation. Follow these steps to edit the ESXi host configuration:

Step 1. Log in to the ESXi host command line.

Step 2. Locate and open the file /etc/vmware/config for editing.

Step 3. Set the snapshot.asyncConsolidate parameter to TRUE (that is, snapshot.asyncConsolidate="TRUE").
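As a minimal sketch of Steps 2 and 3, you can check for and append the parameter directly from the ESXi host shell (vi or any other editor works equally well; grep and echo are standard ESXi shell tools, not HX-specific):

# grep asyncConsolidate /etc/vmware/config
(If nothing is returned, the parameter is not set yet.)
# echo 'snapshot.asyncConsolidate="TRUE"' >> /etc/vmware/config
(Appends the setting; confirm afterward by rerunning the grep command.)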

Creating Snapshots Workflow Step 1. From the vSphere Web Client navigator, select the VM level, VM folder level, or resource pool level. For example, select vCenter Inventory Lists > Virtual Machines to display the list of VMs in vCenter. Step 2. Select a VM and either right-click the VM and click Actions or click the Actions menu in the VM information portlet. Note Ensure that there are no non-HX Data Platform datastores on the storage cluster resource pool, or the snapshot will fail.

Step 3. From the Actions menu, select Cisco HX Data Platform > Snapshot Now, as shown in Figure 5-10, to open the Take VM Native Snapshot for Test dialog box.

Figure 5-10 Selecting Snapshot Now Step 4. In the Take VM Native Snapshot for Test dialog box (see Figure 5-11), enter a name for the snapshot and type a description of the snapshot. Click OK to accept your configuration.

Figure 5-11 Adding a Native Snapshot Name and Description

Scheduling Snapshots

To schedule snapshots, follow these steps:

Step 1. From the vSphere Web Client navigator, select the VM or resource pool list. For example, select vCenter Inventory Lists > Virtual Machines to display the list of VMs in vCenter.

Step 2. Select a VM or resource pool and either right-click the VM or resource pool and click Actions or click the Actions menu in the VM information portlet.

Step 3. From the Actions menu, select Cisco HX Data Platform > Schedule Snapshot (see Figure 5-12) to open the Schedule Snapshot dialog box.

Figure 5-12 Scheduling a Snapshot Step 4. Complete the Schedule Snapshot dialog box, shown in Figure 5-13, as follows: a. To select the snapshot frequency, click the boxes for hourly, daily, and/or weekly frequency and set the starting days, times, and duration. b. Set the number of snapshots to retain. When the maximum number is reached, older snapshots are removed as newer snapshots are created. c. Unselect existing scheduled items, as needed. If a previous schedule existed, unselecting items deletes those items from the future

schedule. d. Click OK to accept the schedule and close the dialog.

Figure 5-13 Configuring the Schedule

Reverting to a Snapshot

Reverting to a snapshot means returning a VM to a state stored in a snapshot. Reverting to a snapshot is performed on one VM at a time. It is not performed at the resource pool or VM folder level. Reverting to snapshots is performed through the vCenter Snapshot Manager and not through the HX Data Platform plug-in. Follow these steps to revert to a snapshot:

Step 1. From the vSphere Web Client navigator, select the VM level, VM folder level, or resource pool level. For example, select vCenter Inventory Lists > Virtual Machines to display the list of VMs in vCenter.

Step 2. Select a storage cluster VM and either right-click the VM and click Actions or click the Actions menu in the VM information portlet.

Step 3. From the Actions menu, select Snapshots > Manage Snapshots (see Figure 5-14) to open the vSphere Snapshot Manager.

Figure 5-14 Managing Snapshots Step 4. In the Snapshot Manager, select a snapshot to revert to from the hierarchy of snapshots for the selected VM and then select All Actions > Revert to, as shown in Figure 5-15.

Figure 5-15 Reverting to a Snapshot

Step 5. Click Yes to confirm the reversion (see Figure 5-16).

Figure 5-16 Confirm Revert to Snapshot Dialog Box

The reverted VM is included in the list of VMs and powered off. In selected cases, a VM reverted from a VM snapshot is already powered on. See Table 5-1 for more details.

Table 5-1 VM Power State After Restoring an HX VM Snapshot

VM State When HX VM Snapshot Is Taken | VM State After Restoration

Powered on (includes memory) | Reverts to the HX VM snapshot, and the VM is powered on and running.

Powered on (does not include memory) | Reverts to the HX VM snapshot, and the VM is powered off.

Powered off (does not include memory) | Reverts to the HX VM snapshot, and the VM is powered off.

Step 6. If the reverted VM is powered off, select the VM and power it on.

Deleting Snapshots

You delete snapshots through the vSphere interface and not through the HX Data Platform plug-in. Follow these steps:

Step 1. From the vSphere Web Client navigator, select VMs and Templates > vcenter_server > Snapshots > datacenter > VM.

Step 2. Right-click the VM and select Snapshots > Manage Snapshots.

Step 3. Right-click the snapshot you want to delete and click Delete.

Step 4. In the Confirm Delete dialog box that appears, click YES, as shown in Figure 5-17.

Figure 5-17 Deleting a Snapshot

Note Delete the SENTINEL snapshot by using the Delete All option only. Do not delete the SENTINEL snapshot individually. This is because the SENTINEL snapshot is the base snapshot, and all subsequent HX snapshots use this base snapshot.

ReadyClones HX Data Platform ReadyClones is a pioneering storage technology that enables you to rapidly create and customize multiple cloned VMs from a host VM. It enables you to create multiple copies of VMs that can then be used as standalone VMs.

Clones are useful when you deploy many identical VMs to a group. A ReadyClone, much like a standard clone, is a copy of an existing VM. The existing VM is called the host VM. When the cloning operation is complete, the ReadyClone is a separate guest VM. Changes made to a ReadyClone do not affect the host VM. A ReadyClone’s MAC address and UUID are different from those of the host VM. Installing a guest operating system and applications can be time-consuming. With ReadyClones, you can make many copies of a VM from a single installation and configuration process.

Benefits of HX Data Platform ReadyClones The HX Data Platform ReadyClones feature provides the following benefits: Create multiple clones of a VM at a time: Simply right-click a VM and create multiple clones of the VM by using the ReadyClones feature. Rapid cloning: HX Data Platform ReadyClones is extremely fast and more efficient than legacy cloning operations because it supports VMware vSphere Storage APIs—Array Integration (VAAI) data offloads. VAAI, also called hardware acceleration or hardware offload APIs, is a set of APIs to enable communication between VMware vSphere ESXi hosts and storage devices. Use HX Data Platform ReadyClones to clone VMs in seconds instead of minutes.

Batch customization of guest VMs: Use the HX Data Platform Customization Specification to instantly configure parameters such as IP address, hostname, and VM name for multiple guest VMs cloned from a host VM. Automation of several steps to a one-click process: The HX Data Platform ReadyClones feature automates the task of creating guest VMs. VDI deployment support: ReadyClones is supported for desktop VMs on VDI deployments using VMware native technology. Datastore access: ReadyClones works on partially mounted/accessible datastores as long as the VM being cloned is on an accessible mountpoint.

Supported Base VMs

HX Data Platform supports:

Base VMs stored on an HX Data Platform datastore

Base VMs with HX Data Platform snapshots

A maximum of 2048 ReadyClones from 1 base VM

A maximum of 256 ReadyClones created in 1 batch at a time

HX Data Platform does not support:

Powered-on base VMs with Windows 2008 Server and Windows 2012 Server guests

Powered-on base VMs with more than 30 snapshots

Powered-on base VMs with redo log snapshots

ReadyClones Requirements

The requirements for ReadyClones are as follows:

VMs must be within the HX Data Platform storage cluster. Non-HX Data Platform VMs are not supported.

VMs must reside on an HX Data Platform datastore, VM folder, or resource pool. ReadyClones fail for any VM that is not on an HX Data Platform datastore. This applies to ReadyClones on a VM level, VM folder level, or resource pool level.

VMs can have only native snapshots. ReadyClones cannot be created from VMs with snapshots that have redo logs (that is, non-native snapshots).

SSH must be enabled in ESXi on all the nodes in the storage cluster.

You can use only the single vNIC customization template for ReadyClones.

ReadyClones Best Practices When working with ReadyClones, keep the following best practices in mind: Use the customization specification as a profile or a template. Ensure that properties that apply to the entire batch are in the customization specification. Obtain user-defined parameters from the HX Data Platform ReadyClones batch cloning workflow. Use patterns to derive per-clone identity settings such as the VM guest name. Ensure that the network administrator assigns static IP addresses for guest names and verify these addresses before cloning.

You can create a batch of 1 through 256 clones at a given time.

Do not create multiple batches of clones simultaneously on the same VM (when it is powered on or powered off) because doing so causes failures or displays incorrect information on the master task updates in the HX Data Platform plug-in.

Creating ReadyClones Using HX Connect Use the HX Data Platform ReadyClones feature to populate a cluster by creating multiple clones of a VM, each with a different static IP address. Follow these steps: Step 1. Log in to HX Connect as an administrator. Step 2. From the Virtual Machines page, select a virtual machine and then click ReadyClones, as shown in Figure 5-18.

Figure 5-18 ReadyClones: Selecting a Virtual Machine Step 3. Complete the ReadyClones dialog box, shown in Figure 5-19, as outlined in Table 5-2.

Figure 5-19 ReadyClones Dialog Box

Table 5-2 ReadyClones Dialog Box Fields

Field | Setting

Number of clones | Enter the number of ReadyClones that you want to create. You can create a batch of 1 through 256 clones at a given time.

Customization Specification | (Optional) From the drop-down list, select a customization specification for the clone. The system filters the customization specifications for the selected host virtual machine. For example, if the selected host virtual machine uses Windows OS for guest virtual machines, the drop-down list displays Windows OS customization specifications.

Resource Pool | (Optional) If you have resource pools defined in an HX Storage cluster, you can select one to store the ReadyClones of the selected virtual machine.

VM Name Prefix | Enter a prefix for the guest virtual machine name. This prefix is added to the name of each ReadyClone created.

Starting clone number | Enter a clone number for the starting clone. Each ReadyClone must have a unique name; numbering is used to ensure a unique element in the name.

Increment clone numbers by | Enter a value by which the clone number in the guest virtual machine name must be increased, or leave the default value 1 as is. The system appends a number to the name of each virtual machine ReadyClone (such as clone1, clone2, and clone3). By default, the number starts from 1, but you can change this value to any number.

Use same name for Guest Name | Select this checkbox to use the vCenter VM inventory name as the guest host virtual machine name. If you uncheck this box, a text box is enabled. Enter the name you want to use for the guest host virtual machine name.

Preview | After required fields are completed, HX Data Platform lists the proposed ReadyClones names. As you change the content in the required fields, the Clone Name and Guest Name fields update.

Power on VMs after cloning | Select this checkbox to turn on the guest virtual machines after the cloning process completes.

Step 4. Click Clone. HX Data Platform creates the appropriate number of ReadyClones with the naming and location specified.

Creating ReadyClones Using the HX Data Platform Plug-in If you use the VMware cloning operation, you can create only a single clone from a VM. This operation is manual and slower than batch processing multiple clones from a VM. For example, to create 20 clones of a VM, you must manually perform the clone operation over and over again. Follow these steps to create ReadyClones using the HX Data Platform plug-in: Step 1. From the vSphere Web Client navigator, select Global Inventory Lists > Virtual Machines

to open a list of VMs in vCenter. Step 2. Select a VM and either right-click the VM and click Actions or click the Actions menu in the VM information portlet. Step 3. From the Actions menu, select Cisco HX Data Platform > ReadyClones, as shown in Figure 5-20.

Figure 5-20 HX Data Platform ReadyClones Option The ReadyClones dialog box appears, as shown in Figure 5-21.

Figure 5-21 ReadyClones Configuration Step 4. Enter any changes you want to make and click OK to apply these configuration changes. Note As part of the ReadyClones workflow, a temporary snapshot is listed in vCenter and HX Connect. It is listed as an extra powered-off VM transiently—that is, only while the ReadyClones are being created.

Datastores Datastores are logical containers that HX Data Platform uses to manage your storage usage and storage resources. Datastores are where the host places virtual disk files and other VM files. Datastores hide the specifics of physical storage devices and provide a uniform model for storing VM files. You can add datastores, refresh the list, edit the names and sizes of datastores, delete datastores, and mount and unmount datastores from either HX Connect or the HX Data Platform plug-in. You can only rename an unpaired datastore that is unmounted. Do not rename a datastore using the vCenter administrator interface. Keep in mind these important considerations: Keep the number of datastores to as few as possible to avoid startup delay and to keep clone savings high. Configuring more than 10 datastores could result in excessive startup delay.

Adding Datastores Datastores are logical containers, similar to file systems, that hide specifics of physical storage and provide a uniform model for storing VM files. You can also use datastores to store ISO images and VM templates. To

add a datastore, follow these steps: Step 1. Choose an interface using either of these methods: From the vSphere Web Client navigator, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform > cluster > Manage > Datastores. From HX Connect, select Datastores.

Step 2. Click Create Datastore.

Step 3. Enter a name for the datastore. vSphere Web Client enforces a 42-character limit for the datastore name, and each datastore name needs to be unique.

Step 4. Specify the datastore size and choose GB or TB from the drop-down list.

Step 5. Specify the data block size. From HX Connect, choose 8K or 4K; the default is 8K. In the HX Data Platform plug-in, the default is assumed. For VDI workloads, the default is 4K.

Step 6. Click OK to accept your changes or Cancel to cancel all changes.

Step 7. To verify the addition of the datastore, click the Refresh icon and ensure that the new datastore is listed. From the HX Data Platform plug-in, select Manage > Datastores > Hosts to see the mount status of the new datastore. If you check the datastore through the vSphere Client application, by selecting host > Configuration > Datastores, the drive type is listed as Unknown; this is expected vSphere behavior.

Creating Datastores Using the HX Data Platform Plug-in

The workflow in Figure 5-22 shows how to create datastores using the HX Data Platform plug-in on VMware vCenter.

Figure 5-22 Create a Datastore by Using the HX Data Platform Plug-in

Creating Datastores Using HX Connect Figure 5-23 shows how to create a datastore using HX Connect.

Figure 5-23 Creating a Datastore Using HX Connect

SCALING HYPERFLEX CLUSTERS

One of the advantages of the HyperFlex solution is the ease with which you can scale an existing HyperFlex system. This section covers how to perform a node expansion, how to perform a node removal (for both converged and compute-only nodes), and how to increase storage capacity of existing HyperFlex nodes.

Node Expansion

You can add converged or compute-only nodes to expand a HyperFlex cluster. The following is the list of supported mixed-cluster expansion guidelines (for both converged and compute-only nodes) in HyperFlex clusters:

Expanding an existing M4 cluster with M5 converged nodes is supported.

Expanding an existing M5 cluster with M4 converged nodes is not supported.

Expanding an existing mixed M4/M5 cluster with M4 or M5 converged nodes is supported.

Adding any supported compute-only nodes is permitted with all M4, M5, and mixed M4/M5 clusters using the HX Data Platform installer.

Only the expansion workflow is supported for creating a mixed cluster. Initial cluster creation with mixed M4/M5 servers is not supported.

All M5 servers must match the form factor (220/240), type (hybrid/AF), security capability (non-SED only), and disk configuration (QTY, capacity, and non-SED) of the existing M4 servers.

HX Edge, SED, LFF, Hyper-V, and stretch clusters do not support mixed M4/M5 clusters.

Note If you have replication configured, put replication in pause mode before performing an upgrade, an expansion, or cluster maintenance. After the upgrade, expansion, or cluster maintenance is complete, resume replication. Perform the pause and resume on any cluster that has replication configured to or from this local cluster.

ESXi installation is supported on SD cards for M4 converged nodes and M.2 SATA SSD for M5 converged nodes. For compute-only nodes, ESXi installation is supported for SD cards, SAN boot, or front SSD/HDD. Installing ESXi on USB flash is not supported for compute-only nodes.

Before you start adding a converged or compute node to an existing storage cluster, make sure that the following prerequisites are met:

Ensure that the storage cluster state is healthy.

Ensure that the new node meets the system requirements listed under Installation Prerequisites, including network and disk requirements.

Ensure that the new node uses the same configuration as the other nodes in the storage cluster (for example, VLAN ID, tagging, vSwitch configuration, and so on).

To add a node that has a different CPU family from what is already in use in the HyperFlex cluster, enable EVC.

Allow ICMP for pings between the HX Data Platform installer and the existing cluster management IP address.
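Before launching the expansion workflow, it is worth validating the last two prerequisites from the command line. The following is a minimal sketch, assuming 10.1.1.10 is a placeholder for the existing cluster management IP address:

From the HX Data Platform installer VM, confirm that ICMP is allowed to the cluster management IP:
# ping -c 4 10.1.1.10

From any controller VM, confirm that the storage cluster is healthy before adding nodes:
# stcli cluster info --summary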

The sections that follow describe how to add converged and compute-only nodes to expand a HyperFlex cluster.

Adding a Converged Node You can add a converged node to a HyperFlex cluster after cluster creation. The storage on a converged node is automatically added to the cluster’s storage capacity. Follow these steps to add a converged node to an existing standard cluster:

Step 1. Launch the Cisco HX Data Platform installer. (Use the same version of installer as the version of the HX cluster.) Step 2. On the Workflow page, select Expand Cluster > Standard Cluster, as shown in Figure 5-24.

Figure 5-24 Cluster Expansion: Standard Cluster Step 3. On the Credentials page that appears, complete all the fields, as shown in Figure 5-25, and click Continue.

Figure 5-25 Expansion Workflow: Credentials Page Step 4. On the Cluster Expand Configuration page that appears (see Figure 5-26), select the HX cluster that you want to expand and click Continue.

Figure 5-26 Selecting a Cluster to Expand Step 5. On the Server Selection page, review the list of unassociated HX servers under the Unassociated tab (see Figure 5-27) and the list of discovered servers under the Associated tab. Select the servers under the Unassociated tab to include in the HyperFlex cluster. Click Continue.

Figure 5-27 Selecting a Server If HX servers do not appear in this list, check Cisco UCS Manager and ensure that they have been discovered. Step 6. On the UCSM Configuration page that appears, complete the fields for each network and configure the HyperFlex cluster name, as shown in Figure 5-28, and click Continue.

Figure 5-28 UCSM Configuration Step 7. On the Hypervisor Configuration page that appears (see Figure 5-29), complete all the fields and click Continue.

Figure 5-29 Hypervisor Configuration Step 8. On the IP Addresses page that appears (see Figure 5-30), add more compute or converged servers, as desired, by clicking Add Compute Server or Add Converged Server. Be sure to select Make IP Addresses Sequential to make the IP addresses sequential. For the IP addresses, specify whether the network should belong to the data network or the management network. For each HX node, complete the appropriate fields for hypervisor management and data IP addresses. When you’re finished with the settings on this page, click Start. A Progress page displays the progress of various

configuration tasks.

Figure 5-30 Configuring IP Addresses

Note If the vCenter cluster has EVC enabled, the deployment process fails with the message “The host needs to be manually added to vCenter.” To successfully perform the deploy action, do the following: Step 1. Log in to the ESXi host to be added in vSphere Client. Step 2. Power off the controller VM. Step 3. Add the host to the vCenter cluster in vSphere Web Client. Step 4. In the HX Data Platform installer, click Retry Deploy.

Step 9. When cluster expansion is complete, start managing your storage cluster by clicking Launch HyperFlex Connect. Note When you add a node to an existing storage cluster, the cluster continues to have the same HA resiliency as the original storage cluster until autorebalancing takes place at the scheduled time. Rebalancing is typically scheduled during a 24-hour period, either 2 hours after a node fails or if the storage cluster is out of space.

Adding a Compute Node

You can add a compute-only node to a HyperFlex cluster after cluster creation to provide extra compute resources. The Cisco UCS server does not need to have any caching or persistent drives, because compute-only nodes do not contribute any storage capacity to the cluster. Use similar steps for adding a compute node as for expanding a HyperFlex cluster with a converged node. Minor changes in the workflow compared to a converged node include the following:

Select Compute Only Server under Server Selection.

In the IP Addresses section, compute-only nodes do not need storage controller management and storage controller data addresses.

Figure 5-31 shows the configuration option when adding a compute server.

Figure 5-31 Adding a Compute Server

Note After you add a compute-only node to an existing cluster, you must manually configure the vmk2 interface for vMotion.
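The note above refers to a manual ESXi task rather than an HX workflow. As an illustrative sketch only (the esxcli flag values, the IP addressing, and the port group name vmotion-pg are assumptions; follow your own network design and the HyperFlex documentation for the authoritative procedure), creating and tagging a vmk2 interface on the new compute-only host might look like this:

# esxcli network ip interface add --interface-name vmk2 --portgroup-name vmotion-pg
# esxcli network ip interface ipv4 set --interface-name vmk2 --type static --ipv4 192.168.200.21 --netmask 255.255.255.0
# esxcli network ip interface tag add -i vmk2 -t VMotion
(Adds the VMkernel port, assigns a static address on the vMotion network, and tags it for vMotion traffic.)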

Expanding a Stretch Cluster

You can expand an already deployed HyperFlex stretch cluster when its storage utilization increases and additional storage capacity is required. After node expansion, the storage on a converged node is automatically added to the cluster's storage capacity. Consider the following cluster expansion guidelines:

Stretch cluster expansion supports both converged nodes and compute-only nodes.

When adding a converged node, ensure that the configuration is symmetric across both sites. For instance, if Site 1 is expanded with two nodes, Site 2 must also be expanded with two converged nodes.

When adding compute nodes, ensure that you do not exceed the supported node count.

To expand a stretch cluster, you need to take the steps outlined in the following sections.

Configuring Sites for Expanding a Cluster

Before you can expand a stretch cluster, you need to use this procedure to re-create the sites in the installer as they were originally deployed. Follow these steps:

Step 1. Log in to the Cisco HX Data Platform installer.

Step 2. On the Select a Workflow page, select Expand Cluster > Stretch Cluster, as shown in Figure 5-32, and click Continue.

Figure 5-32 Expansion Workflow: Stretch Cluster Step 3. On the Cluster page that appears, enter the cluster management hostname, as shown in Figure 5-33, and click Continue.

Figure 5-33 Cluster Information: Configuring a Site Step 4. On the Credentials page that appears, as shown in Figure 5-34, select Configure Site and then enter UCS manager and hypervisor credentials. Click Continue.

Figure 5-34 Credentials: Configuring a Site Step 5. On the Server Selection page that appears, configure the server ports and associate the new HX expansion nodes with the site, as shown in Figure 5-35, and click Continue.

Figure 5-35 Node Selection Step 6. On the Node Configuration page that appears, configure the subnet mask, gateway, and hypervisor settings as shown in Figure 5-36, and click Start to begin site configuration for the expanded cluster. A progress page displays the progress of various configuration tasks.

Figure 5-36 Node Configuration

Step 7. Repeat steps 1 through 6 for the second site.

Expanding a Cluster To expand a cluster, follow these steps: Step 1. On the Cluster page, as shown in Figure 5-37, enter the cluster management hostname and click Continue.

Figure 5-37 Cluster Information Step 2. On the Credentials page that appears, as shown in Figure 5-38, select Expand Stretch Cluster, enter the credentials information, and click Continue.

Figure 5-38 Expanding a Stretch Cluster Step 3. Configure the server ports and associate HyperFlex servers.

Step 4. On the IP Addresses page, as shown in Figure 5-39, configure the hypervisor and IP addresses, select the site, and click Start to start the cluster expansion process.

Figure 5-39 Expanding a Stretch Cluster by Adding a Converged Node

Removing Nodes You can remove converged or compute-only nodes to reduce the size of a HyperFlex cluster. This section

provides the guidelines for node removal of both converged and compute-only nodes in HyperFlex clusters.

Removing Converged Nodes Depending on the node maintenance task, removing a node can occur while the storage cluster is online or offline. Ensure that you have completed the preparation steps before removing a node. Note It is highly recommended that you work with your account team when removing a converged node in a storage cluster. Do not reuse the removed converged node or its disks in the original cluster or in another cluster.

The steps to take in removing a node depend on the cluster size. Table 5-3 provides an overview of the steps for removing converged nodes from clusters of different sizes.

Table 5-3 Steps for Removing Converged Nodes

Cluster Size | Nodes Removed | Steps

Three-node cluster | One or more | Cluster removal requires Cisco TAC assistance.

Four-node cluster | One | 1. Ensure that the cluster is healthy. 2. Put the affected node in Cisco HX maintenance mode. 3. Shut down the cluster (take the cluster offline) by using the stcli cluster shutdown command. 4. Remove the node by using the stcli node remove command. 5. Restart the cluster by using the stcli cluster start command.

Four-node cluster | Two or more | Cluster removal requires Cisco TAC assistance.

Five-node cluster | One | 1. Ensure that the cluster is healthy. 2. Put the affected node in Cisco HX maintenance mode. 3. The cluster remains online. 4. Remove the node by using the stcli node remove command.

Five-node cluster | Two | 1. Ensure that the cluster is healthy. 2. Put the affected nodes in Cisco HX maintenance mode. 3. Shut down the cluster (take the cluster offline) by using the stcli cluster shutdown command. 4. Remove the nodes by using the stcli node remove command, specifying both nodes. 5. Restart the cluster by using the stcli cluster start command.

Five-node cluster | Three or more | Cluster removal requires Cisco TAC assistance.

Removing a Node from an Online Storage Cluster Depending on the node maintenance task, removing a node can occur while the storage cluster is online or offline. Removing a node from a storage cluster while the cluster remains online has slightly different requirements from removing a node while a cluster is offline. Follow these steps to remove a node from an online storage cluster: Note It is highly recommended that you work with TAC when removing a converged node in a storage cluster. Do not remove the controller VM or other HX Data Platform components.

Step 1. To prepare to remove a node, do the following: Ensure that the cluster is healthy by entering the stcli cluster info command. Ensure that SSH is enabled in ESX on all the nodes in the storage cluster. Ensure that DRS is enabled or manually move the VMs from the node. Put the node being removed into HX maintenance mode. Log in to the controller VM of a node that is not being removed.

Step 2. Rebalance the storage cluster to ensure that all datastores associated with the node will be removed. The rebalance command is used to realign the distribution of stored data across changes in available storage and to restore storage cluster health. If you add or remove a node in the storage cluster, you can manually initiate a storage cluster rebalance by using the stcli rebalance command. Note Rebalancing might take some time, depending on the disk capacity used on the failed node or disk.

Log in to a controller VM in the storage cluster. From the controller VM command line, run the stcli rebalance start --force command and then wait and confirm that rebalance has completed.

Step 3. Remove the desired node by using the stcli node remove command, which has the following syntax:

stcli node remove [-h] {--id-1 ID1 | --ip-1 NAME1} [{--id-2 ID2 | --ip-2 NAME2}] [-f]

For example:

# stcli node remove --name-1 esx.SVHOST144A.complab

The response should be something like the following:

Successfully removed node: EntityRef(type=3, id='', name='esx.SVHOST144A.complab')

Figure 5-40 shows this command in use.

Figure 5-40 Running the stcli node remove Command

Note The stcli node remove command unmounts all datastores, removes the node from the cluster ensemble, resets the EAM for this node, stops all services (stores, cluster management IP), and removes all firewall rules. This command does not remove the node from vCenter; the node remains in vCenter. It also does not remove the installed HX Data Platform elements, such as the controller VM.

When the stcli node remove command completes successfully, the system rebalances the storage cluster until the storage cluster state is healthy. Do not perform any failure tests during this time. The storage cluster remains healthy. As the node is no longer in the storage cluster, you do not need to exit HX maintenance mode. Note If you want to reuse a removed node in another storage cluster, contact Cisco TAC. Additional steps are required to prepare the node for another storage cluster.

Step 4. To confirm that the node has been removed from the storage cluster, run this command: # stcli cluster info

Check the ActiveNodes entry in the response to verify that the cluster has one less node. Step 5. Confirm that all the node-associated datastores are removed. Note If any node-associated datastores are listed, then manually unmount and delete those datastores.

Step 6. Remove the host from the vCenter Hosts and Cluster view as follows: a. Log in to vSphere Web Client navigator and navigate to Host in the vSphere Inventory. b. Right-click the host and select Enter Maintenance Mode. Click Yes. c. Right-click the host and select All vCenter Actions > Remove from Inventory. Click Yes. Step 7. Decommission the host from UCS Manager as follows: a. Log in to UCS Manager. b. In the navigation pane, Select Equipment > Chassis > Chassis Number > Servers. c. Choose the HX server you want to decommission. In the work pane, click the General tab. d. In the Actions area, click Server Maintenance. e. In the Maintenance dialog box, click Decommission and then click OK.
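Pulling the CLI portion of this procedure together, the following is a minimal sketch of removing one node from a healthy cluster while it stays online, run from the controller VM of a node that is not being removed (the node name is a placeholder):

# stcli cluster info
(Confirm that the cluster is healthy before starting.)
# stcli rebalance start --force
(Rebalance the storage cluster and wait for it to complete, as described in Step 2.)
# stcli node remove --name-1 esx.node05.example.com
(Remove the node; the cluster rebalances again after the command succeeds.)
# stcli cluster info
(Check the ActiveNodes entry to verify that the node count dropped by one.)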

Removing a Node from an Offline Storage Cluster Removing a node from an offline storage cluster involves graceful shutdown of the HyperFlex cluster.

Follow these steps: Step 1. To prepare for maintenance operations and removing a node, do the following: Ensure that the cluster is healthy. Ensure that DRS is enabled or manually move the VMs from the node. Rebalance the storage cluster. Put the node being removed into HX maintenance mode. Log in to the controller VM of a node that is not being removed.

Step 2. Prepare to shut down and then shut down the storage cluster as follows: Note Step 2 is required only if the cluster has fewer than five nodes or if you’re removing two nodes from a five-node cluster.

a. Gracefully shut down all resident VMs on all the HX datastores. b. Optionally, vMotion the VMs. c. Gracefully shut down all VMs on non-HX datastores on HX storage cluster nodes and unmount. d. Put all storage cluster nodes in HX maintenance mode.

e. From any controller VM command line, issue the stcli cluster shutdown command: # stcli cluster shutdown

Step 3. Remove the desired node by using the stcli node remove command. You can specify the node to be removed by either IP address or domain name, as in the following examples:

# stcli node remove --ip-1 10.10.2.4 --ip-2 10.10.2.6

or

# stcli node remove --name-1 esx.SVHOST144A.complab --name-2 esx.SVHOST144B.complab.lab

Note Enter the second IP address if you are removing a second node from a storage cluster that has five or more nodes.

The response to this command should look something like the following:

Successfully removed node: EntityRef(type=3, id='', name='10.10.2.4' name='10.10.2.6')

Note The stcli node remove command unmounts all datastores, removes the node from the cluster ensemble, resets the EAM for this node, stops all services (stores, cluster management IP), and removes all firewall rules. This command does not remove the node from vCenter; the node remains in vCenter. It also does not remove the installed HX Data Platform elements, such as the controller VM.

After the stcli node remove command completes successfully, the system rebalances the storage cluster until the storage cluster state is healthy. Do not perform any failure tests during this time. The storage cluster health remains average. As the node is no longer in the storage cluster, you do not need to exit HX maintenance mode. Note If you want to reuse a removed node in another storage cluster, contact Cisco TAC. Additional steps are required to prepare the node for another storage cluster.

Step 4. To confirm that the node has been removed from the storage cluster, run this command: # stcli cluster info

Check the ActiveNodes entry in the response to verify that the cluster has one less node. Step 5. Confirm that all the node-associated

datastores are removed. Note If any node-associated datastores are listed, manually unmount and delete those datastores.

Step 6. Restart the cluster by using the stcli cluster start command.
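As a compact sketch of the offline sequence (two nodes removed from a five-node cluster; the IP addresses are placeholders), the CLI portion is:

# stcli cluster shutdown
(Run from any controller VM after all VMs are shut down and the nodes are in HX maintenance mode.)
# stcli node remove --ip-1 10.10.2.4 --ip-2 10.10.2.6
(Specify both nodes being removed.)
# stcli cluster start
(Bring the cluster back online.)
# stcli cluster info
(Verify the ActiveNodes count and overall cluster health.)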

Removing a Compute Node

Compute-only nodes do not contribute to storage in a HyperFlex cluster. Therefore, when removing compute nodes, there is no limitation on the number of nodes you can remove. The steps to remove a compute node are as follows:

Step 1. Migrate all the VMs from a compute node that needs to be removed.

Step 2. Unmount the datastore from the compute node.

Step 3. Check whether the cluster is in the healthy state by running the following command:

# stcli cluster info --summary

Step 4. Put the ESXi host in HX maintenance mode.

Step 5. Remove the compute node by using the stcli node remove command from CMIP: # stcli node remove --ip-1

where --ip-1 is the IP address of the node to be removed. (Run the command from the cluster management IP, which is the Cisco HX Connect IP address.)

Step 6. Remove any DVS from the ESXi host in vCenter, if there is a DVS.

Step 7. Remove the ESXi host from vCenter.

Step 8. Check whether the cluster is in the healthy state by running the following command:

# stcli cluster info --summary

Step 9. Clear stale entries in the compute node by logging out of Cisco HX Connect and then logging into Cisco HX Connect. Step 10. Disable and reenable the High Availability (HA) and Distributed Resource Scheduler (DRS) services to reconfigure the services after node removal.
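The CLI portion of compute-node removal reduces to a short sequence run against the cluster management IP. A minimal sketch, where the address of the node being removed is a placeholder:

# stcli cluster info --summary
(Confirm that the cluster is healthy before and after the removal.)
# stcli node remove --ip-1 10.10.2.30
(Removes the compute-only node; because it holds no persistent storage, no capacity is lost.)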

Increasing Storage Capacity by Adding Drives

You can increase the datastore capacity of a storage cluster by adding drives if there is an available drive slot on the HyperFlex server. Add the same size and type solid-state drives (SSDs) or hard disk drives (HDDs) to each converged node in the storage cluster. For hybrid servers, add HDDs. For all flash servers, add SSDs. Note When performing a hot-plug pull-and-replace on multiple drives from different vendors or of different types, pause for at least 30 seconds between actions. That is, pull a drive, wait 30 seconds and replace it, and wait 30 seconds again before moving on to the next drive.

HARDWARE (DISK) REPLACEMENT

Disks, whether SSDs or HDDs, might fail. If a disk failure occurs, you need to remove the failed disk and replace it. Follow the server hardware instructions for removing and replacing the disks in the host. HX Data Platform identifies the SSD or HDD and incorporates it into the storage cluster. The following sections provide details on how to replace the types of drives in HyperFlex.

Replacing SSDs The procedures for replacing an SSD vary depending on the type of SSD, as described here:

Step 1. Identify the failed SSD: For cache or persistent SSDs, perform a disk beacon check. Set the beacon. Only cache and persistent SSDs respond to the beacon request. NVMe cache SSDs and housekeeping SSDs do not respond to beacon requests. For cache NVMe SSDs, perform a physical check. These drives are in Drive Bay 1 of the HX servers. For housekeeping SSDs on HXAF240c or HX240c servers, perform a physical check at the back of the server. For housekeeping SSDs on HXAF220c or HX220c servers, perform a physical check at Drive Bay 2 of the server.

Step 2. If the failed SSD is a housekeeping SSD, refer to the section “Replacing Housekeeping SSDs,” later in this chapter. Step 3. If a failed SSD is a cache or persistent SSD, proceed based on the type of disk: For NVMe SSDs, see the next section, “Replacing NVMe SSDs.” For all other SSDs, follow the instructions for removing and replacing a failed SSD in the host, per the server hardware guide.

After the cache or persistent drive is replaced, the HX Data Platform identifies the SSD and updates the storage cluster. When disks are added to a node, the disks are immediately available for HX consumption.

Step 4. To enable Cisco UCS Manager to include new disks in the UCS Manager > Equipment > Server > Inventory > Storage tab, reacknowledge the server node. This applies to cache and persistent disks.

Step 5. If you have replaced an SSD and see the message “Disk successfully scheduled for repair,” it means the disk is present but is still not functioning properly. Check that the disk has been added correctly per the server hardware guide procedures.

Replacing NVMe SSDs The procedures for replacing an SSD vary depending on the type of SSD. This section describes the steps for replacing NVMe cache SSDs. Note Mixing storage disk types or sizes on a server or across a storage cluster is not supported. Also, when replacing NVMe disks, always use the same type and size as the original disk.

The requirements for using NVMe in HX are as follows: NVMe SSDs are supported in HX240 and HX220 All Flash servers.

Replacing NVMe SSDs with an HGST SN200 disk requires HX Data Platform version 2.5.1a or later. NVMe SSDs are only allowed in slot 1 of the server. Other server slots do not detect NVMe SSDs. NVMe SSDs are only used for cache. Using them for persistent storage is not supported. Using an NVMe SSD as the housekeeping drive is not supported. Using NVMe SSDs for hybrid servers is not supported.

The steps for replacing NVMe SSDs are as follows: Step 1. Confirm that the failed disk is an NVMe cache SSD by performing a physical check. These drives are in Drive Bay 1 of the HX servers. NVMe cache SSDs and housekeeping SSDs do not respond to beacon requests. If the failed SSD is not an NVMe SSD, see the earlier section “Replacing SSDs.” Step 2. Put the ESXi host into HX maintenance mode by logging in to HX Connect and selecting System Information > Nodes > node > Enter HX Maintenance Mode. Step 3. Follow the instructions for removing and replacing a failed SSD in the host, per the server hardware guide. Note When you remove an HGST NVMe disk, the controller VM fails until you reinsert a

disk of the same type into the same slot or reboot the host.

After the cache or persistent drive is replaced, HX Data Platform identifies the SSD and updates the storage cluster. When disks are added to a node, the disks are immediately available for HX consumption.

Step 4. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.

Step 5. Exit the ESXi host from HX maintenance mode.

Step 6. To enable the Cisco UCS Manager to include new disks in the UCS Manager > Equipment > Server > Inventory > Storage tab, reacknowledge the server node. This applies to cache and persistent disks.

Step 7. If you replaced an SSD and see the message “Disk successfully scheduled for repair,” it means that the disk is present but is still not functioning properly. Check that the disk has been added correctly per the server hardware guide procedures.

Replacing Housekeeping SSDs This procedure applies to HXAF220c M4, HX220c M4, HXAF220c M5, HX220c M5, HXAF240c M5, and HX240c M5 servers only. To replace the housekeeping SSD on an HXAF240c M4 or HX240c M4 server, contact Cisco TAC. To replace a housekeeping SSD, perform the following steps: Step 1. Identify the failed housekeeping SSD by physically checking the SSD drives, as housekeeping drives are not listed through a beacon check. Step 2. Remove the SSD and replace it with a new SSD of the same kind and size. Follow the steps in the server hardware guide, which should describe the physical steps required to replace the SSD. Note Before performing the hardware steps, put the node into HX maintenance mode. After performing the hardware steps, exit the node from HX maintenance mode.

Step 3. Using SSH, log in to the storage controller VM of the affected node and run the following command:

# /usr/share/springpath/storfs-appliance/config-bootdev.sh -r -y

This command consumes the new disk, adding it into the storage cluster. A sample response might resemble the following:

Creating partition of size 65536 MB for /var/stv ...
Creating ext4 filesystem on /dev/sdg1 ...
Creating partition of size 24576 MB for /var/zookeeper ...
Creating ext4 filesystem on /dev/sdg2 ...
Model: ATA INTEL SSDSC2BB12 (scsi)
Disk /dev/sdg: 120034MB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
....
discovered. Rebooting in 60 seconds

Step 4. Wait for the storage controller VM to automatically reboot. Step 5. When the storage controller VM completes its reboot, verify that partitions are created on the newly added SSD by running the df -ah command. A sample response might resemble the following:

...........
/dev/sdb1 63G 324M 60G 1% /var/stv
/dev/sdb2 24G 173M 23G 1% /var/zookeeper

Step 6. Identify the HX Data Platform installer package version installed on the existing storage cluster by running the stcli cluster version command. The same version must be installed on all the storage cluster nodes. Run this command on the controller VM of any node in the storage cluster but not the node with the new SSD. Step 7. Copy the HX Data Platform installer packages into the storage controller VM in the /tmp folder:

# scp <controller VM IP>:/opt/springpath/packages/storfs-packages-<version>.tgz /tmp
# cd /tmp
# tar zxvf storfs-packages-<version>.tgz

Step 8. Run the HX Data Platform installer deployment script: # ./inst-packages.sh

Note For additional information on installing the HX Data Platform, see the appropriate Cisco HX Data Platform Install Guide.

Step 9. After the package installation, the HX Data Platform starts automatically. Check the status:

# status storfs
storfs running

The node with the new SSD rejoins the existing cluster, and the cluster returns to a healthy state.

Replacing or Adding HDDs This section covers how to replace or add HDDs in HyperFlex servers/systems. Note Mixing storage disk types or sizes on a server or across a storage cluster is not supported.

The requirements for using HDDs in HX are as follows: Use all HDDs or all 3.8 TB SSDs or all 960 GB SSDs. Use the hybrid cache device on hybrid servers and all flash cache devices on all flash servers. When replacing cache or persistent disks, always use the same type and size as the original disk.

To replace or add an HDD, follow these steps: Step 1. Refer to the hardware guide for your server

and follow the directions for adding or replacing disks. Step 2. Add HDDs of the same size to each node in the storage cluster. Step 3. Add the HDDs to each node within a reasonable amount of time. The storage cluster starts consuming the added storage immediately. After performing the steps, the vCenter event log will display messages reflecting the changes to the nodes. Note When disks are added to a node, the disks are immediately available for HX consumption, although they are not seen in the UCSM server node inventory. This includes cache and persistent disks.

Step 4. To enable Cisco UCS Manager to include new disks in the UCS Manager > Equipment > Server > Inventory > Storage tab, reacknowledge the server node. This applies to cache and persistent disks. Note Re-acknowledging a server is disruptive, so place the server into HX maintenance mode before doing so.

UPGRADING HYPERFLEX

SOFTWARE This section describes how to upgrade an existing installation of Cisco HX Data Platform. Cisco HyperFlex systems have several components that may be upgraded, depending on the environment. The core components in a HyperFlex system are: Cisco UCS server firmware (the UCS C-bundle, which consists of UCS server, BIOS, CIMC, NIC, and so on) Cisco HX Data Platform software VMware ESXi software
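Before planning an upgrade, it is useful to record the currently running version of each of these components. The following is a minimal sketch of how that information can be gathered from the command line; it assumes SSH access to a storage controller VM and to an ESXi host (the UCS server firmware version is read from UCS Manager, as described later in this chapter):

# stcli cluster version      (run on any storage controller VM; reports the HX Data Platform version)
# vmware -vl                 (run on an ESXi host; reports the ESXi version and build)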

The following sections describe the HyperFlex upgrade process.

Upgrading HyperFlex This section provides an overview of the HyperFlex software upgrade workflow. The complete upgrade process can be divided into two parts (see Figure 5-41): Pre-upgrade steps: Download the relevant UCS Infra, B-Series, C-Series, and HX upgrade storfs bundles. Perform a manual pre-upgrade validation and run the Hypercheck pre-upgrade utility for a pre-upgrade and health check. Upgrade the UCS infrastructure firmware.

Upgrade steps: Upgrade HX Data Platform, UCS server firmware, and VMware ESXi vSphere.

Figure 5-41 HyperFlex Upgrade Workflow The following sections cover the upgrade workflow steps in detail.

Pre-Upgrade Workflow Ensure you have reviewed the following important guidelines before scheduling a HyperFlex upgrade: Step 1. See the resolved caveats and open caveats before upgrading and review the new features for this release. Refer to the corresponding

Cisco HX Data Platform release notes. Visit https://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-data-platform-software/products-release-notes-list.html. Step 2. Review the supported versions and system requirements. Refer to the hardware and software interoperability information for Cisco HyperFlex HX-Series. Be sure to verify that you have the latest software bundle versions and review the software versions. Refer to the latest Cisco HX Data Platform release notes. Note Hardware and software interoperability information can be found in the release notes for specific HX releases.

Step 3. Back up the configuration to an All Configuration backup file. See the Cisco UCS Manager Backing Up and Restoring the Configuration Guide for the detailed steps: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-manager/GUI-User-Guides/Admin-Management/3-1/b_Cisco_UCS_Admin_Mgmt_Guide_3_1/b_Cisco_UCS_Admin_Mgmt_Guide_3_1_chapter_01001.html.

Step 4. Before you perform firmware updates, use the Cisco UCS Manager firmware management interface to download relevant images to the fabric interconnect. Images are stored in bootflash partitions in the fabric interconnect. Step 5. If needed, perform an ESXi upgrade when upgrading to newer Cisco HyperFlex HX Data Platform versions. See the supported versions for each HX Data Platform version in HyperFlex Software Versions. Step 6. Keep SSH enabled on all ESXi hosts. Step 7. Enable vMotion so that the VMs can be moved automatically during the upgrade, and make sure MTUs are set as required in the environment. Step 8. Verify that the HyperFlex cluster is healthy. Step 9. Verify that the cluster is in lenient mode. If it is not, set the cluster to lenient mode.

Downloading UCS Infra, B-Series, C-Series, and Storfs Bundles For a successful HyperFlex upgrade, the Cisco HyperFlex System component bundles can be downloaded from the Cisco HyperFlex Downloads page on Cisco.com. Browse to https://software.cisco.com/download/home and search for each of the following, depending on the upgrading

version: HX Data Platform upgrade bundle (.tgz file): As an example, Figure 5-42 shows the HyperFlex Data Platform Upgrade bundle for upgrading HX clusters to version 3.5(2h).

Figure 5-42 HX Data Platform Upgrade Bundle VMware ESXi Offline Zip bundle: As an example, Figure 5-43 shows the offline bundle for upgrading ESXi using Cisco HX Custom Image to version 6.5U3.

Figure 5-43 Offline Bundle for Upgrading ESXi: Cisco HX Custom Image Cisco UCS infrastructure bundle, blade firmware bundle, and rack-mount firmware bundle: As an example, Figure 544 shows the UCS Infrastructure software bundle for 6200, 6300, and 6400 fabric interconnects, UCS B-Series blade server software, and UCS C-Series rack-mount servers for firmware version 4.0(4g).

Figure 5-44 UCS Infrastructure, Blade, and Rack-Mount Firmware Bundle After the Cisco UCS bundles and firmware are downloaded, they need to be copied to Cisco UCS Manager before you start the HyperFlex upgrade process. To upload the firmware files to UCS, log in to UCSM, browse to Equipment > Firmware Management > Download Tasks, and click Download Firmware. Next, browse to the software package's location and click OK to upload it to UCSM. As an example, Figure 5-45 shows the UCS B-Series 4.0(4e) package upload on UCSM.

Figure 5-45 Downloading UCS Software Packages on UCSM

Note Download the UCS B-Series bundle and upload it to UCSM even if you don't have compute blades. This is a prerequisite for the UCS firmware upgrade.

Verifying the Pre-Upgrade UCS Server Firmware (C-Bundle) Version Before performing the upgrade, you need to verify the

current UCS server firmware by using one of the two methods described in the sections that follow.

Using UCS Manager To verify the current UCS server firmware using Cisco UCS Manager, follow these steps: Step 1. Log in to UCS Manager. Step 2. Select the Server tab. Step 3. Select the host firmware package policy by navigating to Policies > Root > Sub-Organizations > > Host Firmware Packages > HyperFlex. Step 4. Under properties, note the current rack package version, which should be listed as X.Y(Z)C. For example, Figure 5-46 shows version 4.0(2d)C.

Figure 5-46 Verifying the Server Firmware Version

Using HX Connect To verify the current UCS server firmware using Cisco HyperFlex Connect, follow these steps: Step 1. Log in to HX Connect. Step 2. In the navigation pane, select Upgrade. Step 3. Select the UCS Firmware checkbox and click Discover. Step 4. Note the current C-bundle version displayed. Figure 5-47 shows an example where the current server firmware is 4.0(2d).

Figure 5-47 Verifying the Server Firmware: HX Connect

Pre-Upgrade Validation This section lists the checks that are part of the HyperFlex pre-upgrade validations. Perform the

following validations (which are all described in more detail in the sections that follow) on each HyperFlex node before moving on to upgrade the next node in the cluster: Verify that the HyperFlex cluster is healthy and online. Verify that all HyperFlex cluster nodes are connected to vCenter and are online. Verify the cluster storage capacity. Verify that DRS is enabled and set to fully automated. Verify the Net.TeamPolicyUpDelay default value. Verify that vSphere services are running and the ESXi Agent Manager (EAM) health is normal. Verify the health of the cluster in Cisco UCS Manager. Verify the vMotion interface. Verify upstream network connectivity. Configure the cluster access policy in lenient mode. Verify that no major alarms are reported for the HyperFlex cluster in HyperFlex Connect.

Viewing the HyperFlex Cluster Health From HyperFlex Connect, select the System Information > Nodes page and verify that the HyperFlex cluster is healthy and online. From the vSphere Web Client navigator, select vCenter Global Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform > cluster >

Summary. View the cluster widget to verify that the HyperFlex cluster is healthy and online. Verify that all HX cluster nodes are connected to vCenter and that they are online. Note The HyperFlex cluster health verification check is automatically performed by the Hypercheck utility.
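In addition to HX Connect and vCenter, the same health state can be confirmed from the command line. The following is a minimal sketch, assuming SSH access to any storage controller VM; it uses the same stcli cluster info command shown later in this chapter for the offline upgrade procedure. Output similar to the following indicates a healthy cluster:

# stcli cluster info | grep -i health
healthState: healthy
state: healthy
storage cluster is healthy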

Checking the Cluster Storage Capacity It is recommended that you check the cluster storage capacity before starting the upgrade of an existing installation of Cisco HX Data Platform. If the storage cluster capacity is above 70%, it is highly recommended to either reduce the amount of storage capacity used or increase the storage capacity by adding new nodes or disks. This confirmation of cluster storage capacity is important because if a node goes down in such a situation, the cluster will not be able to rebalance and will stay unhealthy (online). Note Cluster storage capacity (enospace) verification check is automatically performed by the Hypercheck utility.
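The current space consumption can also be read from a storage controller VM. The following is a minimal sketch; the exact command and field names can vary by HX Data Platform release, so treat it as illustrative rather than authoritative:

# stcli cluster storage-summary              (summarizes total, used, and free capacity on most releases)
# stcli cluster info | grep -i capacity      (alternative if storage-summary is not available; the grep pattern may need adjusting for your release)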

Verifying That DRS Is Enabled Follow these steps to ensure that DRS is enabled on the

node: Step 1. From the vSphere Web Client navigator, select vCenter Inventory Lists > Clusters > cluster > Configure tab. Verify that DRS is enabled. Step 2. Click the vSphere DRS tab. Ensure that Migration Automation Level is set to Fully Automated, as shown in Figure 5-48.

Figure 5-48 DRS Settings: Enabled and Fully Automated

Verifying and Configuring the Net.TeamPolicyUpDelay Default Value To avoid loss of storage access during fabric interconnect reboots for firmware updates, perform this check prior to UCSM infrastructure upgrade. Upgrades to 3.5(2) require that the default value of the

ESXi host Net.TeamPolicyUpDelay be set to 30000. Complete the following steps to verify and, if needed, modify the default value of the ESXi host Net.TeamPolicyUpDelay to 30000 (see Figure 5-49): Note The Net.TeamPolicyUpDelay value check is automatically performed by the Hypercheck utility.

Figure 5-49 Advanced Configuration: Net.TeamPolicyUpDelay Step 1. From the vSphere Web Client navigator, select ESXi Host > Configure > System > Advanced System Settings. Step 2. In Advanced System Settings, scroll down to Net.TeamPolicyUpDelay. Step 3. If needed, change the value to 30000. The default value is 100.
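If you prefer to verify or change this setting from the ESXi shell rather than the Web Client, the standard esxcli advanced-settings commands can be used. This is a minimal sketch, assuming SSH is enabled on the host (as required earlier in the pre-upgrade guidelines):

# esxcli system settings advanced list -o /Net/TeamPolicyUpDelay           (shows the current and default values)
# esxcli system settings advanced set -o /Net/TeamPolicyUpDelay -i 30000   (sets the value to 30000)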

Viewing ESX Agent Manager

From the vSphere Web Client navigator, select Administration > vCenter Server Extensions > vSphere ESX Agent Manager > Configure > ESX Agencies. Verify that the ESX Agent Manager (EAM) status is normal, as shown in Figure 5-50.

Figure 5-50 EAM Status

Note Starting with the HX 4.0 release, it is recommended to remove EAM from HyperFlex clusters that are upgraded from older releases. New installations of HX 4.0 do not use EAM.

Verifying the Health of a HyperFlex Cluster in Cisco UCS Manager Cisco UCS Manager health checks help with verifying the status of the UCS infrastructure and servers before you perform an upgrade. Use the following steps for verification: Step 1. Verify whether the high availability status of

the fabric interconnects shows that both fabric interconnects are up and running. Log in to UCSM, browse to Equipment > Fabric Interconnect > FI A, expand High Availability Details, and verify that the Ready and State fields show Yes and Up, respectively, as shown in Figure 5-51.

Figure 5-51 High Availability Status Step 2. Verify that the data path is up and running by entering the following commands:

a. To enter NX-OS mode for the fabric interconnect, enter the following command:

UCS-A /fabric-interconnect # connect nxos {a | b}

b. To determine the number of active Ethernet interfaces, enter the following command:

UCS-A(nxos)# show int br | grep -v down | wc -l

Verify that the number returned matches the number of Ethernet interfaces that were up prior to the upgrade. c. To determine the total number of MAC addresses, enter the following command:

UCS-A(nxos)# show platform fwm info hw-stm | grep '1.' | wc -l

Verify that this number matches the number of MAC addresses prior to the upgrade. See the Cisco UCS Manager Firmware Management Guide (https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-manager/GUI-User-Guides/Firmware-Mgmt/3-2/b_UCSM_GUI_Firmware_Management_Guide_3_2.html) for more information. Step 3. Verify that the HyperFlex servers have no faults that might impact an upgrade. For example, Figure 5-52 shows a HyperFlex server on Cisco UCSM that has status OK and doesn't have any critical, major, or minor alerts.

Figure 5-52 HyperFlex Server Status and Faults

Verifying vMotion Interfaces Make sure the vMotion VMkernel interfaces necessary for vMotion to work are configured on each ESXi host. You must predefine vMotion networking by creating a

vSwitch and defining the vNICs and VLANs in UCS Manager. Before you perform maintenance operations on a Cisco HyperFlex cluster, you need to verify that all nodes in the HX cluster are configured for vMotion. Note You can create vMotion interfaces by using the post_install script. Refer to Chapter 3, "Installing HyperFlex," for details. The vMotion verification check is automatically performed by the Hypercheck utility.
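A quick way to confirm the vMotion configuration from the ESXi shell is to list the VMkernel interfaces and their tags. This is a minimal sketch; the vmk number used for vMotion (vmk2 here) is an assumption based on a typical HyperFlex post_install configuration and may differ in your environment:

# esxcli network ip interface ipv4 get           (lists the VMkernel interfaces and their IP addresses)
# esxcli network ip interface tag get -i vmk2    (should report the VMotion tag on the vMotion interface)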

Verifying Upstream Network Connectivity You need to ensure that the hx-storage-data and vMotion upstream switches are configured for jumbo frames. Skipping this step could lead to input/output interruption during the Cisco UCS infrastructure upgrade. Follow this process: Step 1. Put a node in Cisco HX maintenance mode. Step 2. SSH to the ESXi host of the node from step 1. Step 3. Verify that ping is working by pinging the corresponding vmk1 IP interface of another host. If using jumbo frames, use the following command:

# vmkping -I vmk1 -d -s 8972 <vmk1 IP of another host>

If not using jumbo frames, use the following command:

# vmkping -I vmk1 -d -s 1472 <vmk1 IP of another host>

Step 4. Swap the active interfaces in vswitch-hx-storage-data to force traffic upstream:

# esxcli network vswitch standard policy failover set -a vmnic1 -s vmnic5 -v vswitch-hx-storage-data

Step 5. Again verify that the ping is working by pinging the corresponding vmk1 IP interface of another host. If using jumbo frames, use the following command:

# vmkping -I vmk1 -d -s 8972 <vmk1 IP of another host>

If the ping fails, try again with this command:

# vmkping -I vmk1 -d -s 1472 <vmk1 IP of another host>

If not using jumbo frames, use the following command:

# vmkping -I vmk1 -d -s 1472 <vmk1 IP of another host>

Note If the ping fails, do not proceed to upgrade the Cisco UCS firmware. Investigate the network configuration, including the upstream switch, to determine the cause of the failure.

Step 6. Return the interfaces to the defaults even if the ping fails:

# esxcli network vswitch standard policy failover set -a vmnic5 -s vmnic1 -v vswitch-hx-storage-data

Note vmnic5 and vmnic1 are supported for upgrade from 3.5(x) and later releases of HyperFlex.

Step 7. Exit the node from Cisco HX maintenance

mode.

Configuring the Cluster Access Policy in Lenient Mode The cluster access policy works with the data replication factor to set levels of data protection and data loss prevention. There are two cluster access policy options: Strict mode: Applies policies to protect against data loss. If nodes or disks in the storage cluster fail, the cluster’s ability to function is affected. If more than one node fails or one node and disk(s) on a different node fail, this is called a simultaneous failure. The strict setting helps protect the data in the event of simultaneous failures. Lenient mode: Applies policies to support longer storage cluster availability. This is the default.

The mode is not configurable during installation but can be changed after installation and initial storage cluster configuration. To modify the cluster access policy to lenient, use the following procedure: Step 1. SSH to any one of the controller VMs and log in as root. Step 2. Check whether lenient mode is already configured:

# stcli cluster get-cluster-access-policy

Step 3. If the mode is set to strict, change it to lenient:

# stcli cluster set-cluster-access-policy --name lenient

If it is already set to lenient, no further action is required. Step 4. Confirm the change:

# stcli cluster info | grep -i policy

Verifying That No Major Alarms Are Reported for the HyperFlex Cluster in HyperFlex Connect Log in to HX Connect, browse to Alarms, and verify that there are no major alarms, as shown in Figure 5-53.

Figure 5-53 HyperFlex Connect: No Major Alarms

Hypercheck Utility

The Hypercheck tool is a utility that performs proactive checks on HyperFlex systems to ensure their stability and resiliency. It provides an automated list of health and pre-upgrade checks on HyperFlex systems to save time during HyperFlex upgrade and maintenance operations. Cisco recommends running the proactive Hypercheck health check utility on a HyperFlex cluster prior to upgrade. Such checks provide early visibility into any areas that may need attention and will help ensure a seamless upgrade experience. Note Most of the health check and pre-validation/pre-upgrade checks on the HyperFlex Data Platform are included as a part of the Hypercheck utility and are automatically performed when you run Hypercheck on HyperFlex. The Hypercheck utility was developed by Cisco CX engineers who troubleshoot HyperFlex every day. It is hosted on the CiscoDevNet GitHub repository and is updated regularly whenever Cisco CX identifies a new check (https://github.com/CiscoDevNet/Hyperflex-Hypercheck).

Storage Controller VM and ESXi Node Checks Hypercheck performs the following checks on each storage controller VM: Cluster services check: Verifies the status of the storfs, stMgr, and stNodeMgr services. Enospc state check: Checks whether the cluster space usage is above the warning threshold. Zookeeper check: Checks whether Zookeeper is running.

Exhibitor check: Verifies the status of the Exhibitor service, which manages Zookeeper. HDD health check: Reports if there is any blacklisted disk in a cluster. DNS check: Checks whether DNS is configured and reachable. vCenter reachability check: Checks whether vCenter is reachable on the required ports. Timestamp check: Checks whether all the controller VMs have exactly the same time. NTP sync check: Checks whether NTP is reachable from the storage controller VMs. Check package and versions: Checks for packages and versions on controller VMs. Check iptables count: Compares Iptables counts on all controller VMs. Cluster upgrade check: Checks whether there are any previous stale upgrade entries. Extra pnodes check: Looks for any extra/duplicate pnode entries in the cluster. Disk usage (/var/stv) check: Checks whether the utilization of /var/stv is more than 80%. Disk usage (/var/zookeeper) check: Checks the utilization of /var/zookeeper. Out of memory check: Checks through the log files if the cluster had any out-of-memory events. Supported vSphere check: Lists supported vSphere Versions with current HX Data Platform version. Network checks: Checks the connectivity between the management and storage networks.

Hypercheck performs the following checks on each ESXi node: HX user account check: Verifies that the hxuser account exists on all the hosts and has admin rights. vMotion enabled check: Checks whether the vMotion network is configured. vMotion reachability check: Checks the connectivity between vMotion networks. Check for ESXi failback timer: Checks for the ESXi failback timer on ESXi hosts. Network check: Sends a ping to vmk0, eth0, and eth1 (between the management and storage networks). Dumpfile in springpathDS: Checks whether SpringpathDS is configured for a coredump. VMware Tools location check: Checks whether the VMware Tools location was modified from the default.

Installing and Running Hypercheck To install and run Hypercheck, perform the following steps: Step 1. Download HyperFlex-Hypercheck.zip from https://github.com/CiscoDevNet/Hyperflex-Hypercheck, as shown in Figure 5-54.

Figure 5-54 Downloading Hypercheck Zip

Note Use only the script downloaded from the CiscoDevNet GitHub account.

Always download the latest version of the tool before you use it. This tool is enhanced frequently, and using an older version might result in missing important checks. Step 2. Upload the tool to the Storage Controller VM (SCVM) with the Cluster Management IP (CMIP). Use your preferred method (scp/sftp/ftp/tftp) to copy HyperFlex-Hypercheck.zip to the /tmp directory.

Step 3. Extract the contents of HyperFlex-Hypercheck.zip: a. Type cd /tmp to change to the /tmp directory:

root@SpringpathController7PVQWP6WV1:~# cd /tmp/

b. Type unzip HyperFlex-Hypercheck.zip to extract the files:

root@SpringpathController7PVQWP6WV1:/tmp# unzip HyperFlex-Hypercheck.zip
Archive: HyperFlex-Hypercheck.zip
b61c59f7962b72902692ce70548ba3d760efdf06
   creating: HyperFlex-Hypercheck/
  inflating: HyperFlex-Hypercheck/HXTool.py
  inflating: HyperFlex-Hypercheck/LICENSE.txt
  inflating: HyperFlex-Hypercheck/ReadMe.txt
  inflating: HyperFlex-Hypercheck/TestInfo.txt
  inflating: HyperFlex-Hypercheck/prettytable.py
  inflating: HyperFlex-Hypercheck/progressbar.py
root@SpringpathController7PVQWP6WV1:/tmp#

Step 4. Execute the HXTool Python script: a. Type cd HyperFlex-Hypercheck to go to the HyperFlex-Hypercheck directory:

root@SpringpathControllerABCDE01234:/tmp# cd HyperFlex-Hypercheck

b. Type python HXTool.py to execute the

script and provide the root passwords for the controller VM and ESX:

root@SpringpathControllerABCDE01234:/tmp/HyperFlex-Hypercheck# python HXTool.py
HX Health Check 3.8
Please enter below info of HX-Cluster:
Enter the HX-Cluster Root Password:
Enter the ESX Root Password:

The tool then starts the automated checks on HyperFlex and takes around three to five minutes to complete the checks on a four-node cluster (though the duration varies depending on the cluster size). Note To stop the script execution, press Ctrl+Z, and it stops immediately.

Example 5-1 shows sample HXTool output from a three-node cluster. It shows the progress bar during the process and gives a binary PASS/FAIL for each of the checks performed. If one or more checks fail, perform the corresponding corrective action or contact Cisco TAC to fix the issue before proceeding with the upgrade process. Example 5-1 Sample HXTool Output from a Three-Node Cluster

root@SpringpathControllerABCDE01234:/tmp/HyperFlex-Hypercheck# python HXTool.py
HX Health Check 3.8
Please enter below info of HX-Cluster:
Enter the HX-Cluster Root Password:
Enter the ESX Root Password:
SSH connection established to HX Node: 10.X.X.79
SSH connection established to HX Node: 10.X.X.80
SSH connection established to HX Node: 10.X.X.81

HX Controller: 10.X.X.79
Cluster services check        [########################] PASS
ZooKeeper & Exhibitor check   [########################] PASS
HDD health check              [########################] PASS
Pre-Upgrade Check             [########################] PASS
Network check                 [########################] PASS

HX Controller: 10.X.X.79 Test Summary:
+----------------------------------+-----------+----------+
| Name                             | Result    | Comments |
+----------------------------------+-----------+----------+
| Cluster services check           | PASS      |          |
+----------------------------------+-----------+----------+
| Enospc state check               | PASS      |          |
+----------------------------------+-----------+----------+
| Zookeeper check                  | PASS      |          |
+----------------------------------+-----------+----------+
| Exhibitor check                  | PASS      |          |
+----------------------------------+-----------+----------+
| ZooKeeper Disk Usage             | PASS      |          |
+----------------------------------+-----------+----------+
| HDD health check                 | PASS      |          |
+----------------------------------+-----------+----------+
| DNS check                        | PASS      |          |
+----------------------------------+-----------+----------+
| vCenter reachability check       | PASS      |          |
+----------------------------------+-----------+----------+
| Timestamp check                  | PASS      |          |
+----------------------------------+-----------+----------+
| NTP sync check                   | PASS      |          |
+----------------------------------+-----------+----------+
| Check package & versions         | PASS      |          |
+----------------------------------+-----------+----------+
| Check Iptables count             | PASS      |          |
+----------------------------------+-----------+----------+
| Cluster upgrade status           | PASS      |          |
+----------------------------------+-----------+----------+
| Extra pnodes check               | PASS      |          |
+----------------------------------+-----------+----------+
| Memory usage check               | PASS      |          |
+----------------------------------+-----------+----------+
| Incidence of OOM in the log file | PASS      |          |
+----------------------------------+-----------+----------+
| Supported vSphere versions       | 6.0.0-U3  |          |
|                                  | 6.5.0-U1  |          |
|                                  | 6.5.0-U2  |          |
|                                  | 6.5.0-U3  |          |
|                                  | 6.7.0-UGA |          |
|                                  | 6.7.0-U1  |          |
|                                  | 6.7.0-U2  |          |
|                                  | 6.7.0-U3  |          |
+----------------------------------+-----------+----------+
| Check permissions for /tmp       | PASS      |          |
+----------------------------------+-----------+----------+

ESX Host: 10.X.X.76
+--------------------------------------+--------+----------+
| Name                                 | Result | Comments |
+--------------------------------------+--------+----------+
| HX User Account check                | PASS   |          |
+--------------------------------------+--------+----------+
| vMotion enabled check                | PASS   |          |
+--------------------------------------+--------+----------+
| vMotion reachability check           | PASS   |          |
+--------------------------------------+--------+----------+
| Check for ESXI Failback timer        | PASS   |          |
+--------------------------------------+--------+----------+
| Check ping to vmk0, eth0, eth1       | PASS   |          |
+--------------------------------------+--------+----------+
| No extra controller vm folders check | PASS   |          |
+--------------------------------------+--------+----------+
| VMware Tools location check          | PASS   |          |
+--------------------------------------+--------+----------+

Main Report File: HX_Tool_Main_Report_15-03-2020_05-2412_.txt
Report tar file: HX_Report_15_03_2020_17_15_17.tar
Report file copied to path: /var/log/springpath

Release Notes: https://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-data-platform-software/products-release-notes-list.html
Upgrade Guides: https://www.cisco.com/c/en/us/support/hyperconverged-systems/hyperflex-hx-data-platform-software/products-installation-guides-list.html

Note:
1) Please check the status of Compute nodes manually, script only verifies the config on the converged nodes.
2) Hypercheck does not perform FAILOVER TEST, so please ensure that the upstream is configured for network connectivity for JUMBO or NORMAL MTU size as needed.

Be sure to resolve all the issues identified during the preceding verification process and then proceed to the next step of performing the HyperFlex upgrade procedure.

Upgrading UCS Infrastructure Firmware This section covers how to upgrade UCS infrastructure firmware. The steps are as follows: Step 1. Open the UCS Manager GUI. Step 2. Select Equipment > Firmware

Management > Firmware Auto Install. Step 3. Click Install Infrastructure Firmware (see Figure 5-55). Check the Ignore All box if the warnings are not critical to the user environment and click Next.

Figure 5-55 Firmware Auto Install Step 4. Select Infra Pack from the drop-down list and click the Upgrade Now box to immediately begin the firmware upgrade. Click Finish. The Cisco UCS Manager GUI might disconnect.

This is expected during a UCS Manager upgrade as UCS Manager is stopped and then restarted at the new version. Wait until UCS Manager goes back online. Log back in to UCS Manager to complete the next steps. Step 5. Click Finish, and the UCS Manager software upgrades (see Figure 5-56).

Figure 5-56 Installing the Infrastructure Firmware

Step 6. Log back in to UCS Manager, wait for the subordinate FI to be activated, and then select Equipment > Installed Firmware > Fabric Interconnects. Step 7. Verify that the subordinate FI has rebooted and joined the UCS cluster: a. Check the kernel and switch versions of the FI. b. Ensure that the FI has no fault. c. Ensure that the FI cluster membership is Subordinate (see Figure 5-57).

Figure 5-57 Checking the Subordinate FI Version Step 8. Wait until HX traffic is repinned to both FIs. Wait for UCS Manager vNIC faults to be cleared. Step 9. Verify that the HX cluster is online and healthy before rebooting the primary fabric

interconnect. (Refer to the section “Viewing the HyperFlex Cluster Health,” earlier in this chapter, for more on how to check HX cluster health.) Step 10. In the UCS Manager GUI, on the toolbar, click Pending Activities (see Figure 5-58). Click the Fabric Interconnects tab to see the tasks requiring user acknowledgment before they can be completed.

Figure 5-58 Pending Activities Step 11. Click Reboot now for each pending activity that you want to deploy immediately (see Figure 5-59). Cisco UCS Manager immediately reboots the primary FI.

Figure 5-59 Fabric Interconnect: Reboot Now Step 12. Check the status to determine whether the subordinate FI has become the primary FI. Step 13. Verify that the FI has rebooted and joined the

UCS cluster as the subordinate. Wait until HX traffic is repinned to both FIs. Step 14. In the UCS Manager GUI, wait until all server vNIC faults have been cleared and verify the currently installed infrastructure version (see Figure 5-60).

Figure 5-60 UCS: Installed Firmware Step 15. Verify that the HX Cluster is online and healthy after rebooting the FI. (For more information on how to check the HX cluster health, see the section “Viewing the HyperFlex

Cluster Health,” earlier in this chapter.)

Upgrade Procedure An upgrade can occur online or offline. Using either of these methods, you can choose to perform any of the following: Combined upgrade: Combined upgrade means upgrading all three components together: HX Data Platform, UCS, and ESXi. Split upgrade: Split upgrade means upgrading HX Data Platform first and then upgrading either UCS and/or ESXi. To perform a split upgrade, you must upgrade HX Data Platform first. After HX Data Platform is upgraded to 3.5(1x), you can perform a split upgrade of UCSM only, ESXi only, or both.

Note The UCS infrastructure is upgraded first in both the combined upgrade and split upgrade processes.
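For reference, a split upgrade of the HX Data Platform component alone can also be driven from the controller VM CLI with the same stcli cluster upgrade command used for the combined offline upgrade later in this chapter, limited to the hxdp component. This is a minimal sketch with a placeholder bundle version; check the release-specific upgrade guide for the exact options supported by your release:

# stcli cluster upgrade --components hxdp --location /tmp/storfs-packages-<version>.tgz
# stcli cluster upgrade-status      (reports the progress and final status of the upgrade)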

Recommended Upgrade Method For both the combined upgrade and the split upgrade, Cisco recommends upgrading the HyperFlex components in the following order for optimizing the upgrade time: 1. Upgrade the Cisco UCS infrastructure. 2. Upgrade Cisco HX Data Platform. 3. Upgrade Cisco customized VMware ESXi. 4. Upgrade the Cisco UCS firmware.

Online Upgrade Process When using the online upgrade process workflow, consider the following: First upgrade the Cisco UCS infrastructure to the latest version and then use the automated upgrade workflow for a combined upgrade of Cisco UCS firmware and Cisco HX Data Platform. An online upgrade uses host firmware packages to upgrade all server endpoints. During an online upgrade, as one node is being upgraded (placed into maintenance mode), the number of tolerated node failures is reduced based on the Data Replication Factor and Access Policy settings. If upgrading both HX Data Platform and the UCS firmware, a combined upgrade can be selected through HX Connect, depending on the length of the maintenance window.

Note Do not use the Firefox browser for an online upgrade. It is not supported because it is bundled with an outdated version of Flash.

Online Upgrade Workflow Steps To do an online upgrade, follow these steps: Step 1. Download the Cisco UCS infrastructure A, blade bundle B, and rack bundle C on UCSM, as discussed in the section “Downloading UCS Infra, B-Series, C-Series, and Storfs Bundles” earlier in this chapter.

Note Download the UCS B-Series bundle and upload it to UCSM even if you don't have compute blades. This is a prerequisite for the UCS firmware upgrade.

Step 2. Ensure that the hx-storage-data and vMotion upstream switches are configured for full network failover capability. Otherwise, the HyperFlex cluster goes offline, and all datastores unmount from the ESXi hosts. For a reminder of how to perform this check, see the section "Verifying Upstream Network Connectivity," earlier in this chapter. Step 3. Upgrade the Cisco UCS infrastructure as required.

Step 4. Disable the snapshot schedule by using the command stcli snapshot-schedule --disable. Step 5. Initiate the upgrade from the HX Connect UI, as discussed in the next section.

Step 6. Confirm that the upgrade task is complete. Step 7. Enable the snapshot schedule by using the command stcli snapshot-schedule --enable.

Upgrading a HyperFlex Cluster Using the HX Connect UI To upgrade a HyperFlex cluster using the HX Connect UI, follow these steps: Step 1. Use Auto Bootstrap and Upgrading Management Packages to upgrade Cisco HX Data Platform. Log in to HX Connect: navigate to https://<cluster management IP>, enter the administrative username and password, and click Login. Step 2. In the navigation pane, select Upgrade. The Select Upgrade Type page appears, as shown in Figure 5-61.

Figure 5-61 Selecting the Upgrade Type in HyperFlex Connect Step 3. Upload the new Cisco HX Data Platform upgrade package, enter your vCenter credentials, as illustrated in Figure 5-62 and Figure 5-63, and then click Upgrade.

Figure 5-62 Uploading the HX Data Platform Upgrade Package: Validation

Figure 5-63 HX Data Platform Upgrade Package: Upload Success Step 4. In the Initiating Pre-Upgrade message box that appears (see Figure 5-64), click Confirm. The Pre-Upgrade Process auto bootstrapping and updating management plug-in begins to run on all nodes, as shown in Figure 5-65.

Figure 5-64 Initiating the Pre-Upgrade Process

Figure 5-65 Pre-Upgrade in Progress: Auto Bootstrapping Step 5. When the Pre-Upgrade process is complete, and the HX Connect UI prompts you with the onscreen message shown in Figure 5-66, click Upgrade again to complete the HX Data Platform part of the upgrade.

Figure 5-66 Pre-Upgrade Success

Note The cluster upgrade is not complete until the full upgrade is initiated. The upgrade is only partially complete after the pre-upgrade auto bootstrapping steps (steps 1 through 5).

Step 6. Choose the type of upgrade from the Select Upgrade Type page: HX Data Platform and UCS Server Firmware HX Data Platform and ESXi HX Data Platform, UCS Server Firmware, and ESXi

Step 7. Depending on the type of upgrade you want to perform, complete the following fields on the Select Upgrade Type page (see Figure 5-67):

UCS Server Firmware: In the UCS Manager FQDN/IP field, enter the Cisco UCS Manager FQDN or IP address. In the User Name field, enter the Cisco UCS Manager username. In the Admin Password field, enter the Cisco UCS Manager password. Click the Discover button to view the current UCS firmware package version (which is listed under M5 Current Version).

Figure 5-67 Uploading Upgrade Bundles Data Platform: Upload the Cisco HyperFlex Data Platform upgrade bundle for

upgrading existing clusters (the one used in Step 3). Notice that the current HyperFlex Data Platform version is displayed. Notice that the HyperFlex Data Platform version of the uploaded bundle is displayed.

ESXi: Upload the latest Cisco HyperFlex custom image offline bundle for upgrading existing ESXi hosts (for example, HX-ESXi-6.7U314320388-Cisco-Custom-6.7.3.1-upgrade-bundle.zip). Notice that the current ESXi version is displayed. Notice that the ESXi version of the uploaded bundle is displayed.

When you’re done with the settings in this screen, click Upgrade to begin the cluster upgrade process. The validation screen on the Progress page shows the progress of the checks performed (see Figure 5-68).

Figure 5-68 Validating the Upgrade

When the validations are complete, the upgrade process starts, and the Progress page displays the upgrade progress for all the nodes (see Figure 5-69).

Figure 5-69 Upgrade Progress When the upgrade completes, the Progress page shows that the HX Data Platform upgrade has completed and asks you to refresh the browser session to see the upgrade changes (see Figure 5-70).

Figure 5-70 Upgrade Complete

Offline Upgrade Process Before you proceed with either a combined upgrade or a split upgrade, consider the following guidelines: The package name must match the file that you uploaded to the controller VM. You need to enter passwords when prompted to do so. Nodes are upgraded with the new version of the Cisco HX Data Platform software and rebooted one at a time. Offline cluster upgrades with nested vCenter are not supported.

The steps for an offline upgrade are as follows: Step 1. Download Cisco UCS infrastructure A, blade bundle B, and rack bundle C on UCSM. Step 2. Ensure that the hx-storage-data and vMotion upstream switches are configured for full network failover capability. Step 3. Upgrade the Cisco UCS infrastructure as required (manually). Step 4. Perform a graceful shutdown of an HX cluster

(SCVMs must remain powered on): a. SSH to any controller VM in the cluster. b. Check the cluster health by using the command stcli cluster info | grep health. c. If the cluster is healthy, shut down the cluster with the command stcli cluster shutdown. d. Wait a few minutes for the shutdown to complete and the prompt to return. Step 5. Disable the snapshot schedule by using the command stcli snapshot-schedule --disable. Step 6. Do a manual bootstrap upgrade by copying the storfs package to /tmp and confirming the plug-in version on vCenter. Run the cluster bootstrap script from the node that has the cluster management IP. Bootstrap is a process that updates the management packages and plug-in on vCenter, as shown in this example:

# tar -zxvf storfs-packages-4.0.1a-33028.tgz
# ./cluster-bootstrap.sh

Note Copy storfs-packages-<version>.tgz into /tmp. Do not use any folder other than /tmp, and do not create any subfolders.

Step 7. Do an offline upgrade using the CLI. The following is an example of a combined upgrade for M5 servers:

# stcli cluster upgrade --components ucs-fw,hxdp,hypervisor --location /tmp/storfs-packages-3.5.1a-19712.tgz --ucsm-host eng-fi16.eng.storvisor.com --ucsm-user admin --ucsfw-version '3.1(2g)'

Step 8. When the upgrade is complete, start the cluster and power on the VMs by following these steps: a. Log in to any controller VM through SSH and run the command stcli cluster start, as in this example:

root@SpringpathControllerZRVF040451# stcli cluster upgrade-status
Cluster upgrade succeeded
root@SpringpathControllerZRVF040451# stcli cluster start
waiting for Cluster to start on nodes:
This will start the cluster and mount the HX datastores.
Wait for cluster to come online.
Cluster is online

b. Wait for the cluster to become healthy before

starting the VMs. Then run the command stcli cluster info | grep health, as in this example:

root@SpringpathControllerZRVF040451# stcli cluster info | grep health
healthState: healthy
state: healthy
storage cluster is healthy

c. When the cluster is healthy, launch vSphere Web Client or Thick Client, navigate to Hosts and Clusters > Datacenter, right-click the cluster, and select Power > Power On to start the VMs. Step 9. Enable the snapshot schedule by using the command stcli snapshot-schedule --enable.

Post-Upgrade Check After a cluster is upgraded, you can do the following to confirm the cluster state after upgrade: Log in to HyperFlex Connect and confirm the cluster health and cluster version. Rerun Hypercheck to perform a health check of the cluster.
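The same confirmation can also be run from a storage controller VM. This is a minimal sketch that reuses commands shown earlier in this chapter:

# stcli cluster version                        (confirms the new HX Data Platform version on the cluster)
# stcli cluster info | grep -i health          (should report healthState: healthy)
# cd /tmp/HyperFlex-Hypercheck && python HXTool.py     (reruns the Hypercheck health checks)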

SUMMARY This chapter covers important Day 2 operations on

HyperFlex, including the following topics: HyperFlex licensing Virtual machine management Scaling HyperFlex clusters Hardware (disk) replacement HyperFlex software upgrades

Chapter 6 Advanced Features The HyperFlex HX Data Platform disaster recovery feature allows you to protect virtual machines (VMs) by setting up replication so that protected virtual machines running on one cluster replicate to the other cluster in a pair of network-connected clusters and vice versa. The two paired clusters typically are located at a distance from each other, with each cluster serving as the disaster recovery site for virtual machines running on the other cluster. Once protection has been set up on a VM, the HX Data Platform periodically takes a replication snapshot of the running VM on the local cluster and replicates (copies) the snapshot to the paired remote cluster. In the event of a disaster at the local cluster, you can use the most recently replicated snapshot of each protected VM to recover and run the VM at the remote cluster. Each cluster that serves as a disaster recovery site for another cluster must be sized with adequate spare resources so that, in the event of a

disaster, it can run the newly recovered virtual machines in addition to its normal workload. This chapter describes the HyperFlex HX Data Platform disaster recovery feature and describes the configuration steps needed to enable replication between two HyperFlex clusters. It also covers the available backup solutions that can be integrated with HyperFlex HX Data Platform.

DATA PROTECTION There are several schools of thought on data protection. Some people believe that high availability and durability are part of data protection. Some say that stretch clusters are also part of data protection. However, two very basic parameters allow you to determine the data protection solution you should use: recovery time objective (RTO) and recovery point objective (RPO). RTO essentially refers to how much time it takes for a service or a virtual machine to come back up after a disaster or failure has occurred. RPO indicates how much data loss, measured as the time since the last recoverable copy, an organization is prepared to tolerate. HyperFlex offers the following protection options, listed here from low RTO/RPO level to high RTO/RPO level:

Local resiliency (high availability and durability): Two (RF-2) or three (RF-3) copies of VM data Data striped and distributed across all local nodes Redundant network paths An HA-aware hypervisor Zero RPO and zero or very low RTO Site-level resiliency (stretch clusters): Four copies (RF 2+2) of VM data Protection against local failures and site failures Protection against "split brains" VM data mirrored across sites An HA-aware hypervisor Zero RPO and zero or very low RTO Snapshots (VM-centric snapshots): VM-centric, instant, space-optimized redirect-on-write snapshots Scheduled with a retention policy Quiesced and crash-consistent Rapid provisioning using ReadyClones "Now"/hourly/daily/weekly RPO and RTO in minutes Replication and disaster recovery (VM-centric replication and disaster recovery):

VM-centric replication Periodic asynchronous replication to remote site (WAN distance) Snapshot based Failover, fast failback, and test recovery Minutes/hourly/daily RPO and RTO in minutes Backup and Archive (third-party backup vendor integration): Fully verified Cisco Validated Design (CVD) on UCS infrastructure Integrated with HyperFlex native snapshots Accelerated transfers and low backup window Hourly/daily RPO and RTO in minutes/hours

REPLICATION OVERVIEW HyperFlex 2.5 introduced new data protection features, including snapshot-based VM-level replication between two HyperFlex clusters. Replication can be used to migrate or recover a single VM in the secondary HX cluster, coordinate or recover groups of VMs, or recover all VMs as part of a disaster recovery scenario. In order to start using replication, you must install two HyperFlex clusters and ensure network connectivity between them. The clusters can be extended clusters, and it is possible to replicate between hybrid and all flash clusters. The clusters are allowed to use self-

encrypting disks or standard disks in either location, both of them, or neither of them; there is no restriction in that respect. To avoid complications with duplicate VM IDs, recommended practice dictates that the two replicating HyperFlex clusters be managed by two different VMware vCenter servers. Figure 6-1 shows the logical network topology used for replication.

Figure 6-1 Replication Logical Network Topology

Port Requirements for Replication The firewall ports listed in Table 6-1 need to be open when configuring native HX asynchronous cluster-to-cluster replication. If any of these ports are not open, the storage controller virtual machines (SCVMs) cannot communicate using the specific service for which the ports are closed. Closed ports also prevent the proper functionality of the replication feature. Table 6-1 Firewall Ports Required to Be Open for Replication

Port Number | Service/Protocol               | Source        | Port Destinations | Essential Information
9338        | Data Services Manager Peer/TCP | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
3049        | Replication for CVM/TCP        | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
4049        | Cluster Map/TCP                | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
4059        | NR NFS/TCP                     | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
9098        | Replication Service            | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
8889        | NR Master for Coordination/TCP | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
9350        | Hypervisor Service/TCP         | Each CVM node | Each CVM node     | Bidirectional, including cluster management IP addresses
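A simple way to confirm that these ports are open end to end is to test them from a storage controller VM on one cluster toward a controller VM on the paired cluster. This is a minimal sketch; it assumes the netcat (nc) utility is present on the controller VM, which may not be the case on every HX release, and <remote CVM IP> is a placeholder for a controller VM or cluster management IP at the remote site:

# ping <remote CVM IP>      (basic reachability across the replication network)
# for port in 9338 3049 4049 4059 9098 8889 9350; do nc -zv <remote CVM IP> $port; done      (reports whether each replication port accepts TCP connections)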

Replication Considerations When applying replication in HyperFlex, consider the following: Administrator: All replication and recovery tasks, excluding monitoring, can only be performed with administrator privileges on the local cluster. For tasks involving a remote cluster, both the local and remote user must have administrator privileges and should be configured with the vCenter SSO on their respective clusters. Storage space: Ensure that there is sufficient space on the remote cluster to support the replication schedule. The protected virtual machines are replicated (copied) to the remote cluster at every scheduled interval. Although storage capacity methods (deduplication and compression) are applied, each replicated virtual machine will consume some storage space. Not having sufficient storage space on the remote cluster can cause the remote cluster to reach capacity usage maximums. If out-of-space errors are reported, all replication schedules must be paused until the space is appropriately adjusted on the HX cluster. Always ensure that the cluster capacity consumption is below the space utilization warning threshold. Supported clusters: Replication is supported between the following HyperFlex clusters: 1:1 replication between HX clusters running under fabric interconnects. 1:1 replication between all flash and hybrid HX clusters running under fabric interconnects. 1:1 replication between a 3-node or 4-node HX Edge cluster and another 3-node or 4-node HX Edge cluster.

1:1 replication between all flash 3-node and 4-node edge and hybrid 3-node and 4-node HX edge clusters. 1:1 replication between a 3-node or 4-node HX edge cluster and an HX cluster running under fabric interconnects.

Recovery Considerations When configuring recovery in HyperFlex, consider the following: Rebooting nodes: Do not reboot any nodes in an HX cluster during any restore, replication, or recovery operation. Thin provision: Protected virtual machines are recovered with thin-provisioned disks, regardless of how disks were specified in the originally protected virtual machine. Protection group limitations: The maximum number of VMs allowed in a protection group is 32. Do not add VMs with ISOs or floppies to protection groups. Non-HX datastores: If you have protected a VM with storage on a non-HX datastore, periodic replication will fail. You can either unprotect this VM or remove its non-HX storage. Do not move protected VMs from HX datastores to non-HX datastores. If a VM is moved to a non-HX datastore through storage vMotion, unprotect the VM and then reapply the protection. Protection and recovery of virtual machines with snapshots: There are several options: VM with no snapshots: When replication is enabled, the entire content of the VM is replicated. VM with VMware Redo-log snapshots: When replication is enabled, the entire content, including the snapshot data, is replicated. When a VM with redo-log

snapshots is recovered, all previous snapshots are preserved. VM with HyperFlex snapshots: When replication is enabled, only the latest data is replicated, and the snapshot data is not replicated. When the VM is recovered, previous snapshots are not preserved. Data protection and disaster recovery (DR) snapshots: These snapshots are stored on the same datastore as the protected VMs. Manual deletion of these snapshots is not supported. Deleting the snapshot directories would compromise HX data protection and disaster recovery.

REPLICATION NETWORKING OVERVIEW
The HyperFlex clusters that will replicate must have TCP/IP connectivity between them, and additional IP addresses must be provided to an internal IP address pool that the HX SCVMs will use. The minimum number of IP addresses required is the number of nodes in the cluster plus 1 additional address. More addresses than are currently needed can be placed into the pool to allow for future growth of the HX cluster. An existing VLAN ID and subnet can be used, although it is more typical to configure a specific VLAN and subnet to carry replication traffic that will traverse the campus or WAN links between the two clusters. The VLANs that will be used for replication traffic must already be trunked to the Cisco UCS fabric interconnects from the northbound network by the upstream switches, and this configuration step must be done manually prior to beginning the HyperFlex Connect configuration.

The bandwidth usage of the replication traffic can be limited to avoid saturating the interconnecting network links, or it can be left unlimited, in which case the maximum is the total bandwidth available to the network. The bandwidth consumption is directly affected by the number of VMs being protected and the frequency of their replication. The configured bandwidth limit must not exceed the physical bandwidth, and it must be the same on both sites of the disaster recovery environment. The minimum allowed bandwidth is 10 Mbps, and the maximum latency supported at 10 Mbps is 75 ms.

The interconnection between the two clusters at the two sites can be done in several ways. In most cases, the uplinks from the HX clusters carry all the needed VLAN IDs on the same set of interfaces, including HX management, vMotion, storage traffic, guest VM traffic, and the replication traffic. In some cases, it is desirable for the replication traffic to traverse a set of independent uplinks, which is referred to as a split L2 topology. However, due to a technical limitation, the configuration of replication networking cannot accommodate a split L2 configuration: a single Cisco UCS vNIC cannot carry multiple VLANs that traverse multiple uplink groups. Because the default configuration uses vmnic0 and vmnic1 to carry HX management traffic and replication traffic, both of those VLANs must arrive at UCS across a single set of uplinks. The replication subnets and VLANs used in the two sites can be different routed subnets, or they can be a single subnet if the WAN uses other technologies, such as Overlay Transport Virtualization (OTV). Replication traffic originates and terminates on the SCVMs running on each HX host. Figure 6-2 shows the network topology of two HX clusters configured for replication.

Figure 6-2 Replication Network


Replication Network Considerations
A replication network must be established between clusters that are expected to use replication for data protection. This replication network isolates inter-cluster replication traffic from other traffic within each cluster and site. Consider the following when configuring replication networks and pairing:
To support efficient replication, all M nodes of Cluster A have to communicate with all N nodes of Cluster B, as illustrated in Figure 6-3 in the next section.
To enable replication traffic between clusters to cross the site boundary and traverse the Internet, each node on Cluster A should be able to communicate with each node on Cluster B across the site boundary and the Internet.
The replication traffic must be isolated from other traffic within the cluster and the data center. Creating this isolated replication network for inter-cluster traffic involves creating a replication network on each cluster and then pairing the clusters to associate them and establish M*N connectivity between them.
IP addresses, subnet, VLAN, and gateway are associated with the replication network of each cluster.
You must configure the corporate firewalls and routers on both sites to allow communication between the clusters and the sites on TCP ports 9338, 3049, 9098, 4049, and 4059.


Figure 6-3 M*N Cluster Connectivity

M*N Connectivity Between Clusters Figure 6-3 shows the connectivity between two clusters: a primary cluster with four nodes and a secondary cluster with four nodes.
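The M*N requirement and the firewall ports listed above can be spot-checked before pairing. The following Python sketch is only an illustration: the replication IP addresses are hypothetical placeholders, and a simple TCP connect test confirms basic reachability on each port, not full HX replication health. In practice the loop would be run from every source node.

import socket

# Hypothetical replication-network IP addresses for each cluster (replace with real ones).
CLUSTER_A_NODES = ["192.168.150.11", "192.168.150.12", "192.168.150.13", "192.168.150.14"]
CLUSTER_B_NODES = ["192.168.250.11", "192.168.250.12", "192.168.250.13", "192.168.250.14"]

# TCP ports that must be open between the sites for replication (from the considerations above).
REPLICATION_PORTS = [9338, 3049, 9098, 4049, 4059]


def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_m_by_n(local_nodes, remote_nodes, ports):
    """Test every local node against every remote node and port (M*N*P checks)."""
    failures = []
    for src in local_nodes:          # in practice, run this loop from each source node
        for dst in remote_nodes:
            for port in ports:
                if not tcp_reachable(dst, port):
                    failures.append((src, dst, port))
    return failures


if __name__ == "__main__":
    problems = check_m_by_n(CLUSTER_A_NODES, CLUSTER_B_NODES, REPLICATION_PORTS)
    if problems:
        for src, dst, port in problems:
            print(f"Blocked or unreachable: {src} -> {dst}:{port}")
    else:
        print("All M*N node pairs reachable on the required replication ports.")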

Configuring a Replication Network in HX Connect
A replication network can be configured only once. After a replication network is configured, you can edit the available IP addresses and the networking bandwidth. Here's how you configure the network:
Step 1. Log in to HX Connect as administrator by entering the username and password, as shown in Figure 6-4, and clicking Login.

Figure 6-4 HX Connect Login Page

Step 2. In the HyperFlex Connect screen that appears, select the Replication page and then click Replication Configuration > Configure Network (see Figure 6-5).


Figure 6-5 Replication Network Configuration

Step 3. In the Configure Replication Network dialog box, on the VLAN Configuration tab, enter the network information:
Select an existing VLAN: Click this radio button to add an existing VLAN (see Figure 6-6).
Create a new VLAN: If you select this radio button, the following fields become available (see Figure 6-7):
VLAN ID: A VLAN that is different from the HX Data Platform management traffic network and data traffic network
VLAN Name: The name of the VLAN
UCS Manager host IP or FQDN: The host IP address or FQDN for Cisco UCS Manager
Username: The administrator username for Cisco UCS Manager
Password: The administrator password for Cisco UCS Manager

Figure 6-6 Existing VLAN Assignment for Replication Network

Figure 6-7 VLAN Creation for Replication Network

Step 4. Click Next.
Step 5. In the IP & Bandwidth Configuration tab, set the network parameters and the replication bandwidth (see Figure 6-8):
Subnet: The subnet for the replication network
Gateway: The gateway for the replication network
IP Range: The range of IP addresses for each converged node in the cluster plus one additional IP address (that is, N+1)
Add IP Range: Button for adding the specified IP range
Set replication bandwidth limit: The bandwidth that the replication network is allowed to consume
Set non-default MTU: An MTU other than the default, which is 1500

Figure 6-8 IP and Bandwidth Replication Settings

Step 6. Click Configure.
Step 7. In the Test Configuration tab, shown in Figure 6-9, check the replication network configuration.

Figure 6-9 Replication Network Test Configuration
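As an informal cross-check of the values entered on the IP & Bandwidth Configuration tab, the sizing rules described earlier (an IP range of at least N+1 addresses, a bandwidth limit of no less than 10 Mbps that matches the remote site, and the default MTU of 1500) can be encoded in a short script. This Python sketch is illustrative only and is not an HX API; the sample subnet and addresses are placeholders.

import ipaddress

MIN_BANDWIDTH_MBPS = 10   # minimum supported replication bandwidth
DEFAULT_MTU = 1500        # HX Connect default MTU


def validate_replication_network(subnet, gateway, ip_range_start, ip_range_end,
                                 node_count, bandwidth_mbps, mtu=DEFAULT_MTU):
    """Return a list of problems found in a proposed replication network configuration."""
    problems = []
    net = ipaddress.ip_network(subnet, strict=False)

    # Gateway and the whole IP range should fall inside the replication subnet.
    for label, addr in [("gateway", gateway), ("range start", ip_range_start), ("range end", ip_range_end)]:
        if ipaddress.ip_address(addr) not in net:
            problems.append(f"{label} {addr} is not inside subnet {subnet}")

    # The pool must hold at least one address per converged node plus one (N+1).
    pool_size = int(ipaddress.ip_address(ip_range_end)) - int(ipaddress.ip_address(ip_range_start)) + 1
    if pool_size < node_count + 1:
        problems.append(f"IP range holds {pool_size} addresses; need at least {node_count + 1} (N+1)")

    # The bandwidth limit must be at least 10 Mbps (and identical on both sites).
    if bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        problems.append(f"bandwidth limit {bandwidth_mbps} Mbps is below the {MIN_BANDWIDTH_MBPS} Mbps minimum")

    if mtu != DEFAULT_MTU:
        problems.append(f"non-default MTU {mtu}: confirm the replication path supports it end to end")

    return problems


# Example: a 4-node cluster with a 5-address pool (N+1) and a 100 Mbps limit.
print(validate_replication_network("192.168.150.0/24", "192.168.150.1",
                                   "192.168.150.11", "192.168.150.15",
                                   node_count=4, bandwidth_mbps=100))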

REPLICATION PAIR OVERVIEW
Once a replication network is configured and tested, the next step is to pair the replication clusters. Creating a replication cluster pair is a prerequisite for setting up VMs for replication. The replication network and at least one datastore must be configured before the replication pair can be created.

By pairing cluster 1 with cluster 2, you are specifying that all VMs on cluster 1 that are explicitly set up for replication can replicate to cluster 2 and that all VMs on cluster 2 that are explicitly set up for replication can replicate to cluster 1. By pairing datastore A on cluster 1 with datastore B on cluster 2, you are specifying that for any VM on cluster 1 that is set up for replication, if it has files in datastore A, those files will be replicated to datastore B on cluster 2. Similarly, for any VM on cluster 2 that is set up for replication, if it has files in datastore B, those files will be replicated to datastore A on cluster 1.

Pairing is strictly one to one: a cluster can be paired with no more than one other cluster, and a datastore on a paired cluster can be paired with no more than one datastore on the other cluster.
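The one-to-one pairing rules can be captured in a small data model. The sketch below only illustrates the constraints just described (one peer cluster per cluster, one remote datastore per local datastore); it is not HyperFlex software, and the cluster and datastore names are made up.

class ReplicationPair:
    """Models the strictly one-to-one pairing rules for clusters and datastores."""

    def __init__(self, local_cluster, remote_cluster):
        # A cluster can be paired with no more than one other cluster.
        self.local_cluster = local_cluster
        self.remote_cluster = remote_cluster
        self.datastore_map = {}      # local datastore -> remote datastore
        self._used_remote = set()    # remote datastores already claimed by a mapping

    def map_datastore(self, local_ds, remote_ds):
        # A datastore on a paired cluster maps to no more than one datastore on the other cluster.
        if local_ds in self.datastore_map:
            raise ValueError(f"{local_ds} is already mapped to {self.datastore_map[local_ds]}")
        if remote_ds in self._used_remote:
            raise ValueError(f"{remote_ds} is already mapped to another local datastore")
        self.datastore_map[local_ds] = remote_ds
        self._used_remote.add(remote_ds)


pair = ReplicationPair("HX-Cluster-1", "HX-Cluster-2")
pair.map_datastore("DS-A", "DS-B")     # VMs with files in DS-A replicate to DS-B
# pair.map_datastore("DS-C", "DS-B")   # would raise: DS-B is already paired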

Procedure for Creating a Replication Pair
To create a replication pair, create a datastore on the local cluster and a datastore on the remote cluster. Then follow these steps:
Step 1. From HX Connect, log in to either the local cluster or the remote cluster as a user with administrator privileges.
Step 2. Select the Replication page and then select Pair Cluster, which brings up the Create New Replication Pair window. Figure 6-10 shows where to pair the new cluster for replication in HX Connect.

Figure 6-10 Replication Pair Cluster

Step 3. Enter a name for the replication pair on the Name tab, as shown in Figure 6-11, and click Next. The name, which cannot be changed, is set for both the local and remote clusters.


Figure 6-11 Replication Pair Name

Step 4. On the Remote Connection tab (see Figure 6-12), enter the following information:
Management IP or FQDN: The IP address or FQDN of the remote HX storage cluster
Username: The vCenter single sign-on or cluster-specific administrator username of the remote HX storage cluster
Password: The vCenter single sign-on or cluster-specific password of the remote HX storage cluster


Figure 6-12 Replication Remote Connection

When you're done with these entries, click Pair to pair with the remote cluster.
Note: Virtual machines to be protected must reside on one of the datastores in the replication pair.

Step 5. Go to the Datastore Mapping tab, as shown in Figure 6-13, to view the datastore mapping between the local and remote clusters. The Local Datastore column shows the list of the configured datastores on the local HX storage cluster. The Remote Datastore column pairs the datastores between the HX storage clusters. Click Next.

Figure 6-13 Datastore Mapping Between Clusters

Step 6. On the Summary tab, shown in Figure 6-14, review the summary of the datastore mapping. The Datastore field indicates the selected datastore on this local HX storage cluster. The Target Datastore field indicates the datastore on the remote HX storage cluster to which the replication snapshot is copied. Click Map Datastores.


Figure 6-14 Datastore Mapping Summary

Protecting Virtual Machines
After a replication pair is formed, the next step is to protect the virtual machines in the clusters. The following attributes are required to protect the virtual machines:
The replication interval, which is the frequency of replication
A start time (within the next 24 hours), which specifies the first time replication is attempted for that virtual machine
Whether the replication snapshot should be taken with the virtual machine quiesced or not

Protection attributes can be created and assigned to protection groups. To assign the protection attributes to virtual machines, you can add the virtual machines to a protection group. For example, suppose there are three classes of protection: gold, silver, and bronze. You can set up a protection group for each class, with replication intervals such as 5 or 15 minutes for gold, 4 hours for silver, and 24 hours for bronze. Most of your VMs can then be protected simply by adding them to one of the three already created protection groups; a sketch of this tiering appears after the following list.

To protect virtual machines, you can choose from the following methods:
Independently: Select one virtual machine and configure it. Set the replication schedule and the VMware quiesce option for the specific virtual machine. Changes to the replication settings affect only the independently protected virtual machine. The virtual machine is not included in a protection group.
Existing protection group: Select one or more virtual machines and add them to an existing protection group. The schedule and VMware quiesce option settings are applied to all the virtual machines in the protection group. When the protection group settings are changed, the changes are applied to all the virtual machines in the protection group.
New protection group: Select two or more virtual machines and choose to create a new protection group. Define the protection group name, schedule, and VMware quiesce option settings. These settings are applied to all the virtual machines in the protection group. When the protection group settings are changed, the changes are applied to all the virtual machines in the protection group.
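As a minimal illustration of the tiering idea referenced above (not an HX API), the gold, silver, and bronze classes can be expressed as simple schedule definitions that are then assigned to VMs:

from dataclasses import dataclass


@dataclass
class ProtectionGroup:
    name: str
    interval_minutes: int       # how often a replication snapshot is taken
    quiesce_with_vmtools: bool  # take the snapshot with the guest quiesced via VMware Tools


# Example tiers matching the gold/silver/bronze classes described above.
TIERS = {
    "gold": ProtectionGroup("gold", interval_minutes=15, quiesce_with_vmtools=True),
    "silver": ProtectionGroup("silver", interval_minutes=4 * 60, quiesce_with_vmtools=True),
    "bronze": ProtectionGroup("bronze", interval_minutes=24 * 60, quiesce_with_vmtools=False),
}

# Protecting a VM then amounts to assigning it to one of these groups,
# so that schedule changes apply to every member of the group at once.
vm_assignments = {"sql-prod-01": "gold", "file-server-02": "silver", "test-vm-17": "bronze"}
for vm, tier in vm_assignments.items():
    group = TIERS[tier]
    print(f"{vm}: replicate every {group.interval_minutes} min (group '{group.name}')")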


Creating Protection Groups
Protection groups provide protection to VMs on the cluster where the groups are created. If protection groups contain protected virtual machines that replicate to the remote cluster, these protection groups are listed in HX Connect. Before starting the configuration, ensure that the replication network and replication pair are configured. Then follow these steps:
Step 1. Log in to HX Connect as an administrator.
Step 2. On the Replication page, click Create Group (see Figure 6-15).

Figure 6-15 Creating a Protection Group


Step 3. Enter the following information in the Create Protection Group dialog (see Figure 6-16):
Protection Group Name: The name for the new protection group for this local cluster
Protect virtual machines in this group every: The frequency at which virtual machines are to be replicated to the paired cluster
Start protecting the virtual machines immediately: Radio button to start replication immediately after adding the first virtual machine to the protection group
Start protecting the virtual machines at: Radio button to set a specific time for the first replication to start
Use VMware Tools to quiesce the virtual machine: Check box to have HX Data Platform quiesce the virtual machines before taking the replication snapshot


Figure 6-16 Protection Group Settings

Before starting replication, ensure that at least one virtual machine is added to the protection group and the scheduled start time is reached. To specify the protection start time, click in the time field and select an hour and minute.
Step 4. Click Create Protection Group to add the new group to the Protection Groups tab.
When the protection group is created, the next step is to add virtual machines to the group.

Protecting Virtual Machines


This section describes how to protect multiple virtual machines using an existing protection group. Before starting, make sure that the replication network and replication pair are configured and that a protection group has been created. Then follow these steps:
Step 1. Log in to HX Connect as an administrator.
Step 2. On the Virtual Machines page, select one or more unprotected virtual machines from the list (see Figure 6-17).

Figure 6-17 Virtual Machine Display Window

Step 3. Click Protect.


Step 4. In the Protect Virtual Machines dialog (see Figure 6-18), verify the selected virtual machine name(s), storage provisioned, and storage used by the VMs. Then choose one of the following:
Add to an existing protection group: Click this radio button to select an existing protection group from the pull-down list.
Create a new protection group: Click this radio button to create a new protection group for the local cluster.

Figure 6-18 Protect Virtual Machines Window: Protection Group Tab


Step 5. Click Next.
Step 6. Confirm the information shown on the Summary tab (see Figure 6-19) and click Add to Protection Group.

Figure 6-19 Protect Virtual Machines Window: Summary Tab

DISASTER RECOVERY OVERVIEW
Disaster recovery is performed when the source site is unreachable and it is necessary to fail over the VMs and the protected groups to the target cluster. The process recovers the VMs on the target cluster. The following are the commonly used operations for disaster recovery:
Testing VM recovery: Testing VM recovery gives the user the ability to test recovery without breaking replication. It can bring up the user VM workload on the target to verify the contents of the VM.
Recovering virtual machines: Recovering virtual machines means restoring the most recent replication snapshot from the target (recovery) cluster. Once recovery starts, all scheduled replication is stopped.
Planned migration: Performing a planned migration pauses the replication schedule, replicates the most recent copy, recovers on the target, switches the ownership from the source to the target, and resumes replication on the target, which is now the new source.

Compatibility Matrix for Disaster Recovery Operations
The compatibility matrix in Table 6-2 lists the disaster recovery operations that are supported when an HX Data Platform version 3.5(x) cluster is paired with an HX Data Platform version 3.5(x) or 3.0(1x) cluster.

Table 6-2 Disaster Recovery Operation Compatibility Matrix
Features compared for 3.5(x) paired with 3.5(x) and for 3.5(x) paired with 3.0(1x):
Replication
Cluster pairing
Datastore mapping
Protection
Planned migration (single-click using HX Connect)
Planned migration (multistep stcli or WebCLI and HX Connect for recovery)
Test recovery using HX Connect
Recover using HX Connect
Re-protect using HX Connect
Re-protect (multistep stcli or WebCLI)

Testing Virtual Machine Recovery
Testing recovery does not disrupt the running clusters.


The intent is to verify that, in the event of an actual disaster, the VMs will be recoverable. Using the HX Connect user interface to test VM recovery, you can run a maximum of 10 tasks in a sequence without waiting for the previously submitted task to complete. The following steps describe how to test virtual machine recovery:
Step 1. Log in to HX Connect on the target cluster as administrator.
Step 2. Navigate to the Replication page and select the Remote VMs tab. Figure 6-20 shows the Replication page for remote VMs for test recovery.

Figure 6-20 Replication Test Recovery

Step 3. To test the recovery process, select a protected VM and click the Test Recovery button. Figure 6-21 shows the test recovery parameters, which are as follows:
Note: All the fields listed in step 3 are optional.
Resource Pool: Select a location for the test VM to be stored.
Folders: Select a location for the test VM to be stored.
Power On/Off: Select the status of the recovery VM.
VM Name: Specify the name for the created test VM.
Test Networks: Select the HX storage cluster network to use for transferring the data from the replication snapshot. It can be one of the following:
Storage controller data network
Storage controller management network
Storage controller replication network
VM network
Map Networks: Create a map between the source and the target cluster networks.


Figure 6-21 Test Recovery Parameters

Step 4. When you have finished setting the parameters in the Test Recovery Parameters dialog, click Recover VM.
Step 5. When the job completes, verify that the VM has been recovered to the HxRecoveryTest folder or the folder you designated.
Step 6. Power on the recovered VMs via the vSphere Web Client or the HTML5 vSphere Client to test their functionality.
Step 7. Repeat steps 3 through 6 for each VM you want to test.
Step 8. When the testing is completed, delete the test recovery VMs.

Recovering Virtual Machines
In the case of a site outage or the failure of a cluster, VMs can be recovered to their state as of the last successfully transmitted snapshot, running on the secondary or target cluster as part of a disaster recovery operation. This recovery operation assumes that the primary, or source, site and cluster is either offline or isolated in such a way that it can no longer communicate with the secondary, or target, site and cannot be managed. A recovery operation stops all replication between the two clusters; replication can be reestablished at a later time, after the faults or outages have been repaired. The following steps describe how to recover virtual machines:
Step 1. Log in to HX Connect as administrator.
Step 2. Navigate to the Replication page and select the Remote VMs tab (see Figure 6-22).


Figure 6-22 Virtual Machine Recovery

Step 3. Select a protected VM and click the Recover button.
Step 4. To recover the VM and build a new VM on the local cluster, set the following options in the Recover VM on This Cluster dialog (see Figure 6-23):
Note: All the fields listed in step 4 are optional.
Resource Pool: Specify the location for the new VM to be stored.
Folders: Specify the location for the new VM to be stored.
Power On/Off: Select the status of the recovery VM.
Map Networks: Create a map between the source and the target cluster networks. These are the options:
Source Network: Indicates the network at the source side to which the VM is connected.
Target Network: Select the target network to which the VM is to be connected.

Figure 6-23 Virtual Machine Recovery Parameters

Step 5. When you have finished setting the parameters in the Recover VM on This Cluster dialog, click Recover VM.
Step 6. Wait for the recovery to complete. View the recovered VM in the target vCenter.

Planned Migration
Performing a planned migration pauses the replication schedule, replicates the most recent copy, recovers on the target, switches the ownership from the source to the target, and resumes replication on the target, which is now the new source. The following steps describe how to perform a planned migration:
Step 1. Log in to HX Connect as administrator.
Step 2. Navigate to the Replication page and select the Remote VMs tab.
Step 3. Select a protected VM and click Migrate.
Note: All the fields listed in step 4 are optional.
Step 4. To migrate a VM, set the following options:
Resource Pool: Specify the location for the new VM to be stored.
Folders: Specify the location for the new VM to be stored.
Power On/Off: Select the status of the recovery VM.
Map Networks: Create a map between the source and the target cluster networks. These are the options:
Source Network: Indicates the network at the source side to which the VM is connected.
Target Network: Select the target network to which the VM is to be connected.

Step 5. Monitor the progress on the Activity page.


HYPERFLEX BACKUP
The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance distributed file system that delivers a wide range of enterprise-class data management and optimization services. This platform redefines distributed storage technology, expanding the boundaries of hyperconverged infrastructure with independent scaling, continuous data optimization, simplified data management, and dynamic data distribution for increased data availability. This agile system is easy to deploy, manage, and scale as your business needs change, and it provides the first level of data availability. However, as with most other systems, a second layer of protection that is equally agile is recommended. HyperFlex can therefore integrate with third-party solutions such as Veeam Availability Suite, Cohesity, and Commvault.

Veeam Availability Suite
The Cisco HyperFlex system and Veeam Availability Suite solution is a flexible, agile, and scalable infrastructure that is protected and easy to deploy. Building on top of the Cisco HyperFlex HX Data Platform's built-in protection tools, Veeam Availability Suite expands data protection with local and remote backups and VM-level replication.


Veeam Availability Suite enables backup and replication of infrastructure VMs on Cisco HyperFlex clusters located in the same data center. Veeam Availability Suite, which includes the Veeam Repository, Veeam Proxy, and Veeam Backup Server, resides on a single Cisco UCS S3260 storage server, which provides up to 600 TB of raw storage capacity. Replication of the application VMs is executed to a separate Cisco HyperFlex cluster. Figure 6-24 provides a high-level view of Cisco HyperFlex with a Cisco S3260 storage server and Veeam Availability Suite and illustrates the following:
Replication of application VMs across Cisco HyperFlex clusters through Veeam Availability Suite
Backup of application VMs on a Cisco S3260 storage server
Management endpoints for Cisco HyperFlex, a Cisco UCS S3260 storage server, and Veeam Availability Suite


Figure 6-24 HyperFlex with Cisco S3260 Storage Server and Veeam Availability Suite Backup Solution

The Veeam Backup & Replication application operates at the virtualization layer and uses an image-based approach for VM backup. To retrieve VM data, no agent software needs to be installed inside the guest OS. Instead, Veeam Backup & Replication leverages vSphere snapshot capabilities and application-aware processing. When a new backup session starts, a snapshot is taken to create a cohesive point-in-time copy of a VM, including its configuration, OS, applications, associated data, system state, and so on. Veeam Backup & Replication uses this point-in-time copy to retrieve VM data. Image-based backups can be used for different types of recovery, including full VM recovery, VM file recovery, instant VM recovery, file-level recovery, and application item recovery.

The image-based approach allows Veeam Backup & Replication to overcome the shortfalls and limitations of traditional backups. It also helps streamline recovery verification and the restoration process; to recover a single VM, there is no need to perform multiple restore operations. Veeam Backup & Replication uses a cohesive VM image from the backup to restore a VM to the required state without the need for manual reconfiguration and adjustment.

In Veeam Backup & Replication, backup is a job-driven process in which one backup job can be used to process one or more VMs. A job is a configuration unit of the backup activity. Essentially, the job defines when, what, how, and where to back up. It indicates what VMs should be processed, what components should be used for retrieving and processing VM data, what backup options should be enabled, and where to save the resulting backup file. Jobs can be started manually by the user or scheduled to run automatically. The resulting backup file stores compressed and deduplicated VM data. Compression and deduplication are done by the Veeam proxy server.

Regardless of the backup method you use, the first run of a job creates a full backup of the VM image. Subsequent job runs are incremental; that is, Veeam Backup & Replication copies only those data blocks that have changed since the last backup job run. To keep track of changed data blocks, Veeam Backup & Replication uses different approaches, including VMware's Changed Block Tracking (CBT) technology.
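The full-plus-incremental pattern just described can be illustrated with a generic sketch. This is a conceptual model only, not Veeam or VADP code: real changed-block tracking is maintained by the hypervisor as blocks are written rather than by re-hashing, but the effect on the backup stream is the same, namely that only changed blocks are transferred after the initial full backup.

import hashlib

BLOCK_SIZE = 4  # unrealistically small blocks, just to keep the example readable


def split_blocks(data):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]


def backup(disk_image, previous_index=None):
    """Return (blocks_copied, new_index). The first run copies every block (full backup);
    later runs copy only blocks whose fingerprint changed (incremental backup)."""
    new_index, copied = {}, []
    for number, block in enumerate(split_blocks(disk_image)):
        digest = hashlib.sha256(block).hexdigest()
        new_index[number] = digest
        if previous_index is None or previous_index.get(number) != digest:
            copied.append(number)   # this block is written to the backup file
    return copied, new_index


full_run, index = backup(b"AAAABBBBCCCCDDDD")         # full backup: blocks 0-3 copied
incr_run, index = backup(b"AAAAbbbbCCCCDDDD", index)  # incremental: only block 1 copied
print("full backup copied blocks:", full_run)
print("incremental copied blocks:", incr_run)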

Changed Block Tracking
To perform an incremental backup, Veeam Backup & Replication needs to know which data blocks have changed since the previous job run. Figure 6-25 shows Veeam full and incremental backups.

Figure 6-25 Veeam Full and Incremental Backups

For VMware VMs with hardware version 7 or later, Veeam Backup & Replication employs VMware vSphere Changed Block Tracking (CBT), which is a native VMware feature. Instead of scanning the Virtual Machine File System (VMFS), Veeam Backup & Replication queries CBT on vSphere through the VMware vStorage APIs for Data Protection (VADP) and gets the list of blocks that have changed since the last run of this particular job. Use of CBT increases the speed and efficiency of block-level incremental backups. CBT is enabled by default; if necessary, you can disable it in the settings of a specific backup job.

Veeam Backup & Replication offers a number of recovery options for various disaster recovery scenarios:
Veeam Explorer: Enables you to restore single application-specific items.
Instant VM Recovery: Enables you to instantly start a VM directly from a backup file.
Full VM recovery: Enables you to recover a VM from a backup file to its original location or to another location.
VM file recovery: Enables you to recover separate VM files (virtual disks, configuration files, and so on).
Virtual drive restore: Enables you to recover a specific hard drive of a VM from the backup file and attach it to the original VM or to a new VM.
Windows file-level recovery: Enables you to recover individual Windows guest OS files (from FAT, NTFS, and ReFS).
Multi-OS file-level recovery: Enables you to recover files from 15 different guest OS file systems.

Veeam Backup & Replication uses the same image-level backup for all data recovery operations. You can restore VMs, VM files and drives, application objects, and individual guest OS files to the most recent state or to any available restore point.

COHESITY
Cohesity running alongside Cisco HyperFlex within a Cisco UCS domain offers a consolidated system that provides the primary storage, workload hosting, data protection, and file services required for most virtualized data centers, all within a single unified architecture. Cohesity and Cisco HyperFlex share complementary data center technologies, both utilizing a distributed file system architecture that is designed for high availability. Through a shared-nothing topology, there is no single point of failure and there are no inherent bottlenecks, and both performance and capacity can scale linearly as more physical nodes are added to the clusters. The distributed file system spans all nodes in the cluster and natively provides global deduplication, compression, and encryption.

With Cisco HyperFlex integration, Cohesity DataProtect software takes virtual machine snapshots directly on HyperFlex, which creates a storage-native snapshot for the virtual machine. Because this snapshot is native to HyperFlex, it has performance characteristics very similar to those of the original base disk, in contrast to the performance of standard VMware redo-log-based snapshots. After a snapshot is taken, Cohesity DataProtect proceeds to back up the virtual machine data, and then the snapshot is deleted through the HyperFlex API. Using native snapshots eliminates common delays and I/O penalties and improves application performance by using the underlying HyperFlex distributed storage technology to create and consolidate the snapshots.

Cohesity Protection
During a Cohesity Protection job, a new snapshot of the virtual machine is taken, and that snapshot is transferred via the network to the storage domain configured in the job. This constitutes a new incremental backup of that virtual machine. Once the snapshot is transferred, the snapshot of the virtual machine is deleted on the source hypervisor node. If the virtual machine being backed up was already running with an active snapshot, the new snapshot taken by Cohesity is a child of the existing snapshot; when it is deleted, the changes are coalesced back into the existing snapshot level where the virtual machine was already running. If storage snapshot provider integration with Cisco HyperFlex is enabled, all of these snapshots are taken as HX native snapshots. If the HX native snapshot attempt fails, such as when an existing VMware standard redo-log snapshot exists, the protection job falls back to taking a standard VMware snapshot.

Cohesity Recovery
A recovery job can be initiated to restore a virtual machine from the backed-up snapshots and return the virtual machine to service. A unique aspect of the Cohesity software is the sequence of the recovery process. When a recovery job is started, the Cohesity system presents an NFS-based datastore from itself, which is mounted to the ESXi host; inside it are the virtual machine files that have been bloomed from the snapshots. The virtual machine is then registered in vCenter from this location, and the virtual machine is powered on. This process returns the recovered virtual machine to service much faster than a typical recovery process because the virtual machine immediately runs with its virtual files sourced from the Cohesity NFS datastore. After the virtual machine is powered on, a Storage vMotion relocates the virtual machine files to their original location.

The benefit of this recovery workflow is amplified when multiple simultaneous virtual machine recoveries are needed because the time to return the virtual machines to service is very low, and the remaining process of relocating the virtual machines via Storage vMotion happens in the background while the virtual machines are already online. A recovered virtual machine has no snapshots, even if the virtual machine originally had snapshots at the time of the backup being restored.

COMMVAULT
Commvault software natively protects VMware and Hyper-V workloads on Cisco HyperFlex for fast protection and to meet demanding RTO and RPO requirements. For VMware, Commvault IntelliSnap technology leverages Cisco HyperFlex snapshots, and for Hyper-V, Commvault application-aware technology ensures application-consistent backups. Commvault embraces and utilizes all cloud options, instantly migrating data between any virtualized platform, any public or private cloud platform, and any on-premises infrastructure. It also seamlessly adopts scale-up and scale-out architecture options to maximize cost-effectiveness and ensure that users have the scale capabilities needed based on workload size and criticality. Figure 6-26 shows the architecture of Commvault software for Cisco HyperFlex.


Figure 6-26 Commvault Software for Cisco HyperFlex Architecture

SUMMARY
This chapter provides an overview of the HX Data Platform disaster recovery feature. It also discusses the configuration steps required to properly enable the feature. Finally, the chapter introduces the supported third-party solutions to use as a second layer of protection for the HyperFlex Data Platform.


Chapter 7
Deploying HyperFlex Anywhere with Cisco Intersight
This chapter provides details about what Cisco Intersight is and how you can deploy edge clusters and standard clusters through Intersight.

CLOUD-BASED MANAGEMENT
A cloud-based management platform provides unified access to applications and to infrastructure monitoring, configuration, and orchestration, which helps reduce IT management complexity and unify the deployment and management of many edge devices. Cisco Intersight is Cisco's new system management platform for Cisco UCS servers and Cisco HyperFlex systems. This cloud-based platform offers centralized management that enables IT organizations to analyze, simplify, and automate their environments in ways that are not possible using traditional tools. This capability empowers organizations to achieve significant savings in total cost of ownership (TCO) and to deliver applications faster so they can support new business initiatives. To access the Cisco Intersight platform, go to https://intersight.com.

The Cisco Intersight platform simplifies the installation, monitoring, troubleshooting, upgrade, and support processes for your infrastructure. It offers the following benefits:
Cloud-based management: Thanks to the capability to manage Cisco UCS and Cisco HyperFlex systems from the cloud, you can quickly and easily manage and scale your infrastructure, whether it is in the data center or a remote or branch office location.
Automation: The unified API in Cisco UCS and Cisco HyperFlex systems enables policy-based configuration and management of the infrastructure. It also makes the Cisco Intersight platform and the devices connected to it fully programmable and DevOps friendly.
Analytics and telemetry: The Cisco Intersight platform monitors the health and relationships of all physical and virtual infrastructure components. It also collects telemetry and configuration information to develop the intelligence of the platform in accordance with Cisco information security requirements.
Advanced level of support: Solid integration with the Cisco Technical Assistance Center (TAC) enables more efficient and proactive technical support. The Cisco Intersight platform provides enhanced operations automation by expediting file transmission to accelerate troubleshooting.


Recommendation engine: Driven by analytics and machine learning, the Cisco Intersight recommendation engine provides actionable intelligence for IT operations management through a constantly increasing knowledge base and practical insights learned from the entire system.
Management as a service (MaaS): The Cisco Intersight platform provides management as a service and is designed to be infinitely scalable and easy to implement. It relieves users of the burden of maintaining systems management software and hardware.

Figure 7-1 shows the Cisco Intersight login page.


Figure 7-1 Cisco Intersight

The Cisco Intersight website and framework can be upgraded with new and enhanced feature sets independently of the products that are managed, thus providing many new features and capabilities with no downtime or upgrades required of end users. Future releases of Cisco HyperFlex systems will enable additional functions along with upgrades to the Cisco Intersight framework. This unique combination of embedded and online technologies will provide a complete cloud-based management solution that can manage Cisco HyperFlex systems throughout the entire life cycle.

The minimum supported web browser versions to run the Cisco Intersight platform are as follows:
Google Chrome: Release 62.0.3202.94
Firefox: Release 57.0.1
Microsoft Edge: Release 40.15063.674.0
Safari: Release 10.1.1

Two licensing editions are available for Cisco Intersight customers:
Base edition: This edition is available at no additional cost to customers. It is automatically included with every Cisco UCS and Cisco HyperFlex system purchase. The Cisco Intersight Base edition provides access to a portal that delivers centralized monitoring and basic inventorying of managed systems, organizational capabilities including tagging and search, and the capability to launch native endpoint management interfaces, including Cisco UCS Manager and Cisco HyperFlex Connect.
Essentials edition: This edition provides additional value beyond what is provided in the Base edition. It includes features to centralize configuration management through a unified policy engine, determine compliance with the Cisco UCS Hardware Compatibility List (HCL), and initiate firmware updates. The Essentials edition provides a single interface for monitoring, management, and operations, with the capability to launch the virtual keyboard, video, and mouse (vKVM) console directly from the Cisco Intersight platform.

Figure 7-2 summarizes the features of the Cisco Intersight license types.

Figure 7-2 Cisco Intersight License Options

The Cisco Intersight Essentials edition can be ordered for 1-year, 3-year, and 5-year subscription periods. You can start a free 90-day evaluation trial of the Essentials edition by signing up at the licensing page in the Cisco Intersight platform.

HyperFlex Cluster Policies in Intersight
HyperFlex cluster profiles are built on policies, which are administrator-defined rules and operating characteristics such as node identity, interfaces, and network connectivity. Every active node in a HyperFlex cluster must be associated with a HyperFlex cluster profile. HyperFlex policies in Cisco Intersight provide different configurations, including Auto-Support, external storage (such as FC and iSCSI), security, network configuration, and more. A configured policy can be assigned to any number of servers to provide a configuration baseline. All HyperFlex policies can be shared by HyperFlex Edge and HyperFlex with fabric interconnects, unless otherwise mentioned in the list that follows (a conceptual sketch of how a profile bundles these policies appears after the list):
Auto-Support Policy: Enables Auto-Support, the alert notification service provided through HyperFlex Data Platform in Intersight. When Auto-Support is enabled, notifications are sent to designated email addresses or email aliases that you want to receive the notifications. Typically, Auto-Support is configured during HyperFlex storage cluster creation by configuring the SMTP mail server and adding email recipients.
DNS, NTP, and Timezone Policy: Configures DNS, NTP, and the time zone on all servers. DNS and NTP servers should reside outside the HyperFlex storage cluster. Use an internally hosted NTP server to provide a reliable source for the time.
External FC Storage Policy: Enables the use of external FC storage and configures the WWxN name and the associated VSAN details. This policy is not applicable to HyperFlex Edge clusters.
External iSCSI Storage Policy: Enables the use of external iSCSI storage and configures the associated VLAN details. This policy is not applicable to HyperFlex Edge clusters.
HyperFlex Storage Network Policy: Configures the storage network VLAN name and ID for the HyperFlex cluster.
IP & Hostname Policy: Configures the management IP pool. The management IP addresses in 169.254.254.0/23 are reserved for Cisco Container Platform.
Network Configuration Policy: Configures the VLAN, KVM, and MAC prefix for the management network on fabric interconnects, and the uplink speed, VLAN, and jumbo frames for the management network on edge clusters. The VLAN must have access to Intersight. This policy cannot be shared by HyperFlex Edge and HyperFlex with fabric interconnect clusters.
Proxy Setting Policy: Specifies the HTTP proxy settings to be used by the HyperFlex installation process and the HyperFlex storage controller VMs. This policy is required when the Internet access of your servers, including CIMC and HyperFlex storage controller VMs, is secured by an HTTP proxy.
Security Policy: Configures the ESXi and controller VM passwords for the HyperFlex cluster. This policy presents an option to update the hypervisor password in Intersight, if you have not already changed it on the hypervisor.
Storage Configuration Policy: Configures the options for VDI optimization (for hybrid HyperFlex systems) and cleanup of disk partitions. For HyperFlex with fabric interconnects, this policy provides the option to enable logical availability zones as part of the cluster deployment workflow. Logical availability zones are not supported on HyperFlex Edge clusters.


vCenter Policy: (Optional) Registers a cluster to vCenter to ensure that the cluster functions smoothly.
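Conceptually, a HyperFlex cluster profile is a named bundle of these policies that is applied to every node in the cluster, and a single policy can be attached to many profiles to act as a shared baseline. The following Python sketch is an illustrative data model only; it is not the Cisco Intersight API, and the policy settings, cluster name, and serial number are made up.

from dataclasses import dataclass, field


@dataclass
class Policy:
    name: str        # for example "DNS, NTP, and Timezone" or "Security"
    settings: dict   # the administrator-defined values carried by the policy


@dataclass
class HyperFlexClusterProfile:
    cluster_name: str
    policies: dict = field(default_factory=dict)
    nodes: list = field(default_factory=list)

    def attach_policy(self, key, policy):
        # The same Policy object can be attached to any number of profiles,
        # which is how a policy provides a shared configuration baseline.
        self.policies[key] = policy

    def assign_node(self, node_serial):
        # Every active node in a HyperFlex cluster is associated with a profile.
        self.nodes.append(node_serial)


dns_ntp = Policy("DNS, NTP, and Timezone",
                 {"dns": ["10.1.1.10"], "ntp": ["10.1.1.20"], "timezone": "UTC"})
edge_profile = HyperFlexClusterProfile("branch-edge-01")
edge_profile.attach_policy("dns_ntp_timezone", dns_ntp)
edge_profile.assign_node("WZP12345678")
print(edge_profile)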

CISCO HYPERFLEX EDGE CLUSTER DEPLOYMENT
Cisco HyperFlex Edge brings the simplicity of hyperconvergence to remote and branch office (ROBO) and edge environments. Starting with Cisco HX Data Platform Release 4.0, HyperFlex Edge deployments can be based on 2-node (2N), 3-node (3N), or 4-node (4N) edge clusters. Figure 7-3 shows the Cisco Intersight Edge 2N, 3N, and 4N deployment options.


Figure 7-3 Cisco Intersight Edge Deployment Options

Cisco HyperFlex Edge 2-node clusters require the use of Cisco Intersight for initial deployment and ongoing operations using the invisible cloud witness. HyperFlex Edge 2-node clusters cannot be deployed using the on-premises installer. HyperFlex Edge 3-node and 4-node clusters may also be deployed using Cisco Intersight. Alternatively, an on-premises installer VM may be used to deploy 3-node and 4-node HyperFlex Edge clusters.


Intersight Arbitrator Overview for 2-Node
Intersight and the device connector serve as a secondary heartbeat between controllers when the primary communication channel is not responding. The two nodes in the cluster maintain separate connections to the Intersight service. Intersight holds a single WitnessLock object for each HyperFlex cluster. Nodes can update the WitnessLock object to attain the lock in the event of node failure or network partitioning. The node that acquires the lock can then proceed to take ownership of the cluster and start serving data requests.

Lock acquisition is implemented as a REST POST that updates the object in Intersight. Once a lock is acquired, it can be released only by the node that acquired it. If a node attempts to acquire a lock that is already held by a peer node, it receives an HTTP error in response. All locking operations are exposed by a device connector REST API running on localhost port 8777 on each controller VM. The device connector exposes the APIs, which are then invoked by the 2-node ROBO controller service. Figure 7-4 shows the lock attained by Intersight in the event of node failure or network partitioning.


Figure 7-4 WitnessLock in the Event of Node Failure or Network Partitioning
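The lock-acquisition behavior described above (a REST POST against the local device connector on port 8777, with an HTTP error returned when a peer already holds the lock) can be illustrated with a short sketch. The endpoint path and payload below are hypothetical placeholders; the actual device connector API is internal to HyperFlex and is invoked by the 2-node ROBO controller service, not by administrators.

import requests

# Hypothetical endpoint and payload; only the host/port (localhost:8777) and the
# POST-to-acquire behavior come from the text above.
DEVICE_CONNECTOR_URL = "http://localhost:8777/witness/lock"   # placeholder path


def try_acquire_witness_lock(node_id):
    """Attempt to acquire the WitnessLock; return True if this node now owns it."""
    try:
        resp = requests.post(DEVICE_CONNECTOR_URL, json={"requestingNode": node_id}, timeout=5)
    except requests.RequestException:
        # Intersight (or the device connector) is unreachable; the node cannot
        # take ownership and must not serve data on its own.
        return False
    # A peer that already holds the lock causes an HTTP error status to be returned.
    return resp.ok


if __name__ == "__main__":
    if try_acquire_witness_lock("node-1"):
        print("Lock acquired: this node takes ownership of the cluster and serves data.")
    else:
        print("Lock not acquired: a peer holds the lock or the arbitrator is unreachable.")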


The minimum supported firmware versions for Cisco HyperFlex Edge systems to be deployed by the Cisco Intersight platform are:
Cisco IMC software: Release 3.1(2d)
Cisco HyperFlex Data Platform: Release 2.6.1b
Device connector: Release 1.0.4-10 or later

Physical Topology
Single-switch and dual-switch are the two physical topologies supported between the HX-Series servers and the upstream network. The choice depends on the requirements of the installation and the availability of hardware and networking resources. Cisco HyperFlex Edge offers both Gigabit Ethernet and 10 Gigabit Ethernet installation options.

Cisco HyperFlex Edge Single-Switch Configuration
The single-switch configuration requires a single switch and a minimum of two Gigabit Ethernet ports per server. Link and switch redundancy are not provided in this basic topology. Access ports and trunk ports are the two supported network port configurations. Figure 7-5 shows a Cisco HyperFlex Edge single-switch configuration.


Figure 7-5 Cisco HyperFlex Edge Single-Switch Configuration

The single-switch topology requires at least two separate networks:
A management network, which includes the guest VM network and VMware vMotion traffic
A data network for storage traffic

Two Gigabit Ethernet ports on each server are required, and one of these two ports must be dedicated to Cisco HyperFlex storage traffic. The upstream network has the following requirements:


A managed switch with VLAN capability is required.
A minimum of six physical Gigabit Ethernet ports for three Cisco HyperFlex nodes is required if in-band Cisco IMC is configured on the Gigabit Ethernet LOM (LAN on motherboard) ports in shared mode. Users can optionally configure out-of-band IMC using a dedicated port on the LOM card, which requires an additional Gigabit Ethernet switch port per server.
PortFast or a PortFast trunk should be configured for uninterrupted IMC access.

Cisco HyperFlex Edge Dual-Switch Configuration
The dual-switch configuration provides full redundancy that protects against switch failures, link and port failures, and LOM and PCIe NIC hardware failures. It requires two switches, which can be standalone or stacked switches, four Gigabit Ethernet ports, and one additional Intel i350 PCIe NIC per server. Trunk ports are the only supported network port configuration. Figure 7-6 shows a Cisco HyperFlex Edge dual-switch configuration.


Figure 7-6 Cisco HyperFlex Edge Dual-Switch Configuration

The dual-switch topology also requires two separate networks:
A management network, which includes the guest VM network
A data network for storage and VMware vMotion traffic

Four Gigabit Ethernet ports on each server are required: two ports from the onboard LOM card and two ports from a PCIe add-on NIC. One LOM port and one PCIe port serve management and virtual machine guest traffic in a redundant configuration, and the second LOM port and second PCIe port serve storage data and vMotion traffic in a redundant and load-balanced configuration. The upstream network has the following requirements (a simple port-count helper follows this list):
Two managed switches with VLAN capability are required.
A minimum of 12 physical Gigabit Ethernet ports for three Cisco HyperFlex nodes is required if in-band Cisco IMC is configured on the Gigabit Ethernet LOM ports in shared mode. Users can optionally configure out-of-band IMC using a dedicated port on the LOM card, which requires an additional Gigabit Ethernet switch port per server.
A PortFast trunk should be configured for uninterrupted IMC access.
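The per-server port counts in the two topologies reduce to simple arithmetic, as the planning helper below shows. It is only a sketch built from the numbers stated in this section: 2 Gigabit Ethernet ports per server for the single-switch topology, 4 for the dual-switch topology, plus 1 per server when out-of-band IMC uses a dedicated LOM port.

def edge_switch_ports_required(nodes, topology, dedicated_imc=False):
    """Estimate upstream Gigabit Ethernet switch ports for a HyperFlex Edge cluster."""
    ports_per_server = {"single-switch": 2, "dual-switch": 4}[topology]
    total = nodes * ports_per_server
    if dedicated_imc:
        total += nodes   # one extra switch port per server for out-of-band IMC
    return total


# Matches the figures in the text: 6 ports for 3 nodes single-switch, 12 for dual-switch.
print(edge_switch_ports_required(3, "single-switch"))                    # 6
print(edge_switch_ports_required(3, "dual-switch"))                      # 12
print(edge_switch_ports_required(3, "dual-switch", dedicated_imc=True))  # 15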

Logical Topology
The Cisco HyperFlex system has communication pathways that are categorized into four zones:
Management zone
Data storage zone
vMotion zone
Guest virtual machine zone

The Cisco HyperFlex installer creates four virtual switches (vSwitches) on all HX-Series nodes, each serving a different network:
vswitch-hx-inband-mgmt
vswitch-hx-storage-data
vswitch-hx-vm-network
vmotion

For a Cisco HyperFlex Edge system, only the first two vSwitches are used: vswitch-hx-inband-mgmt and vswitch-hx-storage-data. Figure 7-7 shows a Cisco Intersight Edge single-switch configuration.


Figure 7-7 Cisco Intersight Edge Single-Switch Configuration

For the single-switch configuration, one LOM port is assigned to vswitch-hx-inband-mgmt, and another LOM port is assigned to vswitch-hx-storage-data. The data storage network requires a dedicated port in this configuration. Therefore, vswitch-hx-storage-data carries the storage data traffic, and vswitch-hx-inband-mgmt carries the traffic for management, vMotion, and guest virtual machines. There is no uplink redundancy on the vSwitches in this configuration. Figure 7-8 shows a Cisco Intersight Edge dual-switch configuration.

Figure 7-8 Cisco Intersight Edge Dual-Switch Configuration


For the dual-switch configuration, the redundancy occurs at the vSwitch level: One LOM port and one PCIe NIC port are assigned to vswitch-hx-inband-mgmt, and the second LOM port and the second PCIe NIC port are assigned to vswitch-hx-storage-data. The vMotion VMkernel port is on the vswitch-hx-storage-data network and uses a failover order opposite that of the storage data network. With this approach, the uplinks for storage traffic and the uplinks for vMotion traffic are set as active/standby, but each service uses the opposite active link. The vswitch-hx-inband-mgmt network is set to active/standby with the same failover order for all port groups. By default, all network services use the active path and fail over to the standby path only as needed during a failure scenario.

Cisco HyperFlex servers communicate with Cisco Intersight through a device connector that is embedded in firmware and can be enabled from the Cisco IMC software. The IMC is a baseboard management controller that provides embedded server management for Cisco HyperFlex HX-Series rack servers. It can be configured to operate in dedicated mode or shared mode.

Installing Cisco HyperFlex Edge with the Cisco Intersight Platform
A Cisco HyperFlex system comes with a factory-preinstalled configuration. Cisco HyperFlex servers are delivered with the proper firmware revisions preset, a copy of the VMware ESXi hypervisor software preinstalled, and some components of the Cisco HyperFlex software prestaged. When you install a Cisco HyperFlex system from the Cisco Intersight cloud management platform, the installer function is delivered from the cloud, so users don't need an installer virtual machine. A Cisco HyperFlex Edge installation with the Cisco Intersight platform is supported only for completely new factory-shipped Cisco HyperFlex M5 rack-mount servers.

The following steps summarize the installation workflow for Cisco HyperFlex Edge:
Step 1. Complete the preinstallation checklist.
Step 2. Ensure that the network is set up.
Step 3. Log in to Cisco Intersight.
Step 4. Claim devices (that is, edge nodes).
Step 5. Run the HyperFlex cluster profile wizard.
Step 6. Run the postinstallation script through the controller VM.

Intersight Installation Prerequisites
Prior to beginning the installation activities, you must set up a Cisco Intersight account. You also should gather all the configuration settings contained in the preinstallation checklist. To create your account, you must first have a valid Cisco ID. If you do not yet have a Cisco ID, the account can be generated in this way:
Step 1. Visit https://intersight.com from your workstation.
Step 2. Click Sign In with Cisco ID. On the Cisco Log In page, you can either choose to log in to an existing account or create a new account.
Step 3. Click Register Now and provide the requested information to create a Cisco.com account.
Step 4. After a valid account has been created, use it to log in to the Cisco Intersight platform.

To install Cisco HyperFlex Data Platform from Cisco Intersight, you need to allocate IP addresses that are used by the system for Cisco UCS IMC, Cisco HyperFlex and VMware ESXi management, Cisco HyperFlex storage, and VMware vMotion:
The Cisco UCS IMC IP addresses and HX Data Platform management IP addresses must allow communication with the Cisco Intersight platform.
Additional IP addresses for the Cisco HyperFlex Edge system need to be allocated from the appropriate subnets and VLANs to be used.


By default, the Cisco HyperFlex installation assigns a static IP address to the management interface of the ESXi servers. For the IMC network, either DHCP or static assignment can be used to ease the configuration process. The use of DHCP is not recommended for the Cisco HyperFlex and ESXi management, Cisco HyperFlex storage, or vMotion networks.
DNS servers should be configured to resolve fully qualified domain names (FQDNs) in the Cisco HyperFlex and ESXi management group. DNS forward and reverse lookup records need to be created prior to beginning the installation. All device connectors on the Cisco HyperFlex nodes and all Cisco HyperFlex controller virtual machines must properly resolve public domains through DNS and permit outbound-initiated HTTPS connections on port 443 so that the latest Cisco HyperFlex software packages can be automatically downloaded and the Cisco HyperFlex Edge cluster can be fully deployed from the Cisco Intersight platform.
Consistent clock synchronization is required across the components of the Cisco HyperFlex system; NTP is used by the IMC, vCenter, the ESXi hypervisor hosts, and the Cisco HyperFlex storage platform controller virtual machines.
Prior to performing the installation, create the required VLAN IDs in the upstream network. At a minimum, two VLANs are needed on the trunk link connected to the upstream network: a VLAN for the Cisco HyperFlex and ESXi management group and a VLAN for the Cisco HyperFlex storage group. Optionally, you can add two more VLANs: a VLAN for the vMotion group and at least one VLAN for guest virtual machine traffic.
Several usernames and passwords need to be defined or known as part of the Cisco HyperFlex installation process: those for the Cisco Intersight account, the Cisco UCS IMC administrator, the VMware ESXi administrator, the Cisco HyperFlex administrator, and the VMware vCenter administrator.
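The DNS and outbound-connectivity requirements above are easy to verify from a workstation on the management network before you start the installation. The following is a minimal sketch using only the Python standard library; the node FQDNs, the IP addresses, and the cloud endpoint (svc.intersight.com is used here only as an assumed connector target) are placeholders you would replace with the values from your own preinstallation checklist.

import socket

# Hypothetical example values; substitute the data from your preinstallation checklist.
NODES = {
    "hx-edge-1.example.com": "10.1.10.11",
    "hx-edge-2.example.com": "10.1.10.12",
    "hx-edge-3.example.com": "10.1.10.13",
}
CLOUD_ENDPOINT = ("svc.intersight.com", 443)  # assumed Intersight connector target

def check_dns(fqdn, expected_ip):
    """Verify that forward and reverse DNS records exist and agree."""
    forward = socket.gethostbyname(fqdn)
    reverse = socket.gethostbyaddr(expected_ip)[0]
    ok = forward == expected_ip and reverse.lower().startswith(fqdn.split(".")[0])
    print(f"{fqdn}: forward={forward} reverse={reverse} {'OK' if ok else 'MISMATCH'}")

def check_https(endpoint):
    """Verify that an outbound TCP connection on port 443 can be opened."""
    try:
        with socket.create_connection(endpoint, timeout=5):
            print(f"{endpoint[0]}:{endpoint[1]} reachable")
    except OSError as err:
        print(f"{endpoint[0]}:{endpoint[1]} NOT reachable: {err}")

if __name__ == "__main__":
    for fqdn, ip in NODES.items():
        check_dns(fqdn, ip)
    check_https(CLOUD_ENDPOINT)

Running this from the same VLAN that the management interfaces will use gives an early warning of missing reverse records or a blocked outbound HTTPS path, both of which would otherwise surface only during deployment.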

Cisco Integrated Management Controller Configuration
The Cisco Intersight cloud-based management platform provides remote management of Cisco HyperFlex Edge servers through a device connector that is embedded in, and can be enabled through, the Cisco IMC software. The Cisco UCS device connector provides access to Cisco UCS server management functions that are built in to the server. By pointing a device connector to the Cisco Intersight platform, Cisco UCS servers and Cisco HyperFlex clusters can be claimed as serviceable devices and easily managed remotely. The Cisco Intersight platform needs access to the IMC and to the Cisco HyperFlex management network. To enable the device connector on the Cisco HyperFlex nodes and the Cisco Intersight platform to communicate, the IMC interface needs to be configured on each server. Two options are available for setting the IP address for the IMC: DHCP or static configuration. The following steps guide you through a static configuration:
Step 1. Attach the VGA dongle to the new Cisco HyperFlex server and connect it to a monitor and a USB keyboard. Power on the server.
Step 2. Wait a few minutes until the screen with the Cisco logo appears.
Step 3. When prompted for boot options, press the F8 key to enter the Cisco IMC configuration utility. Figure 7-9 shows the Cisco HyperFlex server boot options.

Figure 7-9 Cisco HyperFlex Server Boot Options
Step 4. For a first-time configuration, you are prompted for a new IMC password. Enter the default password, password, as the current IMC password and then set a new IMC password. The configuration described here uses Cisco123 as the new password (this is a weak password and is used as an example only).
Step 5. In the Cisco IMC configuration utility window, choose the desired NIC mode. Select Dedicated if you want to access the IMC through the dedicated management port. Select Shared LOM if you want to access the IMC through any LOM port. Select Shared LOM Ext mode if you want to access the IMC through any LOM or adapter port.
Step 6. Deselect the DHCP enabled option and select IPv4 to set a static IP address.
Step 7. Enter IPv4 settings for the CIMC IP, prefix/subnet, and gateway options.
Step 8. Leave NIC redundancy set to Active-active.
Step 9. Select the VLAN enabled option and enter the VLAN ID for IMC management.
Step 10. Press F10 to save the configuration and then press Esc to exit the utility. The server reloads and may boot into the ESXi operating system preloaded at the factory.
Step 11. Ping the IMC IP address that was just set to validate the configuration.
Step 12. Open a web browser to https://<IMC IP address> to reach the IMC management GUI. Log in as admin with the configured password.
Step 13. On the Summary page, check the running versions of the BIOS and IMC firmware to verify that these versions meet the

requirements for Cisco HyperFlex and Cisco Intersight deployment. If an IMC upgrade is needed, go to the Cisco website to download the latest Host Upgrade Utility (HUU) and use that utility to upgrade the firmware to a compatible version. Figure 7-10 shows the Cisco HyperFlex server summary page.

Figure 7-10 Cisco HyperFlex Server Summary Page Step 14. If desired, click Select Timezone to set the correct time zone for the server. Step 15. From the navigation pane, choose Compute and then click the Power Policies tab. Change the Power Restore Policy from the default

Power Off to the desired option. Step 16. From the navigation pane, expand Admin. Then click Networking, and under the Network tab, review the IMC IP settings and add an alternative DNS server here if desired. You can change the hostname here as well. Step 17. Click the NTP Setting tab. Then add your NTP server information. Step 18. After making the changes you want, click Save Changes to save and apply the changed values. A hostname change creates a new certificate for the web browser and requires you to log in to the web GUI again. The system may take several minutes to commit the changes. Step 19. Repeat steps 1–18 for all Cisco HyperFlex Edge servers.
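The firmware check in step 13 can also be scripted once the IMC addresses are reachable. The sketch below is an assumption-laden example: it presumes the IMC firmware exposes the standard DMTF Redfish API over HTTPS and permits HTTP Basic authentication, and the IP address and credentials are placeholders. It simply reads the BMC firmware version and BIOS version so you can compare them against the Cisco HyperFlex and Cisco Intersight requirements.

import requests
import urllib3

urllib3.disable_warnings()  # the IMC usually presents a self-signed certificate

IMC = "https://10.1.10.21"    # placeholder IMC IP address
AUTH = ("admin", "Cisco123")  # placeholder credentials set in Step 4

def redfish_get(path):
    """Fetch a Redfish resource and return the parsed JSON body."""
    resp = requests.get(f"{IMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Walk the standard Redfish collections rather than hard-coding member names.
for member in redfish_get("/redfish/v1/Managers")["Members"]:
    manager = redfish_get(member["@odata.id"])
    print("IMC firmware:", manager.get("FirmwareVersion"))

for member in redfish_get("/redfish/v1/Systems")["Members"]:
    system = redfish_get(member["@odata.id"])
    print("BIOS version:", system.get("BiosVersion"))

If the Redfish service is not enabled on your IMC release, the Summary page of the web GUI remains the authoritative place to read these versions.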

Claiming Devices in Cisco Intersight
Now you can enable the device connector on the Cisco HyperFlex servers and let the Cisco Intersight platform claim them for cloud management. Follow these steps:
Step 1. Log in to the IMC web management GUI at https://<IMC IP address>.
Step 2. From the navigation pane, expand Admin. Then click Device Connector and enable Cisco Intersight management on the device connector, using Allow Control as the access mode. (This option may already be set.) This configuration enables the IMC to establish a connection to the Cisco Intersight platform. Figure 7-11 shows the Device Connector configuration in IMC.

Figure 7-11 Device Connector Configuration in IMC Step 3. Wait until the connection succeeds and a claim code has been generated for this device. Note that the device is not claimed yet. Write down the device ID (server system number) and claim code as they will be needed in a subsequent step. Step 4. Repeat steps 1 through 3 for each Cisco HyperFlex Edge server to gather all device IDs

and claim codes. Note that each claim code has an expiration time, as indicated by the colored bar under the claim code. Be sure to claim the servers before the codes expire. Step 5. Go to Cisco Intersight Cloud Management platform, at https://intersight.com. Step 6. Log in with your Cisco ID and password. If this is your first time using the Cisco Intersight platform, you should take a site tour to learn about some of the platform’s main features. Step 7. To claim a new device, in the navigation pane at the left, click Devices. Then, in the top-right corner of the Devices window, click Claim a New Device. Figure 7-12 shows how to claim a new device from Intersight.

Figure 7-12 Claiming a New Device from Intersight
Step 8. Enter the device ID and claim code obtained from the IMC management GUI. Use copy and paste for accuracy. Click Claim.
Step 9. Wait until the device has been claimed successfully and then go to the server's IMC page to verify the claim. The Device Connector page should now show that this device has been claimed.
Step 10. Repeat the preceding claim steps for all three Cisco HyperFlex servers on the Cisco Intersight platform.
Step 11. Review the Devices window again. The Cisco HyperFlex nodes should now be listed as connected devices.
Step 12. You now have the option to launch the IMC management GUI directly from the Cisco Intersight platform. If you want to do so, click Launch IMC.
Step 13. To unclaim (remove) a device from the Cisco Intersight platform, select the device and click the trash icon. On the pop-up confirmation page, click Remove to confirm the removal.

Cisco HyperFlex Edge Clusters Installation Wizard
The Cisco Intersight platform provides an installation wizard to install, configure, and deploy Cisco HyperFlex edge clusters. The wizard constructs a preconfiguration definition of an edge cluster called a Cisco HyperFlex cluster profile. The cluster profile is policy based, with administrator-defined sets of rules and operating characteristics such as the node identity, interfaces, and vCenter connectivity. Every active node in the edge cluster must be associated with a Cisco HyperFlex cluster profile. After the user enters all the configuration settings, the installation wizard validates and deploys the cluster profile on the Cisco HyperFlex edge nodes.

To install and configure a Cisco HyperFlex edge cluster with the Cisco Intersight platform, complete the following steps: Step 1. Log in to the Cisco Intersight cloud management platform at https://intersight.com with your Cisco ID and password. Step 2. In the navigation pane at the left, choose Service Profiles. On the Service Profiles page, select the HyperFlex Cluster Profiles tab and click Create HyperFlex Cluster Profile. Figure 7-13 shows how to create HyperFlex cluster profiles.

Figure 7-13 Creating HyperFlex Cluster Profiles Step 3. Proceed through the installation steps of the HyperFlex cluster profile installation wizard. First, on the General page (see Figure 7-14), enter a cluster name that is unique and can be

used as the HX Data Platform cluster name, vCenter cluster name, and Cisco Intersight cluster name. Under Type, select Cisco HyperFlex Edge. Add the necessary description and tags for this cluster for reference.

Figure 7-14 Type of HyperFlex Cluster Step 4. Click Next to move to the Cluster

Configuration page of the wizard. Step 5. Click the + sign to expand the Security section (see Figure 7-15). Enter root as the hypervisor administration username and password. Enter a user-supplied password for the Cisco HyperFlex controller virtual machine and then confirm that password in the second text field.

Figure 7-15 Security Configuration

Step 6. Close the Security section, and the settings are automatically saved to a policy named local-credential-policy. This policy is reusable and can be selected when you create your next Cisco HyperFlex cluster profile.
Step 7. (Optional) To choose an existing policy for your cluster profile, click the Select Policy button at the policy line. Choose the desired policy from the available policy list and click Select.
Step 8. Click the + sign to expand the DNS, NTP, and Timezone section of the page (see Figure 7-16). Choose a time zone from the drop-down list and enter DNS server and NTP server information. Click the + sign to enter secondary DNS and NTP servers.
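Before entering the NTP servers in step 8, it can be worth confirming that they actually answer time queries from the management network. The following is a minimal SNTP probe using only the Python standard library; the server addresses are placeholders taken from a hypothetical checklist.

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)
NTP_SERVERS = ["10.1.10.4", "10.1.10.5"]  # placeholder NTP servers

def query_ntp(server, timeout=3):
    """Send a minimal SNTP client request and return the server's Unix time."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    transmit_ts = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return transmit_ts - NTP_EPOCH_OFFSET

for server in NTP_SERVERS:
    try:
        offset = query_ntp(server) - time.time()
        print(f"{server}: answered, offset {offset:+.1f}s from this workstation")
    except OSError as err:
        print(f"{server}: no response ({err})")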

Figure 7-16 DNS, NTP, and Timezone Configuration Step 9. Close the DNS, NTP and Timezone section, and the settings are automatically saved to a reusable policy named sys-config-policy. Step 10. Click the + sign to expand the vCenter section. Enter the vCenter server FQDN or IP address, administration username, and password. Enter

the data center name for vCenter hosting the Cisco HyperFlex Edge cluster. Leave the vCenter Single Sign-On Server field blank unless instructed otherwise by the Cisco Technical Assistance Center (TAC). If you are deploying your system in a nested vCenter configuration, you can leave the vCenter fields blank and register the cluster to vCenter after it has been deployed on the cluster. For help, see the guide “How to Deploy vCenter on the HX Data Platform.” Figure 7-17 shows the vCenter configuration.
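Because the installer registers the cluster with vCenter, it is also useful to confirm the vCenter FQDN, credentials, and data center name before running the wizard. The sketch below uses the third-party pyVmomi library (pip install pyvmomi); the host name, account, and password are placeholders, and certificate verification is disabled only because many lab vCenter servers use self-signed certificates.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"        # placeholder vCenter FQDN
USER = "administrator@vsphere.local"   # placeholder administration account
PASSWORD = "changeme"                  # placeholder password

context = ssl._create_unverified_context()  # lab-only: skip certificate validation
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
try:
    print("Connected to:", si.content.about.fullName)
    # List data center names so the value typed into the wizard matches exactly.
    for entity in si.content.rootFolder.childEntity:
        if isinstance(entity, vim.Datacenter):
            print("Data center:", entity.name)
finally:
    Disconnect(si)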

Figure 7-17 vCenter Configuration Step 11. Close the vCenter section, and the settings are automatically saved to a reusable policy named vcenter-config-policy. Step 12. Click the + sign to expand the Storage Configuration section. Select Clean Up Disk Partitions if you are performing a

reinstallation on top of an existing deployment. If you are deploying a virtual desktop infrastructure (VDI) environment on a hybrid Cisco HyperFlex cluster, select the VDI Optimization box. Figure 7-18 shows the storage configuration options.

Figure 7-18 Storage Configuration Step 13. Close the Storage Configuration section, and the settings are automatically saved to a reusable policy named cluster-storage-policy. Step 14. Click the + sign to expand the Auto Support section (see Figure 7-19). Select the box AutoSupport to enable this feature. Enter your email address for service ticket notification.

Figure 7-19 Auto-Support Configuration Step 15. Close the Auto-Support configuration, and the

settings are automatically saved to a reusable policy named auto-support-policy. Step 16. Click the + sign to expand the IP & Hostname section (see Figure 7-20). Enter a hostname prefix. In a later step, hostnames will be sequentially assigned to hosts using this prefix. Enter the starting IP address, ending IP address, netmask, and gateway for the management IP address pool. IP addresses from this range will automatically be assigned to hosts in the node configuration step. If you enter only the management network IP addresses, the same range will be used for both ESXi management and Cisco HyperFlex controller virtual machine management IP addresses. If you want to use a second, noncontiguous range of IP addresses for the Cisco HyperFlex controller virtual machines, you may optionally enter the starting IP address, ending IP address, netmask, and gateway for the controller virtual machine management IP address pool. Note that these two IP ranges must fall within the same IP subnet and VLAN.
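The two address pools described in step 16 are easy to get wrong when the controller VM range is noncontiguous. The short sketch below uses Python's ipaddress module to confirm that both ranges and the gateway sit in one subnet and to preview the sequential hostname-to-IP assignment; the prefix, netmask, gateway, and ranges are placeholders.

import ipaddress

PREFIX = "hx-edge"                         # placeholder hostname prefix
NETMASK = "255.255.255.0"
GATEWAY = "10.1.10.1"
ESXI_RANGE = ("10.1.10.11", "10.1.10.13")  # placeholder ESXi management pool
CVM_RANGE = ("10.1.10.21", "10.1.10.23")   # placeholder controller VM pool

def expand(start, end):
    """Return every address from start to end, inclusive."""
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return [ipaddress.ip_address(i) for i in range(int(first), int(last) + 1)]

subnet = ipaddress.ip_network(f"{GATEWAY}/{NETMASK}", strict=False)
esxi_ips, cvm_ips = expand(*ESXI_RANGE), expand(*CVM_RANGE)

# Both pools and the gateway must fall within the same IP subnet (and VLAN).
for ip in [ipaddress.ip_address(GATEWAY), *esxi_ips, *cvm_ips]:
    if ip not in subnet:
        raise SystemExit(f"{ip} is outside {subnet}")

# Preview the hostnames that sequential assignment from the prefix would produce.
for index, ip in enumerate(esxi_ips, start=1):
    print(f"{PREFIX}-{index}: ESXi mgmt {ip}, controller VM {cvm_ips[index - 1]}")

print(f"All addresses fall within {subnet}")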

Figure 7-20 IP and Hostname Configuration
Step 17. Close the IP & Hostname section, and the settings are automatically saved to a reusable policy named node-config-policy.
Step 18. Click the + sign to expand the Network Configuration section (see Figure 7-21). Enter the VLAN ID for the management network. Leave the Jumbo Frames checkbox unselected for Cisco HyperFlex Edge deployments.

Figure 7-21 Network Configuration Step 19. Close the Network section, and the settings are automatically saved to a reusable policy named cluster-network-policy. Step 20. (Optional) Click the + sign to expand the Proxy Setting section (see Figure 7-22).

Figure 7-22 Proxy Configuration Step 21. Click the + sign to expand the HyperFlex Storage Network section (see Figure 7-23). Enter the VLAN ID for the data storage network. As a best practice, use a unique storage VLAN for each cluster if you are

deploying multiple clusters in the same network.

Figure 7-23 Storage Network Configuration Step 22. Click Next to move to the Nodes Assignment page of the wizard (see Figure 7-24). Select three available Cisco HyperFlex Edge nodes. You also have the option to assign nodes later, saving the Cisco HyperFlex cluster profile now and returning when the hardware is available for assignment after it has been claimed in the Cisco Intersight platform.

Figure 7-24 Node Assignment Step 23. Click Next to navigate to the Nodes Configuration page. Review the node configuration for all three Cisco HyperFlex Edge nodes. You can freely modify the hostname defined by automatic IP address assignment if desired. Enter the cluster management IP address within the same IP

management subnet. Step 24. Click Next to move to the Summary page. Review the Cluster Configuration and Nodes Configuration pages and check them for errors. Step 25. Click Validate to validate the Cisco HyperFlex edge cluster configuration without starting the deployment. This selection starts a series of hardware, software, and environmental checks that take a few minutes to complete. Alternatively, click Validate & Deploy to complete validation and deployment together. This option performs an explicit validation first, followed by the start of the actual deployment. Step 26. Wait until validation succeeds. Fix any errors found and note any warnings before continuing. Step 27. On the Results page for the validation process (see Figure 7-25), click Validate & Deploy to install the Cisco HyperFlex edge cluster. Optionally, you can click Close to complete deployment later. Installation time will vary, depending on network bandwidth, but typically takes about one hour. You can remain on the Results page to watch the cluster deployment progress in real time. Alternatively, you can click Close to send the task to the background

and navigate elsewhere within the Cisco Intersight platform. To return to the Results page, navigate back to the Service Profiles > HX Cluster Profile list view and select the cluster name.

Figure 7-25 Cluster and Node Configuration Step 28. When the deployment has completed successfully, click OK. Step 29. When you are back on the Service Profiles > HX Cluster Profile page, find the newly deployed Cisco HyperFlex cluster profile with a status of OK.

Step 30. If desired, clone and modify the cluster profile to quickly create many new cluster profiles. To clone a cluster profile, select the profile that you want to clone, in the Settings column click the three dots (…), and then click Clone. Figure 7-26 shows the options to clone or unassign nodes.

Figure 7-26 Cloning or Unassigning Nodes Step 31. In the Clone pop-up window, enter the name for the cloned cluster and choose the number of clones. Click Clone. Step 32. Before deploying the new copy of the cloned cluster profile, review the configuration and make any necessary changes to the policies. Step 33. To unassign the servers from the selected cluster profile, in the Settings column, click the three dots (…) and then click Unassign and then click Unassign again. Note that

unassigning removes all assigned servers from the cluster profile. This action is specific to the Cisco Intersight platform and does not result in a factory reset or affect the operation of the running cluster. To completely clean up this cluster from the Cisco Intersight platform, unclaim the cluster and associated servers. Contact the TAC for return materials authorization (RMA) support or cluster reinstallation procedures. Step 34. To verify the Cisco HyperFlex edge cluster information on the Cisco Intersight platform, in the navigation pane, go to the HyperFlex Clusters page and click the hyperlinked cluster name. Step 35. On the HyperFlex Clusters page, on the General tab, review the summary information about the new cluster: health state, storage capacity, node count, and so on. Check for any critical alarms requiring your attention. Step 36. On the same page, click the Profile tab and review the newly created HX cluster profile information. Under Cluster, you can review the details of the policies you just configured (see Figure 7-27). Step 37. Review the Nodes configuration.

Figure 7-27 HX Cluster Profile Information Step 38. Review the details of the installation process under Results (see Figure 7-28). Click the + sign at the top of each panel to view the detailed information.

Figure 7-28 Results Tab Step 39. Review more detailed status information for the new Cisco HyperFlex Edge cluster in the Cisco HyperFlex Connect management GUI. Cisco HyperFlex Connect can be directly launched from the Cisco Intersight platform. To find the cross-launch feature, navigate to the cluster list view by selecting HyperFlex Clusters in the navigation pane at the left. Step 40. Find the desired cluster in the list, in the Settings column click the three dots (…), and then click Launch HyperFlex Connect. The native Cisco HyperFlex Connect user interface launches in a new browser tab. You can easily manage large-scale Cisco HyperFlex

deployments directly from the Cisco Intersight platform. All Cisco HyperFlex functions are available through cross-launch except cluster upgrades, support bundle downloads, and HX Data Platform device connector settings. These operations must be performed by launching Cisco HyperFlex Connect directly over the local network.

Postinstallation Configuration
To automate the postinstallation procedures and verify the Cisco HyperFlex cluster configuration, a script is provided on the Cisco HyperFlex controller virtual machines. These steps can also be performed manually or with a PowerCLI script in vCenter, if preferred. The following procedure uses the post_install script:
Step 1. Use SSH to connect to the cluster management IP address and log in using the username and the controller virtual machine password provided during installation. Verify that the cluster is online and healthy by using the stcli cluster info or stcli cluster storage-summary command.
Step 2. Type the command curl -L http://cs.co/hxupdate | sh in the shell and press Enter to update to the latest

postinstallation script. Look for the message "Scripts successfully updated."
Step 3. Type the command /usr/share/springpath/storfs-misc/hx-scripts/post_install.py in the shell and press Enter.
Step 4. Enter the Cisco HyperFlex storage controller virtual machine root password for the Cisco HyperFlex cluster. (Use the password entered during the Cisco HyperFlex cluster installation.) Also enter the vCenter username and password. You must license the vSphere hosts through the script or complete this task in vCenter before continuing. Failure to apply a license will result in an error when you attempt to enable VMware High Availability (HA) or Distributed Resource Scheduler (DRS) in subsequent steps.
Step 5. Enter y to enable HA/DRS.
Step 6. Enter y to disable the ESXi hosts' SSH warning. SSH running in ESXi is required in HX Data Platform 2.6.
Step 7. Add the vMotion VMkernel interfaces to each node by entering y. Enter the netmask, the vMotion VLAN ID, and the vMotion IP addresses for each of the hosts, as prompted. vMotion will be configured using best practices

according to the choice of a single- or dual-switch configuration. In the case of a single-switch configuration, a traffic shaper is automatically applied to the VMkernel port to help ensure that vMotion does not consume all the bandwidth available on the shared uplink port.
Step 8. If desired, add virtual machine network port groups for guest virtual machine traffic. Enter n to skip this step and create the port groups manually in vCenter. Alternatively, you can create virtual machine network port groups and add them to the virtual machine vSwitch. This step adds the same network configuration to all nodes in the cluster. After the postinstallation script is complete, a vMotion VMkernel port and virtual machine network port groups are visible for each host in vCenter.
Step 9. Enter n to skip the testing of the Auto-Support email function.
Step 10. When a summary of the cluster is displayed upon completion of the script, verify that the cluster is healthy. You should enable a syslog destination for permanent storage of the ESXi host logs. The configuration can be performed manually in vCenter or through a PowerCLI

script.
Cisco HyperFlex Edge is optimized for ROBO and edge environments. As previously described, Cisco HyperFlex Edge systems can also be deployed remotely from the cloud with the Cisco Intersight platform. This is a big step forward in Cisco's strategy to build cloud-managed, on-premises, intelligent infrastructure.
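Steps 1 through 3 of the postinstallation procedure lend themselves to light scripting when several edge sites must be brought up. The sketch below uses the third-party paramiko SSH library (pip install paramiko) to run the noninteractive health check from step 1 against the cluster management IP; the address, username, and password are placeholders (use the controller VM credentials from your installation). The interactive post_install.py script itself is still best run by hand because it prompts for input.

import paramiko

CLUSTER_MGMT_IP = "10.1.10.30"  # placeholder cluster management IP address
USERNAME = "admin"              # placeholder; use the account from your installation
PASSWORD = "changeme"           # placeholder controller VM password

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab-only convenience
client.connect(CLUSTER_MGMT_IP, username=USERNAME, password=PASSWORD, timeout=15)
try:
    # Step 1: confirm the cluster is online and healthy before running post_install.
    for command in ("stcli cluster info", "stcli cluster storage-summary"):
        stdin, stdout, stderr = client.exec_command(command)
        print(f"$ {command}\n{stdout.read().decode()}{stderr.read().decode()}")
finally:
    client.close()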

STANDARD UCSM MANAGED HYPERFLEX CLUSTER DEPLOYMENT
For a standard UCSM managed HyperFlex cluster, you use almost the same steps as for an edge cluster. Only two things are different here: You need to claim fabric interconnects instead of individual nodes, and in the HX cluster profile installation wizard, under HX Cluster Type, you select Cisco HyperFlex with Fabric Interconnect and then select the Replication Factor (see Figure 7-29).

Figure 7-29 Cluster Types and Replication Factor The following steps summarize the installation workflow for configuring a fabric interconnect–attached HyperFlex cluster: Step 1. Complete the preinstallation checklist.

Step 2. Ensure that the network is set up. Step 3. Log in to Cisco Intersight. Step 4. Claim devices (that is, FIs). Step 5. Run the HyperFlex cluster profile wizard. Step 6. Run the postinstallation script through the controller VM.

SUMMARY
The Cisco Intersight cloud management platform is designed to deploy, monitor, and manage Cisco UCS and Cisco HyperFlex products and to provide additional value-added services with strengthened security. Cisco Intersight capabilities will continue to improve, with the goal of giving customers the industry's lowest operating costs at the greatest scale. Combined with Cisco HyperFlex systems, the Cisco Intersight platform will transform the way IT infrastructure is managed.

Index NUMBERS 10 Gigabit Ethernet switch edge cluster configurations, 70–73 40 Gigabit Ethernet cabling, 56

A access cluster access policies, 241 Controller VM access (HX Connect), 167 RBAC (HX Connect), 152–154 accounts (local), HX Connect, 151 ACI (Application Centric Infrastructures) ACI mode, Nexus 9000 series switches, 34–36 stretch cluster requirements, 133 Activity page (HX Connect), 163–164 adaptability, HyperFlex, 7–12 administrators, replication, 266

Alarms page (HX Connect), 160–161 All Flash nodes, 26–27, 47, 51 All-NVMe nodes, NVMe drives, 51 analytics, Intersight, 293 ASUP (Auto-Support) HX Connect, 156–157 HyperFlex installation requirements, 100–101 automation, Intersight, 293

B backups HyperFlex backups, 286 changed block tracking, 288–289 Cohesity, 289–290 Cohesity Protection, 289–290 Cohesity Recovery, 290 Commvault, 290–291 Veeam Availability Suite, 286–289 Hyper-V, 39 replication administrators, 266 considerations, 266–267

port requirements, 265–266 storage spaces, 266 supported clusters, 266–267 topologies, 265 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 replication pairs creating, 274–277 overview of, 273–274 bandwidth, replication network connections, 272–273 bare metal (type 1) hypervisors, 37 Base edition (Intersight), 295 base VM, ReadyClone support, 200 best practices native snapshots, 192–193 ReadyClones, 201 blade firmware bundle (UCS), 232–233 blade servers

compute-only nodes, 52 UCS 5108 chassis, 19–20 UCS-B series blade servers, 20, 62 branch offices, HyperFlex Edge, 27–28 browsers HyperFlex installations, 81 Intersight browser compatibility, 295 bundles (HyperFlex components) downloading, 232–233 UCS bundles, 232–233 upgrade bundle (.tgz file), 232 VMware ESXi Offline Zip bundle, 232

C cabling, 40 Gigabit Ethernet, 56 carbon-aggregator (controller service), 43 carbon-cache (controller service), 43 Catalyst switches, 88 certificates CSR, 179 self-certificates, 179 changed block tracking, HyperFlex backups,

288–289 charts, Performance page (HX Connect), 164–165 Chrome (Google), HyperFlex installations, 81 cip-monitor (controller service), 43 Cleaner, 45, 46 clients (vSphere), 38-38 cloud-based management, Intersight, 23–24, 293–296 Cluster IP Monitor (controller service), 43 Cluster Management IP (controller service), 44 Cluster Manager Service (controller service), 43 clusters, 75 access policies, configuring, 241 compute-only nodes, 81 converged nodes, 43 CRM, 46 direct connect mode setup, 101–103 edge clusters, 41, 64–66 10 Gigabit Ethernet switch configurations, 70–73 dual-switch configurations, 68–70 HyperFlex configurations, 107–120

single-switch configurations, 66–68 UCS FI, 78 health verifying in UCSM, 238–239 viewing, 235–236 HyperFlex Edge clusters installation wizard, 310–320 management, 321–322 Intersight cluster deployments, 297 cluster policies, 296–297 M*N connectivity, 269 nodes five-node clusters, 219 four-node clusters, 219 three-node clusters, 219, 245–247 QoS, 88 remote clusters, HX Connect, 176–177 replication, 266–267 scaling, 207 compute nodes, 213, 224

converged nodes, 208–212, 217–224 expanding stretch clusters, 213–218 node expansion, 207–213 node removal, 217, 224 SED cluster ports, 86 shutdown stops, 46 standard clusters, 41, 58–59 FI connectivity, 59–60 HX-series rack-mount server connectivity, 60–62 HyperFlex configurations, 107–120 UCS B-series blade server connectivity, 62 storage capacity, 236 storage clusters adding drives, 225 HyperFlex installation requirements, 94–95 NTP servers, 96 registration, 184–189 stretch clusters, 41, 63–64, 76 ACI requirements, 133 expanding, 213–218 FI, 132

HyperFlex configurations, 130–150 network IP addressing, 136–137 stretch witness ports, 85 UCSM requirements, 133 VLAN requirements, 133 VMware requirements, 133 witness nodes, 133–136 Witness VM, 132 upgrades, with HX Connect UI, 254–259 upstream switches, 88, 89 Cohesity Cohesity Protection, 289–290 Cohesity Recovery, 290 HyperFlex backups, 289–290 combined upgrades, 252 Commvault, HyperFlex backups, 290–291 component bundles (HyperFlex upgrades) downloading, 232–233 UCS bundles, 232–233 upgrade bundle (.tgz file), 232 VMware ESXi Offline Zip bundle, 232

compression (data), 45 compute nodes removing, 224 scaling clusters, 213, 224 compute-only nodes, 52, 81 computing Hyper-V computing environment, 39 UCS, 15 configurations cluster access policies, 241 HyperFlex configurations edge clusters, 107–120 HyperFlex 3.5 configurations, 88–89 standard clusters, 107–120 stretch clusters, 130–150 HyperFlex Edge, postinstallation configurations, 320–321 integrated management controllers, 306–308 Net.TeamPolicyUpDelay default values, 236–237 replication networks via HX Connect, 270–273 vCenter configuration requirements, 95 connectivity

HyperFlex, 7–9, 75 M*N connectivity, 269 remote connectivity, Hyper-V, 39 upstream network connectivity, verifying, 240–241 controller services, 43–44 Controller VM access (HX Connect, System Information page), 167 CPU resource reservations, 96–97 HX220c M4 Controller VM, 99 HX220c M5 Controller VM, 97–98 HX240c M4 Controller VM, 99–100 HX240c M5 Controller VM, 97, 98 HXAF220c Controller VM, 97–98 HXAF220c M4 Controller VM, 99 HXAF240c M4 Controller VM, 99–100 HXAF240c M5 Controller VM, 97–98 locations, 97–100 storage cluster registration, 188–189 converged data centers, 1–2 converged nodes, 77–78 HyperFlex

clusters and, 43 installation requirements, 79–80 removing, 217–224 scaling clusters, 208–212, 217–224 CPU resource reservations, Controller VM, 96–97 CRM (Cluster Resource Manager), 46 cron (controller service), 44 CSR (Certificate Signing Requests), 179

D Dashboard page (HX Connect), 154–155 data centers converged data centers, 1–2 HCI, 3–4. See also HyperFlex deployment models (automated), 4 server integration, 4 traditional (non-converged) data centers, 1–2 data optimization, 45 compression, 45 deduplication, 45 data platforms, HyperFlex, 24–25

data protection DR snapshots, 267 protection groups, 267 replication administrators, 266 considerations, 266–267 port requirements, 265–266 storage spaces, 266 supported clusters, 266–267 topologies, 265 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 replication pairs creating, 274–277 overview of, 273–274 RPO, 263–265 RTO, 263–265 VM protection groups, 276–281

data recovery, Cohesity Recovery, 290 datastores, 205 adding, 205–206 creating, 206 non-HX datastores, disaster recovery, 267 Datastores page (HX Connect), 169–170 deduplication, 45 deleting snapshots, 198–199 deployments cluster deployments with Intersight, 297 HCI automated deployments models, 4 HyperFlex deployment options, 6–7 desktops (VDI), hypervisors, 36–37 Device Connect page (HX Connect), 159–160 direct connect mode, cluster setup, 101–103 disaster recovery, 263 compatibility matrix, 281–282 considerations, 267 data protection, RTO/RPO, 263–265 DRS, verifying enabled status, 236 HyperFlex, 12

Hyper-V, 39 non-HX datastores, 267 overview of, 281 plan migration, 285–286 protection groups, 267 rebooting nodes, 267 replication administrators, 263 considerations, 266–267 port requirements, 265–266 storage spaces, 266 supported clusters, 266–267 topologies, 265 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 replication pairs creating, 274–277 overview of, 273–274

snapshots DR snapshots, 267 VM, 267 thin provisioning, 267 VM protection groups, 276–281 recovery, 282–285 snapshots, 267 disks HyperFlex installation requirements, 79 partitioning, 80 Disks page (HX Connect, System Information page), 168–169 DJL2 (Disjoint Layer 2), 89 DNS and name server service (controller service), 44 DNS servers, HyperFlex installation requirements, 96 downloading component bundles (HyperFlex upgrades), 232–233 DR (Disaster Recovery) snapshots, 267 drives

adding to storage clusters, 225 HDD, 229–230 SSD, 225–229 DRS (Disaster Recovery Systems), verifying enabled status, 236 dual-switch configurations HyperFlex Edge, 300–301 edge clusters, 68–70

E edge clusters, 41, 64–66 10 Gigabit Ethernet switch configurations, 70–73 dual-switch configurations, 68–70 HyperFlex configurations, 107–120 single-switch configurations, 66–68 UCS FI, 78 edge for remote branch offices, 47 encryption HX Connect, 177–179 key-encryption (security) keys, 177 media encryption keys, 177 security (key-encryption) keys, 177

ensembles, 47 Essentials edition (Intersight), 295–296 ESX Agent Manager, 237 ESXi, 37 ESXi Offline Zip bundle (VMware), 232 ESXi Upgrade (HX Connect, Upgrade page), 173–174 node checks, 242–243 stretch cluster installations, 130–150 vmkernel, 37 Ethernet 10 Gigabit Ethernet switch edge cluster configurations, 70–73 40 Gigabit Ethernet cabling, specifications, 56 Events page (HX Connect), 162–163 exhibitor (controller service), 43 external HyperFlex connections, HyperFlex installation requirements, 87–88

F FI (Fabric Interconnects) direct connect mode cluster setup, 103 DJL2, 89

edge clusters, 78 HyperFlex installation requirements, 92 HyperFlex installations, 78 network provisioning, 88 standard cluster topologies, 59–60 stretch clusters, 132 UCS 6200 series, 18–19 UCS 6300 series, 17–18 UCS 6454, 16–17 upstream switches and, 88, 89 Firefox, HyperFlex installations, 81 firmware, UCS blade firmware bundle, 232–233 infrastructure firmware, upgrading, 248–261 rack-mount firmware bundle, 232–233 server firmware, Upgrade page (HX Connect), 173 five-node clusters, 219 four-node clusters, 219

G Google Chrome, HyperFlex installations, 81 graphite (controller service), 44

H hard disks HyperFlex installation requirements, 79 partitioning, 80 hardware replacement HDD, 229–230 SSD, 225–226 housekeeping SSD, 227–229 NVMe SSD, 226–227 HCI (Hyperconverged Infrastructure), 3–4. See also HyperFlex deployment models (automated), 4 management (centralized), 4 scalability, 4 server integration, 4 storage, 4 HDD (Hard-Disk Drives), replacing, 229–230 health checks HyperCheck utility, 242–248 storage controller VM, 242–243 health of clusters

verifying in UCSM, 238–239 viewing, 235–236 hosts, HyperFlex installation requirements, 79 housekeeping SSD, 227–229 HX Connect, 29 Activity page, 163–164 Alarms page, 160–161 Dashboard page, 154–155 datastores creating, 206 Datastores page, 169–170 encryption, 177–179 Events page, 162–163 local accounts, 151 logins, 151 Performance page, 164–165 pre-upgrade UCS server firmware (C-bundle), verifying, 234–235 RBAC, 152–154 ReadyClones, creating, 201–203 remote clusters, 176–177 replication, 175–177

replication networks, 270–273 Settings page, 155 Auto-Support, 156–157 Device Connect, 159–160 Kubernetes, 158–159 Notifications Settings page, 157–158 Support Bundle Settings, 156–157 System Information page, 165–166 Controller VM access, 167 Disks page, 168–169 Nodes page, 167 Upgrade page, 172 ESXi Upgrade, 173–174 HX Data Platform Upgrade, 173 Progress page, 174 UCS server firmware, 173 Virtual Machines page, 171–172 Web CLI page, 174–175 HX Connect UI, cluster upgrades, 254–259 HX Data Platform, 29, 41, 183 components of, 78

datastores, 205 adding, 205–206 creating, 206 disaster recovery, 263 compatibility matrix, 281–282 considerations, 267 data protection, RTO/RPO, 263–265 replication, 265–267 replication, networking, 267–273 replication, pairs, 273–277 VM migration, 285–286 VM protection, 276–281 VM recovery, 282–285 installations, 183 installer navigation aid buttons, 117 Installer OVA deployments with vSphere Web Client, 104–107 installer-related ports, 82 ReadyClones, 199 base VM support, 200 benefits of, 200 best practices, 201

creating, 201–205 requirements, 200–201 standard cluster topologies, 58–59 FI connectivity, 59–60 HX-series rack-mount server connectivity, 60–62 UCS B-series blade server connectivity, 62 storage clusters, 183, 184–189 upgrade bundle (.tgz file), 232 vSwitch switches, 91 HX Device Connector (controller service), 43–44 HX220 M5/M4 Series, 47 HX220c-M4S Hybrid, 49 HX220c-M5SX Hybrid, 48 HXAF220c-M4S All Flash, 48 HXAF220c-M5SX All Flash, 48 HX220c M4 Controller VM, 99 HX220c M5 Controller VM, 97–98 HX240 M5/M4 Series, 49 HX240c-M4SX All Flash, 50–51 HX240c-M4SX Hybrid, 51 HX240c-M5L Hybrid, 50

HX240c-M5SX Hybrid, 49–50 HXAF220c-M5N All-NVMe, 51 HXAF240c-M5SX All Flash, 49 HX240c M4 Controller VM, 99–100 HX240c M5 Controller VM, 97, 98 HXAF220c Controller VM, 97–98 HXAF220c M4 Controller VM, 99 HXAF240c M4 Controller VM, 99–100 HXAF240c M5 Controller VM, 97–98 hx-device-connector (controller service), 43–44 hxmanager (controller service), 43 HX-series nodes all flash nodes, 47 HX220 M5/M4 Series, 47 HX220c-M4S Hybrid, 49 HX220c-M5SX Hybrid, 48 HXAF220c-M4S All Flash, 48 HXAF220c-M5SX All Flash, 48 HX240 M5/M4 Series, 49 HX240c-M4SX All Flash, 50–51 HX240c-M4SX Hybrid, 51

HX240c-M5L Hybrid, 50 HX240c-M5SX Hybrid, 49–50 HXAF220c-M5N All-NVMe, 51 HXAF240c-M5SX All Flash, 49 hybrid nodes, 47 node models, 47 HX-series rack-mount servers, standard cluster topologies, 60–62 Hybrid converged nodes, SSD, 51 hybrid nodes, 25, 47 HyperCheck utility, 242–248 HyperFlex, 4–5, 41–42. See also HCI adaptability, 7–12 advantages of, 7–12 All Flash nodes, 26–27 backups, 286 changed block tracking, 288–289 Cohesity, 289–290 Cohesity Protection, 289–290 Cohesity Recovery, 290 Commvault, 290–291 Veeam Availability Suite, 286–289

clusters, 75 converged nodes, 43 edge clusters, 41, 64–73, 120–130 node expansion, 207–213 node removal, 217, 224 scaling clusters, 207–224 standard clusters, 41, 58–62, 107–120 storage clusters, 225 stretch clusters, 41, 63–64, 76, 130–150, 213–218 verifying health in UCSM, 238–239 components of, 4–6 compute-only nodes, 52 connectivity, 7–9, 75 control services, 43–44 CRM, 46 data compression, 45 data optimization, 45 data platform, 24–25 data platforms, 41 deduplication, 45 deployment options, 6–7

edge for remote branch offices, 47 external connections, 87–88 hardware replacement HDD, 229–230 SSD, 225–229 HX Connect, logins, 151 HX Data Platform, 41 183–0006, 183 components of, 78 creating snapshots, 193–195 datastores, 205–206 deleting snapshots, 198–199 disaster recovery, 263 disaster recovery, compatibility matrix, 281–282 disaster recovery, considerations, 267 disaster recovery, data protection, 263–265 disaster recovery, replication, 265–267 disaster recovery, replication networking, 267–273 disaster recovery, replication pairs, 273–277 disaster recovery, VM migration, 285–286 disaster recovery, VM protection, 276–281

disaster recovery, VM recovery, 282–285 installations, 183 native snapshots, 190, 193 ReadyClones, 199–205 reverting to snapshots, 196–198 scheduling snapshots, 195–196 SENTINEL snapshots, 193, 199 storage cluster registration, 184–189 storage clusters, 183 HX220 M5/M4 Series, 47 HX220c-M4S Hybrid, 49 HX220c-M5SX Hybrid, 48 HXAF220c-M4S All Flash, 48 HXAF220c-M5SX All Flash, 48 HX240 M5/M4 Series, 49 HX240c-M4SX All Flash, 50–51 HX240c-M4SX Hybrid, 51 HX240c-M5L Hybrid, 50 HX240c-M5SX Hybrid, 49–50 HXAF220c-M5N All-NVMe, 51 HXAF240c-M5SX All Flash, 49

HX-series nodes all flash nodes, 47 hybrid nodes, 47 node models, 47 HX-series system management components of, 28–29 HX Connect, 29 HX Data Platform, 29 UCSM, 28–29 VMware vCenter management, 29 hybrid nodes, 25 HyperFlex 3.5 configurations, 88–89 HyperFlex Cleaner, 45–46 HyperFlex Edge, 27–28 installations, 52–53 ASUP requirements, 100–101 browser recommendations, 81 compute-only nodes, 79–80 Controller VM locations, 97–100 converged nodes, 79–80 CPU resource reservations, Controller VM, 96–97

direct connect mode cluster setup, 101–103 disk requirements, 79–81 edge clusters, 107–120 external HyperFlex connections, 87–88 host requirements, 79 HX Data Platform, Installer OVA deployments, 104–107 hypervisor requirements, 93–94 Intersight and, 76 IP address requirements, 93–94 memory resource reservations, storage controller VM, 97 network settings, 91 port requirements, 81–87 prerequisites, 76–77 software versions, 78 SSO requirements, 101 storage clusters, 94–95 stretch clusters, 130–150 system components, 77–78 system servers, 101 system services requirements, 96

UCS FI, 78 UCS FI network provisioning, 88 UCS requirements, 92–93 upstream switches and FI, 88, 89 vCenter configuration requirements, 95 VLAN requirements, 91–92 VMware vSphere, 79 vSwitch switch requirements, 91–92 workflows, 103 Intersight and, 12–13, 76 Log File System, 44–45 management HX Connect, 151–179 next generation management, 12–13 scaling clusters, 207–224 single point of management, 7–9 node models, 47 nodes, 75 replication, 12 scalability, 7–11 SCVM, 42

IO Visor, 42, 44 stHypervisorSvc, 43 VAAI, 42 VIB, 42–43 Smart Account registration, 184, 185 Smart Licensing, 184–189 snapshots creating, 193–195 deleting, 198–199 native snapshots, 190, 193 reverting to, 196–198 scheduling, 195–196 SENTINEL snapshots, 193, 199 software, versions of, 78 storage clusters, registration, 184–189 stretch clusters, 12 system components, 77–78 UCS and, 7–9 upgrading, 230–231 cluster health, 235–236 cluster storage capacity, 236

cluster upgrades with HX Connect UI, 254–259 downloading component bundles, 232–233 ESX Agent Manager, 237 health checks via HyperCheck utility, 242–248 Net.TeamPolicyUpDelay default values, 236–237 pre-upgrade validation, 235–241 pre-upgrade workflow, 231–232 verifying cluster access policies, 241 verifying DRS enabled status, 236 verifying pre-upgrade UCS server firmware (Cbundle), 234–235 verifying upstream network connectivity, 240–241 verifying vMotion interfaces, 239 Zookeeper, 46–47 HyperFlex 3.5 configurations, 88–89 HyperFlex Cleaner, 45–46 HyperFlex Connect server (controller service), 43 HyperFlex Edge clusters installation wizard, 310–320 management, 321–322

dual-switch configurations, 300–301 installations via Intersight, 304 claiming devices, 308–310 HyperFlex Edge clusters installation wizard, 310–320 integrated management controller configurations, 306–308 postinstallation configurations, 320–321 prerequisites, 304–306 single-switch configurations, 299–300 topologies logical topology, 301–304 physical topology, 299 Hyper-V, 38 backups, 39 computing environment, 39 disaster recovery, 39 optimization, 39 portability, 39 remote connectivity, 39 security, 39 hypervisors, 36–37

bare metal (type 1) hypervisors, 37 HyperFlex installation requirements, 93–94 type 1 (bare metal) hypervisors, 37 type 2 hypervisors, 37 VMware vSphere hypervisors, 37–38

I infrastructure bundle (UCS), 232–233 installations HX Data Platform, 183 HyperCheck utility, 243–248 HyperFlex, 52–53, 76 ASUP requirements, 100–101 browser recommendations, 81 compute-only nodes, 79–80 Controller VM locations, 97–100 converged nodes, 79–80 CPU resource reservations, Controller VM, 96–97 direct connect mode cluster setup, 101–103 disk requirements, 79–81 edge clusters, 107–120 external HyperFlex connections, 87–88

host requirements, 79 HX Data Platform, Installer OVA deployments, 104–107 hypervisor requirements, 93–94 IP address requirements, 93–94 memory resource reservations, storage controller VM, 97 network settings, 91 port requirements, 81–87 prerequisites, 76–77 software versions, 78 SSO requirements, 101 storage clusters, 94–95 stretch clusters, 130–150 system components, 77–78 system servers, 101 system services requirements, 96 UCS FI, 78, 88 UCS requirements, 92–93 upstream switches and FI, 88, 89 vCenter configuration requirements, 95 VLAN requirements, 91–92

VMware vSphere, 79 vSwitch switch requirements, 91–92 workflows, 103 HyperFlex Edge clusters installation wizard, 310–320 HyperFlex Edge via Intersight, 304 claiming devices, 308–310 HyperFlex Edge clusters installation wizard, 310–320 integrated management controller configurations, 306–308 postinstallation configurations, 320–321 prerequisites, 304–306 VIB IO Visor, 42 stHypervisorSvc, 43 VAAI, 42 VIC, 52–53 installer-related ports, HyperFlex installation requirements, 82 integrated management controller configurations, 306–308 Internet Explorer, HyperFlex installations, 81

Intersight, 294 analytics, 293 arbitrator overview, 298–299 automation, 293 Base edition, 295 browser compatibility, 295 cloud-based management, 23–24, 293–296 Essentials edition, 295–296 HyperFlex, 12–13 clusters, 296–297 installations, 76 HyperFlex Edge dual-switch configurations, 300–301 installations, 304–321 logical topology, 301–304 physical topology, 299 single-switch configurations, 299–300 MaaS, 294 recommendation engine, 294 support for, 294 telemetry, 293

IO Visor, 42, 44 IP addresses HyperFlex installation requirements, 93–94 network IP addressing, stretch clusters, 136–137 replication network connections, 272–273

J Job scheduler (controller service), 44

K key-encryption (security) keys, 177 Kubernetes page (HX Connect), 158–159

L Licensing (Smart), 184–189 local accounts, HX Connect, 151 Log File System, 44–45 logical topology, HyperFlex Edge, 301–304 logins, HX Connect, 151

M M*N connectivity, 269 MaaS (Management as a Service), Intersight, 294

mail servers, ports, 83 management HCI, 4 HyperFlex HX Connect, 151–179 next generation management, 12–13 scaling clusters, 207–224 single point of management, 7–9 HyperFlex Edge cluster management, 321–322 HyperFlex HX-series system management, components of, 28–29 Intersight cloud-based management, 23–24 UCS, 16 VM datastores, 205–206 ReadyClones, 199–205 snapshots, 189–199 VMware vCenter, 29 vSphere update manager, 38 MDS 9000 multilayer director, SAN switches, 29 9500 series, 31 9700 series, 30

media encryption keys, 177 memory, resource reservations, storage controller VM, 97 mgmtip (controller service), 44 Microsoft Internet Explorer, HyperFlex installations, 81 migration planning, disaster recovery, 285–286 monitoring-related ports, HyperFlex installation requirements, 83 Mozilla Firefox, HyperFlex installations, 81

N name servers, ports, 83–84 native snapshots, 190 benefits of, 190–191 best practices, 192–193 considerations, 191–192 parameters, 191–192 timezones, 193 Net.TeamPolicyUpDelay default values, 236–237 network switches, 31 Nexus 5000 series, 31–33

Nexus 9000 series, 33 ACI mode, 34–36 NX-OS mode, 33–34 networks HyperFlex installation settings, 91 IP addressing, stretch clusters, 136–137 provisioning, UCS FI, 88 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 UCS, 15 upstream network connectivity, verifying, 240–241 VLAN, replication network connections, 270–272 Nexus 5000 series switches, 31–33 Nexus 9000 series switches, 33 ACI mode, 34–36 NX-OS mode, 33–34 nginx (controller service), 44 nodes, 75

All Flash nodes, 26–27, 47, 51 All-NVMe nodes, NVMe drives, 51 clusters, scaling compute nodes, 213, 224 converged nodes, 208–212, 217–224 expanding stretch clusters, 213–218 compute nodes removing, 224 scaling clusters, 213, 224 compute-only nodes, 52 clusters, 81 HyperFlex installation requirements, 81 converged nodes, 77–78 HyperFlex clusters, 43 HyperFlex installation requirements, 79–80 removing, 217–224 scaling clusters, 208–212, 217–224 DRS, verifying enabled status, 236 ensembles, 47 ESXi node checks, 242–243 five-node clusters, 219

four-node clusters, 219 HX220 M5/M4 Series, 47 HX220c-M4S Hybrid, 49 HX220c-M5SX Hybrid, 48 HXAF220c-M4S All Flash, 48 HXAF220c-M5SX All Flash, 48 HX240 M5/M4 Series, 49 HX240c-M4SX All Flash, 50–51 HX240c-M4SX Hybrid, 51 HX240c-M5L Hybrid, 50 HX240c-M5SX Hybrid, 49–50 HXAF220c-M5N All-NVMe, 51 HXAF240c-M5SX All Flash, 49 Hybrid converged nodes, SSD, 51 hybrid nodes, 47 HyperFlex node models, 47 rebooting nodes, 267 removing compute nodes, 224 converged nodes, 217–224 simultaneous failures, 95

three-node clusters, 219, 245–247 witness nodes stretch cluster deployments, 134–136 stretch cluster requirements, 133–134 znode, 46–47 Nodes page (HX Connect, System Information page), 167 non-converged (traditional) data centers, 1–2 non-HX datastores, disaster recovery, 267 Notifications Settings page (HX Connect), 157–158 NTP servers HyperFlex installation requirements, 96 storage clusters, 96 NVMe drives, All-NVMe nodes, 51 NVMe SSD, replacing, 226–227 NX-OS mode, Nexus 9000 series switches, 33–34

O optimization data, 45 compression, 45

deduplication, 45 Hyper-V, 39

P partitioning disks, 80 Performance page (HX Connect), 164–165 physical topology, HyperFlex Edge, 299 plan migration, disaster recovery, 285–286 portability, Hyper-V, 39 ports HyperFlex installation requirements, 81–82 installer-related ports, 82 mail server-related ports, 83 monitoring-related ports, 83 name server-related ports, 83–84 replication ports, 85–86 SED cluster ports, 86 SSO server-related ports, 85 stretch witness ports, 85 time server-related ports, 82 UCSM-related ports, 86 user-related ports, 84

replication requirements, 265–266 prerequisites, HyperFlex installations, 76–77 pre-upgrade UCS server firmware (C-bundle), verifying, 234–235 Progress page (HX Connect, Upgrade page), 174 protecting data DR snapshots, 267 protection groups, 267 replication administrators, 266 considerations, 266–267 port requirements, 265–266 storage spaces, 266 supported clusters, 266–267 topologies, 265 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 replication pairs creating, 274–277

overview of, 273–274 RPO, 263–265 RTO, 263–265 VM protection groups, 276–281

Q QoS (Quality of Service), clusters, 88 QSA, VIC 1387 mLOM cards support, 54–56 QSFP, VIC 1387 mLOM cards support, 54–55

R rack servers, UCS-C series, 22 rack-mount firmware bundle (UCS), 232–233 rack-mount servers (HX-series), standard cluster topologies, 60–62 RBAC (Role-Based Access Control), HX Connect, 152–154 ReadyClones, 199 base VM support, 200 benefits of, 200 best practices, 201 creating via

HX Connect, 201–203 HX Data Platform, 203–205 requirements, 200–201 rebalancing (CRM), 46 rebooting nodes, 267 recommendation engine, Intersight, 294 recovery. See disaster recovery redo log snapshots, 193–195 registration Smart Account registration, 184, 185 storage clusters, 184–185 controller VM, 188–189 creating registration tokens, 185–187 tokens, creating, 185–187 remote clusters, HX Connect, 176–177 remote connectivity, Hyper-V, 39 remote offices edge for remote branch offices, 47 HyperFlex Edge, 27–28 removing nodes compute nodes, 224

converged nodes, 217–224 replacing hardware HDD, 229–230 SSD, 225–226 housekeeping SSD, 227–229 NVMe SSD, 226–227 repl-cip-monitor (controller service), 43 replication administrators, 265 considerations, 266–267 HX Connect, 175–177 HyperFlex, 12 pairing, HX Connect, 176–177 port requirements, 265–266 ports, HyperFlex installation requirements, 85–86 replication networks considerations, 269 HX Connect configurations, 270–273 overview of, 267–268 topologies, 268 replication pairs

creating, 274–277 overview of, 273–274 storage spaces, 266 supported clusters, 266–267 topologies, 265 Replication Cluster IP Monitor (controller service), 43 Replication services (controller service), 43 replsvc (controller service), 43 resolvconf (controller service), 44 resource reservations CPU, Controller VM, 96–97 memory, storage controller VM, 97 Reverse proxy server (controller service), 44 reverting to snapshots, 196–198 robo clusters. See edge clusters RPO (Recovery Point Objectives), 263–265 RTO (Recovery Time Objectives), 263–265

S SaaS (Software as a Service), Intersight cloudbased management, 23–24

SAN switches, MDS 9000 multilayer director, 29 9500 series, 31 9700 series, 30 scalability HCI, 4 HyperFlex, 7–11 scaling clusters, 207 node expansion, 207–208 compute nodes, 213, 224 converged nodes, 208–212, 217–224 stretch clusters, 213–218 node removal compute nodes, 224 converged nodes, 217–224 scheduling snapshots, 195–196 SCVM (Storage Controller Virtual Machine), 42 HX services, 43–44 IO Visor, 42, 44 SCVM client (controller service), 43 stHypervisorSvc, 43 VAAI, 42

VIB, 42–43 security Hyper-V, 39 key-encryption security, 177 SED cluster ports, HyperFlex installation requirements, 86 self-certificates, 179 SENTINEL snapshots, 193, 199 servers blade servers compute-only nodes, 52 standard cluster topologies, 62 UCS 5108 chassis, 19–20 UCS-B series blade servers, 20 disk partitioning, 80 DNS servers, HyperFlex installation requirements, 96 HCI, server integration, 4 mail servers, ports, 83 name servers, ports, 83–84 NTP servers HyperFlex installation requirements, 96 storage clusters, 96

  rack servers, UCS-C series, 22
  rack-mount servers, standard cluster topologies, 60–62
  SSO servers, ports, 85
  storage servers, UCS-S series, 23
  UCS server firmware, Upgrade page (HX Connect), 173
  vCenter servers, 38
services, controller services, 43–44
Settings page (HX Connect), 155
  Auto-Support, 156–157
  Device Connect, 159–160
  Kubernetes, 158–159
  Notifications Settings page, 157–158
  Support Bundle Settings, 156–157
SFP (Small Form Factor Pluggable)
  VIC 1227 mLOM cards support, 54
  VIC 1387 mLOM cards support, 56
SFP+10 Gbps transceivers, VIC 1457 mLOM cards, 56, 57
shutdown stops (clusters), 46
simultaneous failures, 95
single-switch configurations
  edge clusters, 66–68
  HyperFlex Edge, 299–300
Smart Account registration, 184, 185
Smart Licensing, storage cluster registration, 184–185
  controller VM, 188–189
  creating registration tokens, 185–187
snapshots
  creating, 193–195
  deleting, 198–199
  native snapshots, 190
    benefits of, 190–191
    best practices, 192–193
    considerations, 191–192
    parameters, 191–192
    timezones, 193
  redo log snapshots, best practices, 193–195
  reverting to snapshots, 196–198
  scheduling, 195–196
  SENTINEL snapshots, 193, 199
  VM management, 189–199
    datastores, 205–206
    ReadyClones, 199–205
  VM protection, 267
software, HyperFlex software versions, 78
split L2 topologies, 267–268
split upgrades, 252
Springpath file system (controller service), 43
SSD (Solid-State Drives)
  All Flash nodes, 51
  Hybrid converged nodes, 51
  replacing, 225–226
    housekeeping SSD, 227–229
    NVMe SSD, 226–227
SSO (Single Sign-On)
  HyperFlex installation requirements, 101
  servers, ports, 85
standard clusters, 41, 58–59
  FI connectivity, 59–60
  HX-series rack-mount server connectivity, 60–62
  HyperFlex configurations, 107–120
  UCS B-series blade server connectivity, 62
Stats Aggregator Service (controller service), 43
Stats Cache Service (controller service), 43
Stats Listener Service (controller service), 43
Stats web server (controller service), 44
statsd (controller service), 43
stHypervisorSvc, 43
stMgr (controller service), 43
storage
  cluster storage capacity, 236
  HCI, 4
  replication of storage spaces, 266
  UCS, 16
  UCS-S series storage servers, 23
storage clusters, 183
  adding drives, 225
  HyperFlex installation requirements, 94–95
  NTP servers, 96
  registration, 184–185
    controller VM, 188–189
    creating registration tokens, 185–187
storage controller VM
  health checks, 242–243
  time zones, 96
storfs (controller service), 43
stretch clusters, 41, 63–64, 76
  ACI requirements, 133
  expanding, 213–218
  FI, 132
  HyperFlex, 12, 130–150
  network IP addressing, 136–137
  stretch witness ports, 85
  UCSM requirements, 133
  VLAN requirements, 133
  VMware requirements, 133
  witness nodes, 133–136
  Witness VM, 132
stretch witness ports, HyperFlex installation requirements, 85
Support Bundle Settings page (HX Connect), 156–157
switches
  10 Gigabit Ethernet switch edge cluster configurations, 70–73
  Catalyst switches, clusters and, 88
  dual-switch configurations
    edge clusters, 68–70
    HyperFlex Edge, 300–301
  HyperFlex Edge
    dual-switch configurations, 300–301
    single-switch configurations, 299–300
  network switches, 31
    Nexus 5000 series, 31–33
    Nexus 9000 series, 33–36
  SAN switches, MDS 9000 multilayer director switches, 29–31
  single-switch configurations
    edge clusters, 66–68
    HyperFlex Edge, 299–300
  upstream switches
    clusters and, 88, 89
    DJL2, 89
  vSwitch switches
    HX Data Platform, 91
    HyperFlex installation requirements, 91–92
System Information page (HX Connect), 165–166
  Controller VM access, 167
  Disks page, 168–169
  Nodes page, 167
system management service (controller service), 43
system servers, HyperFlex installation requirements, 101
system services, HyperFlex installation requirements, 96

T
telemetry, Intersight, 293
.tgz file (upgrade bundle), 232
thin provisioning, 267
three-node clusters, 219, 245–247
time server-related ports, HyperFlex installation requirements, 82
time zones
  native snapshots, 193
  storage controller VM, 96
tomcat (controller service), 44
topologies
  edge clusters, 64–66
    10 Gigabit Ethernet switch configurations, 70–73
    dual-switch configurations, 68–70
    single-switch configurations, 66–68
  HyperFlex Edge
    logical topology, 301–304
    physical topology, 299
  replication, 265
  replication networks, 268
  split L2 topologies, 267–268
  standard clusters, 58–59
    FI connectivity, 59–60
    HX-series rack-mount server connectivity, 60–62
    UCS B-series blade server connectivity, 62
  stretch clusters, 63–64
traditional (non-converged) data centers, 1–2
Twinax
  VIC 1227 mLOM cards support, 54
  VIC 1387 mLOM cards support, 56
  VIC 1457 mLOM cards support, 57–58
type 1 (bare metal) hypervisors, 37
type 2 hypervisors, 37

U
UCS (Unified Computing System), 15
  blade firmware bundle, 232–233
  B-series blade servers, standard cluster topologies, 62
  components of, 15–16
  computing, 15
  FI
    edge clusters, 78
    HyperFlex installations, 78, 92
    network provisioning, 88
  HyperFlex and, 7–9, 92–93
  infrastructure bundle, 232–233
    firmware, upgrading, 248–261
  management, 16
  networks, 15
  pre-upgrade UCS server firmware (C-bundle), verifying, 234–235
  rack-mount firmware bundle, 232–233
  server firmware, Upgrade page (HX Connect), 173
  storage, 16
  UCS-B series, 16
    blade servers, 20
    UCS 5108 chassis, 19–20
    UCS 6200 series, 18–19
    UCS 6300 series, 17–18
    UCS 6454, 16–17
  UCS-C series rack servers, 22
  UCSM, 16, 20–22, 28–29
  UCS-S series storage servers, 23
  virtualization, 16
UCSM (UCS Manager), 16, 20–22
  cluster health, verifying, 238–239
  HyperFlex Edge cluster management, 321–322
  HyperFlex HX-series system management, 28–29
  HyperFlex installation requirements, 93
  ports, HyperFlex installation requirements, 86
  replication network connections, 271
  stretch cluster requirements, 133
  verifying pre-upgrade UCS server firmware (C-bundle), 234
updates, vSphere update manager, 38
Upgrade page (HX Connect)
  ESXi Upgrade, 173–174
  HX Data Platform Upgrade, 173
  Progress page, 174
  UCS server firmware, 173
upgrades
  cluster upgrades with HX Connect UI, 254–259
  combined upgrades, 252
  HX Data Platform Upgrade, 173
  HyperFlex, 230–231
    cluster health, 235–236
    cluster storage capacity, 236
    cluster upgrades with HX Connect UI, 254–259
    downloading component bundles, 232–233
    ESX Agent Manager, 237
    health checks via HyperCheck utility, 242–248
    Net.TeamPolicyUpDelay default values, 236–237
    pre-upgrade validation, 235–241
    pre-upgrade workflow, 231–232
    verifying cluster access policies, 241
    verifying DRS enabled status, 236
    verifying pre-upgrade UCS server firmware (C-bundle), 234–235
    verifying upstream network connectivity, 240–241
    verifying vMotion interfaces, 239
  split upgrades, 252
  UCS infrastructure firmware, 248–261
  Upgrade page (HX Connect), 172
upstream network connectivity, verifying, 240–241
upstream switches
  clusters and, 88, 89
  DJL2, 89
user-related ports, HyperFlex installation requirements, 84

V
VAAI (VMware API for Array Integration), 42
validation, HyperFlex pre-upgrade validation, 235–241
vCenter
  configuration requirements, 95
  servers, 38
VDI (Virtual Desktop Infrastructure)
  hypervisors, 36–37
Veeam Availability Suite
  changed block tracking, 288–289
  HyperFlex backups, 286–289
verifying
  cluster health in UCSM, 238–239
  DRS enabled status, 236
  Net.TeamPolicyUpDelay default values, 236–237
  pre-upgrade UCS server firmware (C-bundle), 234–235
  upstream network connectivity, 240–241
  vMotion interfaces, 239
VIB (vSphere Installation Bundles)
  IO Visor, 42, 44
  stHypervisorSvc, 43
  VAAI, 42
VIC (Virtual Interface Cards)
  installation, 52–53
  VIC 1227 mLOM cards, 52–53
    cable specification, 54
    SFP support, 54
    Twinax support, 54
  VIC 1387 mLOM cards, 53
    QSA support, 54–56
    QSFP support, 54–55
    SFP support, 56
    Twinax support, 56
  VIC 1455, direct connect mode cluster setup, 103
  VIC 1457, direct connect mode cluster setup, 103
  VIC 1457 mLOM cards, 53
    SFP support, 56–57
    SFP+10 Gbps transceivers, 56–57
    SFP28 25 Gbps transceivers, 57
    Twinax support, 57–58
Virtual Desktop Infrastructure (VDI), hypervisors, 36–37
Virtual Machines. See VM
virtualization
  Hyper-V, 38
    backups, 39
    computing environment, 39
    disaster recovery, 39
    optimization, 39
    portability, 39
    remote connectivity, 39
    security, 39
  UCS, 16
VLAN (Virtual Local Area Networks)
  HyperFlex installation requirements, 91–92
  replication network connections, 270–272
  stretch cluster requirements, 133
VM (Virtual Machines)
  base VM, ReadyClone support, 200
  controller VM
    access (HX Connect, System Information page), 167
    CPU resource reservations, 96–97
    HX220c M4 Controller VM, 99
    HX220c M5 Controller VM, 97–98
    HX240c M4 Controller VM, 99–100
    HX240c M5 Controller VM, 97, 98
    HXAF220c Controller VM, 97–98
    HXAF220c M4 Controller VM, 99
    HXAF240c M4 Controller VM, 99–100
    HXAF240c M5 Controller VM, 97–98
    locations, 97–100
    storage cluster registration, 188–189
  disaster recovery, snapshots, 267
  management
    datastores, 205–206
    ReadyClones, 199–205
    snapshots, 189–199
  protection groups, 276–281
    snapshots, 267
  ReadyClone support, 200
  recovery, 282–285
  SCVM, 42–43
  storage controller VM
    health checks, 242–243
    time zones, 96
  Virtual Machines page (HX Connect), 171–172
  Witness VM, stretch cluster installations, 132
vmkernel, 37
vMotion interfaces, verifying, 239
VMware
  ESXi, 37
    ESXi Offline Zip bundle, 232
    vmkernel, 37
  stretch cluster requirements, 133
  vCenter management, 29
vSphere
  clients, 38, 104–107
  ESXi, 37
    vmkernel, 37
    web client, 38
  HyperFlex installations, 79
  hypervisors, 37–38
  update manager, 38
  vCenter servers, 38
  web client, 38
vSwitch switches
  HX Data Platform, 91
  HyperFlex installation requirements, 91–92

W
web browsers. See browsers
Web CLI page (HX Connect), 174–175
web clients, vSphere web client, 38
Web server (controller service), 44
witness nodes, stretch clusters
  deployments, 134–136
  requirements, 133–134
Witness VM, stretch cluster installations, 132
workflows, HyperFlex installations, 103

X-Y-Z
znode, 46–47
Zookeeper, 46–47

Code Snippets

Many titles include programming code or configuration examples. To optimize the presentation of these elements, view the eBook in single-column, landscape mode and adjust the font size to the smallest setting. In addition to presenting code and configurations in the reflowable text format, we have included images of the code that mimic the presentation found in the print book; therefore, where the reflowable format may compromise the presentation of the code listing, you will see a “Click here to view code image” link. Click the link to view the print-fidelity code image. To return to the previous page viewed, click the Back button on your device or app.