Front cover

IBM Tape Solutions for Storage Area Networks and FICON

Exploit storage area networks (SANs) for your tape subsystem workload

Improve your tape performance with FICON

Share your tape libraries more effectively

Barry Kadleck Christina Coutts David McFarlane Paul Rizio

ibm.com/redbooks

International Technical Support Organization IBM Tape Solutions for Storage Area Networks and FICON December 2003

SG24-5474-03

Note: Before using this information and the product it supports, read the information in “Notices” on page vii.

Fourth Edition (December 2003) This edition applies to a range of IBM products used to configure a storage area network (SAN), including but not limited to the IBM SAN Data Gateway, the IBM Fibre Channel Switch, Magstar MP libraries, Ultrium libraries, and IBM Tivoli Storage Manager. © Copyright International Business Machines Corporation 2000, 2001, 2002, 2003. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices  vii
Trademarks  viii

Preface  ix
The team that wrote this redbook  ix
Become a published author  xi
Comments welcome  xi

Summary of changes  xiii
December 2003, Fourth Edition  xiii

Chapter 1. Storage area networks  1
1.1 The SCSI legacy  2
1.2 Storage networks  4
    1.2.1 Network Attached Storage  4
    1.2.2 Storage area network  5
1.3 The value of SANs  7
1.4 SAN overview and topologies  7
    1.4.1 Fibre Channel  8
    1.4.2 SAN topologies  8
    1.4.3 Extended distance fabrics  13
    1.4.4 SAN and NAS bridges and gateways  14
    1.4.5 Storage network positioning  14

Chapter 2. Tape in the SAN environment  17
2.1 Terminology  18
2.2 LAN backup model  19
2.3 SAN backup model  20
    2.3.1 LAN-free backup  20
    2.3.2 Server-free backup  22
2.4 Sharing and pooling  22
    2.4.1 Economics of sharing  25
2.5 Remote tape vaulting and disaster tolerance  26
    2.5.1 Fully redundant, disaster tolerant tape solutions  27
2.6 Storage virtualization  28

Chapter 3. SAN building blocks  31
3.1 IBM SAN Data Gateway SCSI Tape Router  32
3.2 IBM SAN Data Gateway  33
3.3 IBM TotalStorage SAN Controller 160  35
3.4 IBM Fibre Channel Storage Hub  36
    3.4.1 Hub configuration  37
3.5 IBM TotalStorage SAN Switch F08  38
3.6 IBM TotalStorage SAN Switch F16  39
    3.6.1 Product overview  40
    3.6.2 Hardware components  41
    3.6.3 Software specifications  47
    3.6.4 Interoperability  51
3.7 IBM TotalStorage SAN Switch M12  53
    3.7.1 M12 description  54
    3.7.2 M12 connectivity  56
    3.7.3 Intelligence within the M12  56
    3.7.4 Open SAN management  56
    3.7.5 Seamless upgrades and investment protection  57
3.8 INRANGE FC/9000 Fibre Channel Director  57
    3.8.1 INRANGE Director product description  57
    3.8.2 Supported attachments  58
    3.8.3 Supported port types  58
    3.8.4 Availability  59
    3.8.5 Scalable capacity  59
3.9 McDATA ES-1000 Loop Switch  64
    3.9.1 Product description  64
    3.9.2 High availability features  66
    3.9.3 Concurrent firmware upgrades  67
    3.9.4 Serviceability features  67
    3.9.5 ES-1000 zoning  68
3.10 McDATA ES-3216 and ES-3232 fabric switches  68
    3.10.1 Product description  68
    3.10.2 High availability features  69
    3.10.3 Setup configuration  70
    3.10.4 Management software  70
    3.10.5 Serviceability features  71
3.11 McDATA ED-6064 Director  71
    3.11.1 Product description  71
    3.11.2 Attachment  73
    3.11.3 Planning for 2 Gbps  73
    3.11.4 Port types  74
    3.11.5 Scalable configuration options  74

Chapter 4. Tape building blocks  79
4.1 SAN readiness  80
4.2 LTO tape subsystems  80
    4.2.1 Ultrium  80
    4.2.2 The IBM LTO Ultrium family of tapes and libraries  81
    4.2.3 IBM 3580 Ultrium tape drive  81
    4.2.4 IBM 3581 Ultrium tape autoloader  82
    4.2.5 IBM 3582 Ultrium Scalable tape library  83
    4.2.6 IBM 3583 Ultrium Scalable tape library  85
    4.2.7 IBM 3584 UltraScalable tape library  87
    4.2.8 IBM 3600 Series LTO tape automation family  91
4.3 IBM TotalStorage MP 3570 and 3575  92
    4.3.1 IBM 3570 drive and autoloader  93
    4.3.2 IBM 3575 library  93
4.4 IBM TotalStorage 3590  94
    4.4.1 IBM TotalStorage 3590 tape drives  95
    4.4.2 IBM 3590 cartridges  97
    4.4.3 IBM 3590 Model A60 ESCON and FICON control unit  97
    4.4.4 IBM TotalStorage 3592 tape drives  98
    4.4.5 IBM 3592 cartridges  100
    4.4.6 IBM TotalStorage 3494 Virtual Tape Server  102
4.5 Digital Linear Tape and SuperDLT  104
    4.5.1 IBM 3502 DLT tape library family  105
4.6 Other tape libraries  106
4.7 SAN readiness summary  107

Chapter 5. Tape solutions  109
5.1 Highly reliable, low cost backup and archive  110
    5.1.1 Backup for workgroups  110
    5.1.2 Backup for departments  113
    5.1.3 Backup for large departments and enterprises  115
5.2 Remote tape vaulting  118
    5.2.1 Workgroup and small department solution  119
    5.2.2 Large department and enterprise solution  122
5.3 Disaster tolerant tape implementation  125
    5.3.1 Disaster tolerance for departments and small enterprises  126
    5.3.2 Disaster tolerance for medium and large enterprise  129
5.4 Sharing S/390 tape library resources with open servers  133

Chapter 6. FICON attachment of IBM tapes  137
6.1 FICON overview  138
6.2 FICON infrastructure  139
    6.2.1 Channels  139
    6.2.2 Fiber connections  139
    6.2.3 Directors  140
    6.2.4 FICON control units  143
    6.2.5 FICON management  144
    6.2.6 Sharing with storage area networks  145
6.3 3590-A60 control unit  145
    6.3.1 Installation options  151
    6.3.2 Features  152
    6.3.3 Software support for FICON 3590-A60s  154
    6.3.4 Defining devices  154
    6.3.5 Performance  156
6.4 IBM 3592-J70 control unit  160
    6.4.1 Installation options  163
    6.4.2 Features  164
    6.4.3 Software support for FICON 3592-J70s  166
    6.4.4 Defining devices  166
6.5 IBM 3494 Virtual Tape Server  167
    6.5.1 Connectivity  168
    6.5.2 Software support  170
    6.5.3 Advanced functions with the VTS  170
    6.5.4 Feature codes  170

Appendix A. Fibre Channel discussion  173
Layers  174
    Lower layers  174
    Upper layers  174
Topologies  174
Classes of service  175

Appendix B. Terminology and other basics  177
StorWatch  178
SCSI  178
Fibre Channel terms  179

Related publications  181
IBM Redbooks  181
    Other resources  181
Referenced Web sites  182
How to get IBM Redbooks  182
Help from IBM  182

Index  183


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, AS/400®, DFSMS/MVS®, DFSMSdss™, DFSMShsm™, Enterprise Storage Server®, ES/3090™, ES/9000®, ESCON®, FICON®, FlashCopy®, IBM®, ibm.com®, Infoprint®, iSeries™, Magstar®, MVS™, Netfinity®, NUMA-Q®, OS/390®, OS/400®, PowerPC®, pSeries®, PTX®, Redbooks (logo)™, Redbooks™, RETAIN®, RS/6000®, S/390®, Seascape®, StorageSmart™, Tivoli®, TotalStorage®, VM/ESA®, VSE/ESA™, xSeries®, z/OS®, z/VM®, zSeries®, 3090™

The following terms are trademarks of other companies:

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.


Preface

The explosive growth of stored data, the increasing value of the data, and the fact that it is often distributed over multiple heterogeneous servers have created significant problems for backing up and archiving data. Also, the increased pressure for more productive IT time and less time for administrative tasks means that there is more data to back up in less time.

This IBM® Redbook explains how tape drives and tape libraries can use storage area networks (SANs) to solve these problems. It explains how you can exploit SANs to attach, share, and exploit IBM tape subsystems and tape libraries. The ability to share tape libraries across many hosts creates a tremendous financial advantage that can be an immediate benefit of implementing SANs in your enterprise. You can often achieve significant cost savings and increase data security by implementing the tape sharing and extended distance capabilities of a SAN.

This redbook also includes a practical description of the products and components that were made available with the IBM SAN product rollout. For a definitive guide to SANs and their implementation, refer to the redbook Designing an IBM Storage Area Network, SG24-5758.

Although the primary focus in this redbook is on SAN tape solutions, you must also understand advances in SAN-attached disk storage, advances in copy functions, and storage management software.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), San Jose Center.

Barry Kadleck is responsible for tape and Tivoli® Storage projects at the International Technical Support Organization, San Jose. He has worked with storage software and hardware for the last 10 years. He has a degree in electronic engineering and joined IBM United Kingdom (UK) in 1985. Before joining the ITSO in 2001, Barry worked at the Product Introduction Consultancy in Hursley, UK, as a program manager, introducing new Tivoli Storage products into Europe.

Christina Coutts is a tape specialist based in the United Kingdom. She has many years of experience in the IT field. She is a recognized expert in tape technology and its use in complex customer environments. Christina is often called upon to present at project launch seminars. Her skills are sought after to assist European customers with their implementation needs.

David McFarlane is a solutions designer from Moorooduc, Australia. He has 20 years of experience in the IT industry, mainly in the mainframe S/390® arena. He has worked at IBM for five years. His areas of expertise include DFSMS software, disaster recovery concepts, and 3494 and Virtual Tape Server (VTS) configuration. He also has expertise in overall design for the implementation of new tape technology.

Paul Rizio is a storage systems specialist in Australia with IBM Global Services. He has over 12 years of experience in the storage field. His area of storage expertise is mainly based on the S/390 platform. He has a sound background in operations and automated operations, and over the last year he has specialized in the 3494 and VTS environment (especially Peer-to-Peer (PtP) VTS subsystems). His areas of expertise include 3494 and VTS configuration, data conversion, and data center relocation projects.


The team from left to right: David McFarlane, Paul Rizio, Barry Kadleck, and Christina Coutts

The authors of the three previous editions of this redbook were:

Andrew Bentley
Jonathan Jeans
Wolfgang Kesse
Barbara Manger-Strassguetl
Jack McGuckian
Nigel Morton
Frank Schallmoser
Charles Silvan
Anthony Vulicl
Matthias Werner

Many thanks to the following people for their invaluable contributions to this project:

Yvonne Lyon
Barry Mellish
Jon Tate
IBM ITSO, San Jose Center

Joe Bacco
Thomas Bish
Wayne Carlson
Erika Dawson
Vernon Earle
James Fisher
Kevin Gibble
Dennis Haight
Sally Krusing
J.D. Metzger
Collette Morton
Christina Orosco
Jon Peake
Jeff Pilch
Craig Schultz
David Shields
Jesse Thrall
IBM SSG Tucson

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an Internet note to: [email protected]
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE Building 80-E2, 650 Harry Road, San Jose, California 95120-6099



Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-5474-03 for IBM Tape Solutions for Storage Area Networks and FICON as created or updated on December 8, 2003.

December 2003, Fourth Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information

- FICON® support for Virtual Tape Server (VTS) added
- 3590 H model drives information added



Chapter 1. Storage area networks

During the past 10 years, a major shift has occurred away from mainframe host-centric computing to distributed computing with multiple, lower-cost servers. Storage, for the most part, remains connected to the processor it services via a dedicated channel, such as a Small Computer Systems Interface (SCSI) bus. However, large numbers of often heterogeneous servers create a complex environment which is inefficient in capacity utilization, and is difficult and expensive to manage.

During this same period, the reliance on computing in every aspect of business has increased to the point that most companies cannot survive without information technology. It is easy to see that computers and software play a large part, but most people now recognize that data is the most valuable corporate asset. Therefore, it is essential to keep your data secure and have it where you want it, when you want it, and in the format you want.

More recently, the amount of data that a company needs to store and access has grown at an exponential rate. This is primarily due to the fact that the cost of communications and the technology to access the data has made the data available to a huge number of people. This rate of growth will continue to increase as more and more people shop and communicate online. It will also increase as businesses become better at exploiting the technology to become more efficient, reduce their time to market, and reach new markets. This is not just a boom-and-bust dot-com phenomenon, but a fundamental change in the way traditional brick-and-mortar companies are doing business, both at the business-to-business and business-to-consumer level.

As the value of data continues to grow, so does the need to protect it and make it available. It is now even more important to have fast, reliable backup of data and a long-term archive. However, the dilemma is that there is more data to back up and less time to do it. Fortunately, recent advances in storage infrastructure with storage area networks (SANs), together with advanced disk copy functions, automated tape library sharing, and storage management, have made it possible to significantly reduce or totally eliminate the impact of backups on productive work.


1.1 The SCSI legacy

SCSI is the conventional, server-centric method of connecting peripheral devices (disks, tapes, and printers) in the open client/server environment. As its name indicates, it was designed for the PC and small computer environment. It is a bus architecture, with dedicated, parallel cabling between the host and storage devices, such as disk arrays. This is similar in implementation to the Original Equipment Manufacturer's Information (OEMI) bus and tag interface commonly used by mainframe computers until the early 1990s.

SCSI shares a physical characteristic with the bus and tag interface: cables and connectors are bulky, relatively expensive, and prone to failure. As a parallel interface, signal skew limits the cable distances, again like the bus and tag interface.

The amount of data available to the server is determined by the number of devices which can attach to the bus, and by the number of buses attached to the server. Up to 15 devices can be attached to a server on a single SCSI bus. In practice, because of performance limitations due to arbitration, it is common for no more than four or five devices to be attached in this way, limiting capacity scalability. Arbitration is the process whereby a device gains exclusive control of the SCSI bus to allow the transfer of data and commands.

Access to data is lost in the event of a failure of any of the SCSI connections to the disks. This also applies in the event of re-configuration or servicing of a disk device attached to the SCSI bus, because all the devices in the string must be taken offline. In today's environment, when many applications need to be available continuously, this downtime is unacceptable.

The data rate of the SCSI bus is determined by the number of bits transferred, and the bus cycle time is expressed as a frequency in megahertz (MHz). Decreasing the cycle time increases the transfer rate. However, due to limitations inherent in the bus architecture, it may also reduce the distance over which the data can be successfully transferred. The physical transport was originally a parallel cable comprising eight data lines, to transmit eight bits in parallel, plus control lines. Later implementations widened the parallel data transfers to 16-bit paths (SCSI-wide) to achieve higher bandwidths.

Propagation delays in sending data in parallel along multiple lines lead to a well-known phenomenon known as skew. This means that all bits may not arrive at the target device at the same time. This limits the distance that devices can be positioned away from the initiating server to between 2 and 25 meters, depending on the cycle time. Faster speeds mean shorter distances. Figure 1-1 shows the distances as they refer to the maximum length of the SCSI bus, including all attached devices, with examples.


Figure 1-1 SCSI distance limitations. The figure compares three SCSI bus variants, each with one or two host systems, up to 15 devices per connection, and a terminated bus: Fast/Wide SCSI at 20 MB/sec up to 25 meters, Ultra Wide SCSI differential at 40 MB/sec up to 25 meters, and Ultra2 SCSI at 80 MB/sec up to 12 meters multidrop or up to 25 meters point-to-point.

For large servers with a need to attach many devices, the distance limitation can become a severe problem, and it limits the amount of peripheral sharing that can be done. Normally, only a single server can access data on a specific disk by means of a SCSI bus.

In a shared bus environment, it is clear that all devices cannot transfer data at the same time. SCSI uses an arbitration protocol to determine which device can gain access to the bus. Arbitration occurs before and after every data transfer on the bus. While arbitration takes place, no data movement can occur. This represents an additional overhead which reduces bandwidth utilization, substantially reducing the effective data rate achievable on the bus. Actual sustained data rates are typically less than 50% of the rated speed of the SCSI bus.

In addition to being a physical transport, SCSI is also a protocol. The protocol specifies commands and controls for sending blocks of data between the host and the attached devices. SCSI commands are issued by the host operating system in response to user requests for data. Some operating systems, for example, Microsoft® Windows® NT, treat all attached peripherals as SCSI devices, and issue SCSI commands to deal with all read and write operations.

It is clear that the physical parallel SCSI bus architecture has a number of significant speed, distance, and availability limitations which make it increasingly less suitable for many applications in today's networked IT infrastructure. However, since the SCSI protocol is deeply embedded in the way that commonly encountered operating systems handle user requests for data, it would be a major inhibitor to progress if we were obliged to move to new protocols.
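To make the arbitration overhead concrete, the short Python sketch below estimates the sustained throughput of a shared SCSI bus once per-transfer arbitration time is accounted for. It is only a rough model: the 64 KB transfer size and the 1 ms arbitration time are illustrative assumptions, not measured values, and the rated speeds are simply those shown in Figure 1-1.

# Rough model of how per-transfer arbitration overhead erodes the rated
# bandwidth of a shared SCSI bus. All inputs are illustrative assumptions.

def effective_rate_mb_s(rated_mb_s: float,
                        transfer_kb: float,
                        arbitration_ms: float) -> float:
    """Return the sustained data rate once arbitration time is accounted for."""
    transfer_s = (transfer_kb / 1024.0) / rated_mb_s   # time spent moving data
    overhead_s = arbitration_ms / 1000.0               # bus idle while arbitrating
    return (transfer_kb / 1024.0) / (transfer_s + overhead_s)

if __name__ == "__main__":
    for rated in (20.0, 40.0, 80.0):                   # Fast/Wide, Ultra Wide, Ultra2
        eff = effective_rate_mb_s(rated, transfer_kb=64.0, arbitration_ms=1.0)
        print(f"rated {rated:5.1f} MB/s -> ~{eff:4.1f} MB/s sustained "
              f"({eff / rated:.0%} of rated)")

The point the sketch illustrates is that the faster the rated bus speed, the larger the share of time lost to arbitration, which is why sustained rates well below the rated speed are common on busy shared buses.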


1.2 Storage networks

As the era of hard-wired computer screens sitting on managers' desks gave way to pervasive mobile computing with the advent of local area networks (LANs) and wide area networks (WANs), the rigid inflexibility of direct attached storage is rapidly being superseded by flexible storage networks.

Today's enterprise IT planners need to link many users of multivendor, heterogeneous systems to multivendor shared storage resources. They need to allow those users to access common data, wherever it is located in the enterprise. These requirements imply a network solution. Two types of network storage solutions are now available:

- Network Attached Storage (NAS)
- Storage area network (SAN)

It is important that you realize that NAS and SAN are not mutually exclusive. In fact, they are both likely to be found performing different functions within the same company. The differentiation between the two is becoming more blurred with the introduction of SCSI over the Internet Protocol (iSCSI) and NAS bridges or gateways. After looking at NAS and SAN separately, we attempt to position them.

1.2.1 Network Attached Storage

NAS solutions use the LAN in front of the server. They transmit data over the LAN using messaging protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP) and NetBIOS. Figure 1-2 illustrates this.

Figure 1-2 Network Attached Storage. Clients and servers (database, application, network backup, and Web servers) communicate over a local/wide-area network using messaging protocols (TCP/IP, NetBIOS) with LAN-attached storage such as an intelligent disk array, JBOD, and the IBM 3466 (NSM) Network Storage Manager.

By making storage devices LAN addressable, the storage is freed from its direct attachment to a specific server. In principle, any user running any operating system can address the storage device by means of a common access protocol, such as Network File System (NFS), a fundamental point of difference between NAS and iSCSI. In addition, a task, such as backup to tape, can be performed across the LAN, enabling sharing of expensive hardware resources between multiple servers. Most storage devices cannot attach to a LAN. NAS appliances are specialized file servers that are designed for this type of attachment.


Several vendors, including IBM, offer a variety of network storage solutions. These currently fall into two categories:

- File servers
- Backup-archive servers

NAS, therefore, offers a number of benefits, which address some of the limitations of parallel SCSI. However, by moving storage transactions, such as disk accesses, and tasks, such as backup and recovery of files, to the LAN, conflicts can occur with end-user traffic in the network. LANs are tuned to favor short burst transmissions for rapid response to messaging requests, rather than large continuous data transmissions. Significant overhead can be imposed to move large blocks of data over the LAN, due to the small packet size used by messaging protocols. For instance, the maximum packet size for Ethernet is about 1500 bytes. A 10 MB file has to be segmented into more than 7000 individual packets (each sent separately by the LAN access method and then reassembled) if it is to be read from an NAS device. Therefore, an NAS solution is best suited to handle cross-platform direct access applications, not to deal with applications requiring large data transfers.

NAS solutions are relatively low cost. They are also straightforward to implement, since they can fit into the existing LAN environment, which is a mature technology. However, the LAN must have plenty of spare capacity to justify NAS implementations, or as is often the case, a dedicated, parallel LAN is used for NAS traffic. LANs today typically run at 10 Mb/sec or 100 Mb/sec, but the relatively new Gigabit Ethernet is becoming more common. This requires a change of infrastructure from copper cables to fiber-optic cables. Even when the bandwidth of SAN and LAN are equivalent, as in the case of Gigabit Ethernet, a LAN is still best suited to message traffic and a SAN to storage data transfers.

NAS typically operates at a file level, where a user attached to an NAS appliance (usually a file server) requests a specific file. This can be a problem in the case of large files such as databases. Storage works in blocks and has no knowledge of files. Blocks map to groups of sectors on the storage media where the data is actually stored. The file table that maps files to blocks resides in the server and is maintained by the server's operating system, not in the storage device. The block table, which maps blocks to sectors, resides in the storage device. SCSI commands, which are used to read and write data to and from the storage devices, contain block information, not file information.

Why is all this relevant? It is because of the emerging iSCSI implementation. iSCSI enables SCSI commands to be sent over IP networks, hence the name. This means that applications connected to an NAS device can request smaller amounts of data by issuing SCSI commands. At first glance, this may appear to be a panacea for NAS and a death knell for SANs. However, it is still subject to the overhead of LANs, and most applications work on files.

You can use NAS separately or together with a SAN, since the technologies are complementary. In general terms, NAS offers lower cost solutions, is quick and easy to install, and makes file sharing simpler, but with lower performance and less scalability than Fibre Channel (FC) SANs.
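The packet-count figure quoted above is easy to reproduce. The following Python sketch is illustrative only: it assumes a nominal 1500-byte Ethernet payload and ignores protocol headers, retransmissions, and jumbo frames, so the real packet count would be somewhat higher.

# Back-of-the-envelope check of the segmentation overhead quoted above:
# how many Ethernet packets does it take to move a 10 MB file off a NAS device?

import math

ETHERNET_PAYLOAD_BYTES = 1500          # nominal maximum Ethernet payload
FILE_SIZE_BYTES = 10 * 1024 * 1024     # a 10 MB file

packets = math.ceil(FILE_SIZE_BYTES / ETHERNET_PAYLOAD_BYTES)
print(f"{FILE_SIZE_BYTES} bytes / {ETHERNET_PAYLOAD_BYTES} bytes per packet "
      f"= {packets} packets")          # roughly 7000 packets, in line with the text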

1.2.2 Storage area network

A SAN is a specialized, high-speed, intelligent, and well-designed network attaching servers and storage devices. A SAN allows an any-to-any connection across the storage network, using interconnect elements such as routers, gateways, hubs, and switches. It eliminates the traditional dedicated connection between a server and storage, and the concept that the server effectively owns and manages the storage devices. It also eliminates any restriction on the amount of data that a server can access, currently limited by the number of storage devices that can be attached to the individual server. Instead, a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. The utility may comprise many storage devices, including disk, tape, and optical storage. The storage utility may be located far from the servers which use it. Figure 1-3 shows how a SAN may look.

Figure 1-3 Storage area network. Clients reach servers over a local/wide-area network using messaging protocols (TCP/IP, NetBIOS); the servers connect through a storage area network, using I/O protocols such as SCSI, ESCON, and FICON, to disk storage arrays, non-RAID disks, and tape libraries.

A SAN differs from traditional networks because it is constructed from storage interfaces. SAN solutions use a dedicated network behind the servers, based primarily (although not necessarily) on Fibre Channel architecture. Fibre Channel provides highly scalable bandwidth over long distances, with the ability to provide full redundancy, including switched, parallel data paths to deliver high availability and high performance. Therefore, a SAN can bypass traditional network bottlenecks. It supports direct, high-speed transfers between servers and storage devices in the following ways:

- Server to storage: This is the traditional method of interaction with storage devices. The SAN advantage is that the same storage device may be accessed concurrently by multiple servers.
- Storage to storage: An example is that a disk array can back up its data directly to tape across the SAN, without processor intervention. Or, a device can be mirrored remotely across the SAN.
- Server to server: This involves high-speed, high-volume communications between servers. It is not typical of SAN usage but is included here for the sake of completeness.

A SAN changes the server-centric model of the typical open systems IT infrastructure, replacing it with a data-centric infrastructure. A SAN is concisely defined as a dedicated, centrally managed, secure information infrastructure, which enables an any-to-any interconnection of servers and storage systems.

1.3 The value of SANs

SANs solve the three key limitations inherent in parallel SCSI attached storage: distance, speed, and connectivity. Removing these restrictions has enabled value added functions that were either not possible or not practical before:

- Improved data availability: SAN connectivity makes it easy to have multiple paths to data, and for multiple servers to access the same data.
- Improved data access: Applications can share data between themselves if appropriate data sharing techniques are used.
- Reduced storage management and infrastructure costs: By consolidating disk and tape devices, multiple servers share the same storage devices, reducing the number of tape drives and libraries you need, and making it easy for you to reassign unused capacity between servers. You can make further significant savings by implementing a common storage management strategy, which reduces complexity and improves storage administration productivity.
- Improved disaster tolerance: You can implement improved disaster tolerance solutions, especially for open servers. Previously, disaster tolerance at a remote site was typically implemented by server-to-server communications over a LAN. Now, data can be mirrored to the remote site at the storage level, directly from storage to storage, freeing server and LAN resources.
- Backup window reduction: You can eliminate or significantly reduce the backup window by creating an almost instantaneous FlashCopy, and then performing the backup from the copy while the original data is still being used for productive work.
- LAN-free and server-free backup: You use the SAN to transfer backup data between disk storage and tape. You only need to send control or setup data across the LAN.
- Remote tape vaulting: Remote tape vaulting eliminates the need to physically transport tapes between sites. This significantly increases reliability by eliminating handling damage and the potential for loss. It also improves availability because the data you need to restore is never in transit.

Not all of these advantages may be available on all platforms. However, significant resource is being deployed to increase the already large number of solutions that are available. This redbook describes some of the solutions that you can implement today.

1.4 SAN overview and topologies

This section provides a general overview of Fibre Channel architecture and SAN topologies. It places particular emphasis on how you can use tapes. For a better understanding of Fibre Channel and SAN technology, refer to the following redbooks:

- Introduction to Storage Area Networks, SG24-5470
- Designing an IBM Storage Area Network, SG24-5758
- IBM SAN Survival Guide, SG24-6143

1.4.1 Fibre Channel

Today, Fibre Channel is well established in the open systems environment as the underlying architecture of the SAN. Fibre Channel is a technology standard that allows data to be transferred from one network node to another at very high speed. Current implementations transfer data at 100 MB/sec or 200 MB/sec. However, 400 MB/sec data rates have already been tested, and products for 1 GB/sec are being planned. This standard is backed by a consortium of leading vendors and is accredited by the American National Standards Institute (ANSI).

Note: The word fibre in Fibre Channel is deliberately spelled this way because the interconnections between nodes are not necessarily based on fiber optics, but can also be based on copper cables.

Fibre Channel is structured with independent layers, as are other networking protocols. There are five layers, where 0 is the lowest layer. The physical layers are 0 to 2. These layers carry the physical attributes of the network and transport the data created by the higher level protocols, such as SCSI, TCP/IP, or FICON. As you can see in Figure 1-4, these protocols can use the two top layers (the session and transport layers) to move data segments. Each segment is then rolled into a packet, which in turn is rolled into a frame. The originator creates the frame and sends it to the destination, which unravels the frame back to a segment. The SCSI protocol is extended to SCSI-3 to allow the protocol to travel across the Fibre Channel hardware layers.

Figure 1-4 Fibre Channel protocol layers. Upper layer protocols such as TCP/IP and SCSI use the session layer (4) and transport layer (3); the network layer (2), data link layer (1), and physical layer (0) form the Fibre Channel physical layers. Data moves down the stack as a segment, then a packet, and finally a frame.
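To make the segment-packet-frame wording concrete, here is a small, purely illustrative Python sketch of that style of layered encapsulation. The class names, header fields, and their contents are invented for the example and do not reflect the actual frame formats defined by the Fibre Channel standard.

# Toy illustration of upper-layer data being wrapped on the way down the stack:
# an upper-layer segment is rolled into a packet, which is rolled into a frame.
# Header contents below are made up for the example.

from dataclasses import dataclass

@dataclass
class Segment:          # produced by an upper-layer protocol such as SCSI or TCP/IP
    payload: bytes

@dataclass
class Packet:           # network-layer wrapper (NH = network header)
    nh: str
    segment: Segment

@dataclass
class Frame:            # data link / physical wrapper (DLH = data link header)
    dlh: str
    packet: Packet

def originate(data: bytes, source: str, destination: str) -> Frame:
    """Originator side: wrap upper-layer data into a frame ready for the link."""
    segment = Segment(payload=data)
    packet = Packet(nh=f"{source}->{destination}", segment=segment)
    return Frame(dlh="port-to-port link header", packet=packet)

def unravel(frame: Frame) -> bytes:
    """Destination side: strip the wrappers to recover the original segment."""
    return frame.packet.segment.payload

if __name__ == "__main__":
    frame = originate(b"SCSI WRITE command + data", "server", "tape_drive")
    assert unravel(frame) == b"SCSI WRITE command + data"
    print(frame)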

1.4.2 SAN topologies

Fibre Channel-based networks support three types of topologies:

- Point-to-point
- Loop (arbitrated)
- Switched

These can be implemented separately or interconnected to form a fabric. The fabric can also be extended to cover even greater distances. Currently, the distance limitation for synchronous data transfers is about 100 km, and it is virtually unlimited for asynchronous data transfers (see 1.4.3, "Extended distance fabrics" on page 13). Synchronous data transfers require an acknowledgement that the data arrived at the receiving end before the data is discarded at the transmitting end, whereas asynchronous data transfers do not require acknowledgement before the data is discarded.

Point-to-point

The point-to-point topology is the easiest Fibre Channel configuration to implement. It is also the easiest to administer. You can use this simple link to provide a high-speed interconnection between two nodes, as shown in Figure 1-5. A node is any device with one or more Fibre Channel ports.

Figure 1-5 Fibre Channel point-to-point topology. A server and a storage device are connected directly, with 100 MB/s in each direction for 200 MB/s (100 x 2) full duplex.

Because connectivity is limited to two nodes, the exploitation of point-to-point in tape environments is limited. However, the distance between nodes can be up to 10 km, which enables a tape library to be located at a different site. When greater connectivity and performance are required, each device can be connected to a fabric without incurring any additional expense beyond the cost of the fabric itself.

Loops and hubs

The Fibre Channel Arbitrated Loop (FC-AL) offers relatively high bandwidth and connectivity at a low cost. For a node to transfer data, it must first arbitrate to win control of the loop. After the node has control, it is free to establish a virtual point-to-point connection with another node on the loop. After this point-to-point (virtual) connection is established, the two nodes consume all of the loop's bandwidth until the data transfer operation is complete. When the transfer is complete, any node on the loop can arbitrate to win control of the loop.

The characteristics of an FC-AL include:

- Support of up to 126 devices is possible on a single loop.
- Devices can be hot-swapped with the implementation of hubs and bypass ports.
- A loop is self-discovering. It finds out who is on the loop and tells everyone else.
- Logic in the port allows a failed node to be isolated from the loop without interfering with other data transfers.
- Virtual point-to-point communication is possible.
- A loop can be interconnected to other loops, essentially forming its own fabric.
- A loop can be connected to a suitable Fibre Channel switch to create fan-out, or the ability to increase the size of the fabric even more. Note that not all switches support direct attachment of loops. An intermediate edge switch may often be used.

Other characteristics to consider are:

- Arbitration means that only one device can be talking (writing or reading data) at any one time. This is particularly important if multiple servers are to be attached to the same loop for tape library sharing. After a device has won arbitration and holds a virtual connection, other servers are locked out and must wait. Because storage devices usually cannot write and read at the same time, only one half of the full duplex bandwidth is being used effectively.
- The hub is a single point-of-failure, because the loop has to be complete for it to work. More advanced FC hub devices support FC loop connections while offering some of the benefits of switches.

Figure 1-6 shows an FC loop using a hub.

Figure 1-6 Fibre Channel loop topology. Nodes are connected through a hub to form a Fibre Channel Arbitrated Loop.
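The serialization that arbitration imposes can be illustrated with a small calculation. The Python sketch below compares when three hypothetical backup jobs finish if they must take turns on a single arbitrated loop versus running concurrently over dedicated switched paths. It is a deliberate simplification: the 100 MB/s link speed and the job sizes are assumptions, real FC-AL arbitration (AL_PA priorities, fairness) is not modeled, and in practice the tape drives themselves may be the bottleneck.

# Simplified illustration of why loop arbitration matters for tape library sharing:
# on an arbitrated loop, transfers are serialized (one owner at a time), while a
# switched fabric lets each server use its own link concurrently.

LINK_MB_PER_SEC = 100.0   # nominal Fibre Channel link speed (assumption)

def loop_completion_times(transfers_mb):
    """Transfers run one after another; returns elapsed seconds when each ends."""
    elapsed, times = 0.0, []
    for mb in transfers_mb:
        elapsed += mb / LINK_MB_PER_SEC
        times.append(elapsed)
    return times

def switch_completion_times(transfers_mb):
    """Each server has a dedicated switched path, so transfers run in parallel."""
    return [mb / LINK_MB_PER_SEC for mb in transfers_mb]

if __name__ == "__main__":
    jobs = [500.0, 200.0, 50.0]   # MB that each of three servers wants to move
    print("loop   :", loop_completion_times(jobs))    # [5.0, 7.0, 7.5] seconds
    print("switch :", switch_completion_times(jobs))  # [5.0, 2.0, 0.5] seconds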

Switches

Fibre Channel switches function in a manner similar to traditional network switches to provide increased bandwidth, scalable performance, an increased number of devices, and, in some cases, increased redundancy. Fibre Channel switches vary in the number of ports and media types they support. Multiple switches can be connected to form a switch fabric capable of supporting a large number of host servers and storage subsystems, as shown in Figure 1-7. When switches are connected, each switch's configuration information has to be copied (cascaded) into all the other participating switches.

Switched fabrics provide the richest function for the tape environment, providing distance for remote tape vaulting and disaster tolerance, multi-server connectivity for tape library sharing, and multiple paths for redundancy. Switches can be connected together, or cascaded, as shown in Figure 1-7, to provide extra paths and larger networks.


Figure 1-7 Fibre Channel switched topology (two interconnected Fibre Channel switches).

Switched fabric without cascading

A switched point-to-point fabric without cascading can support the configurations required by very large systems and a large number of servers. A configuration can be set up to allow every server to have access to every switch, and every controller to be connected to at least two switches. This allows any server to access any controller or device. It also allows for continuous operation (although with degraded performance) in the event that a switch fails. Figure 1-8 shows an example of a fabric with non-cascaded switches.

Figure 1-8 Fibre Channel switched topology (non-cascaded switches)

Switched fabric with cascading

A switched fabric with cascading provides interconnections between switches, so that the collection of switches looks like one large, any-to-any switch. Fabric management becomes more complex than with basic switched point-to-point configurations. There are often limits imposed by product vendors on the number of switch-to-switch hops permitted when negotiating the fabric. Interswitch links can fail and must be identified (many switch vendors do not yet support any reporting on interswitch links). Traffic can be routed in many ways. For technical, security, or other reasons, various levels of zoning (specifying access authority to connected ports or devices) or other mechanisms may be used to restrict the any-to-any access. Performance monitoring, and the configuration changes or upgrades needed to keep the network performing adequately, are more complex.

The primary advantage of a switched cascaded fabric is that it looks like a large logical switch, where a single connection provides access to any other port on the total set of switches, as shown in Figure 1-9.

Figure 1-9 Fibre Channel switched topology (cascaded switches)

1.4.3 Extended distance fabrics

Extending distance across a WAN is a big topic. It is covered only briefly here to show what you can achieve, but not how you can achieve it. As stated earlier, the distance of a SAN can be extended beyond the normal 10 km achieved with Fibre Channel longwave GBICs to about 100 km for synchronous transfers, and to virtually unlimited distances for asynchronous transfers over a WAN. In both cases, additional components are required to extend the fabric.

Synchronous transfer is limited by the speed of light in glass. A Fibre Channel frame in which data is packaged spans just over 2 km of fibre. Therefore, at 10 km there can be almost five frames in the pipeline, and at 100 km almost 50 frames. The Fibre Channel architecture includes a feature known as buffer credits, where the sender is only allowed to send as many frames as it has received buffer credits from the receiver. To maintain performance at distances above 10 km, and to fully use the potential bandwidth, a switch can use multiple frame buffers. The buffers allow the switch to retain the data until it is successfully acknowledged. The switch uses these buffers together with a greatly increased number of buffer credits for the port that is attached to the remote switch. The number of buffer credits is normally controlled by a software feature in the switch’s operating system. For example, in the IBM 2109 switch, the Extended Fabrics feature must be licensed.

In addition to extra frame buffers, there must also be a mechanism to physically send the light over longer distances. Many products are available to do this. Two examples are the Finisar Optilinx 2000 Optical Link Extender and the IBM 2029 Fibre Saver (see Figure 1-10).

Figure 1-10 Extended distance
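To put rough numbers on the frame-pipeline reasoning above, the rule of thumb can be turned into a small calculation. The following Python sketch is purely illustrative: it uses the text's figure of roughly 2 km of fibre per frame and does not reflect how any particular switch computes its Extended Fabrics settings.

```python
import math

def frames_in_pipeline(distance_km, frame_extent_km=2.0):
    """Estimate how many frames are in flight, one way, on a long link.

    frame_extent_km is the rule of thumb from the text: a Fibre Channel
    frame occupies just over 2 km of fibre.
    """
    return distance_km / frame_extent_km

def min_buffer_credits(distance_km, frame_extent_km=2.0):
    """Minimum buffer credits needed to keep the link full one way.

    Real sizing is usually more generous, because credits are only
    replenished when the receiver's acknowledgements travel back.
    """
    return math.ceil(frames_in_pipeline(distance_km, frame_extent_km))

print(frames_in_pipeline(10))    # ~5 frames, as stated in the text
print(min_buffer_credits(100))   # ~50 credits for a 100 km link
```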

Even greater distances can be achieved by connecting to an asynchronous transfer mode (ATM) WAN. This requires the use of additional components such as the CNT Open Systems Gateway to convert the SAN to WAN and back again. Figure 1-11 shows a simple example.

Figure 1-11 Extended distance with a WAN

1.4.4 SAN and NAS bridges and gateways

Because SAN is good at storage connectivity and NAS is good for sharing and ease of implementation, why not combine the two? This is exactly what NAS bridges (also called gateways or NAS heads) do. An NAS gateway is simply an NAS appliance without its own storage that connects to a SAN. This enables all the servers connected to the same LAN as an NAS gateway to access the same storage pool as the SAN-attached servers. The benefits of this are:
- Better capacity usage, because NAS and SAN share a common storage pool
- A common backup and restore strategy for the whole storage environment
- Extending SAN to the IP network and providing file system and file server function to the SAN

1.4.5 Storage network positioning

The industry seems to be polarizing into a storage network battle between NAS and SAN. This is largely because most of the players can only do one or the other effectively. Therefore, they try to promote the solution they have. It is usually possible to make either work, but doing so may not be the most efficient or cost effective solution. IBM has a growing number of NAS and SAN solutions and experience in both.

The following is a high-level view of storage network positioning:
- Direct-attached storage is optimized for single, isolated servers and a low initial cost. It is usually limited in scalability and connectivity.
- SAN is optimized for high performance, distance, and scalability using Fibre Channel links.
- NAS is optimized for ease-of-implementation and file sharing using lower-cost Ethernet-based connections.
- iSCSI provides some of the benefits of SAN at a lower cost, but at lower performance.

We conclude by showing a simple summary of these relative positions in Figure 1-12.

Figure 1-12 Storage network technology positioning (axes: block-level versus file-level access, and high versus low volume; SAN and NAS Gateway in the high-volume row, iSCSI and NAS in the low-volume row)


Chapter 2. Tape in the SAN environment

Along with higher bandwidth, storage area networks (SANs) allow greater connectivity of storage devices at greater distances. This enables more advanced functions such as tape and library sharing, remote tape vaulting, and disaster tolerance. In the next few chapters, you may notice that the SAN infrastructure used to implement different solutions is very similar. This means that, after you establish a basic SAN infrastructure, you can quickly and easily match your IT capability to your changing business needs. You become much more responsive, yet at a lower cost.

Even though most tape libraries and tape drives currently have parallel Small Computer Systems Interface (SCSI) interfaces that limit connectivity and distance, they can still take advantage of the full SAN function by connecting to the SAN fabric through inexpensive SCSI to FC gateways. Refer to Chapter 3, “SAN building blocks” on page 31, for more information about SAN components.

Most tape libraries are used for backup and restore. However, the local area network (LAN) and SAN backup models discussed in this chapter apply equally well to archive and near-line storage. The fundamental difference between the LAN model and the SAN model is how the backup data is transported. In the LAN model, all the data is sent over the LAN. In the SAN model, only metadata (control data) is sent over the LAN, while the backup data is transferred over the SAN. This creates the opportunity for LAN-free backup (Figure 2-4 on page 21) and server-free backup (Figure 2-5 on page 22).


2.1 Terminology

There is no industry standard definition for terms such as tape library sharing and tape drive pooling. The following sections define what these terms mean in this redbook.

Tape library

A tape library consists of the physical robotics that move cartridges, one or more tape drives, and slots for tape storage. It must also have a mechanism for controlling the robotics (a library controller). It may also have a library manager that maintains inventory and mediates sharing. In most cases, a library does not have a built-in library manager, so server-based software has to provide the library management function. For example, the IBM 3584 and IBM 3494 have built-in library managers, whereas the IBM 3583 does not.

Tape library sharing

Multiple servers attached to a tape library share the robotics. The tape drives and slots within the library may be shared among the attached servers.

Tape drive pooling or tape drive sharing

This is the dynamic sharing of one or more tape drives between multiple servers.

Partitioning

Partitioning is the ability to divide tape drives and slots to create logical libraries within the same physical library. The server attached to each logical library has no knowledge of any drives or slots outside the partition. The partitions are fixed.

Multipath

This is an architecture of the tape library in which the library has multiple paths to the robotics controller or SCSI Medium Changer (SMC). The IBM 3584 and IBM 3575 are multipath libraries. A multipath architecture is a prerequisite for partitioning. The IBM 3583 has a single path architecture. See Figure 2-1.

Figure 2-1 Schematic diagram of single path and multipath

Virtual tape library

In a virtual tape library, such as the IBM Virtual Tape Server (VTS), data is stored at high speed onto disk. It is then intelligently staged to a small number of physical tape drives. The performance and intelligent staging allow the VTS to appear as a large number of virtual tape drives. This is particularly useful for OS/390® environments, where tape utilization can be poor. In the open server environment, this is not a problem. However, if you already have a VTS installed, it can be partitioned and connected to an open system SAN.

Backup server and backup client

A backup server is a server running the master or control version of the backup software. The entire server may be dedicated to the backup task, or it may run other applications. A backup client is a server running productive applications with the backup client software installed.

2.2 LAN backup model

Figure 2-2 shows the LAN network backup model used by most of the popular backup solutions (for example, Tivoli Storage Manager, Legato NetWorker, Veritas NetBackup, EDM, and Computer Associates ARCserve). The data to be backed up is passed over the LAN to a centralized backup server. This server then manages the tape drives and library so that each client server does not need an attached tape drive.

Figure 2-2 LAN backup model

This model simplifies the backup process, makes it more reliable, and generally lowers the overall cost of backup, since fewer tape drives are needed. However, this model is not particularly suitable if the amount of backup data is large, because both the library commands and the data are transferred across the LAN. For large database servers and other applications using large files, incremental backups are generally not practical, and full backups require a lot of network bandwidth. Thus, we often allow a large database server to retain access to its own tape drives, with its own copy of the backup package running on that server. This increases the requirements for tape drives, which increases the cost.


The IBM 3466 Network Storage Manager provides a pre-packaged solution for implementing backup over the LAN and getting the benefits of shared tape.

2.3 SAN backup model

In the SAN backup model (Figure 2-3), backup data is transferred across the SAN. Only metadata is transferred across the LAN. Metadata is data about data: control information that describes the files that are being transferred. Since the disk storage and tape storage share the same SAN, it also becomes possible to transfer data directly between disk and tape. This is called server-free backup. It is also much easier for multiple servers to share a tape library or share a pool of tape drives. These functions are considered in more detail later in this chapter.

Figure 2-3 SAN backup model

2.3.1 LAN-free backup

Backup is traditionally done by transferring the data to be backed up over the LAN. The advent of SANs enables data to be transferred directly over the SAN from disk storage to the backup server and then directly over the SAN to tape. Tivoli Storage Manager (from Version 3.7) supports a LAN-free backup mode of operation, enhanced in Tivoli Storage Manager 4.2.

A LAN-free backup is done by a backup server using the SAN topology and functions of Fibre Channel (FC) to move the backup data over the SAN, eliminating the LAN from the data flow. This does two things. First, the LAN traffic is reduced. Second (and most importantly), the traffic through the backup server is reduced. This traffic is generally processor-intensive because of TCP/IP translations. With LAN-free backup, the backup server orchestrates the data movement, manages the tape library and drives, and tells the clients what data to move. The client is connected to the SAN. Its data can be on the SAN, or it can be on storage directly attached to the server, as shown in Figure 2-4.


Figure 2-4 LAN-free backup

The LAN is still used to pass metadata back and forth between the backup server and the client. However, the actual backup data is passed over the SAN. The metadata is the data needed by the backup server to manage the entire backup process. It includes such things as the file name, the file location, the date and time of the data movement, and where the new copy resides. The metadata is small compared to the actual client data being moved.

In Step 1 of Figure 2-4, the backup server talks with the client to determine what data needs to be moved. After the backup server determines what it wants done with the data, it sends the request to the client. In this case, we show a request to move the data to the backup disk storage pool, as may happen in a Tivoli Storage Manager environment. The client’s own data is not part of the SAN topology in this case. Therefore, the client (in Step 2) simply moves the data from its own disk to the backup server’s storage pool, as instructed by the backup server. When the move is complete, the client notifies the backup server.

This reduces the system processor overhead on the backup server dramatically. When data moves through the server over the LAN, about half of the server’s processing power is used to process the TCP/IP transactions. For this example, we assume that the backup server uses its normal (non-SAN) processes to move the data from the storage pool to tape. Later, we show how this is further improved when the backup server uses the SAN directly for its own workload.


2.3.2 Server-free backup

Server-free (or server-less) backup refers to the ability to take a snapshot of the data to be backed up with minimal or no disruption to productive work, and then move it intelligently between tape and disk without the data going through a server. Figure 2-5 shows this type of backup. All the elements to achieve this backup are available, and several software solutions currently exist. The key elements are:
- Creating an instant copy: This can be done either by host-based software or by copy functions in the disk storage server itself, such as FlashCopy® in the IBM TotalStorage® Enterprise Storage Server® (ESS) or the Modular Storage Server (MSS).
- Mapping physical blocks to logical files: The physical data blocks must be mapped to the file system or database, and the mapping process managed.
- Moving the data directly between the disk and tape: This is done using the SCSI extended copy command. The component that performs the SCSI commands can theoretically be in one of the SAN fabric components, or it can be software that runs on a server.

Current implementations of server-free backup, such as Tivoli Storage Manager (from Version 5.1), Legato Celestra Power, and Veritas NetBackup, manage the whole process by software. The sketch that follows illustrates the sequence of these steps.
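The following Python sketch is a minimal, purely conceptual outline of that three-step flow. The class and function names (flash_copy, map_files_to_blocks, issue_extended_copy) are invented stand-ins, not the API of Tivoli Storage Manager or any other product; they only show the order in which snapshot creation, block mapping, and the SCSI extended copy are applied.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Extent:
    """A contiguous run of physical blocks backing part of a file."""
    start_block: int
    block_count: int

@dataclass
class Snapshot:
    extents: List[Extent] = field(default_factory=list)

    def map_files_to_blocks(self) -> List[Extent]:
        return self.extents

    def release(self) -> None:
        self.extents = []

def flash_copy(volume_extents: List[Extent]) -> Snapshot:
    # Stand-in for FlashCopy or a host-based instant copy.
    return Snapshot(extents=list(volume_extents))

def issue_extended_copy(source: Extent, destination: str) -> None:
    # Stand-in for the SCSI extended copy command sent to a data mover
    # in the fabric (or to a server acting as one).
    end = source.start_block + source.block_count - 1
    print(f"copy blocks {source.start_block}..{end} -> {destination}")

def server_free_backup(volume_extents: List[Extent], tape_drive: str) -> None:
    snapshot = flash_copy(volume_extents)           # 1. instant copy
    for extent in snapshot.map_files_to_blocks():   # 2. block mapping
        issue_extended_copy(extent, tape_drive)     # 3. disk-to-tape move
    snapshot.release()

server_free_backup([Extent(0, 1024), Extent(4096, 512)], "tape0")
```

Because the data never passes through the backup server's memory, the server's only remaining work is the metadata bookkeeping.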

Figure 2-5 Server-free backup

2.4 Sharing and pooling

Library sharing has been practiced for some time by partitioning a physical library into multiple logical libraries. Alternatively, the library can appear to be shared by multiple hosts when, in reality, one of the hosts (the library manager) is issuing all the library commands both for itself and for the other hosts (clients), but all of them have direct access to the tape drives (tape pooling).

At this point, we need to clarify the terms homogeneous and heterogeneous as they apply to library sharing. The terms refer to the library management or backup software, not to the platforms on which they are running. For example, the IBM Tivoli Storage Manager library manager may be running on an IBM eServer xSeries®, and several Tivoli Storage Manager clients may be running on a mixture of HP-UX, Sun Solaris, and Windows NT servers. This is still a homogeneous environment, because Tivoli Storage Manager is the only application using the library, although the other hosts may be sending data directly to the tape drives. We have a heterogeneous environment when different software applications access the library, in which case there is no master control. Therefore, the library needs to be logically partitioned or to have its own built-in library manager, such as in the IBM 3494. Table 2-1 shows the relationship between homogeneous and heterogeneous applications with respect to library and drive sharing.

Table 2-1 Library and drive: Partitioning versus sharing

Homogeneous application sharing
  - Tape library sharing: Solved by the backup application (such as CA ARCserve or Tivoli Storage Manager) or by tape library hardware partitioning (IBM 3584, 3575, or 3494)
  - Tape drive sharing: Solved by the backup application (such as CA ARCserve or Tivoli Storage Manager)

Heterogeneous application sharing
  - Tape library sharing: Solved by library management middleware or by tape library hardware partitioning (IBM 3584, 3575, or 3494)
  - Tape drive sharing: Solved by library management middleware (such as Veritas Media Librarian or Legato SmartMedia)

Prior to SAN, sharing single-path libraries was quite difficult due to the physical restrictions of attaching multiple hosts to the library controller. However, with SAN, all hosts now have access to all devices including the library controller. Figure 2-6 shows the relationships between single path and multipath libraries, and library and robotic sharing.

Figure 2-6 Single-path and multipath sharing (a single-path library such as the 3583 presents a single logical library to a homogeneous application; a multipath library such as the 3584 or 3575 can present a single logical library or multiple logical libraries to heterogeneous applications)


The physical topology for library sharing and tape pooling is the same. The only requirement is that all the hosts have access to all of the drives and the library. The difference is in the software applications on the server.

There may also be other reasons that restrict library sharing. For example, the IBM eServer iSeries™ servers (AS/400®) require that, for each input/output processor (IOP) or input/output adapter (IOA) that has a tape drive on its bus, the associated library control path is on the same bus as the tape drive. This means that, for a single path library such as the IBM 3583, only one iSeries can attach to it. Furthermore, only one IOP or IOA in the iSeries can attach to the library. However, multiple iSeries servers can share an IBM 3584 because it has a multipath architecture. Therefore, every iSeries server connected to a tape drive is also on the same bus as the library controller. With the 3584, the library controller appears as LUN 1 for each SCSI ID. Even in this case, where library sharing exists, the iSeries software, Backup Recovery and Media Services (BRMS), does not allow tape pooling, so an iSeries can only be connected to one tape drive in the library.

Figure 2-7 shows multi-host tape sharing using a SAN. In this case, three hosts and six tape drives are connected to a single SAN. If the tape drives have SCSI interfaces, a router or gateway is required, but the result is the same. If software to manage the sharing is not available, then you need to isolate (or zone) the drives to unique hosts to avoid contention, and sharing is not possible. However, with the proper software, each drive can talk to each host, and the connections are dynamic without the need for any recabling.

Figure 2-7 Multi-host tape drive sharing

Tape pooling is the ability to allow two or more servers to logically share tape drives within a tape library. In this case, the servers must be attached to the same SAN as the tape drives. There must also be some software management to control who owns the drives and tape cartridges at any one time. Figure 2-8 shows three backup servers, all receiving their client data through the corporate LAN, sharing six tape drives in one library. Software running in the master backup server talks with tape sharing client code on each slave backup server to control the allocation of the tape drives. The only difference between Figure 2-7 and Figure 2-8 is the software created by the backup server solution provider to actually manage the tape resources.

Tape pooling across a SAN is supported by Tivoli Storage Manager, starting with Version 3.7. The nature of the SAN is not critical to this software solution, since the control of the actual access is done through the Tivoli Storage Manager management software. Physical access to each tape drive is required, which the SAN topology allows. When using a tape library with an integrated library manager, such as the IBM 3494, the tape drives can be shared without one of the Tivoli Storage Manager servers taking the role of the arbiter. The tape drives can be shared using SCSI Reserve/Release functions (dubbed auto-share, a 3590 drive feature). Also, the tape media is controlled by the library and assigned to a specific host. This eliminates the need to manage the tape inventory using application software. A simplified sketch of the kind of drive arbitration a library manager performs follows.
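This toy Python illustration is not taken from Tivoli Storage Manager or any other product; the class and method names are invented. It only shows the essential idea described above: one component owns the list of free drives and hands them out to requesting hosts one at a time, so that two hosts never use the same drive concurrently.

```python
import threading

class DrivePool:
    """Toy arbiter: serializes ownership of shared tape drives."""

    def __init__(self, drives):
        self._free = list(drives)
        self._owner = {}                 # drive -> host currently using it
        self._lock = threading.Lock()

    def acquire(self, host):
        """Give the host a free drive, or None if it must wait and retry."""
        with self._lock:
            if not self._free:
                return None
            drive = self._free.pop()
            self._owner[drive] = host
            return drive

    def release(self, drive):
        """Return a drive to the pool when the host's mount is finished."""
        with self._lock:
            self._owner.pop(drive, None)
            self._free.append(drive)

pool = DrivePool(["drive1", "drive2", "drive3", "drive4", "drive5", "drive6"])
drive = pool.acquire("backup_slave_a")   # a slave asks the master for a drive
# ... the slave writes its backup data to the drive over the SAN ...
pool.release(drive)
```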

Figure 2-8 Tape pooling

Sharing typically reduces the number of drives and tapes required, giving an obvious cost benefit. The rule of thumb for sharing resources is that, when two systems share the same resource, there is a reduction of about 20% to 30% in the total resource requirements. For example, this rule of thumb suggests that two Tivoli Storage Manager servers, each needing four tape drives and 500 tape slots to manage their individual workloads, would need only six drives and about 800 slots to manage the same workload if they could share the resources, as the short calculation below illustrates.
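A minimal sketch of that arithmetic in Python, applying the 20% to 30% rule of thumb quoted above (the particular savings percentages used here are assumptions chosen to reproduce the figures in the example):

```python
def shared_requirement(per_server_need, servers=2, savings=0.25):
    """Total requirement when servers pool a resource instead of owning it."""
    combined = per_server_need * servers
    return round(combined * (1 - savings))

# Two TSM servers, each needing 4 drives and 500 slots on their own:
drives = shared_requirement(4, savings=0.25)     # 8 drives  -> 6 drives
slots = shared_requirement(500, savings=0.20)    # 1000 slots -> 800 slots
print(drives, slots)                             # 6 800
```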

2.4.1 Economics of sharing

We now look at the economics of library sharing and drive pooling. The majority of tape mounts are for single file restores, typically because a user accidentally deleted a file. The more mounts you can do per tape drive per day, the fewer tape drives you need. Since SAN and LAN network backup solutions enable multiple servers to share the same tape library and tape drives, you need fewer drives and have a lower cost.

In Figure 2-9, you can see that if we use the library and drives more efficiently (more mounts per day per drive), we can reduce the total cost of tape backup. SAN technology lowers the cost by allowing tape sharing and pooling. The examples shown are for Tivoli Storage Manager, but they are similar for any other network backup solution. When the database servers each own their own tape drives, there are more drives and a higher cost.

Figure 2-9 Cost of tape (cost per mount versus number of mounts per day, for TSM database servers, TSM file servers, and SAN sharing with TSM full service)

The use of Tivoli Storage Manager for file server type clients increases the number of mounts that we can manage per day, lowering the cost per mount. Since the SAN technology allows us to share drives across these two environments (while increasing the performance), you would expect even more mounts per day per drive, driving the cost per mount down even further. Tivoli Storage Manager, with its disk cache, already increases the productivity of the tape drive by allowing data to move to tape at a higher sustained data transfer rate.

2.5 Remote tape vaulting and disaster tolerance

This section discusses the general principles of remote tape vaulting and disaster tolerance. Read Chapter 5, “Tape solutions” on page 109, for specific implementations. The primary difference between remote tape vaulting and disaster tolerance is that disaster tolerance requires two libraries (one at each site) and the ability to mirror data between the two. Fundamental to both is extended distance. The extended distance capability of SAN enables simple, low-cost, and reliable remote tape vaulting and disaster tolerance solutions. Figure 2-10 shows such a solution, where the SAN is used for attachment of a local library, perhaps for sharing, but also provides attachment for a remote library.

Today, most enterprises take their tape backups off-site for disaster protection. The tape is actually created in a locally attached tape library, ejected from the library, and finally removed to an off-site location. All of this requires manual intervention and tracking and is error-prone. Two major reasons for failed or delayed recoveries are the time needed for tape handling and the tape being in the wrong place.

Longwave Fibre Channel ports permit up to 10 km between nodes. Distances of over 100 km can be achieved using additional SAN fabric components. This means that the backup server can create the tape easily and safely in a remotely attached tape library, removing all of the manual effort and risk.

For disaster tolerance, a second copy or mirror of the data is required. Unlike some disk subsystems, tape libraries do not have the ability to automatically perform remote copies. Therefore, the copy needs to be made by software in the server. An exception to this is the Peer-to-Peer VTS, which provides automated data copying. However, this is only available for S/390 and IBM eServer zSeries® Enterprise System Connection (ESCON®) attachment.

Figure 2-10 Remote tape creation using a SAN

2.5.1 Fully redundant, disaster tolerant tape solutions

The increasing value of data as a corporate asset means that the need for fully redundant sites is already here. The purpose of a disaster tolerant implementation is to resume business as soon as possible in the event of a local disaster. An integral part of that strategy is the ability to perform backup and recovery at the disaster recovery site. Elimination of every single point of failure can be assured using SAN topologies.

Figure 2-11 shows how a fully redundant tape subsystem may look using a SAN. Although it is a single SAN and managed as a single SAN, it is constructed so that all of the SAN components are duplicated and connected in such a way that all servers have multiple paths to all storage devices. Most tape device drivers do not support dual paths. This means that, although each server has two separate host bus adapters and two paths for redundancy, failover is not automatic. The alternate path to the library and tapes must be hidden using Fibre Channel zoning, logical unit number (LUN) masking, or software. Note, however, that the 3590 device driver provides automatic failover.


Figure 2-11 Fully redundant tape subsystem configuration

2.6 Storage virtualization

The IBM implementation of storage virtualization, a form of system-managed storage (SMS), provides applications and users with a view of almost infinite storage that meets administrator-defined service levels for performance, availability, and reliability. It also incorporates policy-based data placement, policy-based life-cycle management of data and resources, server-less data movement, and scalable data sharing. Storage virtualization is an architecture and software product that exploits the distance, performance, and scalability advantages of SAN and NAS. Storage virtualization enables data to be tapped like a virtual utility across heterogeneous, open systems platforms.

Figure 2-12 shows that storage virtualization clients communicate with storage virtualization servers over an enterprise’s existing IP network, using storage virtualization protocols. It also shows that storage virtualization clients, servers, and storage devices are all connected to a SAN on a high-speed Fibre Channel network.


Figure 2-12 Storage virtualization concepts (clients with an Installable File System and cache on an existing IP network; a cluster of metadata servers with a private persistent store, providing load balancing, failover processing, and scalability; clients, servers, and storage devices all attached to the SAN fabric for device-to-device data movement)

An Installable File System (IFS) is installed on each of the heterogeneous clients supported by storage virtualization. The IFS directs requests for metadata (control information) and locks (to control serialization) to a storage virtualization server, and sends requests for data to storage devices on the SAN. Storage virtualization clients cache files, as well as the metadata and locks they obtain from a storage virtualization server. They do not cache any files to disk.

An enterprise can use one storage virtualization server, a cluster of storage virtualization servers, or multiple clusters of storage virtualization servers. Servers are clustered to provide load balancing, failover processing, and increased scalability. A cluster of storage virtualization servers is interconnected on its own high-speed network or on the same IP network the servers use to communicate with storage virtualization clients. The private server storage that contains the metadata managed by storage virtualization servers can be attached to a private network connected only to the cluster of servers, or it can be attached to the storage virtualization SAN.

Within each server cluster is a storage management server. This is a logical server that issues commands to back up and migrate files directly over the Fibre Channel network from one storage device to another. No client involvement is required to perform these tasks.


Chapter 3. SAN building blocks

This chapter first looks at the different building blocks that you can use to build your storage area network (SAN) environment, before you consider specifically how to attach tape subsystems to a SAN and what solutions it can offer. This chapter also presents an overview of most of the basic SAN components. However, it does not provide detailed information about host adapter cards or fibre cabling. You can find more detailed information in the following Redbooks:
- Introduction to Storage Area Networks, SG24-5470
- Designing an IBM Storage Area Network, SG24-5758
- Implementing an Open IBM SAN, SG24-6116
- IBM SAN Survival Guide, SG24-6143

The products that we overview in this chapter are:
- IBM SAN Data Gateway SCSI Tape Router
- IBM SAN Data Gateway
- IBM TotalStorage SAN Controller 160
- IBM Fibre Channel Storage Hub
- IBM TotalStorage SAN Switch F08
- IBM TotalStorage SAN Switch F16
- IBM TotalStorage SAN Switch M12
- INRANGE FC/9000 Fibre Channel Director
- McDATA ES-1000 Loop Switch
- McDATA ES-3216 Fabric Switch
- McDATA ES-3232 Fabric Switch
- McDATA ED-6064 Enterprise Fibre Channel Director

3.1 IBM SAN Data Gateway SCSI Tape Router

The IBM SAN Data Gateway Router is a Small Computer Systems Interface (SCSI) to Fibre Channel (FC) protocol converter for tape libraries, with one Fibre Channel adapter and up to two SCSI ports. It is a low-cost solution compared to the IBM SAN Data Gateway product, which offers up to three FC x four SCSI port configurations. The IBM SAN Data Gateway Router (2108-R03) can accommodate either UltraSCSI single-ended ports or UltraSCSI differential ports. The Router supports full mapping of SCSI IDs and logical unit numbers (LUNs) between the Fibre Channel attached host and the SCSI tape library. The IBM SAN Data Gateway Router can be attached to an IBM Fibre Channel Switch for connectivity.

Fibre Channel attachment

Industry-standard Fibre Channel technology is rapidly replacing SCSI channel attachment between open system servers and tape storage systems. However, many tape storage systems do not provide Fibre Channel attachment. To bridge the gap between Fibre Channel server adapters and SCSI-attached tape storage, IBM developed the SAN Data Gateway Router. The SAN Data Gateway Router is a hardware solution that enables the attachment of SCSI storage systems to Fibre Channel adapters on specific Intel®-based servers running Windows NT® and UNIX®-based servers from IBM and Sun Microsystems. For the most current list of supported products, visit:

http://www.ibm.com/storage/sangateway

The SAN Data Gateway Router with shortwave ports can provide Fibre Channel distance extension up to 500 meters between an open system server and a storage system. This is ideal for server and storage consolidation. With IBM Fibre Channel Storage Hubs and Managed Hubs, and Fibre Channel Switches, connectivity options enable distances up to 10 kilometers and many server and storage connections. This any-to-any switched fabric capability supports large and rapidly growing storage consolidation and data sharing requirements.

IBM StorWatch SAN Data Gateway Specialist

The SAN Data Gateway Router provides access between its Fibre Channel ports and SCSI ports. Channel zoning controls access between ports. The IBM StorWatch SAN Data Gateway Specialist, an easy-to-use graphical user interface (GUI), includes the tools to define SAN Data Gateway Router channel zoning and to control access to specific storage devices.

Multiple configuration options

The SAN Data Gateway Router uses Fibre Channel and UltraSCSI channel bandwidth for high-performance attachment of the following devices:
- Magstar® 3590 Tape Subsystem in stand-alone, Magstar 3494 Tape Library, and Magstar 3590 Silo Compatible Tape Subsystem environments
- Magstar MP 3570 Tape Subsystem or Magstar MP 3575 Tape Library Dataserver
- IBM 3580 Ultrium Tape Drive, 3584 UltraScalable Tape Library, 3583 Ultrium Scalable Tape Library, and 3581 Ultrium Tape Autoloader
- IBM 3502 DLT Tape Library


The SAN Data Gateway Router is available as a rack-mounted unit or as a stand-alone table-top unit. The low-cost router provides one shortwave Fibre Channel port and two UltraSCSI Differential or UltraSCSI Single-End ports for tape storage attachment.

3.2 IBM SAN Data Gateway

The IBM SAN Data Gateway (2108-G07) was one of the first components of the IBM SAN solution that allows an easy migration to the SAN environment using Fibre Channel technology. The SAN Data Gateway connects SCSI and UltraSCSI storage devices to Fibre Channel environments. It attaches new or existing SCSI storage products to the SAN using an industry standard Fibre Channel Arbitrated Loop (FC-AL) interface. The SAN Data Gateway solves three immediate problems:
- The 25 m cable length restriction for SCSI: The cable can extend up to 500 m
- The increased bandwidth demand that UltraSCSI storage products can place on the SCSI bus
- The address limitations of SCSI

The use of hubs in SAN configurations increases the device connectivity, but hubs have some impact with respect to multiple hosts on the FC-AL loop. These include the loop initialization process and arbitration. If a system is turned off and then turned on, or rebooted, it may impact the operation of other systems in the FC-AL loop. Many integrators do not support multi-host loops at all. The use of switches or directors increases the host fan-out, or number of host connections, of SAN configurations.

The SAN Data Gateway uses Fibre Channel and UltraSCSI channel bandwidth for high-performance attachment of the following devices:
- IBM TotalStorage Enterprise Storage Server
- IBM Magstar 3590 Tape Subsystem in stand-alone, Magstar 3494 Tape Library, and Magstar 3590 Silo Compatible Tape Subsystem environments
- IBM Magstar MP 3570 Tape Subsystem or Magstar MP 3575 Tape Library Dataserver
- IBM 3502 DLT Tape Library
- IBM Ultrium 358X Tape Subsystems with Linear Tape-Open (LTO) Tape Drives

For the latest list of supported servers, adapters, disk, and tape subsystems on the SAN Data Gateway, see:

http://www-1.ibm.com/servers/storage/support/san/

Sharing the Gateway between disk and tape products is currently not supported or practical, because:
- The IBM TotalStorage Enterprise Storage Server needs all the SCSI attachments.
- The levels of the HBA driver required for disk and for tape are different. This makes it impossible to use Gateway-attached disks and tapes on the same host. This will eventually be fixed, but it provides a good illustration of an interoperability problem.

You can use the Gateway either as a stand-alone table-top unit or mounted in a standard 19-inch rack. The rack can be either the IBM 2101 Seascape® Solutions rack or an industry standard rack.


The SAN Data Gateway is equipped with:
- Four UltraSCSI differential ports
- One to six FC-AL shortwave and longwave ports and Fibre Channel optic cables
- StorWatch SAN Data Gateway Specialist (included on the CD)

The features and functions of the SAN Data Gateway include:
- SAN connectivity: Creates reliable SAN solutions without needing hubs, switches, and bridges. The SAN Data Gateway provides a distance or connectivity solution for SCSI attached storage devices.
- Heterogeneous systems and storage: Provides seamless support for different host platforms and multiple device types.
- SAN resource sharing: Zoning or partitioning enables a simple and effective resource sharing solution. Zones are created by controlling the access between different channels or ports. They are implemented with the StorWatch SAN Data Gateway Specialist access control function.
- SAN value added functions:
  - Supports up to 256 LUNs across multiple interfaces
  - Persistent address maps preserved in non-volatile memory
  - Full awareness of SCSI 3 protocol for disk and tape
  - SCSI over TCP for remote transfer, management, and control (SCSI commands and data are encapsulated in TCP packets)
  - Support for the SNIA Extended Copy Command specification, the basis for server-free backup solutions in the future
- Transparent SAN performance: The total bandwidth of the SAN Data Gateway is 120 MB/sec. The overall performance is driven by the maximum available device performance.
- SAN management: The SAN Data Gateway is remotely managed and controlled by the StorWatch SAN Data Gateway Specialist.
- SAN scalability: The SAN Data Gateway is offered with up to six FC ports to provide 6 x 4 configurations.

Zoning or access control

The SAN Data Gateway has the ability to connect to more than one host. In the default configuration, there is no restriction between the channels for access to the target devices. Without additional controls, host operating systems do not handle multiple systems using the same target devices simultaneously. The result is corrupted file systems when two hosts try to use the same disk drives or logical unit number (LUN), or tape backup and restore operations may be interrupted. The IBM StorWatch SAN Data Gateway Specialist Channel Access options can be used to disable access between the SAN connections and individual SCSI channels.

IBM StorWatch SAN Data Gateway Specialist

The SAN Data Gateway provides access between its Fibre Channel ports and its SCSI ports. Channel zoning provides access control between ports. While channel zoning provides control of paths between host adapters and SCSI storage ports, it does not limit access to specific devices (LUNs) within the storage system. Virtual Private SAN (VP SAN) provides LUN masking to limit access between host adapters and LUNs attached to SAN Data Gateway SCSI ports. The IBM StorWatch SAN Data Gateway Specialist, an easy-to-use GUI, provides the tools to define SAN Data Gateway channel zoning and VP SAN LUN masking, and controls which host systems have access to specific storage devices.

This Access Control function, also called zoning, partitions the SAN configuration by either allowing or denying access between the FC and SCSI ports of the Gateway.
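To make the zoning and LUN-masking ideas concrete, here is a small illustrative Python sketch of an access-control table of the kind the Gateway maintains conceptually: which Fibre Channel (host-side) port may reach which SCSI channel, and which LUNs behind it. The data structures and names are invented for illustration and do not reflect the StorWatch SAN Data Gateway Specialist's actual implementation.

```python
# Channel zoning: which FC (host-side) ports may use which SCSI channels.
channel_zones = {
    "fc1": {"scsi1", "scsi2"},   # backup server sees both tape channels
    "fc2": {"scsi2"},            # second host restricted to one channel
}

# LUN masking (VP SAN-style): which LUNs each host may address on a channel.
lun_masks = {
    ("fc1", "scsi1"): {0, 1},    # tape drive LUNs
    ("fc1", "scsi2"): {0, 1},
    ("fc2", "scsi2"): {1},       # only one drive exposed to the second host
}

def allowed(fc_port, scsi_channel, lun):
    """True if the host on fc_port may address this LUN on this channel."""
    return (scsi_channel in channel_zones.get(fc_port, set())
            and lun in lun_masks.get((fc_port, scsi_channel), set()))

print(allowed("fc1", "scsi1", 0))   # True
print(allowed("fc2", "scsi1", 0))   # False: the channel is not zoned to fc2
```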

Advantages of SAN Data Gateway versus a hub

The advantages of the SAN Data Gateway compared to a hub are:
- Concurrency: Aggregate throughput is not limited to one loop.
- Zoning: Access control is available based on FC and SCSI ports.
- Hosts are each point-to-point and can be heterogeneous.
- Smaller configurations with fewer devices lower the administration cost for the customer and lower the service and support cost for IBM (it is easier to isolate problems).
- It avoids the inherent multi-host issues of the FC-AL loop, such as the loop initialization process (LIP) and arbitration. If a system is turned off or on, or rebooted, it may impact the operation of other systems in the FC-AL loop. Many integrators do not support multi-host loops.

Advantages of SAN Data Gateway versus a switch

The advantages of the SAN Data Gateway compared to a switch are:
- It defers or completely avoids the high entry cost of a switch.
- Smaller configurations with fewer devices lower the administration cost for the customer and lower the service and support cost for IBM (it is easier to isolate problems).
- There are interoperability issues with switches: fabric support is limited, and resource sharing requires middleware.

Figure 3-1 shows the IBM SAN Data Gateway.

Figure 3-1 IBM SAN Data Gateway

3.3 IBM TotalStorage SAN Controller 160

The IBM TotalStorage SAN Controller 160 (7140-160) enables all IBM 7133, 7131, and 3527 Serial Disk Systems to attach to host systems using Fibre Channel host adapters and drivers. You should consider the SAN Controller 160 for configurations where native SSA connectivity is not possible, and where the RAID-1 data replication capability can provide added data protection. The SAN Controller 160 is designed to bring the performance, availability, and scalability advantages of the Serial Storage Architecture (SSA) to customers with new or existing Fibre Channel-based host servers.

The SAN Controller replicates data across or within serial disk systems, simultaneously mirroring two or three copies of data without host involvement. With global hot disk sparing, data is automatically rebuilt if a mirrored disk fails. In this way, the SAN Controller improves performance and data availability while simplifying storage operations. The Instant Copy function can create a separately addressable copy of mirrored data that can be used for tape backup. After the backup has completed, the data is resynchronized with the primary copy. To support remote storage operations, mirrored 7133 Advanced Models D40 or T40 can be separated by up to 10 km with serial storage fiber-optic extenders. With 36.4 GB disks, logical volume groups or partitions as large as 580 GB can be created for Windows NT servers, which have limited volume addressing. The SAN Controller can also create composite drives by concatenating up to 16 physical disks. These capabilities provide excellent configuration flexibility for growing storage environments.

Simplified management

The SAN Controller 160 Manager is a Windows NT-based management tool that provides configuration and service functions, including mirror group definition, the ability to create composite drives, and Instant Copy disk management. The Manager can manage multiple SAN Controllers across the enterprise.

A highly scalable solution

The SAN Controller supports up to 64 serial disk drives in a single loop and enables nondisruptive growth in disk capacity from 18.2 GB to 2.3 TB. Up to eight UNIX and Windows NT host systems can be attached to a single loop. Performance scales up as more SAN Controllers are added to the serial loop. A stand-alone, tabletop SAN Controller unit provides one Fibre Channel port and two SSA ports. Short and longwave laser optical interfaces are supported. An optional rack-mounted enclosure can hold up to four SAN Controllers in a compact 2U-high space in an industry-standard 19-inch rack.

Fully utilized bandwidth potential

The IBM TotalStorage SAN Controller 160 enables Fibre Channel servers to benefit from high-performance, non-arbitrated serial disk technology. The Fibre Channel host-based adapter views the IBM TotalStorage SAN Controller 160 as a single FC-AL target, which minimizes loop arbitration overhead. It has measured up to 90 MB/sec sustained throughput, or up to 25,000 input/output (I/O) operations per second per logical channel in full-duplex, simultaneous read/write mode.

3.4 IBM Fibre Channel Storage Hub

The IBM Fibre Channel Storage Hub (2103-H07) is an entry-level component for SAN fabric installations, designed for connecting one or more storage devices to one or more servers. It is not a very scalable solution and should not be chosen if many devices are to be connected later. A hub can also be used to connect to a remote location to extend the distance. The Fibre Channel Storage Hub is designed to provide a centralized point of connectivity, to provide loop fault tolerance, and to simplify configuration management. Fibre Channel products that are commonly interconnected to the Fibre Channel Storage Hub are Fibre Channel host bus adapters, FC-AL storage devices, and FC-AL storage arrays. In terms of bandwidth scalability, one FC-AL loop by itself is not scalable: all devices share the bandwidth of 100 MB/sec, rather than that offered by the Managed Hub. Figure 3-2 shows the hub.


Figure 3-2 IBM Fibre Channel Storage Hub

3.4.1 Hub configuration

The IBM Fibre Channel Storage Hub interconnects multiple servers and storage systems over fiber-optic media and transfers data at speeds up to 100 MB/sec. Each port requires a Gigabit Interface Converter (GBIC) to connect it to each attached node. The Fibre Channel Storage Hub supports any combination of shortwave or longwave optical GBICs. Figure 3-3 shows a GBIC.

Figure 3-3 Gigabit Interface Converter

The GBICs are hot-pluggable into the IBM Fibre Channel Storage Hub. This means that you can add host computers, servers, and storage modules to the arbitrated loop dynamically, without powering off the Fibre Channel Storage Hub or any connected devices. If you remove a GBIC from a Fibre Channel Storage Hub port, that port is automatically bypassed. The remaining hub ports continue to operate normally with no degradation of system performance. Conversely, if you plug a GBIC into the Fibre Channel Storage Hub, it is automatically inserted and becomes a node on the loop, if valid Fibre Channel data is received from the device.

3.5 IBM TotalStorage SAN Switch F08

IBM introduces a new entry-level model, the IBM TotalStorage SAN Switch F08 (3534-F08). It has 2 Gigabit Fibre Channel performance and additional functions to support the storage networking demands for throughput and management controls. Based on the Brocade SilkWorm 3200, the IBM TotalStorage SAN Switch Model F08 is an 8-port Fibre Channel switch based on a new generation of switch technology. It is designed to provide 2 Gbps port-to-port throughput with auto-sensing capability for connecting to existing 1 Gbps host servers, storage, and switches, in a smaller 1U form factor. It requires half the space of the previous 2109-S08 8-port SAN Switch. The new model is fully interoperable with the current IBM 2109 SAN Switches (Models S08, S16, and F16). The F08 extends the broad range of scalable SAN connectivity solutions available from IBM for a wide variety of host and storage types. Figure 3-4 shows the SAN Switch F08.

Figure 3-4 IBM TotalStorage SAN Switch F08

The F08 is designed to provide:
- Eight nonblocking ports, each with full-duplex throughput at either 2 Gbps or 1 Gbps
- Auto-sensing ports that self-negotiate to the highest speed supported by the attached server, storage, or switch
- Hardware zoning controlled at the port level, and software zoning controlled at the worldwide name level
- Support for high-speed data traffic with the Performance Bundle feature, which provides interswitch link (ISL) trunking and performance monitoring (up to four ISLs can be combined for throughput capability of up to 8 Gbps)
- Cascading support for flexibility in creating scalable fabric topologies
- The IBM TotalStorage SAN Switch Specialist, which provides a comprehensive set of management tools that support a Web browser interface for flexible, easy-to-use operations

The F08 supports Fibre Channel connectivity for the following servers:
- IBM eServer pSeries® and selected RS/6000® servers
- xSeries and selected Netfinity® servers
- IBM TotalStorage Network Attached Storage 300G
- Other Intel-based servers with Microsoft Windows NT and Windows 2000
- Selected Sun and HP servers

The F08 supports Fibre Channel connectivity for the following storage systems:
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM FAStT Family of Storage Servers
- IBM Magstar 3590 Subsystems and 3494 Tape Libraries
- IBM Ultrium and UltraScalable Tape Libraries
- Other selected storage systems

Additionally, the F08 offers:
- 2 Gbps industry-standard Fibre Channel switch throughput
- Scalability from small to very large enterprise SAN fabric environments
- A high availability design with hot-pluggable components and automatic path rerouting
- Modularity for flexible system configurations, including diagnostics to isolate problems quickly
- An interswitch link trunking option to combine up to four physical links into one logical high-speed trunk with up to 8 Gbps full-duplex throughput
- A performance monitoring feature for end-to-end measurement of Fibre Channel traffic, including cyclic redundancy checking (CRC) error counts
- Support for public FC-AL devices
- IBM TotalStorage SAN Fibre Channel Specialist for fabric management

The F08 Switch is ideally suited for disaster tolerance solutions such as remote tape vaulting and remote disk mirroring. F08 Switches can provide up to twice the throughput of SAN Managed Hubs. You can use this performance capability either to reduce the number of expensive extended distance ISL connections or to improve the performance with the same number of connections. F08 Switches are ready to exploit the performance potential of newer servers and storage devices with 2 Gbps capabilities.

You can use F08 Switches as edge switches to expand an existing core-to-edge SAN fabric infrastructure. As F16 Switches and larger, high availability core switches are added to the core, installed F08 Switches can be migrated to the edge. This approach supports scalable network growth in a modular, cost-effective, and nondisruptive manner with investment protection for installed switches.

3.6 IBM TotalStorage SAN Switch F16

The IBM TotalStorage SAN Switch F16 (2109-F16) provides 2 Gigabit Fibre Channel (FC) performance and additional functions to support the storage networking demands for higher security, throughput, and management controls. Because there is much similarity between the S08/S16 and F16 models, we restrict this discussion to the new features that the F16 Switch brings to the market. Users who are familiar with the S08/S16 will have no problem extending their knowledge to the F16. Figure 3-5 shows the 2109-F16.


Figure 3-5 2109-F16 switch

3.6.1 Product overview

The F16 is a 16-port Fibre Channel switch based on a new generation of switch technology. It provides 2 Gbps port-to-port throughput with auto-sensing capability for connecting to existing 1 Gbps host servers, storage, and switches, in a smaller 1U form factor, requiring half the space of the previous 16-port SAN Switch. The new model is fully interoperable with the current IBM TotalStorage SAN Switches (Models S08 and S16). It can be added to existing fabrics with minimal disruption, enabling an easy transition for existing Fibre Channel storage networks to the faster technology. The IBM TotalStorage SAN Switch Model F16 extends the broad range of scalable SAN connectivity solutions available from IBM for a wide variety of host and storage types. IBM TotalStorage SAN Switches enable storage resources to be shared efficiently and to scale rapidly to meet the demands by users for highly available, heterogeneous access to expanding storage pools.

The new Model F16 provides:
- Sixteen non-blocking ports, each with full-duplex throughput at either 2 Gbps or 1 Gbps
- Auto-sensing ports that self-negotiate to the highest speed supported by the attached server, storage, or switch
- Universal ports that self-configure as F_ports, FL_ports, or E_ports
- Support on each port for the new Small Form-Factor Pluggable (SFP) media, with options for either shortwave optical connections for distances up to 300 meters, or longwave optical connections for distances up to 10 kilometers
- A smaller 1U package that can be either rack-mounted or used in a table-top configuration, with the option of a redundant power supply, providing a highly available switch
- Hardware zoning controlled at the port level, and software zoning controlled at the worldwide name level
- Support for high-speed data traffic with the Performance Bundle feature, which provides ISL trunking and performance monitoring (up to four ISLs can be combined for throughput capability of up to 8 Gbps)
- Cascading support for flexibility in creating scalable fabric topologies
- Distributed fabric services such as name serving, zoning, routing, and microcode upgrade
- The IBM TotalStorage SAN Switch Specialist, which provides a comprehensive set of management tools that support a Web browser interface for flexible, easy-to-use operations

3.6.2 Hardware components

The 2109 Model F16 system board is a single-board design with a highly integrated central processing unit (CPU). The Intel 80960VH CPU is a RISC core processor and is the top choice for this platform. It provides over 70% of the functionality for the digital section of the system board. The system uses three types of memory devices: DRAM, Flash File, and Boot Flash. On the Fibre Channel section of the system board, the Bloom application-specific integrated circuit (ASIC), the Serializer/Deserializer (SERDES), and the SFP media are the key components that provide high-speed data transfer. The SFP media interfaces support shortwave laser (SWL) and longwave laser (LWL).

The system chassis is a 1U-height enclosure with space for two power supply units and one system board. The system board is placed in an Electromagnetic Interference (EMI) enclosure tray as an EMI-proof system unit. Two 126-watt removable, redundant power supplies provide hot-swappable capability. Cooling fans are mounted in the rear to provide airflow for system cooling.

CPU subsystem

An Intel 80960VH CPU is used for switch initialization and management functions. The CPU runs the Fabric OS and is responsible for switch initialization, configuration, and management. IBM-designed ASICs provide the switching functionality. The following peripherals are also supported:

- An Ethernet port
- A serial port
- Three digital thermometers
- A real-time clock
- Two power supply controls
- General I/O

The CPU subsystem is a mixed voltage system using 1.8 V, 2.5 V, 3.3 V, and 5 V depending on the device. The maximum board power consumption is 78 W.

Features

The 2109-F16 CPU subsystem includes the following features:

- An 80960VH-100 MHz CPU
- An SDRAM controller with parity check at 33 MHz
- A Peripheral Control Interconnect (PCI) bus arbiter
- An on-board SDRAM with data parity to support a 16 MB configuration
- One PLCC32 Boot Flash socket to support up to 512 KB of Flash memory
- 8 MB (2 x 4 MB) Flash memory for software storage
- 10BASE-T or 100BASE-T port for management connection with RJ45 connector
- One RS232 port with DB9 connector
- 16 LEDs to indicate the status for each port
- 16 LEDs to indicate the link speed for each port
- One LED (green) to indicate the system power-on status
- Three digital thermometers for temperature sensing
- Two analog switches to control the power supply inter-integrated circuit (I2C) bus access
- One 3.3 V to 1.8 V dc/dc converter for Bloom ASIC core supply
- Two Bloom ASICs supporting up to 16 non-blocking ports
- 16 SERDES
- One real-time clock with a battery and 56 bytes of nonvolatile RAM (NVRAM)

Embedded processor

The embedded processor is an Intel 80960VH processor with a clock speed of 100 MHz. It contains:

- A high-performance RISC processor core (compatible with the 2109 series of switches and the 3534 switch)
- An integrated EDO memory controller (for DRAM, SRAM, ROM, and Flash memory)
- A PCI bus interface
- A complex programmable logic device (CPLD) for SDRAM control
- Two direct memory access (DMA) channels
- An I2C interface
- General purpose I/O

You can access system memory through the local bus. The external CPLD SDRAM device provides SDRAM controller functionality at 33 MHz. It supports parity checking to enhance the data integrity of the system. The CPU communicates with the ASIC and the 10BASE-T or 100BASE-T Ethernet media access controller (MAC) through the PCI interface. An external PCI bus arbiter enables the Ethernet device to be a bus master. You can also access the RS232 Universal Asynchronous Receiver Transmitter (UART) serial port through the local bus.

Other I/O peripherals, such as the real-time clock, the two power supply controls, the LEDs, the three digital thermometers, and miscellaneous I/O are handled by the I2C bus of the CPU. The CPU is the only I2C bus master in the system. The RS232 port and drivers, Ethernet MAC/PHY, and LEDs are external components to the CPU. An RJ45 connector provides Ethernet connection to external systems. The DB9 RS232 is a ribbon-cable connection through the on-board 10-pin header.

Bus operations

The interface between the embedded processor, the ASICs, and the 10BASE-T or 100BASE-T Ethernet MAC is implemented using a PCI bus. All PCI devices on the bus are PCI Revision 2.2 compliant. The PCI bus interface operates at 32 bits, up to 33 MHz, and has a word-wide even parity bit. A slave-only PCI interface is provided by each ASIC to allow the processor to program various registers, routing tables, and so on within the chip. An external PCI bus arbiter enables the Ethernet device to be a bus master.

The local bus, a 32-bit multiplexed burst bus, provides the interface between the system memory and the I/O. Because the integrated EDO memory controller on the CPU allows only direct control for DRAM, SRAM, ROM, and Flash memory, the external CPLD controller is included to provide SDRAM controller functionality. The I2C bus provides peripheral I/O control for the LEDs, the thermometers, and general I/O functions. The 80960VH CPU serves as the master on the I2C bus.

Each Bloom ASIC is an eight-port Fibre Channel switch controller. There are two Bloom ASICs to support up to 16 ports. The communication between ASICs is over a proprietary 10-bit wide SSTL2 bus running at 106.25 MHz. An SSTL2 bus is also used between the Bloom ASICs and the SERDES.

Memory

The system design uses the following types of memory devices:

- DRAM
- Flash File
- Boot Flash

Two on-board SDRAM chips provide up to 16 MB for system memory. Two additional SDRAM chips provide data parity. The printed circuit board (PCB) SDRAM footprint is designed to be compatible with 64 MB, 128 MB, and 256 MB devices. An external CPLD device added to the local bus provides control functions for the 80960VH processor.

The system provides 4 MB of on-board redundant Flash File memory for software and data storage. The Boot Flash is an 8-bit Flash device socket that is used only for system start. The Boot Flash device contains a block area for startup code protection. The PLCC32 socket supports 3.3 V Boot Flash memory up to 512 KB.

Central memory

As with the 2109 series of switches and the 3534 switch, the 2109 Model F16 is based on a central memory architecture. In this scheme, a set of buffers in the central memory is assigned to each port, to be used for receipt of frames. As an ASIC port receives and validates a frame, it stores the frame in one of its receive buffers in the central memory and forwards a routing request (a Put message) to the appropriate destination ports. When a destination port is capable of transmitting the frame, it reads the frame contents from central memory and forwards the frame to its transmit interface. It does not wait for the frame to be written in memory, unless the port is busy. After it removes an entry for a frame from its internal transmit queue in preparation for transmitting a frame, the destination port sends a transmission complete message (a Finish message) to the port that received the frame. This allows the receiving port to reuse the buffer for subsequent frames received.

The central memory is also incorporated into the ASICs. Frames received on the ports in an ASIC are written into the portion of central memory in the receiving chip. Received frames cannot be written into the sections of central memory located in other ASICs. All transmitters in a 2109 Model F16 switch can read from the memories in any of the ASICs, through inter-chip connections clocked at 106.25 MHz. Each ASIC contains RAM devices plus data path crossbar logic that is used to implement the central memory. Memory blocks are accessed in a time-sliced fashion.

The buffer pool can be split into 2112-byte buffers or into 312-byte mini-buffers. If frames that need to be buffered are smaller than the maximum 2112 bytes, using mini-buffers effectively expands the buffer pool and increases the efficiency of memory usage by providing more (but smaller) receive buffers.

In addition, the Bloom ASIC provides a special memory interface (SMI). The SMI provides the firmware with a mechanism to read and write frame contents to and from the ASIC. It also supports higher throughput transfers. The SMI includes a set of two buffers that are large enough for an entire maximum-sized frame to be transferred in a single operation. Additionally, because there are two buffers available, the firmware can perform a read or write on a frame in one of the buffers, while the ASIC streams another frame into the other buffer.
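The practical effect of mini-buffers is easy to quantify from the figures above. The following sketch (Python) compares how many full-size buffers and mini-buffers fit in the same amount of receive memory; the 512 KB memory size is purely an assumption for illustration, because the actual per-ASIC capacity is not stated here.

    # Illustrative only: the 512 KB receive-memory figure is an assumption, not a
    # published specification. The two buffer sizes come from the text above.
    FULL_BUFFER = 2112            # bytes, full-size Fibre Channel frame buffer
    MINI_BUFFER = 312             # bytes, mini-buffer for short frames
    ASSUMED_MEMORY = 512 * 1024   # bytes of receive memory (hypothetical)

    full_count = ASSUMED_MEMORY // FULL_BUFFER
    mini_count = ASSUMED_MEMORY // MINI_BUFFER

    print(f"Full-size buffers: {full_count}")                    # 248
    print(f"Mini-buffers:      {mini_count}")                    # 1680
    print(f"Ratio:             {mini_count / full_count:.1f}x")  # about 6.8x

In other words, for traffic made up of short frames, the same silicon holds several times as many receive buffers, which is the efficiency gain the mini-buffer scheme is aiming for.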


ASICs

Two ASICs within the system provide the switching functionality. Each ASIC provides eight Fibre Channel ports that can be used to connect to external N_ports (as an F_port), external loop devices (as an FL_port), or to other 3534 or 2109 series boxes (as an E_port). Each port can operate at either 1.0625 Gbps or 2.125 Gbps link speeds. The ASIC contains the Fibre Channel interface logic, message and buffer queuing logic, receive buffer memory for the eight on-chip ports, and other support logic. The Bloom ASICs are PCI slaves to the CPU. The two ASICs interface through an inter-chip 10-bit SSTL2 bus connection clocked at 106.25 MHz.

A 16-channel SERDES is used to support 16 ports. The interface between ASIC and SERDES is also a 10-bit SSTL2 bus running at 106.25 MHz. The SERDES converts the 10-bit wide parallel data from the SSTL2 bus into high-speed serial data for the SFP media and vice versa. The SERDES supports single data rate (SDR) or double data rate (DDR) transfer between the SERDES and the SFP media. The DDR operation supports a 2.125 Gbps data transfer rate between ASICs. Implementing the SERDES external to the ASIC reduces the risk of silicon packaging as well as the risk of running 2.125 Gbps signals on a board with a long trace length. The SFP media interfaces to external devices and enables support for shortwave laser and longwave laser. Two LEDs for each port provide port status and link speed information.

Control Message Interface

The 2109 Model F16 Control Message Interface (CMI) consists of a set of control signals that are used to pass hardware-level messages between ports. Recipient ports use these control signals to inform transmitting ports when a new frame needs to be added to the output queue of the transmitter. Transmitting ports also use the CMI to inform recipient ports that a frame transmission has been completed. A recipient port is free to reuse a receive buffer when it receives notification that the frame has been transmitted. In the case of multicast, multiple notifications are required to determine when a receive buffer is freed.

The CMI interfaces for the ASICs are connected inside each ASIC through a message crossbar, implementing a barrel shift message scheme. Each chip time slices its output port to each possible destination chip in the switch. If it has a message to send to a particular destination during the corresponding time slot, the chip uses the time slot to send the message. Otherwise, the output port lines are driven to indicate that no message is present. The time slicing of the output CMI control signals of the ASICs is arranged out of phase from each other so that each chip's output port is time sliced to a different destination chip in any given clock cycle. Messages that are presented at the input control signal interface of a given ASIC are also time sliced through each possible source chip in the switch.

Ports

The 2109 Model F16 supports the following port types:

- Optical ports
- Ethernet port
- Serial port

Each ASIC in the 2109 Model F16 switch connects up to eight SFP media. SFP devices are encased in metal to ensure low emissions and high thermal management. They are hot-swappable and use industry-standard LC connectors. Each port provides ISL, loop, and fabric (E, F, and FL, respectively) connectivity that the 2109 Model F16 senses automatically. It requires no administration to identify the port type.


Fibre Channel connections

The IBM SAN Fibre Channel Switch supports the following types of Fibre Channel connections:

- Fabric (F_Port)
- Arbitrated loop: public and private (FL_Port)
- Interswitch connection (E_Port)

Supported port types

The IBM 2109-S08 and 2109-S16 support the following port types:

- E_Port: An expansion port. A port is designated an E_Port when it is used as an interswitch expansion port to connect to the E_Port of another switch, to build a larger switch fabric.
- F_Port: A fabric port that is not loop capable. It is used to connect an N_Port to a switch.
- FL_Port: A fabric port that is loop capable. It is used to connect NL_Ports to the switch in a loop configuration.
- G_Port: A generic port. It can operate as either an E_Port or an F_Port. A port is defined as a G_Port when it is not yet connected or has not yet assumed a specific function in the fabric.
- Isolated E_Port: This is a port that is online but not operational between switches due to an overlapping domain ID or nonidentical parameters such as E_D_TOVs.
- L_Port: A loop-capable fabric port or node.
- N_Port: A node port that is not loop capable. It is used to connect an equipment port to the fabric.
- NL_Port: A node port that is loop capable. It is used to connect an equipment port to the fabric in a loop configuration through an FL_Port.
- U_Port: A universal port. A port is defined as a U_Port when it is not connected, or has not yet assumed a specific function in the fabric.

Optical ports

For optical ports, the 2109 Model F16 uses SFP fiber-optic transceivers that convert electrical signals to optical signals (and optical signals to electrical signals). Capable of transmitting at both 1 Gbps and 2 Gbps speeds, each SFP fiber-optic transceiver supports 850 nm SWL on multimode fiber-optic cable, 1310 nm LWL on single-mode fiber-optic cable, and 1550 nm extended longwave laser (ELWL) on single-mode fiber-optic cable. These miniature optical transceivers provide high port density and deliver twice the port density of standard removable GBIC transceivers.

Ethernet port

The 2109 Model F16 provides a fully IEEE-compliant 10BASE-T or 100BASE-T Ethernet port for the switch management console interface. When a device is connected to the port, both ends negotiate to determine the optimal speed. The Ethernet port uses an RJ45 connector. There are two LEDs for the port. One LED indicates transmit and receive activity, and one LED indicates speed (10 Mbps or 100 Mbps). The Transmission Control Protocol/Internet Protocol (TCP/IP) address for the port can be configured from the serial port.

Serial port

An RS232 serial port is provided on the 2109 Model F16. The serial port uses a DB9 connector. The connector is a header pin block on the system board. The parameters of the serial port are fixed at 9600 baud, 8 data bits, no parity, no hardware flow control, 1 start and 1 stop bit.


You use this connector to configure the IP address and to recover the factory default settings of the switch should Flash memory contents be lost. Do not use the serial port connection to perform normal administration or maintenance functions. Accessible functions are limited to connecting a terminal to the port to re-initialize the switch defaults, which restores the switch to its factory configuration. This is required to restore the switch passwords to a known state and to allow customers to set a specific switch IP address.
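As a hedged illustration of reaching that console from a workstation, the following Python sketch opens the port with the fixed settings described above (9600 baud, 8 data bits, no parity, 1 stop bit) using the pyserial package. The device path is an assumption for a Linux host; any terminal emulator configured with the same parameters serves the same purpose.

    # Minimal console sketch using pyserial; /dev/ttyS0 is an assumed device path.
    import serial

    console = serial.Serial(
        port="/dev/ttyS0",             # assumption: first serial port on the host
        baudrate=9600,                 # fixed switch setting
        bytesize=serial.EIGHTBITS,     # 8 data bits
        parity=serial.PARITY_NONE,     # no parity
        stopbits=serial.STOPBITS_ONE,  # 1 stop bit
        timeout=5,
    )

    console.write(b"\r\n")             # wake the console prompt
    print(console.read(256).decode(errors="replace"))
    console.close()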

Enclosure

The 2109 Model F16 enclosure is designed to be mounted in a 19-inch rack, with a height of one RETMA rack unit (1U), but it can also be used in a tabletop configuration. The enclosure houses dual-redundant power supplies, dual-redundant fan assemblies, and a system board that supports the two ASICs and the CPU. The 2109 Model F16 enclosure has forced-air cooling. The fans push the air from the rear chassis intake through the enclosure and exhaust the air through venting holes in the front panel. The SFP media, the cooling fan, and the power supplies are hot-swappable so that you can remove and replace them without interrupting the system power.

You can remove the top panel of the 2109 Model F16 enclosure without tools, allowing access to the system board. The enclosure design provides for simple assembly of the system board into the enclosure, allowing for ease of manufacture and maintenance. All pieces of the product are modular, and all maintenance can be performed without special tools.

On the front of the unit, there are two port connections (an RS232 connection and an RJ45 connection). The RJ45 connection provides a 10BASE-T or 100BASE-T Ethernet port for a full system management console interface. The RS232 connection provides a serial port interface for setting the IP address of the switch and for resetting the switch to factory defaults. The fibre-optic cables, Ethernet cables, and serial port cables are located on the front of the switch. AC power input cables, power supplies, and cooling modules are inserted and removed from the rear of the switch.

Power supply

The 2109 Model F16 power supply is a hot-swappable switching unit, allowing 1 + 1 redundant configurations. The unit is a universal power supply capable of functioning worldwide without voltage jumpers or switches. The fully enclosed, self-contained unit has its own internal fans to provide cooling. It is auto-ranging in terms of accommodating input voltages. The power supply has three DC outputs (3.3 V, 5 V, and 12 V) that provide a total output power of 126 maximum usable watts.

The power supplies plug directly into the enclosure from the rear of the unit, mating to internal blind connectors that connect both the DC outputs and the interface signals to the system backplane. An integral on and off switch, input filter, and power indicator are provided in the power supply.

LEDs

The 2109 Model F16 provides several LEDs to indicate status on the switch. Each of the 16 ports has two status indicators. The first LED for the port is a two-color (green and yellow) LED, and indicates the status for the port. Green indicates normal status, and yellow indicates an error. The second LED is a single-color (green) LED and indicates the link speed for the port. Green indicates 2 Gbps. If the LED is not lit (dark), it indicates 1 Gbps.

A single-color (green) LED is located at the front of the unit and indicates system power-on status. On the back of the unit, there is a two-color (green and yellow) LED driven by an I2C I/O expander that indicates the mode of the unit. (Green indicates normal mode, and yellow indicates diagnostic mode.) All LEDs are surface mount components with an on-board light pipe and are visible externally with the full chassis enclosure. There are two LEDs for the Ethernet port. One LED indicates transmit and receive activity, and one LED indicates speed (10 Mbps or 100 Mbps).

3.6.3 Software specifications

The 2109 Model F16 switch is supported by the Fabric OS Version 3.0. The Fabric OS is implemented in firmware and manages the operation of the 2109 Model F16 switch. The switch firmware is designed to make a 2109 Model F16 easy to install and use while retaining the flexibility needed to accommodate user requirements. A fabric constructed with cascaded 2109 Model F16 switches automatically assigns individual switch addresses, establishes frame routes, configures the internal name server, and so on.

Users can access internal management functions using standard host-based Simple Network Management Protocol (SNMP) software or Web browsers. They can access these functions using network connectivity through the Ethernet port or using IP over the Fibre Channel ports. SCSI Enclosure Services (SES) is also supported as a management method. The management functions of the switch allow a user to monitor frame throughput, error statistics, fabric topology, fans, cooling, media type, port status, IDs, and other information to aid in system debugging and performance analysis.

The Fabric OS includes all basic switch and fabric support software and optionally licensed software that is enabled using license keys. The fabric license is pre-installed on the 2109 Model F16 switch to ensure fabric operation. The Fabric OS is composed of two major software components:

- Firmware that initializes and manages the switch hardware
- Diagnostics that perform component self-testing algorithms for fault isolation during the manufacturing process and in customer installations

The internal firmware can be viewed as a set of embedded applications that run on top of a proprietary real-time operating system. Additionally, host-based software includes the drivers, utilities, and applications that use the switch. You can obtain these components from your system vendor or Fibre Channel component supplier.

2109 Model F16 software

The 2109 Model F16 software consists of a set of embedded applications that run on top of a real-time operating system kernel. The set of applications includes:

- Name server
- Alias server
- SNMP agent

The set of applications also includes several tasks to manage:

- Address assignment
- Routing
- Link initialization
- Fabric initialization
- Link shutdown
- Switch shutdown
- Frame filtering
- Performance monitoring
- Trunking
- Auto speed negotiation
- The user interface

All embedded applications are written in C, except for the SNMP agent (included with the real-time operating system package) and the Web server.

Applications

The 2109 Model F16 software applications implement a variety of functions. Switch applications exist to provide fabric services, such as name server and alias server functionality, to external devices. These particular applications process requests from fabric-attached external devices, and communicate with similar applications running on other switches within the fabric to obtain fabric-wide information to satisfy these requests. The applications present an interface to these standards-based services that provides access to information throughout the fabric while hiding the details about how the information is distributed across switches within the fabric from the external devices.

Other applications running in a switch implement functions used to manage internal fabric operation. One task allows for automatic address assignment throughout a fabric through a distributed algorithm run by participating switches. Another task, used to set up routes within the fabric, communicates with tasks that are running on other switches in the fabric to set up the lowest cost, loop-free routes.

The 2109 Model F16 switch provides an extensive set of diagnostics. You can use a number of comprehensive low-level diagnostics to detect failing switch hardware components by performing hardware-specific tests. In general, you must run these diagnostics when the switch is offline. However, you can use an additional set of high-level diagnostics to exercise individual ports, passing data through external media interfaces and cables. These allow various media, cable, and port faults to be detected while normal switch operation continues on other ports.

New features

The 2109 Model F16 software includes some new features and functionality. The Fabric OS enables the 2109 Model F16 to support the new functionality described in the following sections.

Auto-sensing speed negotiation

The 2109 Model F16 ASIC supports link operation at either 2 Gbps or 1 Gbps. Auto-sensing negotiation allows easy configuration. When a device is connected, the link speed is negotiated to the highest speed that the device supports. Speed selection is auto-negotiated by the ASIC driver on a per-port basis. After the speed is determined, the transmitter and receiver for the port are automatically set. If multiple devices are connected to a port (for example, on an FL_port), the driver auto-negotiates for the highest common speed and sets the transmitter and receiver accordingly.
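Conceptually, each port settles on the highest speed that both ends of the link support. The toy Python function below illustrates that rule only; the real negotiation happens in the ASIC driver, not in host software.

    # Toy illustration of "highest common speed" selection (Gbps values).
    def negotiate(port_speeds, device_speeds):
        """Return the highest speed supported by both ends, or None."""
        common = set(port_speeds) & set(device_speeds)
        return max(common) if common else None

    print(negotiate({1, 2}, {1, 2}))   # 2 -> link runs at 2 Gbps
    print(negotiate({1, 2}, {1}))      # 1 -> falls back to 1 Gbps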

Frame filtering

Zoning is a fabric management service that can be used to create logical subsets of devices within a SAN and enable partitioning of resources for management and access control purposes. Frame filtering is a new feature of the 2109 Model F16 ASIC that enables it to provide zoning functions with finer granularity. Frame filtering can be used to set up port-level zoning, worldwide name zoning, device-level zoning, protocol-level zoning, and LUN-level zoning. After the filter is set up, the complicated function of zoning and filtering can be achieved at wire speed.
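For a sense of how this kind of zoning is typically driven from a management station, the sketch below scripts a Telnet session that defines a WWN zone and enables a configuration. The switch address, credentials, WWNs, and the zoneCreate/cfgCreate/cfgSave/cfgEnable command spellings are assumptions drawn from commonly documented Fabric OS usage of this era, not from this book; check them against the switch's own command reference before relying on them.

    # Hedged sketch: zone a host HBA and a tape drive by WWN over Telnet.
    # The address, login, WWNs, and command names are illustrative assumptions.
    import telnetlib
    import time

    SWITCH = "192.168.1.10"

    tn = telnetlib.Telnet(SWITCH)
    tn.read_until(b"login: ")
    tn.write(b"admin\n")
    tn.read_until(b"Password: ")
    tn.write(b"password\n")

    for cmd in (
        b'zoneCreate "tape_zone", "10:00:00:00:c9:12:34:56; 50:05:07:63:00:aa:bb:cc"',
        b'cfgCreate "tape_cfg", "tape_zone"',
        b'cfgSave',
        b'cfgEnable "tape_cfg"',
    ):
        tn.write(cmd + b"\n")
        time.sleep(1)                  # give the switch time to respond

    print(tn.read_very_eager().decode(errors="replace"))
    tn.close()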


Performance monitoring

Performance monitoring is a licensed feature that provides error and performance information to manage your storage environment. There are three types of monitoring:

- Arbitrated Loop Physical Address (AL_PA) monitoring: This provides information regarding the number of CRC errors.
- End-to-end monitoring: This provides information regarding a configured source identifier (SID) to destination identifier (DID) pair. Information includes the number of CRC errors for frames with the SID-DID pair, Fibre Channel words transmitted from the port for the SID-DID pair, and Fibre Channel words received for the port for the SID-DID pair.
- Filter-based monitoring: This provides error information with a customer-determined threshold.

Trunking

Trunking is a new feature on the 2109 Model F16 switch that enables traffic to be distributed across available ISLs while preserving in-order delivery. On some Fibre Channel protocol devices, frame traffic between a source device and destination device must be delivered in order within an exchange. This restriction forces current devices to fix a routing path within a fabric. Consequently, certain traffic patterns in a fabric can cause all active routes to be allocated to a single available path and leave other paths unused.

The 2109 Model F16 ASIC creates a trunking group (a set of available paths linking two adjacent switches). Ports in the trunking group are called trunking ports. One trunking port is designated as the trunking master port and is used to set up all routing paths for the entire trunking group. The trunk provides an 8 Gbps single-aggregate ISL pipe between switches.

Real-time operating system

The 2109 Model F16 real-time operating system consists of a hardware-independent layer and a hardware-dependent section. The hardware-independent portion of the operating system consists of a third-party real-time kernel plus a number of interfaces. The interfaces provide a structure for handling various layers in the Fibre Channel protocol hierarchy. In this collection of modules, the FC-PH layer provides FC-2 functionality, supporting reassembly of inbound frames into sequences. This layer also allows for creation of a set of frames to transmit from an internal Fibre Channel sequence description. The FC-LS layer handles various sorts of Fibre Channel link services, including basic link services and extended link services. Operations using the Fibre Channel common transport interface, as defined in the FC-GS specification, use the interface provided by FC-CT code in the 2109 Model F16. Switch-to-switch communications used to manage fabric initialization and routing use the services provided by the FC-SW layer to implement these functions.

Hardware-dependent functions of the real-time operating system contain several elements, including the Board Support package. This code is used to provide an interface between VxWorks and the 2109 Model F16-specific hardware related to supporting the 80960VH processor. Drivers for specific hardware interfaces are also considered part of the hardware-dependent portion of the real-time operating system. Several drivers support interface hardware that is used for fabric management purposes, such as the Ethernet port and serial port. Other drivers are used for miscellaneous internal functions, including temperature monitoring and power supply control.

Additional drivers, written for the Fibre Channel interfaces of the switch, are managed through two layers. The port driver creates a generic interface to the underlying switch hardware, and provides functions common to all switch implementations. Reporting to the port driver are the switch-hardware-specific drivers, which handle the operations of individual types of switch ASICs. Three of these drivers, for the stitch, flannel, and loom chips, are used for first- and second-generation IBM hardware. A fourth module implements the functionality required to drive the Bloom ASIC, which is used in the 2109 Model F16 switch.

Initialization

When the system is started or restarted, the following operations are performed:

1. Early power-on self test (POST) diagnostics are run. POST is run before VxWorks runs.
2. VxWorks is initialized.
3. The hardware is initialized. The system is reset, the internal addresses are assigned to loom chips, the Ethernet port is initialized, the serial port is initialized, and the front panel is initialized.
4. A full POST is run.
5. The links are initialized. Receiver and transmitter negotiation is run to bring the connected ports online.
6. A fabric exploration is run. This determines whether any ports are connected to other switches. If so, it determines the principal switch.
7. Addresses are assigned. After the principal switch is identified, port addresses are assigned. Each 2109 Model F16 tries to keep the same addresses that it used previously. Previous addresses are stored in the configuration Flash memory.
8. The routing table is constructed. After the addresses are assigned, the unicast routing tables are constructed.
9. Normal Nx_port operation is enabled.

Routing

The embedded processor maintains two routing tables, one for unicast and one for multicast. The unicast routing tables are constructed during fabric initialization. The multicast tables are initially empty, except for broadcast. After the tables are constructed, they are loaded into each ASIC. The unicast tables change if ports or links come online or go offline, or if some other topology change occurs. When new paths become available, the embedded processor can change some routes to share the traffic load. The multicast tables change as ports register with the alias server to create, join, or leave a multicast group. Each time a table changes, it must be reloaded into the ASICs.

Service functions

The ASIC interrupts the embedded processor when a frame arrives that has an error (for example, an incorrect source ID), when a frame times out, or when a frame arrives for a destination that is not in its routing tables. In the latter case, the frame may be addressed to an illegal destination ID, or it may be addressed to one of the service functions that are provided by the embedded processor, such as SNMP, the name server, or the alias server.


Simple Network Management Protocol

SNMP allows network devices to be monitored, controlled, and configured remotely from a network management station running a network manager program. SNMP agent code in the network device allows management by transferring data that is specified by a Management Information Base (MIB). The 2109 Model F16 switch agent supports:

- SNMPv1 manager
- Command line utilities to provide access to and command the agent
- MIB-II system group, interface group, and SNMP group
- Fabric-element MIB
- IBM-specific MIBs
- Standard generic traps
- IBM-specific traps
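A simple way to exercise the agent from a management station is to poll a couple of MIB-II objects with the net-snmp command-line tools. The Python sketch below shells out to snmpget for sysDescr and sysUpTime; the switch address and community string are placeholders, not values taken from this book.

    # Minimal SNMP poll of MIB-II objects using the net-snmp snmpget utility.
    # The switch IP address and community string are illustrative placeholders.
    import subprocess

    SWITCH = "192.168.1.10"
    COMMUNITY = "public"   # assumption: use the community configured on the switch

    for oid in ("1.3.6.1.2.1.1.1.0",    # sysDescr.0  (MIB-II system group)
                "1.3.6.1.2.1.1.3.0"):   # sysUpTime.0
        result = subprocess.run(
            ["snmpget", "-v1", "-c", COMMUNITY, SWITCH, oid],
            capture_output=True, text=True, check=False,
        )
        print(result.stdout.strip() or result.stderr.strip())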

Diagnostics

The 2109 Model F16 switch supports a set of POSTs, as well as tests that can be invoked using Telnet commands. These diagnostics are used during the manufacturing process and for fault isolation of the product in customer installations.

Diagnostic environment

Most diagnostics are written to run in the VxWorks environment. However, because VxWorks does not run without a working SDRAM, an SDRAM/Boot EEPROM test is run as part of the pre-VxWorks startup code to verify that the basic processor-connected memories are functioning properly.

Hardware support

Loop-back paths for frame traffic are provided in the hardware for diagnostic purposes. A loop-back path within the ASIC, at the final stages of the Fibre Channel interface, can verify that the internal Fibre Channel port logic is functioning properly, as well as paths between the interface and the central memory. In addition, the SerialLink macro within the ASIC includes a serial data loop-back function that can be enabled through a register in the corresponding ASIC. Diagnostics are provided to allow traffic to be circulated between two switch ports that are connected with an external cable. This allows the diagnostics to verify the integrity of the final stage of the SERDES interface, as well as the media interface module.

Diagnostic coverage

The POST and diagnostic commands concentrate on the Fibre Channel ports and verify switch functionality of the 2109 Model F16 switch.

3.6.4 Interoperability

The following sections discuss:

- Switch interoperability
- HBA interoperability
- Operating system support

Switch interoperability

The 2109 Model F16 switch supports both 1 Gbps and 2 Gbps transmit and receive rates with auto-negotiation. The actual data signaling rate that is used on a port is automatically sensed, and is set to the rate that is supported by the device or devices that are attached to the port. The 2109 Model F16 has been tested and is compliant with the current FC standards. The 2109 Model F16 is compatible with the N_ports, NL_ports, and E_ports of most current-generation switches, as well as host adapters, Redundant Array of Independent Disks (RAID) storage devices, hubs, and Fibre-SCSI bridge devices, including the 3534 and 2109 series of switches.

Implementation in existing environments

Because the 2109 Model F16 switch has a compatible 1 Gbps auto-negotiated signaling rate on each port, it can be used as a replacement for current 3534 and 2109 series switches. As newer technology is added to existing systems that support 2 Gbps signaling, the ports can accept these devices and interoperate with existing 1 Gbps devices. If the 2109 Model F16 is connected to a third-party device but is unable to negotiate the signaling rate, the 2109 Model F16 allows you to manually set the speed of each port through the management interfaces.

Heterogeneous inter-switch operations

Fabric OS 3.0 supports interoperability for the following functions:

- Basic switch functions
  - Link initialization
  - Principal switch selection
  - Routing (FSPF)
- Basic services
  - Simple name service
  - State change notification
  - Worldwide Name (WWN) zoning (typically referred to as soft zoning or name server zoning)

The following facilities are switch-based facilities and continue to function on any 2109 switch:

- SNMP facilities
- Simple QuickLoops with no zoning
- Translative mode (private target support on fabrics)
- Trunking (only functions between two IBM switches)
- Enhanced performance metrics

The following facilities are IBM value-added facilities that are not supported in a multivendor fabric. Use of these facilities causes the fabric to segment.

- QuickLoop zones
- QuickLoop Fabric assist mode
- Port, protocol, or LUN zoning

IBM is not aware of any areas of non-compliance with any ratified standards at this time.

Host bus adapter interoperability

The 2109 Model F16 has been tested with the following host bus adapters (HBAs) from the following vendors:

- Emulex
  - LP6000
  - LP7000
  - LP8000
  - LP850
  - LP952
  - LP9000
- QLogic
  - QLA2100
  - QLA2200
- JNI
  - FC64-1063
  - FCI-1063
  - FCE-6410
  - FCE-6460
- Agilent
  - HHBA-5100
  - HHBA-5101

Operating system support

Fabric OS Versions 2.x and 3.x have no specific operating system dependencies. The Fabric OS in the switches allows any Fibre Channel compliant device to attach to the switches as long as it conforms to the standards for device login, name service, and related Fibre Channel features. Regardless of the operating environment, proper interface to the fabric requires a Fibre Channel HBA with a standards-compliant driver. The operating system versions listed in Table 3-1 were tested (using HBA devices and drivers supplied by QLogic, Emulex, JNI, and Agilent) for interoperability.

Table 3-1 Compatible operating systems

Operating system      Version
AIX®                  4.3.3
Windows NT            4.0
Windows 2000          Initial release
Solaris               2.5, 2.5.1, 2.6, 2.8, 7
HP-UX                 10.0, 11.0
Linux RedHat          6.2 and 7.0
PTX®
Novell NetWare        NetWare 5.2

3.7 IBM TotalStorage SAN Switch M12

The IBM TotalStorage SAN Switch M12 (2109-M12) (see Figure 3-6) provides a highly reliable solution for deploying enterprise-class SANs. By delivering up to 128 ports of connectivity in a single enclosure, the M12 provides unprecedented levels of availability, scalability, manageability, and security for open enterprise storage applications.

With the introduction of the M12, which is an OEM version of the Brocade SilkWorm 12000, IBM continues to extend its IBM TotalStorage SAN connectivity solutions. This high performance, 2 Gbps Core Fabric Switch is designed to provide the high availability, scalability, manageability, and security features to meet your open systems' requirements. The M12 will be available in configurations of a 32-port switch, a 64-port switch, or two 64-port switches in a single, 14U rack mountable enclosure.


Based upon the same next-generation switching technology used in the IBM TotalStorage SAN Switch F16 and F08, this switch supports 1 Gbps and 2 Gbps auto-sensing ports as well as advanced fabric services that can simplify the design, administration and management of enterprise SANs. It is designed to provide investment protection to existing customers by being fully backwards-compatible with existing SAN Switches S08, S16, F08, and F16. High availability features include a fully redundant design and hot-swappable components. IBM is the first storage provider to offer end-to-end 2 Gbps solutions designed to exploit next-generation switching technology. Because these end-to-end solutions are up to twice as powerful as previous solutions, they can help reduce the total cost of ownership, simplify SAN management and enable more scalable, larger enterprise SANs.

Figure 3-6 IBM TotalStorage SAN Switch M12

3.7.1 M12 description

Availability, scalability, and performance are the key attributes required by today's open systems customers. The M12 Switch is designed with high availability features such as:

- Redundant, hot-pluggable components
- Dual-redundant control processors (active/standby)
- Redundant power (four power supplies, two redundant)
- Redundant cooling (three fans, one redundant)
- Automatic path rerouting
- Non-disruptive software upgrades

The IBM TotalStorage SAN Switch M12 is designed to support up to eight 16-port Fibre Channel modules (blades), enabling 128 universal (E, F, and FL), full duplex, auto-sensing ports in a single 14U enclosure, each port capable of self-negotiation to the highest speed supported by the attached SAN infrastructure. The M12 is available in configurations of a 32-port switch, a 64-port switch, or two 64-port switches in a single, 14U rack mountable enclosure. When combined with IBM TotalStorage SAN Switch F16s, S16s, and S08s, it is designed to provide the highly scalable core/edge fabric required by our largest enterprise storage customers.

The M12 Switch is designed for high performance, with full duplex, 1 Gbps throughput on each Fibre Channel port. Distances up to 10 km are supported using longwave laser transceivers with 9.0 micron fiber cables. Shortwave laser transceivers support distances up to 500 m at 1 Gbps and up to 300 m at 2 Gbps with 50.0 micron fiber cables.

The standard features of the M12 Switch include:

- Rack-mount chassis
- Four power supplies (two redundant) with four rack PDU power cables
- Three fans (one redundant)
- Two control processors (active/standby, with automatic failover)
- Two 16-port 2 Gbps switch blades that provide a single 32-port switch
- Performance Monitoring tools for measuring end-to-end activities
- ISL Trunking with up to four links, and up to 8 Gbps bandwidth
- Advanced fabric services provided by Fabric OS Version 4.0, IBM SAN Switch Specialist, Advanced Zoning, and Fabric Watch

The M12 has several selectable options. The Core Fabric Switch includes two 16-port switch blades with the option of either shortwave or longwave SFP optical transceivers and space for up to six additional 16-port switch blades. A mixture of shortwave and longwave SFP optical transceivers may be ordered with a minimum of 32 and either 64 or 128 SFP transceivers to completely populate all switch blades. The M12 Switch requires fiber optic cables for connection to the host systems, and storage systems or devices. These cables can be customer supplied, or ordered with the switch.

Additional features available for the M12 Switch include:

- Shortwave SFP transceiver: Provides shortwave optical transceiver for SFP LC media.
- Longwave SFP transceiver: Provides longwave optical transceiver for SFP LC media.
- 64-port Upgrade: Provides two additional switch blades to create a single 64-port switch.
- 128-Port Upgrade: Provides six additional switch blades to create two 64-port switches.
- Fabric Manager 3.0: Provides a Java™-based application that can simplify management of a multiple switch fabric. IBM SAN Switch Specialist and Fabric Manager run on the same management server attached to any switch in the core/edge fabric. It may also manage up to eight fabrics. It requires a Windows NT/2K or Solaris 7 server with a Netscape or Internet Explorer Web browser.
- Extended Fabric Activation: Provides license key to optimize management of the internal switch buffers to maintain performance on interswitch links at distances greater than 10 kilometers, and up to 70 kilometers using selected fiber cable extension mechanisms.
- Remote Switch Activation: Provides a license key to enable the interconnection of two SAN Switches with a pair of CNT Open System Gateways across an asynchronous transfer mode (ATM) wide area network (WAN).

Note: QuickLoop support is not provided with the Core Fabric Switch. IBM TotalStorage SAN Switches with QuickLoop capability may be used to attach private loop devices in a core or edge fabric.

3.7.2 M12 connectivity

The M12 supports Fibre Channel connectivity for:

- Servers:
  - pSeries and selected RS/6000 servers
  - xSeries and selected Netfinity servers
  - Other Intel-based servers with Microsoft Windows NT and Windows 2000
  - Selected Sun and HP servers

  Important: zSeries and S/390 G5/G6 servers with FICON channels are not supported.

- Storage systems:
  - IBM TotalStorage Enterprise Storage Server (ESS)
  - IBM FAStT Family of Storage Servers
  - IBM TotalStorage Enterprise Tape System 3590 and IBM TotalStorage Enterprise Tape Library 3494
  - IBM 3583 Ultrium Tape Library and IBM 3584 UltraScalable Tape Library
  - IBM 3590-A60 FICON and ESS FICON devices are NOT supported
- SAN Switches:
  - IBM SAN Switch F16 and F08 (Firmware Version 3.0)
  - IBM SAN Switch S16, S08, and SAN Managed Hub (Firmware Version 2.6)

For specific availability dates, configuration options, server models, operating systems levels, and attachment capabilities, consult the Web at:

http://www.ibm.com/storage/FCSwitch

3.7.3 Intelligence within the M12

To improve security and manageability, advanced Brocade Frame Filtering intelligence is built directly into the M12 ASIC technology. This design enables new capabilities such as fabric zoning based on LUN, WWN, or protocol. Administrators can improve end-to-end performance analysis by measuring resource utilization on a fabric-wide basis. They can also track port traffic levels based on source and destination IDs. This simplifies the reporting of, and adherence to, service level agreements.

3.7.4 Open SAN management

The M12 simplifies management by networking core and edge switches under the Fabric OS, the embedded real-time operating system. This enables heterogeneous device connectivity, automatic data routing and rerouting, self-healing capabilities, and scalable connectivity. The Fabric Access layer (the Fabric OS API) provides critical functions for integrating applications within the SAN environment. The API enables software vendors to develop feature-rich management applications that leverage the distributed intelligence in IBM SANs.


3.7.5 Seamless upgrades and investment protection

To help protect existing investments, the M12 provides a seamless upgrade path and backward and forward compatibility with IBM TotalStorage switch, midrange, and port aggregation offerings. As SAN technologies evolve, the M12 multiprotocol architecture is designed to integrate with emerging storage networking protocols such as iSCSI, FC-IP, and InfiniBand. The current design is extendable to future 10 Gbps technologies with a switch module upgrade rather than a forklift upgrade of the chassis.

3.8 INRANGE FC/9000 Fibre Channel Director

The INRANGE FC/9000 Fibre Channel Director (2042-001) is the core product of an IBM and INRANGE Technologies reseller agreement. This agreement adds the INRANGE FC/9000 Fibre Channel Director to the growing list of IBM enterprise-class SAN fabric offerings. To help provide high data availability across the SAN, IBM now offers the INRANGE FC/9000 Fibre Channel Director, which provides the scalability required by rapidly growing e-business and other mission-critical applications.

The Director design is based upon S/390 FICON server requirements for a large number of ports in a single Director (256 ports in a single footprint) and upgradability to larger configurations in the future. The Director is also designed to provide the high levels of availability, performance, and integrity required by today's most demanding data centers. The Director features N+1 redundancy at all critical points of design, automatic internal failover, extensive hot-swapping, nondisruptive firmware updates, and automatic fault detection and isolation. In addition, call-home and pager capabilities can automatically alert support and maintenance personnel to accelerate problem resolution. The High-Availability Option provides redundancy for all electronic and power modules. This helps to enable continuous data access and high performance in the event of a single component failure. Together, these capabilities are designed to help provide uninterrupted full-bandwidth service without the loss of data access during periods of failure, repair, maintenance, and capacity upgrades.

Multiple Directors can provide a scalable enterprise SAN backbone that supports consolidated storage management applications such as disk sharing, tape pooling, and enterprise-wide data sharing. The following sections show some of the major components, both hardware and software, that warrant its inclusion in the IBM portfolio.

3.8.1 INRANGE Director product description

The INRANGE Director can currently be configured from its base of 24 ports, in 8-port increments, up to 64 ports in a single cabinet. All ports are interconnected to provide full non-blocking performance. Each port provides 100 MB/s of full-duplex bandwidth, enabling industry-leading transmission with a latency of 0.6 to 3 microseconds. Figure 3-7 shows the INRANGE Director.


Figure 3-7 INRANGE FC9000 Fibre Channel Director

3.8.2 Supported attachments

The INRANGE Director provides excellent flexibility by supporting the following types of attachment:

- FICON
- FCP
- FC-IP
- Private and Public Arbitrated Loop (including public-private translation)
- Cascaded Directors
- Interoperability port for connection of other vendors' switched fabrics

3.8.3 Supported port types

The INRANGE Director supports a comprehensive range of port types to allow for a vast range of connection options. The supported port types include:

- F_Port (fabric)
- FL_Port (public loop)
- E_Port (ISL port, more commonly known as an E_Port)
- T_Port (not an ISL port but a switch interoperability port)
- TL_Port (private to public bridging)
- SL_Port (segmented private loop)

All ports, with the exception of SL and TL, are self-discovering. INRANGE Directors automatically sense the attributes of individual end-nodes, configuring themselves in any combination of loop, fabric, or switch-to-switch ports as needed. Manual adjustments are not necessary at the time of installation or as the fabric evolves, because the 2042 adapts to change dynamically while the network is still up and running.

3.8.4 Availability

The INRANGE Director provides excellent availability with fully redundant components supporting automatic failover, automatic fault detection and isolation, in addition to call home and pager support to enable rapid problem resolution. Other features include:

- Redundant internal pathing
- Redundant power
- Redundant control
- Non-disruptive SW/FW upgrades
- Passive backplane
- Hot swapping for all field replaceable unit (FRU) components

3.8.5 Scalable capacity

To meet the demands of a growing enterprise, the INRANGE Director provides one of the most flexible capacity solutions in the market today. With an entry level of 24 ports, the Fibre Channel Switch can currently be scaled, in increments of 8 ports, up to 64 ports. Additionally, INRANGE recently announced the capability to expand the 64-port switch up to a 128-port switch within the same footprint, and has pre-announced a 256-port switch. IN-VSN Enterprise Manager software provides the interface into one or multiple Fibre Channel Switches and can support up to 16 IN-VSN clients for remote management.

The sections that follow describe the main components of the INRANGE Director and show their physical location. Figure 3-10 shows the components.

The Fibre Channel I/O card

The INRANGE Director uses Fibre Channel I/O (FIO) cards to provide the physical connection between the INRANGE Director and the external devices being connected. These cards are commonly referred to as blades. Each FIO blade has 8 ports that terminate at INRANGE-certified GBIC-compliant devices, including copper, multimode fiber optics, single-mode fiber optics, and FICON. If required, 1 x 9 connection modules can be installed using an RPQ. The INRANGE Director has a maximum port count of 64, and currently the minimum IBM configuration requires three FIO blades (24 ports). When one or more of the FIO blades is not required, an FIO blank plate must be installed.

Shortwave (multimode) GBICs, color coded with a beige or black exposed surface, and longwave (single-mode) GBICs, color coded with a blue exposed surface, are supported. Each GBIC consists of a transmitter and receiver optical subassembly. Both the shortwave and longwave discrete laser diodes are classified as Class 3B laser products. Supported interface converters (for example, FICON and GBICs) can be installed in the FIOs in any combination. Currently, IBM configurations require longwave and shortwave GBICs to be ordered in increments of eight. All ports are self-configuring and have a full 64 buffer credit set, allowing, if required, all 64 ports to be used for longwave transmission.

The FIO module has two redundant paths through the backplane to the redundant Fibre Channel Switch Module (FSW). The FSW is described in "Fibre Channel Switch Module" below. The base FIO module logic manages the synchronized switch over to the spare module. Each backplane base FIO port has a redundant I/O which is routed to a cross point chip, which has two input ports and two output ports that are routed to a spare FSW module. The corresponding chip is then configured to bypass a failed FSW module and switches the connectivity through the spare FSW module.

Three types of memory are used on the FIO modules:

- Non-volatile, read-only memory: Used to store hardware configuration, boot code, and maintenance interface code
- Non-volatile, block rewritable memory: Used to store firmware operation code or user configurable port settings
- Volatile high-speed memory: Contains a full copy of firmware, operation code, operating parameters, and data packets for routing or special handling

Figure 3-8 shows the FIO module.

Figure 3-8 Eight port FIO module

XCAF FIO module

The Extended Credit and Addressing Facility (XCAF) FIO module provides support for distances up to 100 km. The XCAF blade has the same physical dimensions and GBIC support criteria as the FIO blade. If the INRANGE Director is configured as an XCAF-only device, it is not possible to intermix FIO and XCAF blades. However, if the INRANGE Director is configured as a base FIO system, FIO and XCAF blades can be intermixed. When running at extended distances, there is no requirement to alter the default resource allocation time-out (RA_TOV) values.

Fibre Channel Switch Module

The Fibre Channel Switch Module provides the middle, or cross-connection, architecture of the Director. There are four active FSWs and one hot-spare FSW that provide the physical and logical links between the FIO blades. The hot-spare FSW is only provided when the high availability option (feature code 5020) is selected. FSW modules are hot-swappable modules.


Fibre Channel Control Module (FCM)

The remaining module, the FCM, provides the command and control interface for the system. It enables the control management software, IN-VSN, to configure, modify, and test the INRANGE Director. See Figure 3-9.

In addition to the three types of memory in the FSW modules, the FCM module also has Non-Volatile Random Access Memory (NVRAM). This is used to store persistent system configuration and status information. Error log information is also stored in the NVRAM. For customer engineer access to the diagnostic log and maintenance panels, there is an RS232 port. For high availability, it is possible to have a redundant hot standby FCM module. The FCM hot spare is only provided when the high availability option (feature code 5020) is selected. The FCM modules are hot-swappable units.

Figure 3-9 FCM module

Figure 3-10 shows the slot layout for each of the modules described. The major differences between an FIO blade and an FSW module are that FIOs have external ports and a Serializer/Deserializer component; FSW modules have neither. Located at the bottom of each FIO blade and FSW module are four LED indicators that display the status of the component. These include:

- Over temperature
- Heartbeat
- PWR OK
- F LEDs


Figure 3-10 Location of modules (FCM module, LED display panel, FIO modules, FSW modules, power modules, and fan module)

Power and fan assembly

Two hot-swappable power supply assemblies provide the INRANGE Director with full redundancy. Each power supply has an LED display indicating that the power supply and DC voltage are functioning correctly. The electrical specifications of the power supply assembly are:

- Input voltage is 220 VAC nominal; the VAC input range is from 180 VAC to 264 VAC.
- Input frequency is 50/60 Hz nominal; the frequency range is 47 Hz to 63 Hz.
- Output is 48 VDC nominal, plus or minus 5%.

There are four fan modules that either push or draw air to cool the INRANGE Director. All fans can be replaced independently. In the event of a single fan failure, the remaining fans automatically adjust their speed to compensate for the failing component.

Backplane module

The INRANGE Director has a passive backplane that provides the connectivity for all modules. This backplane has the capability to be extended from a 64-port model to a 256-port model. A backplane upgrade is a disruptive upgrade. You can avoid this disruption by configuring FIO blades evenly on either side of the FSW blades and adding new FIO blades accordingly. The backplane module also provides the connectivity capability for connecting multiple INRANGE Directors together.


The backplane module is a passive component, and therefore, has no moving components. If physically damaged, it is not a FRU, so the entire INRANGE Director needs to be replaced. The physical replacement of a chassis takes one to two hours, depending on the number of ports.

ASIC

INRANGE 2042 Directors feature a flexible 5th-generation chip capable of operating dynamically in multiple modes (F, FL, TL, and E_Port) or a mixture of modes. There is no requirement to swap or reconfigure ASICs if you need to alter the use of ports in your SAN fabric.

Upgrade Path/Extensible Core Architecture

Upgrade Path/Extensible Core Architecture (XCA) is the architecture that allows a single chassis to expand. Upgrades within a given chassis are accomplished by insertion of port cards (each supporting eight Fibre Channel ports). These port cards can be inserted while the 2042 is in operation, without causing any disruption to connectivity of ports already in operation. These port cards are off-the-shelf items with published list prices and availability. The upgrade from 64-port to 256-port systems requires additional inter-chassis connections, as well as upgrades to control software, in addition to the second chassis. The XCA architecture allows the Director to be viewed as a single fabric, eliminating the requirement to have any principal and subordinate relationships within the Director.

Cabinet (IBM Model 2042-C40)
All INRANGE Directors must be configured within an associated cabinet. A single 40U cabinet can contain up to two INRANGE Directors and the IN-VSN Enterprise Management Server. When ordering a cabinet, it is important to specify whether it will be field or plant installed. To provide additional physical security, the cabinet has a locking door. It is good practice to keep the cabinet locked and make the key available only to authorized staff.

Management software
IN-VSN Enterprise Manager
INRANGE Virtual Storage Network Enterprise Manager (IN-VSN) is browser-based software used to manage and control one or more INRANGE Directors. The IN-VSN suite consists of two components:

- Server software
- Client software

The server communicates with the INRANGE Director, while the IN-VSN client communicates with the IN-VSN server. All user interaction takes place through the client software. IN-VSN management software capabilities include:

- Defining module and port configurations
- Defining zoning parameters
- Monitoring alarms and system performance
- Invoking system diagnostics


3.9 McDATA ES-1000 Loop Switch The McDATA ES-1000 Loop Switch (2031-L00) provides an ideal way to consolidate workgroup servers and storage into a seamless, well-managed workgroup or mini SAN. With the ES-1000 switch fabric port, it is possible to create an enterprise storage solution by centralizing data into a single enterprise SAN based on your highly available Director-based backbone. In this way, the enterprise storage network is centrally managed from the data center, dramatically reducing management costs. Data is accessible enterprise-wide, and is backed up to central tape libraries, improving data protection. Data center experience and methods now extend from the core to the edge.

3.9.1 Product description The ES-1000 switch acts as a loop switching hub and a fabric attached switch. The switch provides connectivity between attached Fibre Channel Arbitrated Loop (FC-AL) devices and a Fabric. This loop function connects workgroup devices into a miniature SAN. Figure 3-11 shows a picture of the ES-1000.

Figure 3-11 McDATA ES-1000 Loop Switch

The switch also incorporates a bridging function that provides dynamic connectivity between FC-AL devices and McDATA Directors participating in a switched fabric. This bridging function allows low-cost or low-bandwidth workgroup (edge) devices to communicate with fabric devices (mainframe servers, mass storage devices, or other peripherals), and ultimately be incorporated into an enterprise SAN environment. The ES-1000 switch is intended to:

- Implement stand-alone SANs at the departmental and workgroup level. These SANs provide scalability to meet nondisruptive growth requirements, and provide future connectivity to the enterprise SAN.
- Consolidate departmental and workgroup servers to allow centralization of associated storage resources and server communication with the enterprise SAN. These servers are typically low-cost, low-bandwidth devices using the Windows NT or UNIX operating systems.
- Consolidate tape storage devices. Consolidating workgroup storage and connecting that storage to the enterprise SAN provides better storage resource utilization, increased data protection, and improved data access.

Connectivity
The switch provides device connectivity through eight hub ports (H_Ports) that attach to device node loop ports. The H_Ports allow for eight FC-AL ports on the switch. Through the use of cascaded unmanaged hubs, up to 125 FC-AL devices (including hubs) can attach to the switch. The FC-AL standard provides for 127 arbitrated loop physical addresses (AL_PAs). A user-transparent fabric loop port (FL_Port) and a node loop port (NL_Port) are embedded on the switch's control processor (CTP) card, each with an AL_PA assigned, leaving 125 for device attachment.

H_Port connectivity is provided through a pluggable fiber optic Gigabit Interface Converter (GBIC) with a shortwave laser transceiver, or through a pluggable copper GBIC. The GBICs are plugged in at the front of the switch and are standard size. Fiber cables are connected to the GBICs using duplex SC connectors; copper cables are connected to active copper GBICs with 9-pin DB-9 or 20-pin HSSDC connectors.

The switch also provides connectivity to a switched fabric through a bridge port (B_Port) that attaches to an expansion port (E_Port) of an ED-5000 Director (but not to another ES-1000 switch), and through the user-transparent FL_Port. This bridge connection forms an interswitch link (ISL) through which a fabric device can communicate with a public loop device attached to the switch. Bridge port connectivity is provided through a pluggable fiber-optic GBIC transceiver that can be either shortwave or longwave. It is located in the switch front panel to the left of the H_Port GBICs. The switch can be configured to operate in shared mode or switched mode.
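As a worked illustration of the loop addressing arithmetic above (127 AL_PAs defined by the standard, two consumed by the switch's embedded FL_Port and NL_Port), the following minimal Python sketch computes how many addresses remain for devices and cascaded hubs. The function name and usage are illustrative only:

TOTAL_AL_PAS = 127      # arbitrated loop physical addresses defined by the FC-AL standard
EMBEDDED_PORTS = 2      # the switch's embedded FL_Port and NL_Port each consume one AL_PA

def remaining_al_pas(attached_devices: int) -> int:
    """AL_PAs still free after attaching the given number of loop devices (hubs included)."""
    free = TOTAL_AL_PAS - EMBEDDED_PORTS - attached_devices
    if free < 0:
        raise ValueError("loop is over-populated: at most 125 devices can attach")
    return free

print(remaining_al_pas(0))     # 125 addresses available for devices and hubs
print(remaining_al_pas(125))   # 0, the loop is fully populated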

Shared mode When set to shared mode, the switch acts as a hub that implements arbitrated loop topology (although the loop has the physical appearance of a star configuration). When a loop circuit is initialized and established, arbitration protocol ensures only one device attached to an H_Port owns the loop at a time. The port establishes communication with another device attached to an H_Port (or the B_Port), and half-duplex or full-duplex operation (the default is half duplex) allows the devices to transmit or receive frames at 1.0625 Gbps. During frame transmission between these devices, the full bandwidth of the switch is used and no other H_Ports or devices are available for connection. When frame transmission completes, the loop circuit closes and other devices are able to contend for operation (using standard loop arbitration).

Switched mode
When set to switched mode, the switch bypasses full loop arbitration and enables frame transmission between logically connected device pairs. Connections can be established between H_Port pairs, or between an H_Port and the FL_Port. Switched mode also allows independent operation of looplets of devices, each connected through an unmanaged hub, and each attached to a single switch H_Port. Because of opportunistic bandwidth sharing, all looplets or connected device pairs operate half duplex or full duplex at 1.0625 Gbps. The ES-1000 switch supports connection of public or private fabric loop devices, as explained in the following sections.

Public device A loop device that can transmit a fabric login (FLOGI) command to the switch, receive acknowledgement from the switch’s login server, register with the switch’s name server, and communicate with fabric attached devices is a public device. Public devices communicate with fabric-attached devices through the switch’s B_Port connection to a Director. Public devices support normal fabric operational requirements, such as fabric busy and reject conditions, frame multiplexing, and frame delivery order.


Private device
A loop device that cannot transmit an FLOGI command to the switch or communicate with fabric-attached devices is a private device. Public and private devices are partitioned into two separate address spaces defined in the Fibre Channel address, and the switch's embedded FL_Port ensures that private address spaces are isolated from the fabric. The switch does not support any other form of Fibre Channel address conversion (spoofing) that allows private device-to-fabric device communication.

The switch is controlled by a CTP card. The CTP card initializes and configures the switch after the switch is plugged in or a power-on reset (POR) is performed, and contains the microprocessor and associated logic that coordinate switch operation. The CTP card provides Intel i960 processor and ASIC subsystems that:

- Execute the switch firmware and the underlying operating system
- Provide the embedded E_Port and FL_Port that enable communication with a switched fabric and provide fabric services to attached loop devices
- Provide nonvolatile memory for storing firmware, switch configuration information, persistent operating parameters, and memory dump files (firmware is upgraded concurrently; however, the switch resets during the upgrade, causing Fibre Channel links to momentarily drop and attached FC-AL devices to log out and log back in)
- Provide connections to Fibre Channel ports and enable frame transmission between switch ports without software intervention
- Provide connections to an RS-232 maintenance port and a 10/100 Mbps Ethernet port

The CTP card is not a FRU. If the CTP card fails and cannot be rebooted by performing a POR, you must replace the entire switch.

3.9.2 High availability features The following sections discuss the FRUs that provide high availability for the ES-1000 switch.

Power supplies The switch contains two power supplies that share the electrical operating load. If one power supply fails, the other supply handles the full load. Separate power cord receptacles at the rear of the switch provide facility input power to each supply. For full redundancy, input power for each receptacle should come from a different source.

Fan modules The switch contains six fans. If one fan fails, the switch can operate indefinitely with the remaining five fans. If two or more fans fail, they must be replaced immediately.

Ports The switch is delivered with eight H_Ports that support pluggable fiber-optic or active copper GBICs. Any unused H_Port can be used in place of a failed H_Port. To continue device operation, the cable from a failed port is reconnected to an unused operational port. GBICs can be removed, replaced, or relocated without affecting operation of remaining ports. The B_Port is unique and cannot be swapped with another port.


3.9.3 Concurrent firmware upgrades Since the CTP card provides two memory regions to store firmware, firmware can be upgraded concurrently from the EFC server. However, the switch resets during the firmware upgrade, causing Fibre Channel links to momentarily drop and any attached FC-AL devices to log out and log back in. Data frames lost during switch reset must be retransmitted.

3.9.4 Serviceability features
The ES-1000 switch, the EFC Manager application, and the ES-1000 Product Manager application provide the following serviceability features:

- LEDs on switch FRUs and adjacent to Fibre Channel ports that provide visual indicators of hardware status or malfunctions
- System alerts, event logs, audit logs, link incident logs, and hardware logs that display switch, Ethernet link, and Fibre Channel link status at the EFC server or a remote workstation
- Diagnostic software that performs power-on self-tests (POSTs) and B_Port and H_Port diagnostics (internal loopback and external loopback wrap tests)
- Automatic notification of significant system events (to support personnel or administrators) through e-mail messages or the call-home feature
- An external modem for use by support personnel to dial in to the EFC server for event notification and to perform remote diagnostics
- An RS-232 maintenance port at the rear of the switch (port access is password protected) that enables installation or service personnel to change the switch's IP address, subnet mask, and gateway address. You can also change these parameters through a Telnet session, access for which is provided through a local or remote PC with an Internet connection to the switch.
- Redundant FRUs (GBICs, power supplies, and cooling fans) that can be removed or replaced without disrupting switch or Fibre Channel link operation
- A modular design that enables quick removal and replacement of FRUs without the use of tools or equipment
- Beaconing to assist service personnel in locating a specific port or switch. When port beaconing is enabled, the amber LED associated with the port flashes. When unit beaconing is enabled, the system error indicator on the front panel flashes. Beaconing does not affect port or switch operation.
- Data collection through the Product Manager application to help isolate system problems. The data includes a memory dump file and audit, hardware, and engineering logs.
- SNMP management using the Fibre Alliance MIB that runs on the EFC server. Up to 12 authorized management workstations can be configured through the EFC Manager application to receive unsolicited SNMP trap messages. The trap messages indicate operational state changes and failure conditions.
- SNMP management using the Fibre Channel Fabric Element MIB (Version 2.0), the TCP/IP MIB-II definition (RFC 1213), or a product-specific MIB that runs on each switch. Up to 12 authorized management workstations can be configured through the Product Manager application to receive unsolicited SNMP trap messages. The trap messages indicate switch operational state changes and failure conditions.
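The trap-based monitoring described in the last two items works over standard SNMP transport. The following minimal Python sketch only illustrates that transport side by listening for unsolicited trap datagrams on the conventional UDP port 162; it is not the EFC Manager or Product Manager software, and decoding the SNMP payload itself would require an SNMP library:

import socket

# Listen for unsolicited SNMP trap datagrams on the standard trap port.
# Binding to port 162 usually requires administrator or root privileges.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))

while True:
    datagram, sender = sock.recvfrom(4096)
    print(f"Trap datagram from {sender[0]}: {len(datagram)} bytes")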


3.9.5 ES-1000 zoning
The switch supports a name server zoning feature that partitions attached devices into restricted-access groups called zones. For public loop connectivity, this feature is implemented in conjunction with zoning for a fabric Director. FC-AL and fabric-attached devices in the same zone can recognize and communicate with each other through port-to-port connections. Devices in separate zones cannot recognize name server information or communicate with each other.

Name server zoning for the arbitrated loop switch is implemented by device WWN only. This contrasts with a fabric Director, where zoning is implemented by domain ID and port number or by WWN. Zoning of switch H_Ports is not implemented because fabric Directors only recognize the arbitrated loop physical address (AL_PA) of the switch's embedded FL_Port. Directors cannot recognize H_Port AL_PAs because:

- The AL_PA assigned to each switch port number is stored on the switch's CTP card and is not accessible by the fabric.
- The AL_PA assigned to each switch port number is dynamic and can change each time the arbitrated loop initializes.

If an attempt is made to implement ES-1000 zoning by domain ID and port number (by explicitly defining such a zone or by merging zone sets), the switch segments from the attached fabric.
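To make the WWN-based zoning semantics concrete, here is a minimal Python sketch under stated assumptions: the zone names and WWNs are invented for the example, and the real zone definitions live in the fabric and the EFC Manager, not in a host-side script. Two devices can communicate only if they share membership in at least one zone:

# Hypothetical zone definitions, keyed by zone name; members are device WWNs.
zones = {
    "tape_backup": {"10:00:00:00:c9:aa:bb:01", "50:05:07:63:00:c0:11:22"},
    "disk_prod":   {"10:00:00:00:c9:aa:bb:02", "50:05:07:63:00:c0:33:44"},
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """True if both WWNs are members of at least one common zone."""
    return any(wwn_a in members and wwn_b in members for members in zones.values())

print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:05:07:63:00:c0:11:22"))  # True, same zone
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:05:07:63:00:c0:33:44"))  # False, different zones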

3.10 McDATA ES-3216 and ES-3232 fabric switches
The McDATA 2 Gbps fabric switches provide an entry point for building a highly available, extensible SAN. They are aimed at mid-range environments, offering a flexible solution that can handle the current workload and later be merged into a larger fabric.

3.10.1 Product description The McDATA ES-3216 switch, IBM 2031-216 (Figure 3-12) provides 16 Fibre Channel Generic ports for attachment to device ports or Director expansion ports through fiber optic links. The switch provides full duplex, bidirectional data transfer for all ports. The ES-3232, IBM 2031-232 switch provides 32 ports with the same characteristics.

Figure 3-12 McDATA ES-3216 switch (top) and ES-3232 (bottom)


The switches have small form factor (SFF) transceivers that are hot pluggable. Fiber optic cables are attached using LC connectors. Shortwave and longwave laser transceivers are available, and they can be intermixed as needed. The 16 Generic ports (G_Ports) on the ES-3216, or the 32 Generic ports on the ES-3232, are accessible at the front panel. Any port can function as an F_Port when connected to a device, or as an E_Port when connected to another switch. These switches do not support direct connection of arbitrated loop devices. However, such devices can communicate with the switches through bridge devices like the McDATA ES-1000 switch.

The switch is initialized, configured, and controlled by a CTP card. The CTP card contains a microprocessor and an ASIC subsystem that provides port communication functions and enables frame transmission between switch ports without software intervention. The CTP card also provides nonvolatile memory for storing firmware (two memory regions, so that two firmware versions can be stored), switch configuration information, persistent operating parameters, and memory dump files. There is also a 10/100 Mbps Ethernet port and an RS-232 maintenance port controlled by the CTP card. The CTP card is not a FRU. If it fails, you must replace the entire switch.

3.10.2 High availability features The following sections discuss the features that ensure high availability of the ES-3216 and ES-3232 switches.

Power supplies Two redundant power supplies share the operating load. If one supply fails, the other supply handles the full load. The failed power supply can be replaced concurrently. There are separate receptacles at the rear of the switch for input power connection. For full redundancy, each input should come from a different power source.

Fans The switches have six fans: two on each power supply and two in the center section of the switch. If a single fan fails, the redundant fans provide cooling until it is replaced. If two or more fans fail, you must replace them immediately.

Spare ports
The switches have 16 or 32 ports. Unused ports can serve as spares: in case of a port failure, you can move the cable to a spare port to continue switch operation. If zoning is configured by port number, take care, because you may need to reconfigure any affected zones. Depending on the operating system, you may also need to reconfigure the path to continue operation on the new port.

Concurrent firmware upgrade The CTP card provides two nonvolatile memory regions for storing firmware. Storing two firmware versions allows firmware upgrades to be performed concurrently without disrupting switch operation.


3.10.3 Setup configuration
You can install the switch in one of three configurations:

- Table or desk top version
- Fabricenter equipment cabinet: One or more switches come pre-installed in a McDATA-supplied cabinet
- Customer-supplied equipment rack: One or more switches and the required mounting hardware are shipped to be installed in a customer-supplied 19-inch rack

The height of the ES-3216 switch is 1 EIA unit (1.75 inches), and the height of the ES-3232 is 1.5 EIA units (2.6 inches).

If the switch is to be attached to the local area network (LAN), or to an existing EFC server LAN, the network address must be set. The network address is set through the RS-232 port. If a new EFC server is being installed with the switch, the EFC server setup is the same as for the ED-5000 Director. After the ES-3216 or ES-3232 switch is connected and configured to the EFC server, you can configure the switch by using the ES-3216 or ES-3232 Product Manager.

Where no EFC server is available, you can configure the switch by using the embedded Web server application. A PC platform running Netscape Navigator 4.6 or higher, or Microsoft Internet Explorer 4.0 or higher, is required. This PC and the Ethernet LAN segment where the switch is attached must have connectivity through the customer network. You can perform the following configuration tasks from the Web server:

- Configure the switch ports.
- Configure the switch identification, date and time, operating parameters, and network addresses.
- Configure SNMP trap recipients.
- Configure user passwords.

For installations where these switches coexist with ED-5000 Directors, give special consideration to the small form factor LC connectors. You can order adapters so that existing fiber cables with duplex SC connectors can attach to the new SFF transceivers.

3.10.4 Management software An Ethernet LAN connection to the CTP card provides management access to the switches.

Management Management is achieved through the EFC Manager, EFC Fabric Manager, and ES-3216 or ES-3232 Product Manager applications residing in the EFC server. EFC Manager at release Level 3.01 or higher is required to configure an ES-3232 switch. The ES-3216 and ES-3232 switches do not provide inband management capabilities.

Web management
The ES-3216 and ES-3232 switches also have an embedded Web server application that provides management capabilities if an EFC server is not available. This interface supports configuration, statistics monitoring, and basic operation of the switch, but does not offer all the capabilities of the ES-3216 or ES-3232 Product Manager application. The Web server is accessed from any PC attached to the same network and running an Internet browser.


When you point the PC browser to the IP address of the switch, a login screen is presented. After you enter a valid user name and password, the PC browser becomes a management console.
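As a minimal sketch of what it means for a management PC to reach the switch's embedded Web server over the customer network, the following Python fragment simply checks that the HTTP port answers at an assumed management address. The address is hypothetical, and the real interface is an interactive, password-protected GUI rather than a scripted API:

from urllib.request import urlopen

SWITCH_IP = "192.168.1.50"   # hypothetical management address of the switch

# A plain GET against the embedded Web server; a response means the login
# screen is reachable from this PC over the customer network.
with urlopen(f"http://{SWITCH_IP}/", timeout=5) as response:
    print("Embedded Web server answered with HTTP status", response.status)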

3.10.5 Serviceability features
The ES-3216 and ES-3232 switches provide the following error detection, reporting, and serviceability features:

- LEDs on switch FRUs and next to each Fibre Channel port that provide visual indication of status or failures.
- System alerts that display at the EFC server or a remote workstation connected to it.
- Event logs, audit logs, link incident logs, and hardware logs.
- Diagnostic software that performs POSTs and port diagnostics, including internal and external loopback wrap tests.
- Automatic notification to support personnel or administrators by e-mail messages.
- Automatic notification to the service support center by the call-home feature.
- Dial-in capabilities for use by service personnel to monitor or perform remote diagnostics.
- An RS-232 maintenance port that is password protected and allows service personnel to change the switch network address.
- Redundant FRUs (power supplies and fans) that can be removed and replaced without affecting switch operation. No special tools are needed to remove and replace FRUs.
- SFF transceivers that can be removed and replaced without affecting the operation of other ports.
- Beaconing for quick identification of a switch or a specific port by a flashing LED, without affecting operation.
- Data collection through the Product Manager application to help isolate problems.
- Unsolicited SNMP trap messages, indicating operational state changes and failure conditions, sent to authorized workstations.

3.11 McDATA ED-6064 Director The McDATA ED-6064 Enterprise Fibre Channel Director (2032-064) offers the same kind of enterprise level availability and performance characteristics as the ED-5000 Director, but with double the number of ports and reduced size. You can install up to four ED-6064 Directors and the EFC Manager in a single FC-512 cabinet. The ED-6064 Director is focused on providing better scalability characteristics to cover the growing requirements of today’s enterprise-level SANs.

3.11.1 Product description
The ED-6064 is a second-generation, 64-port Director that provides dynamic switched connections between Fibre Channel servers and devices in a SAN environment. The Director implements Fibre Channel technology that provides high-performance scalable bandwidth (1 Gbps), highly available operation, redundant switched data paths, long transmission distances (up to 20 kilometers), and high device population.

The Director provides high-performance port connections to end devices such as servers, mass storage devices, and other peripherals in a Fibre Channel switched network. Up to 64 Fibre Channel connections are provided through generic ports (G_Ports). The McDATA ED-6064 Director software configures and supports any-to-any port connectivity. The McDATA ED-6064 Director offers the following performance and redundancy characteristics:

- Any-to-any non-blocking connections.
- High bandwidth: All ports provide full duplex serial data transfer.
- High availability: Redundant configuration of critical FRUs with automatic fault detection and notification.
- Low latency: Less than 2 microseconds between frame transmission at the source port and reception at the corresponding destination port.
- Hot FRU replacement.
- Concurrent firmware updates.
- Service Class 2, Class 3, and Class F support.

Figure 3-13 shows the ED-6064 and the FC-512 cabinet.

Note: The McDATA ED-6064 Enterprise Fibre Channel Director (2032-064), resold by IBM, now offers options that enable 2 Gbps Fibre Channel technology. The base ED-6064 is now 2 Gbps capable, with the required firmware and CTP2 cards standard on all new Directors. New customers can take advantage of 2 Gbps technology by ordering the new Universal Port Module (UPM) 2 Gbps 4-port cards. Existing ED-6064 Directors are upgradable to 2 Gbps technology with upgrade kits. This upgrade capability provides investment protection by allowing existing customers to upgrade with minimal impact to their SAN.

A total of five new features are being announced for the McDATA ED-6064 Enterprise Fibre Channel Director (2032-064). These features are designed to:

- Introduce 2 Gbps Fibre Channel technology
- Offer new customers the capability to operate at 2 Gbps
- Offer investment protection to existing ED-6064 (2032-064) customers who want to migrate to 2 Gbps capability

For specific configuration support dates and other details on availability, server models, operating system levels, and attachment capabilities, see:
http://www.ibm.com/storage/mcdata


Figure 3-13 ED-6064 (left) and FC-512 cabinet

3.11.2 Attachment
The McDATA ED-6064 Director supports attachment of Open Systems FCP and S/390 FICON servers and devices. The Director supports both point-to-point and multi-switch fabric topologies, and indirectly supports arbitrated loop topology.

Point-to-point topology provides a single direct connection between two device N_Ports. This topology supports bidirectional transmission between source and destination ports. Through dynamic switching, the Director configures different point-to-point transmission paths. In all cases, connected N_Ports use 100% of the available bandwidth.

A multi-switch fabric topology provides the ability to connect Directors (and other McDATA switch elements) through E_Ports and ISLs to form a Fibre Channel fabric. Director elements receive data from a device. Based on the destination N_Port address, they route the data through the fabric (and possibly through multiple switch elements) to the destination device.

An arbitrated loop topology connects multiple device node loop ports (NL_Ports) in a loop (or hub) configuration without benefit of a multi-switch fabric. Although the Director does not support direct connection of arbitrated loop devices, such devices can communicate with the Director through the McDATA ES-1000 switch. This switch connects to the Director through a bridge port (B_Port).

3.11.3 Planning for 2 Gbps
Each Director provides up to 64 ports of non-blocking Fibre Channel switching capability. The minimum configuration of the ED-6064 (2032-064) stays the same: 24 ports using six 4-port modules (now either FPMs or UPMs). Scalability continues to be in increments of four ports, using 4-port modules, up to the maximum of 64 ports. The three new UPMs are 2 Gbps 4-port modules and are available in these combinations:


- Four shortwave optical ports
- Four longwave optical ports
- Three shortwave and one longwave optical ports

New customers who purchase a McDATA ED-6064 (2032-064) need only configure these new 2 Gbps UPMs to enable the Director to run at 2 Gbps. The base machine is now 2 Gbps capable, because the required firmware and new control processors (CTP2) come standard with the ED-6064s. To run at 2 Gbps, all of the 4-port modules installed in the ED-6064 must be 2 Gbps UPMs. The 2 Gbps ports are auto-sensing and negotiate to operate at the speed (1 Gbps or 2 Gbps) of the attached device, on a port-by-port basis.

For existing customers who already purchased a McDATA ED-6064 (2032-064), 2 Gbps operation requires that all of the existing 1 Gbps FPM 4-port modules be upgraded to the new 2 Gbps UPM 4-port modules. Operation at 2 Gbps also requires installation of one of the upgrade kits, depending on the configuration of the Director.

Important: Both 1 Gbps (FPM) and 2 Gbps (UPM) port cards can be intermixed in the McDATA ED-6064 Enterprise Fibre Channel Director. However, to enable 2 Gbps operation, all port cards must be UPM cards. If any 1 Gbps FPM 4-port modules are installed, the entire Director operates at 1 Gbps.

Operation of the ED-6064 at 2 Gbps requires:

- EFC Manager to be at release 6.0 or higher
- Firmware to be at release 2.0 or higher
- CTP cards to be replaced with CTP2 cards
- All 1 Gbps 4-port cards to be replaced with 2 Gbps 4-port cards

3.11.4 Port types When connected to a device, a G_Port behaves as a fabric port (F_Port). When connected to another Director (or other managed McDATA product) in a multi-switch fabric, a G_Port behaves as an expansion port (E_Port). Up to 32 ports can be used as expansion ports, which is half the maximum total number of ports. Each Fiber Port Module (FPM) card provides four Fibre Channel connections through duplex SFF pluggable fiber optic transceivers. Shortwave laser transceivers are available for transferring data over multimode fiber-optic cable. Longwave laser transceivers are available for transferring data over single mode fiber-optic cable. Transceivers in a single FPM card can be mixed as needed. Fiber cables attach to the SFF transceivers on the FPM card using duplex LC connectors.

3.11.5 Scalable configuration options
The Director is configured from a minimum of six FPM cards (24 ports) to a maximum of 16 FPM cards (64 ports). Two options are available for fiber port module (FPM) cards:

- Four duplex small form factor pluggable optic transceivers: These can be all shortwave laser transceivers, all longwave laser transceivers, or a mixture of each.
- Four duplex fixed optical transceivers: These transceivers can all have the same parameters, such as cable connector and optics (shortwave or longwave), or each transceiver may have different parameters.


High availability features Pairs of critical FRUs installed in the Director provide redundancy in case a FRU fails. When an active FRU fails, the backup FRU takes over operation automatically (failover) to maintain Director and Fibre Channel link operation. A standard availability Director has all possible FRUs installed and is fully redundant. Standard redundancy is provided through dual sets of FRUs and spare (unused) ports on FPM cards.

Power supplies The Director contains two power supplies that share the electrical operating load. If one power supply fails, the other supply handles the full load. Separate receptacles at the rear of the Director provide facility input power to each supply. For full redundancy, input power for each receptacle should come from a different source.

Fan modules The Director contains two fan modules, each containing three fans (six fans total). If one or more fans in a module fail, the redundant fan module provides cooling until the failed module is replaced. If the second fan module fails, software shuts off power to the Director to prevent system damage.

CTP cards The Director is delivered with two CTP cards. The active CTP card initializes and configures the Director after power on and contains the microprocessor and associated logic that coordinate Director operation. A CTP card provides an initial machine load (IML) button on the faceplate. When you press the button and hold it for three seconds, the Director reloads firmware and resets the CTP card without switching off power or affecting operational fiber optic links. Each CTP card also provides a 10/100 Mbps RJ-45 twisted pair connector on the faceplate that attaches to an Ethernet LAN to communicate with the EFC server or an SNMP management station. Each CTP card provides system services processor (SSP) and embedded port (EP) subsystems. The SSP subsystem runs Director applications and the underlying operating system, communicates with Director ports, and controls the RS-232 maintenance port and 10/100 Mbps Ethernet port. The EP subsystem provides Class F and exception frame processing and manages frame transmission to and from the serial crossbar (SBAR) assembly. In addition, a CTP card provides nonvolatile memory for storing firmware, Director configuration information, persistent operating parameters, and memory dump files. Director firmware is upgraded concurrently (without disrupting operation). The backup CTP card takes over operation if the active card fails. Failover from a faulty card to the backup card is transparent to attached devices.

SBAR assemblies The Director contains two SBAR assemblies. Each SBAR card is responsible for Fibre Channel frame transmission from any Director port to any other Director port. Connections are established without software intervention. The card accepts a connection request from a port, determines if a connection can be established, and establishes the connection if the destination port is available. The card also stores busy, source connection, and error status for each Director port.


The redundant SBAR assembly ensures uninterrupted transmission and receipt of Fibre Channel frames between ports if the active SBAR card fails. Failover to the backup card is transparent to attached devices.

FPM cards The Director is delivered with a minimum of eight FPM cards (32 ports). Any unused Fibre Channel port of the same type can be used in place of a failed port. You can remove and replace pluggable transceivers concurrently with other port operations. Note: Spare port cards do not automatically fail over and provide link operation after a port card failure. To continue device operation, the fiber optic cable from a failed port is reconnected to an unused operational port. When storage ports are moved, you may need to reconfigure the path for the operating system. An FPM card is a concurrent FRU. You can add or replace it while the Director is powered on and operating.

Power module assembly The power module contains two AC power connectors and the power circuit breaker. Included in this module is a 9-pin serial connector used for a local terminal or remote dial-in attachment for maintenance purposes. This serial interface is also used to set up Director network addresses. The module is a non-concurrent FRU, and the Director must be powered off prior to scheduled removal and replacement.

Backplane The backplane provides 48 VDC power distribution and connections for all logic cards. The backplane is a non-concurrent FRU. You must power off the Director prior to FRU removal and replacement. The backplane in the ED-6064 Director is ready to support 2 Gbps operation. When 2 Gbps becomes available, it will only require replacing FPM cards and GBICs.

Management software
The McDATA ED-6064 Director provides for outband management access in the following ways:

- Through the EFC server attached to the Director's CTP card
- Through a remote PC or workstation connected to the EFC server through the customer intranet
- Through an SNMP management workstation connected through the Director LAN segment or customer intranet
- Through a PC with a direct serial connection to the Director maintenance port (at the rear of the Director chassis); the maintenance port is used by installation personnel to configure switch network addresses


- Through a PC with a modem connection to the EFC server; the modem is for use by support center personnel only
- Through a PC with a Web browser and an Internet connection to the Director through a LAN segment

Inband management console access (through a Fibre Channel port) is provided by enabling user-specified features that allow Open Systems or FICON host control of the Director. The features are mutually exclusive; only one can be installed at a time.

Web management
The embedded Web server interface provides a GUI, accessed through the LAN (locally or remotely), to manage the McDATA ED-6064 Director. This interface is available with Director firmware Version 1.2 (or later) installed. It does not replace, nor offer all of the management capability of, the EFC Manager and Product Manager applications (for example, the Web server does not support all Director maintenance functions). In addition, the Web server interface manages only a single Director. Web server users can perform the following functions:

- Display the operational status of the Director, FRUs, and Fibre Channel ports, and display Director operating parameters
- Configure the Director (identification, date and time, operating parameters, and network parameters), ports, SNMP trap message recipients, zones and zone sets, and user rights (administrator and operator)
- Monitor port status, port statistics, and the active zone set, and display the event log and node list
- Perform Director firmware upgrades and port diagnostics, reset ports, enable port beaconing, and set the Director online or offline

You can open the embedded Web server interface from a standard Web browser running Netscape Navigator 4.6 or higher, or Microsoft Internet Explorer 4.0 or higher.


Chapter 4. Tape building blocks

Consider the tape storage devices available at the moment, and you soon realize that you have a great variety of tape systems from which to choose. Tape systems range from stand-alone tape drives, through tape autoloaders, up to large, scalable tape libraries. However, not all of those products are suitable for a storage area network (SAN) environment. This chapter explains the fundamental differences between the various tape storage devices, demonstrates how you can position them in the SAN environment, and shows how to connect each tape subsystem to the SAN.

Currently, most tape subsystems are Small Computer Systems Interface (SCSI) devices and are connected to a SAN environment by using a SAN Data Gateway. However, direct fiber attachment is becoming more common for tape subsystems, usually using a Fibre Channel Arbitrated Loop (FC-AL) connection.

A SAN can provide extended distance for tape applications such as remote vaulting. However, it also provides much greater flexibility for sharing. This has two implications. First, you need to consider the sharing capability of a library when you are deciding what type of library to use. Second, high levels of sharing can make it highly attractive to consolidate tape work from many small, unshared libraries to a larger, more capable library.

The newest tape technology that we discuss here is provided by the Linear Tape-Open (LTO) Ultrium tape family. This family goes from single drives through autoloaders to a sharable, enterprise tape library with features that match well to a SAN environment. Next we describe the IBM TotalStorage 3590 family, where the 3494 tape library — also offering high capacity, scalability, and availability — can be shared between multiple systems, including S/390. Finally, we look at some libraries that you may already have installed and want to attach to your SAN as a first step in tape SAN exploitation, including IBM TotalStorage MP and DLT libraries.


4.1 SAN readiness
We can classify tape subsystems as SAN-ready to different degrees. Not all tape subsystems are created equal. It is reasonable to assume that we do not want to use a switched SAN fabric to give access to single drives or autoloaders with dual drives and limited capacity. These subsystems do not offer the capacity or scalability that we would want for most SAN solutions. To be fully SAN-ready, a tape subsystem should have these attributes:

- Native Fibre Channel interfaces
- Multiple drives
- Sharing and partitioning capabilities
- Remote management
- Enterprise-class automation
- High performance
- High cartridge and library capacity

Currently, few drives offer native Fibre Channel interfaces. The IBM LTO Ultrium drives in the 3584 and the IBM TotalStorage 3590 drives do. However, there are many drives and libraries with SCSI interfaces that may otherwise be suitable to use initially in a SAN. For these, you can use bridges to convert SCSI to Fibre Channel. You should also be aware that it is currently normal for tape devices to have an FC-AL interface, so they must attach to a fabric device that supports FC-AL attachment through an FL_Port.

4.2 LTO tape subsystems
The LTO program is a joint IBM, HP, and Seagate initiative. The program was founded in 1997 and created a new tape standard for the open systems market. The group developed the formats to serve multiple requirements, and the standards are now supported by multiple suppliers. These formats provide a clear and straightforward technology road map, destined to have broad industry support. The LTO objective was to establish a new, open format for high-capacity, high-performance tape storage products to be used in midrange and network server computing environments.

LTO technology has developed into two open tape format specifications: Accelis and Ultrium. Not all users require the same features and functions. Some applications are read-intensive and require the fastest access to data possible. Some applications are write-intensive and fill tape cartridges very quickly, requiring the highest single-tape-cartridge capacity possible. IBM, HP, and Seagate overcame the potential trade-off compromises implied by a single-format solution by specifying both a fast-access open tape format specification (the Accelis format) and a high-capacity open tape format specification (the Ultrium format). The Accelis cartridge offers mid-point load with two reels to speed access to data. The Ultrium cartridge has a single reel within the cartridge to maximize the amount of data stored within the space occupied by the cartridge.

4.2.1 Ultrium The Ultrium tape format is the LTO technology implementation optimized for high capacity and performance with outstanding reliability, in either a stand-alone or an automated environment. The Ultrium tape format uses a single reel cartridge to maximize capacity. It is ideally suited for backup, restore, and archival applications. The cartridges load in a manner similar to DLT and fit in DLT automation slots.


Here are some key points about the Ultrium tape format:

- The first generation of Ultrium allows for 100 GB native capacity in a single compact cartridge. The cartridge is smaller than existing single-reel tape cartridges.
- The Ultrium standard provides for data transfer rates of 10-20 MB/sec with the 8-channel head version and 5-10 MB/sec with the 4-channel head version.
- Ultrium provides for four different cartridge capacities (10 GB, 30 GB, 50 GB, and 100 GB). The IBM drives and cartridges use the 100 GB uncompressed capacity.
- LTO Cartridge Memory (CM) enhances functionality by providing a redundant file log as well as user-defined information. A noncontact external reader allows immediate access to that information without having to insert the cartridge into a drive.
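As a quick worked example of what these capacity and data-rate figures mean operationally, the following Python sketch estimates how long a drive needs to fill one cartridge. It is a back-of-the-envelope calculation only (it uses 1 GB = 1000 MB and assumes the drive sustains its nominal rate for the whole job):

def fill_time_hours(native_capacity_gb: float, native_rate_mb_s: float, compression: float = 1.0) -> float:
    """Hours to fill one cartridge; compression scales capacity and rate equally."""
    data_written_mb = native_capacity_gb * 1000 * compression
    effective_rate_mb_s = native_rate_mb_s * compression
    return data_written_mb / effective_rate_mb_s / 3600

print(round(fill_time_hours(100, 15), 1))       # 100 GB cartridge at 15 MB/sec: about 1.9 hours
print(round(fill_time_hours(100, 15, 2.0), 1))  # 2:1 compression doubles data and rate: same time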

4.2.2 The IBM LTO Ultrium family of tapes and libraries IBM offers several Ultrium format tape products designed to meet different levels of automation with SCSI or native fiber interfaces, depending on the model. The IBM Ultrium family of tapes and libraries (shown in Figure 4-1) comprises four different product offerings, ranging from a stand-alone drive unit to a highly-scalable automated library. They are all based around a common tape drive sub-assembly which is packaged in different automated and stand-alone subsystems.

4.2.3 IBM 3580 Ultrium tape drive
The IBM 3580 Ultrium tape drive, shown in Figure 4-1, is an external, stand-alone, SCSI-attached tape drive that attaches to:

- AS/400 and iSeries with OS/400® V4R4 or later
- RS/6000, RS/6000 SP, and pSeries with AIX 4.3.2
- Sun servers with Solaris 2.6, Solaris 7, or Solaris 8
- HP A, L, and N-class servers with HP-UX 11.0
- Netfinity, xSeries, and other PC servers supporting Microsoft Windows NT 4, Windows 2000, or Red Hat Linux 7.0 (either 2.2.16 or 2.4.2 kernel)

The drive attaches using a suitable SCSI adapter. Tapes are mounted manually one at a time.

Figure 4-1 IBM 3580 Ultrium tape drive


The 3580 drive offers a data transfer rate of 15 MB/sec using 100 GB Ultrium cartridges, without considering the effect of compression. The IBM 3580 Ultrium tape drive is a cost-effective solution for save-and-restore and archiving functions. It provides a migration path from DLT, ¼-inch, 4 mm, or 8 mm tape drives. There are two models:

- IBM 3580-L11 has a Low-Voltage Differential (LVD) Ultra2 SCSI attachment that connects to LVD fast/wide adapters.
- IBM 3580-H11 has a High-Voltage Differential (HVD) UltraSCSI attachment that connects to HVD fast/wide adapters.

4.2.4 IBM 3581 Ultrium tape autoloader The IBM 3581 Ultrium tape autoloader, shown in Figure 4-2, is the second product in the IBM LTO Ultrium family. It is an external, single drive, SCSI-attached, stand-alone or rack-mounted autoloader. It attaches to the same range of systems as the 3580 drive.

Figure 4-2 IBM 3581 Ultrium tape autoloader

The 3581 Ultrium tape autoloader uses a single IBM LTO Ultrium tape drive. The autoloader contains seven tape slots, providing a media capacity of up to 700 GB (1.4 TB with 2:1 compression) per autoloader. It is capable of sustaining a data rate of up to 15 MB/sec (uncompressed).

Note: If you install the optional barcode reader, the cartridge capacity is reduced to six.

With its higher capacity and performance, the 3581 is an excellent replacement for other externally attached DLT, ¼-inch, 4 mm, or 8 mm tape devices. Optionally, additional hardware is available for installation of the autoloader in a standard 19-inch rack.


The 3581 Ultrium tape autoloader is available as two models, depending on which type of SCSI interface is required:

- IBM 3581 Model L17 has an LVD Ultra2 SCSI attachment that connects to LVD fast/wide adapters.
- IBM 3581 Model H17 has an HVD UltraSCSI attachment that connects to HVD fast/wide adapters.

The 3581 Ultrium tape autoloader is also available in an OEM version as the 3586-SL7 (see Table 4-3 on page 90).

4.2.5 IBM 3582 Ultrium Scalable tape library
The 3582 Ultrium Tape Library can accommodate one or two Ultrium 2 Tape Drives and comes standard with a one-cartridge input/output (I/O) (or import/export) station and 23 data cartridge slots, giving a native library capacity of 4.8 TB of uncompressed data storage (9.6 TB with 2:1 compression). Tape cartridge capacity is up to 200 GB native (400 GB with 2:1 compression) with the IBM TotalStorage LTO Ultrium 200 GB Data Cartridge. Drive performance is up to a 35 MB/sec native data transfer rate (70 MB/sec with 2:1 compression) with the IBM LTO Ultrium 2 Tape Drives. The Ultrium 2 Tape Drives come with 2 Gb switched fabric Fibre Channel, LVD Ultra160 SCSI, or HVD UltraSCSI interfaces to attach to a wide spectrum of open system servers.

The 3582 Ultrium Tape Library has two 7-cartridge removable cartridge magazines, a barcode scanner, and the IBM patented Multi-Path Architecture to partition the library into two logical libraries. The library can be configured as a stand-alone desktop unit or can be mounted in an industry-standard 19-inch rack. Optional features include Control Path Failover and a Remote Management Unit/Specialist for remote library management.

You can use the 3582 tape library in network-attached storage implementations, including backups and mass storage archives where multi-terabyte capacities are required. IBM LTO Ultrium technology is designed for the heavy demands of automated tape systems. This proven tape technology has enhanced digital speed matching, power management, channel calibration, servo technology, track layout, head design, error correction codes, and data compression, resulting in increased capacity, performance, and reliability in an entry-level, automated tape system.

Tape drive interfaces
The 3582 Ultrium Tape Library supports the Ultrium 2 Tape Drive with the following interfaces:

- Fibre Channel
- LVD Ultra160 SCSI
- HVD UltraSCSI

Server attachment
You can attach the 3582 Ultrium Tape Library to servers by using:

- A SCSI interface
- A native Fibre Channel interface

The sections that follow describe each type of interface.

SCSI interface The 3582 Ultrium Tape Library operates as a set of SCSI-3 devices. For drives that use a SCSI interface, the following conditions apply:


- The Ultrium 2 Tape Drive can attach to a server through an LVD Ultra160 SCSI interface or an HVD UltraSCSI interface.
- Each SCSI drive sled uses shielded HD68 connectors and can attach directly to a 2-byte-wide SCSI cable.

Any combination of up to two initiators (servers) and up to four targets (devices) is allowed on a single SCSI bus if the following conditions are met:

- The SCSI bus is terminated properly at each end.
- Cable restrictions are followed according to SCSI-3 standards.

Under the SCSI-3 protocol, this type of attachment allows cable lengths of up to 25 m (81 feet) with the appropriate cable and terminator.
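The following minimal Python sketch simply restates the bus rules above as a check, which can help when planning which drives and hosts share a bus. The limits encoded are the ones quoted here (two initiators, four targets, 25 m with proper termination); it is illustrative rather than a configuration tool:

def scsi_bus_ok(initiators: int, targets: int, cable_length_m: float, terminated_both_ends: bool) -> bool:
    """Check a planned SCSI bus against the attachment rules quoted above."""
    return (initiators <= 2 and targets <= 4
            and cable_length_m <= 25 and terminated_both_ends)

print(scsi_bus_ok(2, 4, 20, True))    # True: within every limit
print(scsi_bus_ok(1, 2, 30, True))    # False: cable run is too long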

IBM Ultrium 2 Tape Drive
The IBM 3582 Ultrium Tape Library contains the IBM Ultrium 2 Tape Drive. The IBM Ultrium 2 Tape Drive supports Fibre Channel, LVD Ultra160, or HVD UltraSCSI interfaces. It features two HD68 connectors or one LC Fibre Channel connector. Table 4-1 lists the performance characteristics of the Ultrium 2 Tape Drive.

Table 4-1  Performance characteristics of the Ultrium 2 Tape Drive

  Performance characteristic                             Ultrium 2 Tape Drive
  Native sustained data rate                             35 MB/sec (with Ultrium 2 media); 20 MB/sec (with Ultrium 1 media)
  Compressed data rate (at 2:1 compression)              70 MB/sec (with Ultrium 2 media); 40 MB/sec (with Ultrium 1 media)
  Maximum sustained data rate (at maximum compression)   107 MB/sec (Ultra160)
  Burst data rate for LVD SCSI drives                    160 MB/sec (Ultra160)
  Burst data rate for HVD SCSI drives                    40 MB/sec (Ultra)
  Burst data rate for Fibre Channel drives               200 MB/sec
  Nominal load-to-ready time                             15 seconds
  Nominal unload time                                    15 seconds
  Average search time to first byte of data              49 seconds

Note: All sustained data rates are dependent on the capabilities of the interconnect. For example, an UltraSCSI bus is limited to less than 40 MB/sec. By using the built-in data-compression capability of the tape drives, you can achieve greater data rates than the native data transfer rate. However, the actual throughput is a function of many components, such as the host system processor, disk data rate, block size, data compression ratio, SCSI bus capabilities, and system or application software.
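Here is a short Python sketch of the point the note makes, namely that the rate you observe is bounded by the slowest element in the path. The numbers in the example calls are taken from the table and note above; the function itself is only illustrative:

def effective_rate_mb_s(native_rate: float, compression: float, interconnect_limit: float) -> float:
    """Achievable sustained rate: the drive's compressed rate capped by the interconnect."""
    return min(native_rate * compression, interconnect_limit)

print(effective_rate_mb_s(35, 2.0, 40))    # HVD UltraSCSI bus: 40 MB/sec, bus-bound
print(effective_rate_mb_s(35, 2.0, 160))   # LVD Ultra160 bus: 70 MB/sec, drive-bound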

Multi-Path Architecture
The 3582 Ultrium Tape Library features the SAN-ready Multi-Path Architecture, which allows homogeneous or heterogeneous open systems applications to share the library's robotics without middleware or a dedicated server (host) acting as a library manager. The SAN-ready Multi-Path Architecture makes sharing possible by letting you partition the library's storage slots and tape drives into logical libraries. Servers can then run separate applications for each logical library. This partitioning capability extends the potential centralization of storage that the SAN enables. The Multi-Path Architecture is compliant with the following attachment interfaces:

- SCSI
- Fibre Channel

Regardless of whether it is partitioned, the 3582 Ultrium Tape Library is certified for SAN solutions (such as LAN-free backup). The Multi-Path Architecture also lets you configure an additional control path when the library is not partitioned. A control path is a logical path into the library through which a server sends standard SCSI Medium Changer commands to control the library. An additional control path reduces the possibility that a failure in one control path will cause the entire library to be unavailable. Use of the control path failover feature further reduces that possibility.
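To illustrate what partitioning into logical libraries means in data terms, here is a minimal Python sketch under stated assumptions: the partition names and the way the split is expressed are invented for the example, and real partitioning is performed through the library itself, not through host-side code. Each logical library owns a disjoint set of drives and slots, and cartridges are not shared across partitions:

from dataclasses import dataclass, field

@dataclass
class LogicalLibrary:
    name: str
    drives: set = field(default_factory=set)
    slots: set = field(default_factory=set)

def partition_library(drive_ids, slot_ids, spec):
    """spec maps a partition name to (number of drives, number of slots); resources stay disjoint."""
    drive_ids, slot_ids = list(drive_ids), list(slot_ids)
    partitions = []
    for name, (n_drives, n_slots) in spec.items():
        partitions.append(LogicalLibrary(name, set(drive_ids[:n_drives]), set(slot_ids[:n_slots])))
        drive_ids, slot_ids = drive_ids[n_drives:], slot_ids[n_slots:]
    return partitions

# A two-drive, 23-slot 3582 split into two hypothetical logical libraries:
for lib in partition_library(range(2), range(23), {"backup_prod": (1, 15), "backup_test": (1, 8)}):
    print(lib.name, "-", len(lib.drives), "drive(s),", len(lib.slots), "slots")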

4.2.6 IBM 3583 Ultrium Scalable tape library
The fourth member of the IBM Ultrium family is the IBM 3583 Ultrium Scalable tape library, shown in Figure 4-3. It is a high-performance, reliable, scalable tape subsystem. It can accommodate from one drive up to a maximum of six drives, and provides space for up to 72 Ultrium cartridges. Designed for tape automation, the IBM 3583 tape library can attach to:

- AS/400 and iSeries servers with OS/400 V4R4 or later
- RS/6000, RS/6000 SP, and pSeries servers with AIX 4.3.2
- Sun servers with Solaris 2.6, Solaris 7, or Solaris 8
- HP A, L, N, and V-class servers with HP-UX 11.0
- Netfinity, xSeries, and other PC servers supporting Microsoft Windows NT 4, Windows 2000, or Red Hat Linux 7.0 (either 2.2.16 or 2.4.2 kernel)

The IBM 3583 tape library houses from one to six IBM 3580 Ultrium tape drives, each of which has a native data transfer rate of 15 MB/sec and a cartridge capacity of 100 GB. The drives feature data compression hardware using an adaptation of the IBM LZ1 compression algorithm, which provides an effective data rate of up to 30 MB/sec and a cartridge capacity of up to 200 GB (with 2:1 compression) on IBM Ultrium media. Multiple drive models provide additional enhanced functions such as faster transfer of data, simultaneous backup, concurrent read-write operations, and fault tolerance. Optionally, an internal SCSI-fibre convertor is available that provides burst data rates of up to 200 MB/sec and whose shortwave GBIC attaches to 2 Gbps Fibre Channel infrastructures as well as 1 Gbps infrastructures.

Units shipped after 31 August 2001 have an integrated remote management unit (RMU) that allows SNMP monitoring of the library and provides Web access to a StorWatch Specialist for the 3583. The StorWatch 3583 Specialist provides the ability to perform, from a suitable Web browser, all the functions that can be performed from the library's operator control panel. The RMU is available as an upgrade for earlier 3583 libraries.

The IBM 3583 models feature cartridge storage capacities of 18, 36, and 72 cartridges, that is, capacities of 1.8 TB, 3.6 TB, and 7.2 TB of uncompressed data. Assuming 2:1 compression, the largest model of the 3583, the Model L72, can store 14.4 TB of data.


Model description
The IBM 3583 Ultrium Scalable tape library has three different models. The major difference between the models is the number of storage cells shipped with the initial order:

- IBM 3583 Model L18: Supplied with space for 18 cartridges
- IBM 3583 Model L36: Supplied with space for 36 cartridges
- IBM 3583 Model L72: Supplied with space for 72 cartridges

Figure 4-3 IBM 3583 Ultrium Scalable tape library

Table 4-2 summarizes the 3583 models.

Table 4-2  IBM 3583 Ultrium Scalable tape library model summary

  Model   Cartridge slots   Data capacity (native)   Data capacity (compressed)   IBM Ultrium Tape Drives
  L18     18 (a)            1.8 TB                   3.6 TB                       1 to 6
  L36     36 (b)            3.6 TB                   7.2 TB                       1 to 6
  L72     72 (c)            7.2 TB                   14.4 TB                      1 to 6

  a. Plus one I/O station slot
  b. Plus one I/O station slot
  c. Including 12 I/O station slots

The model numbering is an indication of how many cartridge slots the model originally shipped with. However, all of the IBM 3583 models are the same physical size, and the two smaller models, L18 and L36, can be field upgraded to hold up to 72 cartridges by the addition of upgrade features. The IBM 3583 may be stand-alone or, with an optional feature, housed in a rack.


The installed drives may be any mixture of LVD-attached or HVD-attached drives, up to a total of six. The drives are ordered for plant or field installation using chargeable feature codes:

- Feature #8003 provides one IBM Ultrium tape drive with an LVD Ultra2/Wide SCSI adapter.
- Feature #8004 provides one IBM Ultrium tape drive with an HVD Ultra/Wide SCSI adapter.

Each IBM Ultrium tape drive contains the electronics and logic for reading and writing data, control of the tape drive, management of the data buffer, and error recovery procedures. All tape drives are packaged as a common assembly that is a field replaceable unit (FRU), designed for quick removal and replacement. The cartridge capacities are unaffected by the number of drives installed.

The 3583 Ultrium Scalable tape library is also available in an OEM version as the 3586-SL72 (see Table 4-3 on page 90).

4.2.7 IBM 3584 UltraScalable tape library

The IBM 3584 is a larger modular library with the potential to house a maximum of 72 tape drives in a maximum of six frames. Designed for automated tape handling, the IBM 3584 UltraScalable tape library, shown in Figure 4-4, is the largest member of the IBM Ultrium family of tape library storage solutions. The different LTO drive interfaces give you a wide range of attachment capability. The IBM 3584 is supported in these environments:
򐂰 AS/400 and iSeries with OS/400 V4R4 or later
򐂰 RS/6000, RS/6000 SP, and pSeries with AIX 4.3.2
򐂰 Sun servers with Solaris 2.6, Solaris 7 or 8
򐂰 Hewlett-Packard A, L, N, and V-class servers with HP-UX 11.0
򐂰 Netfinity, xSeries, and other PC servers supporting Microsoft Windows NT 4 or Windows 2000

The IBM 3584 UltraScalable tape library provides tape storage solutions for the large, unattended storage requirements of today's mid-range open systems up to high-end network servers. Combining reliable, automated tape handling and storage with reliable, high-performance IBM LTO Ultrium tape drives, the IBM 3584 offers outstanding retrieval performance with typical cartridge move times of less than three seconds. For convenience in migrating from DLT, the 3584 can support a mixture of LTO and DLT drives and media.

The IBM 3584 can be partitioned into multiple logical libraries, up to a maximum of 72. This makes it an excellent choice for consolidating tape workloads from multiple heterogeneous open systems servers. The partitioning is possible because of the 3584's multipath architecture, which allows the control elements of the library to be addressed from every SCSI ID. When a 3584 is partitioned, each logical library consists of a distinct set of drives and storage slots. Cartridges are not shared between logical libraries, to assure protection of the data (an illustrative sketch of this partitioning idea follows the function list below).

The IBM 3584 library can, as an option, be attached to a 10/100 Mb/sec Ethernet LAN. When this is done, the library offers both a Web interface, the StorWatch 3584 Specialist, and a remote support facility for Call Home, whereby the library can use an optional modem to dial out and report problems automatically. The StorWatch Specialist allows you to monitor and control the 3584 remotely from a suitable Web browser using these functions:


򐂰 View library status
򐂰 Control path management
򐂰 SCSI ID management
򐂰 Drive cleaning
򐂰 Logical library configuration
򐂰 Move cartridges
򐂰 Inventory library
򐂰 Firmware update
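As mentioned above, partitioning gives each logical library its own distinct, non-overlapping set of drives and storage slots. The sketch below is purely illustrative: the element counts and partition names are made up, and a real 3584 is partitioned from its operator panel or the StorWatch Specialist, not through code like this. It only pictures the idea of disjoint drive and slot sets per logical library.

```python
# Illustrative sketch only: picture a library partitioned into logical libraries,
# each owning a distinct, non-overlapping set of drives and storage slots.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicalLibrary:
    name: str
    drives: List[int] = field(default_factory=list)   # drive element addresses
    slots: List[int] = field(default_factory=list)    # cartridge slot addresses

physical_drives = list(range(12))    # up to 12 drives in a base frame
physical_slots = list(range(280))    # hypothetical slot count

partitions = [
    LogicalLibrary("AIX_backup", drives=physical_drives[:6], slots=physical_slots[:140]),
    LogicalLibrary("Windows_backup", drives=physical_drives[6:], slots=physical_slots[140:]),
]

# Cartridge slots are never shared between partitions, which protects each
# host's data from the others.
assert not set(partitions[0].slots) & set(partitions[1].slots)

for p in partitions:
    print(f"{p.name}: {len(p.drives)} drives, {len(p.slots)} slots")
```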

The IBM 3584 is a modular tape library that consists of frames which house tape drives and cartridge storage slots. You can install a single-frame base library and add up to five additional frames. You can tailor the library to match your system capacity and performance needs from 14 TB to 248 TB (28 TB to 496 TB with 2:1 compression) and use up to 72 IBM LTO Ultrium tape drives.

Figure 4-4 IBM 3584 UltraScalable library base frame

You can install the three IBM 3584 models together to make up the IBM 3584 UltraScalable tape library:
򐂰 IBM 3584 Model L32: The base frame for the IBM 3584 library, which can be installed on its own or in combination with the Model D32. It can contain up to 12 LTO drives.
򐂰 IBM 3584 Model D32: An expansion frame containing up to 12 LTO drives, up to five of which (either D32, D42, or a combination) can be installed with the Model L32 base frame.


Note: You must have the capacity expansion feature on the L32 (either feature #1603 or #1653) before you may add D32 frames. You must also install the L32 base frame first, before you install the expansion frame.

򐂰 IBM 3584 Model D42: An expansion frame containing up to 12 DLT 8000 drives, up to five of which (either D32, D42, or a combination) can be installed with the Model L32 base frame. See the previous note box for Model D32.

The IBM LTO Ultrium drives may be attached to a host system using HVD SCSI, LVD SCSI, or Fibre Channel connectors. Drives are added to the library with a specified interface type (LVD, HVD, or FC) by ordering a feature code against the IBM 3584 model type:
򐂰 #1454: Installs one IBM LTO Ultrium drive with LVD interface
򐂰 #1455: Installs one IBM LTO Ultrium drive with HVD interface
򐂰 #1456: Installs one IBM LTO Ultrium drive with FC-AL interface

Drives and drive power supplies are hot-swappable. The FC-AL fiber option provides greater system configuration flexibility, with cable distances of up to 500 m and longer. By using the IBM 2103 SAN Fibre Channel Storage Hub, the IBM 2109 SAN Fibre Channel Switch, or the IBM 3534 SAN Fibre Channel Managed Hub, greater distances of up to 11 kilometers are possible. You must also install a Fibre Channel Patch Panel, feature #1462, in the 3584 when you are using one or more FC-AL Ultrium drive sleds. This is done for ease of cabling and to minimize the amount of handling for expensive, long fiber cables.

DLT 8000 drives may be attached to a host system using HVD SCSI or LVD SCSI interfaces.

Summary of the IBM Ultrium family

The 3580, 3581, 3583, and 3584 are part of the same family. This means that the drive, cartridge technology, and formats are the same, so the cartridges are interchangeable between the libraries as dictated by the LTO format standards. However, the machines (for example, 3583 and 3584) are not upgradable from one to another, nor can the Ultrium drives be exchanged between different libraries.

The IBM LTO family of products is sold directly through IBM and its business partners, and to other manufacturers under the IBM StorageSmart™ Solutions name for integration into their library solutions. Table 4-3 shows how the StorageSmart product names correspond to the Ultrium building blocks.


Table 4-3 IBM LTO products and corresponding IBM StorageSmart products
򐂰 IBM LTO Common Drive Sub-Assembly (basic tape drive): StorageSmart by IBM T200 Ultrium tape drive
򐂰 3580-L11 and H11 (3580 Ultrium tape drive): StorageSmart by IBM TX200 Ultrium external tape drive
򐂰 3581-L17 and H17 (3581 Ultrium tape autoloader): StorageSmart by IBM SL7 Ultrium tape autoloader, Machine Type 3586
򐂰 3583-L18, L36, and L72 (3583 Ultrium Scalable tape library): StorageSmart by IBM SL72 Ultrium Scalable tape library, Machine Type 3587
򐂰 3584-L32 and D32 (3584 UltraScalable tape library): Not applicable

For the latest information about the IBM Ultrium family, refer to The IBM LTO Ultrium Tape Libraries Guide, SG24-5946, or go to the Web site: http://www.ibm.com/storage/lto

Attaching LTO systems to a SAN environment

All four LTO Ultrium tape systems are available with both SCSI LVD and HVD interfaces. The 3584 LTO tape library is also available with a direct FC-AL attachment. There are two ways to attach an Ultrium tape library into a SAN environment:
򐂰 Direct attachment using a native Fibre Channel interface is available on the 3584. It enables you to attach each single FC-AL LTO tape drive, via the FC patch panel inside the 3584:
  – To an IBM 2103 Fibre Channel Storage Hub, for distance solutions
  – To an IBM 2109 SAN Fibre Channel Switch
  – To an IBM 3534 SAN Fibre Channel Managed Hub, for Windows solutions
  A native Fibre Channel connection has the advantage of an increased peak data rate of up to 100 MB/sec. You can use the tape units at greater distances without additional hardware, and it also allows you greater systems configuration flexibility.
򐂰 By using IBM SAN Data Gateways, you can use any of the SCSI-attached LTO drives or libraries in a SAN infrastructure, summarized in Table 4-4. For drives with LVD interfaces, attachment can be through the SAN Data Gateway Router, 2108-R03. For drives with HVD interfaces, attachment is through either the SAN Data Gateway Router, 2108-R03, or the SAN Data Gateway, 2108-G07. In each case, the server needs an appropriate Fibre Channel adapter supported by these gateways.


Table 4-4 IBM 3580 and 3581 LTO SAN connections

3580 Model L11 and 3581 Model L17
򐂰 Connection type: Ultra2/Wide SCSI LVD interface
򐂰 SAN connectivity: IBM SAN Data Gateway Router, 2108-R03, with feature code 2840, part number 2108R3L

3580 Model H11 and 3581 Model H17
򐂰 Connection type: Ultra/Wide SCSI HVD interface
򐂰 SAN connectivity:
  – IBM SAN Data Gateway Router, 2108-R03, with feature code 2830, part number 2108R3D
  – IBM SAN Data Gateway 2108-G07

Because the 3583 has an optional integral Fibre Channel connection, there are additional options, as shown in Table 4-5.

Table 4-5 IBM 3583 LTO SAN connections

3583 Model L18, L36, or L72 with feature code 8003
򐂰 Connection type: Ultra2/Wide SCSI LVD interface
򐂰 SAN connectivity:
  – IBM SAN Data Gateway Router, 2108-R03, with feature code 2840, part number 2108R3L
  – Internal SAN gateway

3583 Model L18, L36, or L72 with feature code 8004
򐂰 Connection type: Ultra/Wide SCSI HVD interface
򐂰 SAN connectivity:
  – IBM SAN Data Gateway Router, 2108-R03, with feature code 2830, part number 2108R3D
  – IBM SAN Data Gateway 2108-G07
  – Internal SAN gateway

Since the 3584 has native Fibre Channel drives, there are additional options, as shown in Table 4-6.

Table 4-6 IBM 3584 LTO tape library SAN connections

Feature code 1454
򐂰 Connection type: Ultra2/Wide SCSI LVD interface
򐂰 SAN connectivity: IBM SAN Data Gateway Router 2108-R03 with feature code 2840, part number 2108R3L

Feature code 1455
򐂰 Connection type: Ultra/Wide SCSI HVD interface
򐂰 SAN connectivity:
  – IBM SAN Data Gateway Router 2108-R03 with feature code 2830, part number 2108R3D
  – IBM SAN Data Gateway 2108-G07

Feature code 1456
򐂰 Connection type: FC-AL interface (SW-FC)
򐂰 SAN connectivity:
  – Directly to an FC-AL adapter in most host systems (for an example, see the 3466)
  – IBM SAN Fibre Channel switches and hubs

4.2.8 IBM 3600 Series LTO tape automation family

The 3600 Series comprises two models designed for attachment to xSeries and Netfinity servers. The 3600 Series 900GB/1.8TB LTO autoloader is the low-cost entry model. The 3600 Series 2/4TB LTO Library comes as a rack or tower version. Both products have an LVD/SE SCSI connection that supports an 80 MB/sec instantaneous data rate.

The 900GB/1.8TB Series holds nine LTO cartridges: six in the removable magazine and three cartridges in the back of the tape unit. The 2/4TB Series supports 20 cartridges and up to two LTO tape drives. The 3600 Series also includes a 20-Cartridge Expander Module, an LTO Drive upgrade option to install a second drive into the 2/4TB Series, and a Fibre Channel upgrade option.


The Fibre Channel upgrade option is available with both of the 3600 Series LTO tape libraries. This FC upgrade option allows the tape library to be used in a FC installation. It acts as a FC router to provide attachment to supported FC host adapters through a supported FC switch.

A Remote Management option card is provided with the 3600 Series 2/4 TB LTO Tape Library. It can be added to the 900 GB/1.8 TB LTO Tape Autoloader. This card is used to provide remote management and diagnostics from a Web browser. Some of the features that are included are:
򐂰 Direct plug-in to a 10/100 BaseT Ethernet network
򐂰 Remote library management
򐂰 Error status (checking and control)
򐂰 Firmware downloads

Summary and SAN attachment for the 3600 LTO series

You have two ways to connect the 3600 LTO tape library to a SAN environment, as shown in Table 4-7. The preferred way is to use the Fibre Channel upgrade option, but it is also possible to use the SAN Data Gateway Router, 2108-R03.

Table 4-7 Summary for IBM 3600 LTO Tape Library

3600 LTO autoloader
򐂰 Number of drives: 1 LTO tape drive shipped
򐂰 Number of cartridges: 9 maximum
򐂰 Maximum capacity (uncompressed/compressed): 900 GB / 1.8 TB
򐂰 Connection type: Ultra-2 LVD SCSI
򐂰 SAN connectivity: IBM 2108-R03 SAN Data Gateway Router with feature code 2840, part number 2108R3L

3600 LTO library (native)
򐂰 Number of drives: 1 LTO tape drive shipped, 2 maximum
򐂰 Number of cartridges: 20 maximum
򐂰 Maximum capacity (uncompressed/compressed): 2 TB / 4 TB
򐂰 Connection type: Ultra-2 LVD SCSI
򐂰 SAN connectivity: IBM 2108-R03 SAN Data Gateway Router with feature code 2840, part number 2108R3L

3600 LTO with FC option
򐂰 Number of drives: 1 LTO tape drive shipped, 2 maximum
򐂰 Number of cartridges: 20 maximum
򐂰 Maximum capacity (uncompressed/compressed): 2 TB / 4 TB
򐂰 Connection type: FC-AL interface (SW-FC)
򐂰 SAN connectivity: IBM SAN Fibre Channel switches and hubs

You can find more information about the 3600 LTO series on the Web at: http://www.pc.ibm.com/us/compat/storage/tape.html

4.3 IBM TotalStorage MP 3570 and 3575

IBM TotalStorage MP tape subsystems were designed for applications where fast access to mass data is important. Data retrieval requests that require minutes on other tape devices take only seconds on IBM TotalStorage MP. The IBM TotalStorage MP family comprises the 3570 drives and autoloaders, and the 3575 libraries.

The IBM TotalStorage MP products are now superseded by Ultrium drives and libraries. We discuss them briefly, since you may have existing IBM TotalStorage MP libraries that you want to attach to a SAN.


4.3.1 IBM 3570 drive and autoloader

The IBM TotalStorage MP 3570 tape subsystem models are compact, high-capacity storage devices that are available as stand-alone or rack-mounted units. The 3570-C00 is a single drive unit with a SCSI-2 interface. The 3570 Models C01 and C02 are stand-alone autoloader library units, and Models C11, C12, C21, and C22 are rack-mounted library units. Models C01, C11, and C21 have a single tape drive, and Models C02, C12, and C22 have two tape drives. There are corresponding, older B models. The library units can hold up to 20 cartridges in removable magazines.

For the B models, each cartridge has an uncompressed data capacity of approximately 5 GB and up to 15 GB of compressed data with three to one (3:1) data compression. The drives have sustainable data rates of 2.2 MB/sec (native uncompressed), 6.6 MB/sec (3:1 compression), and up to 14 MB/sec (maximum compression). The burst data rate is 20 MB/sec. The average load and initialize time is 8 seconds, and the average search time to data for a loaded cartridge is 8 seconds.

For the C models, there is a sustained data rate of 7 MB/sec for C format or C format XL media (native, uncompressed) and a 15 MB/sec sustained data rate for C-Format or C-Format XL media with maximum compression. The burst data rate is 20 MB/sec. The average load-to-ready time is eight seconds. There is a 14 second average search time to first byte of data for C-Format XL media.

4.3.2 IBM 3575 library

The IBM TotalStorage MP 3575 Tape Library Dataserver models are compact, high-capacity, integrated storage devices that are available as stand-alone units. They use the 3570 drive. The IBM TotalStorage MP 3575 Tape Library Dataserver is attached to host processors that use the SCSI-2 interface. These systems include, but are not limited to, RS/6000®, RS/6000 SP, iSeries, Sun, HP-UX, and Windows NT.

The IBM 3575 Tape Library Dataserver features a special multipath architecture. This architecture enables any two or three heterogeneous open systems applications (including iSeries applications) to concurrently share the larger 3575 models without needing middleware or a dedicated server to act as a library manager. You can accomplish this by partitioning the 3575 Tape Library Dataserver storage slots and tape drives into one, two, or three (depending on model) logical SCSI libraries with at least one SCSI control path for each logical library.

The multipath architecture also features the ability to provide multiple SCSI control paths for any one logical library. This allows the cartridge inventory of one library to be shared by up to six iSeries servers running Backup Recovery and Media Services (BRMS), or for some open systems applications to support redundant SCSI library control paths.

Each tape drive in an IBM TotalStorage MP 3575 Tape Library Dataserver provides:
򐂰 High-speed data transfer
򐂰 Block-level access to data
򐂰 Automatic data buffering to balance system read/write speed
򐂰 High streaming and start and stop performance

One IBM TotalStorage MP 3570 Model C tape drive comes standard in the L06 model of the 3575 Tape Library Dataserver. Two IBM TotalStorage MP Model C tape drives come standard in all other models. You can order additional IBM TotalStorage MP Model C tape drives as features of the 3575 device type, up to a maximum of six. The 3575 Tape Library Dataserver also supports the 3570 Model B1A tape drive.

All 3575 Tape Library Dataserver units use the IBM TotalStorage MP Fast Access Linear Tape cartridges with longitudinal serpentine recording. This cartridge style provides two tape spools with mid-point load for fast access to the metal particle media. There are three formats of 3570 cartridge:
򐂰 B format (original)
򐂰 C format
򐂰 C format XL (eXtended Length)

The B-Format cartridge is fully read/write compatible between all IBM TotalStorage MP tape drives. The C-Format cartridge enables the performance features of the IBM TotalStorage MP 3570 C-Series Tape Subsystem. The C-Format XL cartridge increases the native data capacity from 5 GB to 7 GB (21 GB with 3:1 data compression).

3575 Models L06, L12, L18, L24, and L32 contain up to 324 cartridges and six drives. Each model includes a two-slot I/O station, a dual-gripper transport mechanism, and an operator panel with a display and keypad. The operator panel is the interface for all operator functions. The 3575 tape library can provide up to 6.8 TB of online data capacity at a 3:1 data compression ratio. With maximum compression, the Models L18, L24, and L32 can provide an aggregate data transfer rate of up to 324 GB per hour.

In addition to the two I/O slots that are provided by the library I/O station, the user may optionally configure the library with 14 bulk I/O slots in place of 14 storage slots. This is a firmware option only; the library is not physically altered.

For more information about the 3575, see:
http://www.storage.ibm.com/hardsoft/tape/3575/index.html

Attaching 3570s and 3575s to a SAN

Since 3570 drives only have SCSI interfaces, you must use a gateway (Table 4-8).

Table 4-8 IBM 3570 and 3575 tape library SAN connection

3570 and 3575
򐂰 Connection type: Ultra/Wide SCSI fast/wide differential interface
򐂰 SAN connectivity: IBM SAN Data Gateway 2108-R03 with feature code 2830, part number 2108R3D, or IBM SAN Data Gateway 2108-G07

4.4 IBM TotalStorage 3590

The IBM TotalStorage 3590 drive provides the highest levels of performance and reliability of any IBM tape subsystem. Since its first shipment in September 1995, it has met with wide market acceptance.

The IBM TotalStorage 3590 has a choice of native Fibre Channel attachment or UltraSCSI attachment. Each drive has two interfaces, both of the same type. The drive allows one of the two interfaces to be active; the other can be used in turn or in the event of a failure. With Fibre Channel interfaces, the IBM TotalStorage 3590 can be directly attached at greater distances and provide additional performance.

Uniquely, the 3590 can attach to open systems host servers using a native Fibre Channel interface, or to a S/390 host using Enterprise System Connection (ESCON) links or the newer FICON links. FICON links are based on Fibre Channel transport layers with a different I/O protocol instead of SCSI in the top layers. FICON attachment is covered in detail in Chapter 6, “FICON attachment of IBM tapes” on page 137. You can also find more detail about the 3590 family in IBM TotalStorage Enterprise Tape: A Practical Guide, SG24-4632.

The 3590 family includes a variety of members:
򐂰 3590 tape drives
򐂰 3590 cartridges
򐂰 3590 ESCON/FICON control unit

Figure 4-5 shows the 3590 Model E11.

Figure 4-5 3590 - E11

4.4.1 IBM TotalStorage 3590 tape drives

The IBM TotalStorage 3590 B11, E11 (Figure 4-5), and H11 are frame-mounted or rack-mounted and incorporate a 10-cartridge Automated Cartridge Facility (ACF) autoloader for high-capacity (up to 1.2 TB for Model E11s) unattended operation. The IBM TotalStorage 3590 B1A, E1A, and H1A are designed for use in an IBM TotalStorage 3494 Tape Library, IBM TotalStorage Virtual Tape Server, and IBM TotalStorage 3590 Silo Compatible Tape Subsystem. These models do not have an ACF.

The IBM TotalStorage 3590 tape drive is designed for high data integrity by using sophisticated error correction codes and factory-written servo tracks for precise head positioning. The IBM TotalStorage 3590 uses a bidirectional longitudinal serpentine recording technique. It also uses a fourth-generation magneto-resistive head that reads and writes 16 data tracks at a time. The drives use IBM LZ1 hardware compression to give a greater compression ratio than the IDRC compression technique used in 3490 drives.

Predictive failure analysis microcode gives early warning of drive and media problems to maximize availability. The Statistical Analysis and Reporting System (SARS) analyzes and reports on tape drive and tape cartridge performance to help you determine whether the tape cartridge or the hardware in the tape drive is causing errors, whether the tape media is degrading over time, and whether the tape drive hardware is degrading over time.

The drives automatically write a block ID after each block of data. This is used to locate data rapidly. In a high-speed search or locate mode, the drive positions rapidly to a requested block. The drive is aware of which group of 16 tracks holds the block and does not read every intervening block.

Both models read and write 16 tracks of data in parallel. When they reach the end of a tape, the heads index, the tape reverses, and a further set of 16 tracks is written in the opposite direction. This continues until all the tracks have been written. This form of recording is described as serpentine longitudinal. An additional benefit is that a full tape does not need to be rewound; it is already at load point.

The 3590 drives have two interfaces so that they can be attached to two servers, although only one interface is ever active at any moment. This provides additional availability if paths to the drive are configured correctly. If the drives are attached to a 3590 Model A60 for S/390 attachment, or are attached to AIX, failover to the second path is automatic.

The default is for the drives to have UltraSCSI interfaces. Fibre Channel interfaces are available by ordering feature code 3510 for the upgrade of an existing drive or feature code 9510 for a new drive. With a Fibre Channel interface, 3590s may be attached to a 2109 switch or 2103 hub. Attachment to a hub is only supported for distance; multiple initiators and targets on the same loop are not supported.

򐂰 3590 Model Bxx tape drives: The 3590 Model B drives have an uncompressed data rate of 9 MB/sec. They read or write 128-track format cartridges. The 3590 B drives can be upgraded to 3590 Model E drives.
򐂰 3590 Model Exx tape drives: You can order the 3590 E models with either UltraSCSI or Fibre Channel interfaces. Each drive has two interfaces for availability reasons. An IBM TotalStorage 3590 Model E tape drive provides an uncompressed data rate of up to 14 MB/sec, approximately 55 percent faster than the IBM TotalStorage 3590 Model B. Furthermore, 3590Es write a total of 256 tracks across the tape, doubling the cartridge capacity compared to B models. The 3590Es may read 128-track or 256-track cartridges, and write 256-track cartridges.
򐂰 3590 Model Hxx tape drives: As with the 3590 Model E drive, you can order the 3590 H models with either UltraSCSI or Fibre Channel interfaces. Each drive has two interfaces for availability reasons. An IBM TotalStorage 3590 Model H tape drive provides an uncompressed data rate of up to 14 MB/sec, which is the same rate as a 3590 Model E drive. With 3590Hs, a total of 384 tracks are written across the tape, increasing the cartridge capacity compared to E models by 50%. 3590Hs may read 128-track, 256-track, or 384-track cartridges, and write only 384-track cartridges. There is an increase in mount times due to the requirement for the head to lock into the servo tracks. The 3590 H model is only supported by the A60 tape drive controller.

There is a wide range of support for 3590s, which includes:
򐂰 IBM S/390 and zSeries with the 3590-A60
򐂰 IBM RS/6000, SP, and pSeries
򐂰 IBM AS/400 and iSeries
򐂰 IBM NUMA-Q® servers
򐂰 IBM Netfinity, xSeries, and Intel-based servers running Windows NT 4 or Windows 2000
򐂰 HP servers
򐂰 Sun servers

Note that support for a SCSI drive does not imply that a Fibre Channel drive is supported.

For the latest information about supported servers, host bus adapters, software levels, and fabric components for both SCSI and Fibre Channel 3590 drives, see: http://www.storage.ibm.com/hardsoft/tape/3590/3590opn.html

4.4.2 IBM 3590 cartridges

The 3590 cartridges use metal-particle half-inch tape in an enclosed cartridge shell. There are two capacities of 3590 cartridge media:
򐂰 The J cartridge: The High Performance Cartridge Tape (HPCT), providing 10 GB, 20 GB, or 30 GB before compression. It is identifiable by the blue tabs on the cartridge case.
򐂰 The K cartridge: The Extended High Performance Cartridge Tape (EHPCT), providing 20 GB, 40 GB, or 60 GB before compression. It is identifiable by the green tabs on the cartridge case.

The higher of the three capacities shown is achieved in 384-track mode on a 3590H drive in an uncompressed format. Both types have exactly the same cartridge case, so you can physically store them in the same libraries.

The 3590 drives may require the Extended Media Support feature to use EHPCT cartridges. Check with your IBM customer representative for verification as to drive capabilities. Extended Media Support is feature 5780 on 3590Bs and feature 5781 on 3590Es. All 3590 Model H drives are shipped with this support as standard.
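As an illustrative aside, the three capacity points per cartridge scale with the number of tracks each drive generation writes (128, 256, or 384, as described in 4.4.1). The small calculation below merely restates that proportionality:

```python
# Illustrative only: 3590 cartridge capacity scales with the track count written.
TRACKS_BY_DRIVE = {"3590 B": 128, "3590 E": 256, "3590 H": 384}
BASE_CAPACITY_GB = {"J (HPCT)": 10, "K (EHPCT)": 20}   # native GB at 128 tracks

for drive, tracks in TRACKS_BY_DRIVE.items():
    for cartridge, base_gb in BASE_CAPACITY_GB.items():
        native_gb = base_gb * tracks // 128    # 10/20/30 GB for J, 20/40/60 GB for K
        print(f"{cartridge} written by {drive}: {native_gb} GB native")
```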

4.4.3 IBM 3590 Model A60 ESCON and FICON control unit

The IBM TotalStorage 3590 Model A60 control unit, with up to four times the performance of the previous model, the 3590 Model A50, provides ESCON and FICON attachment of the IBM TotalStorage 3590 tape drive to S/390 enterprise servers and the new zSeries 900 servers. Using IBM Seascape architecture to combine proven IBM components (RS/6000 base, AIX operating system, PowerPC® 604E microprocessors, and PCI bus architecture), the Model A60 provides high performance and enables resource sharing to help reduce hardware and infrastructure costs.

The IBM TotalStorage 3590-A60 control unit attaches to IBM TotalStorage 3590 Model B, Model E, or Model H tape drives via an UltraSCSI interface. The use of the 3590-A60 for FICON attachment is covered in detail in Chapter 6, “FICON attachment of IBM tapes” on page 137.

For more information about the 3590 subsystems, go to:
http://www.storage.ibm.com/hardsoft/tape/3590/prod_data/g225-6633.html

Attaching 3590s to a SAN

There are two ways to attach 3590s to a SAN, depending on whether they have UltraSCSI interfaces or Fibre Channel interfaces. UltraSCSI drives can be attached by using the 2108-G07 SAN Data Gateway or the 2108-R03 SAN Data Gateway Router. Fibre Channel drives can be attached directly to these fabric components:
򐂰 IBM 2103 hub (for distance extension only)
򐂰 IBM 3534 managed hub
򐂰 IBM 2109 switch
򐂰 IBM 2031 McDATA ES-1000 loop switch
򐂰 IBM 2042 Inrange FC/9000 Fibre Channel Director


Usually, tape is installed in SANs using a single path from the tape drive to the switch or host server. The failure of a drive data path, switch, or host bus adapter causes the immediate tape drive operation to fail. What is needed in the event of a path or component failure is the ability, as with disk drives, to dynamically retry the tape drive command through an alternate set of paths or components.

The 3590 Fibre Channel device driver for AIX provides this failover function. During installation, it allows you to configure IBM TotalStorage 3590 Fibre Channel drives in a SAN environment with redundant pathing that can be dynamically repathed, even during a job, in the event of a path or component failure. This repathing is completely transparent to the underlying application, host, or switch. This is possible because of an exclusive failover mechanism in the AIX tape device driver that allows the operator to allocate multiple paths to a 3590 drive.

In the event of a path or component failure, the failover mechanism automatically retries the current tape job using an alternate, pre-configured path. This is accomplished without operator intervention and, in most cases, without aborting the current job in progress. Properly configured, no single point of failure in the network can cause a permanent failure. Moreover, since failover is managed at the device driver level, any application using the RS/6000 has this high-availability function.
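As a rough illustration of the retry idea described above (this is not the AIX tape device driver itself, and the path names and failure simulation are invented):

```python
# Illustrative only: retry a tape operation over pre-configured alternate paths,
# roughly the behavior the text attributes to the AIX tape device driver.

class PathFailedError(Exception):
    """Raised when a command cannot be delivered down a given path."""

def send_command(path, command):
    # Placeholder for issuing the command down one Fibre Channel path.
    if path == "hba0->switch0->drive0":
        raise PathFailedError(f"{path} is unavailable")   # simulate a broken link
    return f"{command} completed via {path}"

def run_with_failover(paths, command):
    last_error = None
    for path in paths:                    # primary path first, then alternates
        try:
            return send_command(path, command)
        except PathFailedError as err:
            last_error = err              # remember the failure and try the next path
    raise RuntimeError(f"all paths failed: {last_error}")

paths = ["hba0->switch0->drive0", "hba1->switch1->drive0"]   # two paths to one drive
print(run_with_failover(paths, "WRITE block"))
```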

4.4.4 IBM TotalStorage 3592 tape drives

The IBM TotalStorage 3592 J1A tape drive is designed to take tape technology to the next level. With an improved data transfer rate and increased storage capacity, this drive is the latest in a tradition of leading edge tape technologies.

3592 features

The physical size of the 3592 drive is smaller than the 3590 family of drives, as shown in Figure 4-6. Two 3592 drives can be fitted in the same space normally occupied by a 3590 drive. This allows the 3494 to contain twice as many drives in the equivalent Lxx or Dxx frames. Also, the reduced weight of the 3592 enables a single engineer to install the drives into a library, offering faster and lower cost maintenance.


Figure 4-6 Size comparison: 3592 on top of a 3590 drive

3592 performance characteristics

The 3592 outstrips the 3590 native data rate by almost three to one, yielding a 40 MB/sec native data rate as compared to the 14 MB/sec of the 3590 E1A and H1A drives. Table 4-9 outlines the performance characteristics of the 3592.

Table 4-9 3592 characteristics

Characteristics                                     3592 Tape Drive specifications
Display panel                                       Alphanumeric liquid crystal display
Media                                               3592 Data Cartridge
Host attachment                                     ESCON/FICON via 3592 J70 or 3590 A60; FCP without 3592 J70 or 3590 A60
Data capacity                                       300 GB per uncompressed cartridge
Device data rate (native)                           40 MB/sec
Data transfer rate (maximum instantaneous)          200 MB/sec
Search/rewind speed                                 8 m/sec (315 inches/second)
Full cartridge rewind time (at the end of volume)   2 seconds
Maximum cartridge rewind time                       60 seconds
Device data rate (3:1 compression sustained)        120 MB/sec


3592 connectivity

In addition to direct attachment, the IBM TotalStorage 3592 tape drive can attach to either the A60 or the J70 tape controller.

Host systems attachment

The 3592 tape drive is supported in the following operating environments:
򐂰 AS/400 or iSeries
򐂰 RS/6000 or pSeries
򐂰 pSeries Linux
򐂰 xSeries
򐂰 xSeries Linux
򐂰 zSeries Linux
򐂰 Sun
򐂰 Hewlett Packard
򐂰 Microsoft Windows NT
򐂰 Windows 2000
򐂰 Windows Server 2003

For the latest details about specific hardware, software, and Fibre Channel support for the 3592, refer to the following Web site:
http://www.storage.ibm.com/tape/drives/3592/index.html

Software support for the 3590 or 3592 Tape Drives attached through the 3592 Model J70 Controller is available to host systems such as zSeries and S/390 through ESCON or FICON channels. The subsystem operates in either 3590 native mode, or 3590 or 3490E drive emulation mode (depending on the model). The 3590 or 3592 Tape Drives are added to the host system’s I/O configuration as though they were device type 3590 or 3490E.

Note: Refer to enabling APAR OA02208, which describes the software required to enable support.

For the latest details about specific hardware, software, and Fibre Channel support for the 3592, refer to the following Web site: http://www.storage.ibm.com/tape/drives/3592/index.html Software support for the 3590 or 3592 Tape Drives attached through the 3592 Model J70 Controller is available to host systems such as zSeries and S/390 through ESCON or FICON channels. The subsystem operates in either 3590 native mode, or 3590 or 3490E drive emulation mode (depending on the model). The 3590 or 3592 Tape Drives are added to the host system’s I/O configuration as though they were device type 3590 or 3490E. Note: Refer to enabling APAR OA02208, which describes the software required to enable support.

4.4.5 IBM 3592 cartridges

The 3592 tape has an eight-character external label made up of a six-character volser and a two-character media type.

3592 cartridge compatibility

This drive is designed to be incorporated into the same physical hardware as the 3590 tape drive. The 3592 cartridge is compatible with both the IBM TotalStorage 3494 and 3584 tape libraries.

DFSMS changes

There are changes to the OPEN/EOV processing, specifically that SMS is called by OPEN processing and the Data Class is checked when the tape is being written from load point (scratch or specific request, file sequence 1, DISP=NEW). This is done for both SMS and non-SMS tape. Also, the 3592 is a MEDIA5 media type for the tape media and an E1 (EFMT1) recording technology.

The introduction of the 3592 brings with it a new DataClass parameter to enable the usage of the 3592 cartridge to be scaled for performance or for data storage. This parameter (Performance Scaling) is checked at OPEN processing. A value of Y ensures that the 3592 tape is only used to 60 GB, and a value of N allows the tape to be used up to 300 GB (both are native tape capacity). Scaling allows you to set how much data you want to store on tape. You may prefer to “scale down” some tapes to ensure quicker access to the data you need to retrieve, or access to multiple streams of data in the case of a system restore. Any tape that has returned to scratch is formatted to the new required scaling (either 20% or 100%) when it is next mounted for a new write operation. The default value of the Performance Scaling parameter is N.

For a comprehensive discussion about the DFSMS changes introduced with the 3592, see z/OS DFSMS Software Support for IBM TotalStorage Enterprise Tape System 3592, SC26-7514.
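As an illustrative aside, the two scaling points are simply 20% and 100% of the 300 GB native cartridge capacity; the fragment below only restates that arithmetic and is not DFSMS or JCL syntax:

```python
# Illustrative arithmetic for the 3592 Performance Scaling parameter.
NATIVE_CAPACITY_GB = 300

def usable_capacity_gb(performance_scaling):
    """'Y' scales the cartridge to 20% for faster access; 'N' (the default) uses it fully."""
    fraction = 0.20 if performance_scaling.upper() == "Y" else 1.00
    return int(NATIVE_CAPACITY_GB * fraction)

print(usable_capacity_gb("Y"))   # 60 GB  -> quicker locate and recall
print(usable_capacity_gb("N"))   # 300 GB -> full native capacity
```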

IBM 3592 J70 control unit

The J70 controller introduces a higher power processor, enabling increased connectivity and performance for S/390 environments.

Design characteristics

Physically, the IBM 3592 J70 control unit is less than half the rack height of the standard 3590 A60 control unit. The IBM 3592 J70 control unit is designed to be installed into:
򐂰 An IBM 3494 D24 frame
򐂰 An STK C10 silo frame
򐂰 A stand-alone rack

Performance

The IBM 3592 J70 control unit has a data rate of up to 400 MB/sec. Figure 4-7 shows the host attachment of the 3592.

Figure 4-7 3592 host attachment (hosts A through D attach through FICON and ESCON directors to a 3590 Model A60 or 3592 Model J70 controller, which connects through an FC switch to 3592 drives 0 through 11)


Availability

The J70 controller offers increased availability with its dual hot-swap power supplies, non-disruptive code update (via the library manager), and the ability to hot swap internal adapters.

Controller comparison

The 3592 J70 controller features a high performance CPU (around one and a half times the performance offered by the 3590 A60). This enables a larger range of host attachment options. See Table 4-10 for other feature comparisons.

Table 4-10 Controller comparison: 3590 A60 versus 3592 J70

Physical size
򐂰 3590 A60: 19-inch rack mount, 9 EIA units
򐂰 3592 J70: 19-inch rack mount, 4 EIA units

Supported drives
򐂰 3590 A60: 3590 B, E, H; SCSI and Fibre
򐂰 3592 J70: Same as A60, plus 3592 J1A Fibre (12 maximum; maximum increases later)

Supported internal fibre switches
򐂰 3590 A60: 2109 S16, 2109 F16
򐂰 3592 J70: Same as A60

Expansion drawer for bandwidth or more drive support
򐂰 3590 A60: No
򐂰 3592 J70: Not today

Bandwidth maximum
򐂰 3590 A60: 200 MB/sec (with fibre drives and FICON)
򐂰 3592 J70: 400 MB/sec+ (projected)

Host adapters
򐂰 3590 A60: 2 FICON (2 Gb) and 4 ESCON; 1 FICON and 6 ESCON; 8 ESCON
򐂰 3592 J70: 4 FICON (2 Gb); 3 FICON and 2 ESCON; 2 FICON and 4 ESCON; 1 FICON and 6 ESCON; 8 ESCON

High availability properties
򐂰 3590 A60: Dual hot-swap power; non-disruptive drive code update via library
򐂰 3592 J70: Dual hot-swap power; hot swap of dead adapters; non-disruptive drive code update via library; future CA

4.4.6 IBM TotalStorage 3494 Virtual Tape Server

The 3494 supports unique functions provided by the 3494 Virtual Tape Server (VTS). The VTS is available as two models: the B10 and the B20. (Two older models, the Model B16 and the B18 VTS, were withdrawn from marketing.) To the host server, the VTS appears as several fully configured IBM 3490E tape subsystems with up to 256 tape drives. Two VTSs may be coupled together to form the basis for a Peer-to-Peer VTS (Figure 4-8), with data being duplicated automatically between the two VTSs.

A host system processes the data and sends it to the VTS. After the data is transferred to the VTS, the host processor is finished, until there is a further request for information. The VTS stores the data received from the host, as a virtual volume, on hard disk drives known as the tape volume cache within the Model B10 or B20 VTS frame. As more data is received from any attached system, the VTS continues to store this data as virtual volumes on the disk cache. This data, within minutes of the volume being closed, is copied to an IBM TotalStorage 3590 tape. When the high cache occupancy threshold is reached, the VTS deletes multiple virtual volumes until the low threshold is reached. When a virtual volume is written to the physical tape, it is then called a logical volume. A tape volume can exist within the Tape Volume Cache (TVC) and on a physical 3590 cartridge at the same time.

Each tape cartridge in the library is filled, greatly reducing the number of cartridges needed. Today, many cartridges stored in traditional tape automation solutions contain one file per tape and are, on average, only 15% used. By fully using cartridge capacity, the VTS can greatly reduce tape hardware, floor space, and personnel expenses.

Figure 4-8 IBM Peer-to-Peer VTS
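The high and low cache occupancy thresholds described above amount to simple space reclamation: once occupancy passes the high-water mark, virtual volumes that already have a copy on physical tape are removed from the cache until occupancy falls below the low-water mark. The following schematic is only an illustration of that idea; the thresholds, cache size, and volume list are invented, and it is in no way VTS microcode:

```python
# Schematic of the tape volume cache (TVC) space management described above.
# Thresholds, cache size, and volume contents are invented for illustration.
HIGH_THRESHOLD = 0.90    # start freeing space at 90% occupancy
LOW_THRESHOLD = 0.70     # stop once occupancy drops below 70%
CACHE_SIZE_GB = 400

# (volser, size in GB, already copied to a physical 3590 cartridge?)
cache = [("V00001", 40, True), ("V00002", 60, True), ("V00003", 90, True),
         ("V00004", 120, True), ("V00005", 80, False)]

def occupancy():
    return sum(size for _, size, _ in cache) / CACHE_SIZE_GB

if occupancy() >= HIGH_THRESHOLD:
    # Remove virtual volumes that are safely on tape (here, simply in list
    # order) until occupancy falls below the low threshold.
    for volume in [v for v in cache if v[2]]:
        if occupancy() < LOW_THRESHOLD:
            break
        cache.remove(volume)
        print(f"removed {volume[0]}; cache occupancy now {occupancy():.0%}")
```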

Not surprisingly, most data retrievals and deletions occur within 24 hours of when that data was created. We recommend that you size the TVC to allow these virtual volumes to be present within cache for at least this period of time. If the tape data is still within the TVC, it is retrieved at disk access speeds with no need for a real tape mount or any movement of cartridges. If the data requested from a host is not in the cache, the full logical volume is retrieved from the tape cartridge in the library, uploaded into the tape volume cache, and then transferred to the host from the tape volume cache.

Additionally, with feature code 4000 (Advanced Functions), IBM has enabled the ability to control the cache management of data for a z/OS® based processor. With feature codes 4001 to 4004 (Advanced Policy Management), you can now control functions outboard of the processor such as:
򐂰 Create up to 32 volume pools: This feature enables the mapping of logical volumes to a predetermined pool of 3590 media.
򐂰 Create copies of logical volumes: In both peer-to-peer and non peer-to-peer environments, you can make copies of logical volumes. The copy is made to a different storage pool than the primary copy.
򐂰 Control immediate or deferred processing within a Peer-to-Peer VTS: The copy of the logical volume can now be controlled by the storage administrator.
򐂰 Control cache residency: This allows you to manipulate the residency of logical volumes in the VTS Tape Volume Cache.

For details specific to the VTS, see 6.5, “IBM 3494 Virtual Tape Server” on page 167, and the following Redbooks:
򐂰 IBM TotalStorage Virtual Tape Server: Planning, Implementing, and Monitoring, SG24-2229
򐂰 IBM TotalStorage Peer to Peer Virtual Tape Server: Planning and Implementation Guide, SG24-6115


For more information about the 3494 subsystems, see either of these sites:
http://www.storage.ibm.com/hardsoft/tape/3494/index.html
http://www.storage.ibm.com/hardsoft/tape/3494_vts/index.html

The IBM 3494-B18 Virtual Tape Server has the option of two or four SCSI interfaces. For a VTS Model B10 or B20, the maximum number of SCSI interfaces is eight. These can be attached to a SAN by using the IBM 2108-G07 SAN Data Gateway. For more details about the interface intermix capabilities, see 6.5, “IBM 3494 Virtual Tape Server” on page 167. A separate LAN or RS-232 connection is required from attached servers to the 3494 library manager. Note that a Peer-to-Peer VTS cannot have SCSI interfaces and, therefore, cannot be attached to a SAN.

Figure 4-9 IBM TotalStorage 3494 Enterprise Tape Library

4.5 Digital Linear Tape and SuperDLT

Digital Linear Tape (DLT) technology is owned, developed, and manufactured by Quantum Corporation, who bought the technology from Digital Equipment. DLT technology uses linear recording and writes each data track on the entire length of the tape (about 1,800 feet for DLTtape IV media). Data is first written onto a track along the entire length of the tape, and when the end is reached, the heads are repositioned to record a new track again, along the entire length of the tape, now travelling in the opposite direction. This continues back and forth until the tape is full. Current DLT drives record either 128 tracks (DLT 2000XT and DLT 4000 drives) or 208 tracks (DLT 7000 and DLT 8000) on half-inch tape.

Quantum is also developing Super DLTtape technology. Super DLTtape uses Advanced Metal Particle (AMP) technology and Laser Guided Magnetic Recording to increase the native capacity of Super DLTtape media to more than 100 GB.


The current IBM DLT library is the 3502. We discuss SAN attachment in the following section. Older DLT libraries, such as the 3447, are not supported for SAN attachment using a SAN Data Gateway although it is possible that they may work. For more information about DLT tape technology, see Netfinity Tape Solutions, SG24-5218, or The IBM LTO Ultrium Tape Libraries Guide, SG24-5946. Note also that DLT drives may be installed in the 3584 library to ease migration to LTO.

4.5.1 IBM 3502 DLT tape library family

The 3502 tape library family is usually used with the Netfinity or xSeries. However, you can also use it with other systems with an LVD SCSI-2 interface.

IBM 3502-R14 and 3502-314 490/980 GB DLT library

The IBM 490/980 GB DLT library is a 14-cartridge library using DLT 7000 tape drives. The library is available in both a tower model (3502 Model 314) and a rack mountable model (3502 Model R14). The library has an LCD panel on the front for the operator and is equipped with a barcode reader. It is equipped with one DLT 7000 tape drive as standard, but can be upgraded to hold up to three drives. The drive has a native data transfer rate of 5 MB/sec, and each cartridge holds 35 GB uncompressed. The cartridges are grouped in two seven-cartridge removable magazines.

The library provides SCSI-2 single-ended interfaces and is equipped with 68-pin external connectors. You can configure the library to operate fully automatically, under control of a software application, or you can use it to access tapes in a sequential mode.

IBM 3502-108 280/560 GB DLT autoloader

The IBM 280/560 GB DLT autoloader is a robotic tape handling system based on DLT 7000 technology with an uncompressed 35 GB cartridge capacity (Table 4-11). It combines one DLT drive with eight cartridge slots. Two of those slots are fixed, while the six others are grouped in a removable magazine. It provides SCSI-2 single-ended interfaces and is equipped with 68-pin external connectors. Although the library is intended to be used as an external desktop device, it can be mounted in a Netfinity 9306 Model 300 rack.

Table 4-11 IBM 3502 DLT tape library family summary

3502-108
򐂰 Number of drives: 1 DLT 7000
򐂰 Number of cartridges: 8 DLT Type IV maximum
򐂰 Maximum capacity (uncompressed/compressed): 280 GB / 560 GB
򐂰 Connection type: SCSI-2 single ended
򐂰 SAN connectivity: IBM SAN Data Gateway 2108-R03 with feature code 2840, part number 2108R3L

3502-314 and R14
򐂰 Number of drives: 1 DLT 7000 standard, 3 DLT 7000 maximum
򐂰 Number of cartridges: 14 DLT Type IV maximum
򐂰 Maximum capacity (uncompressed/compressed): 490 GB / 980 GB
򐂰 Connection type: SCSI-2 single ended
򐂰 SAN connectivity: IBM SAN Data Gateway 2108-R03 with feature code 2840, part number 2108R3L

IBM 7337 Model 305/306 DLT library

The IBM 7337 Digital Linear Tape Library provides a solution for users who require a high-capacity automated tape library in an RS/6000 or pSeries environment. The IBM 7337 tape library has a dual-host function that allows two RS/6000 processors to be attached to it and operate it as two independent autoloaders. This enables potentially doubled throughput, concurrent read/write operations, and fault tolerance.


This tape library incorporates 15 data cartridges and one or two drives in the 7337 Model 305 desktop or 7337 Model 306 rack-mount configuration. Using DLT 7000 drive technology, the IBM 7337 uses tape cartridges that provide 35 GB native capacity and has a native data transfer rate of 5 MB/sec. With data compression, you can store up to 70 GB on a single cartridge. This gives you a maximum library capacity of 1.05 TB with a data transfer rate of up to 10 MB/sec (with compression). The IBM 7337 is also backward read/write compatible with legacy DLT formats. A barcode reader is standard and provides cartridge identification, inventory, and media management. The IBM 7337 can operate in manual, autoloader, and random mode.

These 7337 models are replaced by the 7337 Model 360. This library offers from one to six DLT 8000 drives with LVD SCSI interfaces, with a cartridge capacity ranging from 15 to 60 Type IV cartridges. This gives a library capacity reaching 2.4 TB, uncompressed. However, no support statement has been made for the attachment of this model to a SAN.

Summary and SAN attachment

The 7337 may be SAN-attached through a 2108-R03 SAN Data Gateway Router, as shown in Table 4-12. Both models are backward-compatible with earlier DLT 4000 and DLT 7000 drives.

Table 4-12 IBM 7337 DLT tape library summary

7337-305/306
򐂰 Number of drives: 1 DLT 7000 standard, 2 DLT 7000 maximum
򐂰 Number of cartridges: 15 DLT Type IV maximum
򐂰 Maximum capacity (uncompressed/compressed): 525 GB / 1050 GB
򐂰 Connection type: SCSI-2 differential interface
򐂰 SAN connectivity: IBM SAN Data Gateway 2108-R03 with feature code 2830, part number 2108R3D

For more information, see 7337 Setup and Operator Guide, GA32-0399, or go to these Web sites:
http://www.storage.ibm.com/hardsoft/tape/7337/index.html
http://www.storage.ibm.com/hardsoft/tape/pubs/pubs7337.html

4.6 Other tape libraries

There are many other tape products that are available or widely installed today. These products are not certified for SAN attachment. There is no special reason for them to be used in a SAN except when you need to read old media. However, you may choose to integrate existing libraries in your SAN as a first step towards storage networking.

In general, we do not feel that 4 mm or 8 mm libraries have the features necessary to be called SAN-ready. The use of helical-scan recording offers high storage capacities, but at the expense of a complex drive, and lower duty cycles and media life than other formats. While 3490 drives offer the reliability needed for sharing, they no longer offer the capacity or performance that the 3580 or 3590 family offer.


4.7 SAN readiness summary

We conclude the chapter by summarizing, in Table 4-13, the key SAN-readiness criteria for the tape subsystems discussed in this chapter. All figures in the table are shown for uncompressed data.

Table 4-13 SAN readiness summary

Capability                 3502 DLT    3583 LTO    3584 LTO    3590/3494               3592/3494
Native Fibre Channel       No          No          Yes         Yes, to drives          Yes
Multiple drives            Up to 3     Up to 6     Up to 72    Up to 76 S/390, 32 FC
Sharing and partitioning   No          No          Yes         Yes                     Yes
Remote management          No          No          No          Yes                     Yes
Enterprise automation      No          No          Good        Excellent               Excellent
High performance           5 MB/sec    15 MB/sec   15 MB/sec   9 to 14 MB/sec          40 MB/sec
Large cartridge            35 GB       100 GB      100 GB      Up to 60 GB             300 GB
Library capacity           0.49 TB     7.2 TB      248 TB      374 TB

As we can see, there is no perfect subsystem. Each has its own strengths. Your planned usage will help you to decide which configuration to use. See Chapter 5, “Tape solutions” on page 109, for examples.


Chapter 5. Tape solutions

This chapter discusses some practical tape storage area network (SAN) solutions for the open server environment. We use some of the SAN infrastructure and tape building blocks described in earlier chapters to create configurations, such as remote tape vaulting, which solve specific real business problems. Because one size does not necessarily fit all, there are usually multiple solutions for each category. You can personalize them by capacity and server connectivity.

In addition to the hardware configurations, SAN and storage management are usually required. Because the solutions are designed for open server attachment, most storage management software that supports library sharing can be used. We show which management software is available for the components of each solution.

Tivoli Storage Manager is the perfect complementary data and information manager for the open SAN environment. It supports more than 39 different platforms and provides full-function, centralized administration for data and storage management. A progressive backup methodology enables you to back up just the data that has changed, yet have one-step restore. Local area network (LAN)-free backup provides backup and restore over the high-speed SAN. This reduces network traffic and improves application and transaction response times. Library management is included in Tivoli Storage Manager Version 3.7 and later to allow library sharing.


5.1 Highly reliable, low cost backup and archive

Backup is sometimes compared with insurance. However, there are two important differences. When you make a claim against insurance, the insured item is repaired or replaced. For example, if your photograph album is destroyed, you can buy a new album, but not the photographs themselves, which had the real value. In this case, the data is comparable to the photographs. It is the data that needs to be restored, and to do that, you have to be sure that not only was the backup data written successfully, but that it can also be read successfully. Secondly, you insure against the probability that an event may occur. It is unlikely that your house will burn down. In contrast, with backup, you protect against the certainty that you will need to restore. The vast majority of restores are for a single file due to user error.

This section details three examples of open system SAN solutions for low-cost backup and restore:
򐂰 Backup for workgroups: Up to seven servers sharing 3.6 TB uncompressed capacity (see 5.1.1, “Backup for workgroups”)
򐂰 Backup for departments: Up to 14 servers sharing 7.2 TB uncompressed capacity (see 5.1.2, “Backup for departments” on page 113)
򐂰 Backup for large departments and for enterprises: Up to 28 servers sharing 248 TB uncompressed capacity (see 5.1.3, “Backup for large departments and enterprises” on page 115)

Business value

This solution protects and manages valuable business data efficiently and reliably at distances of up to 10 kilometers. According to Datamation, recreating 20 MB of data can cost between $15,000 and $100,000 U.S.

The solution reduces costs and simplifies storage management with commonly-managed and shared tape storage. Storage management efficiency needs to improve by up to 60% a year to keep up with the increase in stored data. Storage management costs typically account for more than four times the initial purchase cost of the hardware.

The solution reduces backup time by transferring only changed data over high-speed SAN links. Up to 60% of LAN traffic can be due to data housekeeping.

5.1.1 Backup for workgroups

This configuration provides dynamic tape sharing for up to seven heterogeneous servers for fast, reliable, cost-effective backup, archive, and restore, and up to 3.6 TB capacity.

Description

The IBM 3583-L36 Scalable Tape Library provides low-cost, industry standard and certified Linear Tape-Open (LTO) storage for backup, restore, and archive. LTO is a significant breakthrough in open tape storage. It provides enterprise-level reliability, performance, and capacity, with the 3583 providing up to six tape drives and up to 3.6 TB uncompressed capacity.

The IBM 3583 is configured with four LVD UltraSCSI tape drives. It is connected to the servers through an IBM 2108-R03 SAN Data Gateway Router and an IBM 2109-S08 8-port switch (Figure 5-1).


Figure 5-1 Workgroup backup solution (hosts 1 through 7 attach through an IBM 2109-S08 Fibre Channel switch and an IBM 2108-R03 SAN Data Gateway Router to the SCSI tape drives and SCSI control module of an IBM 3583 Ultrium Scalable tape library)

In addition to the hardware configuration, Tivoli Storage Manager (or other industry recognized storage software) provides highly-automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed environments. The SAN fabric is managed by the StorWatch SAN Fibre Channel switch specialist and the StorWatch SAN Data Gateway specialist.

Platform support and system requirements

Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and availability dates for:
򐂰 RS/6000 servers that support FC host adapters
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters

http://www.storage.ibm.com/hardsoft/tape/3583/3583opn.html

Implementation notes

For disk systems that support FlashCopy or Instant copy, it is possible to eliminate the back-up window by creating a time zero copy and backing up from the copy. If backup or extensive tape usage is likely to occur during peak productive time, then, for performance reasons, we strongly recommend that you have a separate Fibre Channel host bus adapter and connection for the tape.

Tivoli Storage Manager Tape Library Sharing or an equivalent software product is required. For library sharing, all the servers must share the same LAN and protocol so that backup clients can communicate with the backup server for tape library services.

The IBM 2108-R03 SAN Data Gateway Router has two Small Computer Systems Interface (SCSI) buses. To ensure adequate bandwidth, configure no more than two LVD tape drives per SCSI bus.
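The two-drives-per-bus guidance follows from simple bandwidth arithmetic. The rough illustration below assumes the nominal 80 MB/sec Ultra2 LVD bus rate and the 30 MB/sec compressed Ultrium drive rate quoted earlier in this book; it is a sizing sketch only, not an additional configuration rule:

```python
# Rough sizing arithmetic behind the "two LVD drives per SCSI bus" guidance.
ULTRA2_LVD_BUS_MBS = 80     # nominal Ultra2/Wide LVD SCSI bus bandwidth
DRIVE_COMPRESSED_MBS = 30   # LTO Ultrium 1 effective rate at 2:1 compression

for drives_per_bus in (1, 2, 3):
    demand = drives_per_bus * DRIVE_COMPRESSED_MBS
    verdict = "fits" if demand <= ULTRA2_LVD_BUS_MBS else "exceeds the bus"
    print(f"{drives_per_bus} drive(s): {demand} MB/sec peak -> {verdict}")
```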

Sample configuration

Table 5-1 shows a sample configuration, provided for guidance only, which excludes rack, cables, connectors, media, and country or region specific information. Six servers (four Intel servers running Windows NT or Windows 2000, one RS/6000 running AIX, and one Sun server running Solaris) are sharing an IBM 3583 Ultrium Scalable tape library with four drives and an uncompressed capacity of 3.6 TB. The servers are connected to the library through an IBM 2109 Fibre Channel Switch and an IBM 2108-R03 SAN Data Gateway Router.

Table 5-1 Sample configuration: Workgroup backup

Product or feature code   Description                                               Qty
3583-L36                  Ultrium Scalable tape library                             1
8004                      LTO Ultrium High-Voltage Differential (HVD) drive sled    4
8008                      Redundant power module                                    1
9211                      Attached to Sun system                                    1
9212                      Attached to Windows system                                1
9600                      Attached to RS/6000 system                                1
2109-S08                  SAN Fibre Channel switch                                  1
2010                      Shortwave Gigabit Interface Convertor (GBIC)              3
Included                  StorWatch SAN Fibre Channel switch Specialist             1
2108-R03                  SAN Data Gateway Router                                   1
2840                      UltraSCSI Low-Voltage Differential (LVD) port             1
Included                  StorWatch SAN Data Gateway Specialist                     1

Additional information

For the latest product and support information, refer to these Web sites:
򐂰 IBM 3583 Ultrium tape library
  http://www.ibm.com/storage/lto
򐂰 IBM 2109 SAN Fibre Channel Switch
  http://www.storage.ibm.com/ibmsan/products/2109/san_switch_solu.html?
򐂰 IBM 2108 Router
  http://www-1.ibm.com/servers/storage/support/san/2108r03.html
򐂰 SAN information
  http://www.ibm.com/san
򐂰 Tivoli Storage Manager
  http://www.tivoli.com


5.1.2 Backup for departments

This solution provides dynamic tape sharing for up to 14 heterogeneous servers for fast, reliable, cost-effective backup, archive, and restore, and up to 7.2 TB capacity.

Description

Apart from using a different model of the IBM 3583, this solution differs from the workgroup solution by using the IBM 2108-G07 SAN Data Gateway for additional SCSI connectivity. The IBM 3583-L72 Scalable tape library provides low-cost, industry standard and certified LTO storage for backup, restore, and archival. LTO is a significant breakthrough in open tape storage, providing enterprise-level reliability, performance, and capacity with up to six tape drives and up to 7.2 TB uncompressed capacity.

The IBM 3583, with six HVD tape drives, is connected to the servers through an IBM 2108-G07 SAN Data Gateway and an IBM 2109-S16 16-port Fibre Channel switch, as shown in Figure 5-2.

Figure 5-2 Department backup solution (diagram: up to 14 hosts connect through an IBM 2109 FC switch to an IBM 2108-G07 gateway, which attaches over SCSI to the six drives and SCSI Control Module of the IBM 3583 Ultrium Scalable tape library)

In addition to the hardware configuration, Tivoli Storage Manager or other industry-recognized storage software provides highly automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed environments. The SAN fabric is managed by the StorWatch SAN Fibre Channel switch Specialist and the StorWatch SAN Data Gateway Specialist.

Platform support and system requirements
Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and availability dates for:


򐂰 RS/6000 and pSeries servers that support FC host adapters
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters
http://www.storage.ibm.com/hardsoft/tape/3583/3583opn.html

Implementation notes
For disk systems that support FlashCopy or Instant copy, it is possible to eliminate the backup window by creating a time zero copy and backing up from the copy. If backup or extensive tape usage is likely to occur during peak production time, then, for performance reasons, we strongly recommend that you have a separate Fibre Channel host bus adapter and connection for tape. Tivoli Storage Manager Tape Library Sharing or an equivalent software product is required. For library sharing, all the servers must share the same LAN and protocol so that backup clients can communicate with the backup server for tape library services. The IBM 2108-G07 SAN Data Gateway has four SCSI buses. To ensure adequate bandwidth to each drive, configure no more than three HVD tape drives per SCSI bus. Note that LVD drives are not supported on the IBM 2108-G07.
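To make the library-sharing requirement more concrete, the following Python sketch generates the kind of administrative macro a Tivoli Storage Manager library-manager server runs to define a shared SCSI library. The server name, library name, and device special files are hypothetical, and the command syntax shown is approximately that of later Tivoli Storage Manager releases; check the Administrator's Reference for your level before using it.

# Sketch: emit a TSM macro defining a shared library on the library manager.
# Names and device files (TSMMGR, LIB3583, /dev/smc0, /dev/rmtN) are hypothetical.
drives = {f"DRIVE{i}": f"/dev/rmt{i}" for i in range(6)}

macro = ["define library LIB3583 libtype=scsi shared=yes",
         "define path TSMMGR LIB3583 srctype=server desttype=library device=/dev/smc0"]
for name, dev in drives.items():
    macro.append(f"define drive LIB3583 {name}")
    macro.append(f"define path TSMMGR {name} srctype=server desttype=drive "
                 f"library=LIB3583 device={dev}")
# A library-client TSM server would instead define the library as:
#   define library LIB3583 libtype=shared primarylibmanager=TSMMGR
print("\n".join(macro))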

Sample configuration
Table 5-2 shows a sample configuration, provided for guidance only, which excludes rack, cables, connectors, media, and country or region-specific information. Twelve servers (eight Intel servers running Windows NT or Windows 2000, two IBM RS/6000 running AIX, and two Sun servers running Solaris) share an IBM 3583 Scalable tape library with six drives and uncompressed capacity of 7.2 TB. The servers are connected to the library through an IBM 2109 Fibre Channel Switch and IBM 2108 SAN Data Gateway.

Table 5-2 Sample configuration: Department backup

Product or feature code   Description                                     Qty
3583-L72                  Ultrium Scalable tape library                   1
8004                      LTO Ultrium HVD drive sled                      6
8008                      Redundant power module                          1
9211                      Attached to Sun system                          1
9212                      Attached to Windows system                      1
9600                      Attached to RS/6000 system                      1
2109-S16                  SAN Fibre Channel Switch                        1
2010                      Shortwave GBIC                                  12
Included                  StorWatch SAN Fibre Channel switch Specialist   1
2108-G07                  SAN Data Gateway                                1
2214                      Dual SW Fibre Channel Port                      1
Included                  StorWatch SAN Data Gateway Specialist           1


Additional information
For the latest product and support information, refer to these Web sites:
򐂰 IBM 3583 Ultrium Tape Library
  http://www.ibm.com/storage/lto
򐂰 IBM 2109 SAN Fibre Channel Switch
  http://www-1.ibm.com/servers/storage/support/san/2109s16.html
򐂰 IBM 2108 SAN Data Gateway
  http://www-1.ibm.com/servers/storage/support/san/2108g07.html
򐂰 SAN information
  http://www.ibm.com/san
򐂰 Tivoli Storage Manager
  http://www.tivoli.com/tsm

5.1.3 Backup for large departments and enterprises
This solution provides dynamic tape sharing for up to 28 heterogeneous servers for fast, reliable, cost-effective backup, archive, and restore with up to 248 TB capacity.

Description
The IBM 3584 UltraScalable tape library provides low-cost, industry standard and certified LTO storage for backup, restore, and archive. LTO is a significant breakthrough in open tape storage, providing enterprise-level reliability, performance, and capacity with up to 24 tape drives and up to 66.7 TB uncompressed capacity in this dual-frame configuration. However, you should note that the maximum configuration for a 3584 is six frames, holding up to 72 drives, with a maximum of 248 TB uncompressed capacity. Figure 5-3 shows the configuration with HVD SCSI drives connected through the IBM 2108-G07 SAN Data Gateway.
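Before turning to the figure, a quick cross-check of these capacity figures: each LTO Ultrium 1 cartridge holds 100 GB native, so the quoted library capacities imply the cartridge counts computed by the minimal sketch below (the 100 GB cartridge figure is the only assumption).

# Implied cartridge counts behind the quoted native (uncompressed) capacities.
CARTRIDGE_GB = 100   # native capacity of one LTO Ultrium 1 cartridge

for label, capacity_tb in (("dual-frame configuration", 66.7),
                           ("six-frame maximum", 248.0)):
    cartridges = capacity_tb * 1000 / CARTRIDGE_GB
    print(f"{label}: {capacity_tb} TB is roughly {cartridges:.0f} cartridges")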



Figure 5-3 Backup for large departments and enterprises: SCSI tape (diagram: up to 28 hosts connect through two cascaded IBM 2109 FC switches to two IBM 2108-G07 gateways, which attach over SCSI to the 12 drives and library controller of the IBM 3584 UltraScalable Tape Library)

Figure 5-4 shows the same configuration with direct Fibre Channel attached tape drives. Notice that the two gateways are replaced by a switch. The reason for the switch-to-switch cascading is simply to have enough ports to connect 28 servers and 12 tape drives.

Figure 5-4 Backup for large departments and enterprises: Fibre Channel drives (diagram: up to 28 hosts connect through two IBM 2109 FC switches cascaded to a third IBM 2109 switch, which attaches directly to the 12 Fibre Channel drives and library controller of the IBM 3584 UltraScalable Tape Library)

In addition to the hardware configuration, Tivoli Storage Manager (or other industry-recognized storage software) provides highly automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed environments.


The SAN fabric is managed by the StorWatch SAN Fibre Channel switch Specialist.

Platform support and system requirements
Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and availability dates for:
򐂰 RS/6000 servers that support FC host adapters
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters
http://www.storage.ibm.com/hardsoft/tape/3584/3584opn.html

Implementation notes
Here we discuss various implementation considerations:
򐂰 For disk systems that support FlashCopy or Instant copy, it is possible to eliminate or significantly reduce the backup window by creating a time zero copy and backing up from the copy.
򐂰 For performance reasons, we recommend that you have a separate Fibre Channel host bus adapter and connection for the tape path from each server to the IBM 2109 Fibre Channel switch. This is because input/output (I/O) operations to tape are typically quite long and can conflict with the large number of short I/Os to disk normally associated with transactional processing.
򐂰 Because there is no software, at this time, to provide automated dual path management for the 3584, there are single points of failure. If a host adapter or link fails, then one server loses its connection. If a whole switch fails, all the servers attached to that switch lose their connection, but the remaining servers still have access to all the tape drives. An optional second power supply in the switch reduces the probability of a complete switch failure. It is possible to configure the SAN with no single point of failure other than the library controller and robotics by adding an additional host adapter and switch connection to each server and duplicating switches and paths. However, failover is not automatic because, although the physical path exists, the logical path must be manually reconfigured at each server.
Note: The AIX device driver for Fibre Channel 3590s in a 3494 library provides automated failover to the second Fibre Channel port of the 3590 drive.
򐂰 Tivoli Storage Manager 3.7 or later, or another equivalent software product, is required. All the servers must share the same LAN and protocol.
򐂰 The IBM 2108-G07 SAN Data Gateway has four SCSI buses. To ensure adequate bandwidth to each drive, configure no more than three drives per SCSI bus.
򐂰 The maximum sustained data rate to each drive is 15 MB/sec (uncompressed), although the burst rate for HVD SCSI is 40 MB/sec. It is therefore preferable to have four Fibre Channel links between the cascaded switches (see the sizing sketch after these notes).
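A quick way to size the number of switch-to-switch (or inter-site) Fibre Channel links is to compare the aggregate sustained drive demand with the bandwidth of one link. The sketch below uses the 15 MB/sec per-drive figure from the notes above and a nominal 100 MB/sec per link; extra links beyond the computed minimum provide headroom for burst rates and compressed workloads.

# Minimum Fibre Channel links for the aggregate sustained tape demand.
import math

DRIVES = 12
SUSTAINED_MB_S = 15    # per-drive sustained native rate quoted above
LINK_MB_S = 100        # nominal Fibre Channel link bandwidth

aggregate = DRIVES * SUSTAINED_MB_S
min_links = math.ceil(aggregate / LINK_MB_S)
print(f"Aggregate sustained demand: {aggregate} MB/s")
print(f"Minimum links: {min_links} (add links for burst and compression headroom)")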

Sample configuration
Table 5-3 shows a sample configuration, provided for guidance only, and excluding rack, cables, connectors, media, and country or region specific information. The 21 servers (15 Intel servers running Windows NT or Windows 2000, three IBM RS/6000 running AIX, and three Sun servers running Solaris) share an IBM 3584 UltraScalable tape library with 12 Fibre Channel Arbitrated Loop (FC-AL) drives and uncompressed capacity of 66.7 TB. The servers are connected to the library through three IBM 2109 switches (see Figure 5-4 for a configuration diagram).

Table 5-3 Sample configuration: Enterprise backup

Product or feature code   Description                                     Qty
3584-L32                  Tape library base frame                         1
1456                      LTO Ultrium FC-AL drive sled                    12
1653                      Capacity expansion                              1
9002                      First expansion frame attachment                1
9211                      Attached to Sun system                          1
9212                      Attached to Windows system                      1
9600                      Attached to RS/6000 system                      1
3584-D32                  Tape library expansion frame                    1
9001                      Driveless frame                                 1
2109-S16                  SAN Fibre Channel Switch                        3
2010                      Shortwave GBIC                                  30
6106                      Additional power supply                         3
Included                  StorWatch SAN Fibre Channel switch Specialist   1

Additional information
For the latest product and support information, refer to these Web sites:
򐂰 IBM 3584 Ultrium UltraScalable Tape Library
  http://www.ibm.com/storage/lto
򐂰 IBM 2109 Fibre Channel switch
  http://www-1.ibm.com/servers/storage/support/san/2109s16.html
򐂰 SAN information
  http://www.ibm.com/san
򐂰 Tivoli Storage Manager
  http://www.tivoli.com

5.2 Remote tape vaulting
The following solutions combine the efficiencies of library sharing and tape pooling with remote tape vaulting up to 10 kilometers, or more with the Finisar Optilinx 2000:
򐂰 Workgroup and small department solution: Remote tape vaulting for up to 14 servers sharing 7.2 TB capacity
򐂰 Large department and enterprise solution: Up to 28 servers sharing 248 TB capacity



Business value
The value of these solutions is to protect and manage valuable business data efficiently and reliably at distances of up to 10 kilometers, or more with suitable fabric components. According to Datamation, recreating 20 MB of data can cost between $15,000 and $100,000 U.S. Remote tape vaulting eliminates the need to transport tapes manually to a safe site and significantly reduces the risk of handling damage or loss in transit. Availability is also increased because the tapes are always online, so there is no risk of the tape you need being in transit. You can reduce costs and simplify storage management with commonly managed and shared tape storage. Storage management efficiency needs to improve by up to 60% a year to keep up with the increase in the amount of stored data, and storage management costs typically account for more than four times the initial purchase cost of the hardware.

5.2.1 Workgroup and small department solution
This solution provides dynamic tape sharing and remote tape vaulting for up to 14 heterogeneous servers for fast, reliable, cost-effective backup, archive, and restore with up to 7.2 TB capacity. The solution provides both library sharing and tape pooling.

Description
The IBM 3583 Ultrium Scalable Tape Library provides low-cost, industry standard and certified LTO storage for backup, restore, and archive. LTO is a significant breakthrough in open tape storage because it provides enterprise-level reliability, performance, and capacity with up to six tape drives, and up to a total uncompressed capacity of 7.2 TB. Figure 5-5 shows a configuration with HVD SCSI drives connected through the IBM 2108-G07 SAN Data Gateway. The IBM 2108-G07 is used because the IBM 2108-R03 does not have the option of longwave GBICs for extended distance. The SCSI Control Module has its own SCSI ID but does not require a dedicated SCSI connection (it can be shared with a drive). Therefore, with only six drives spread across four SCSI buses, there is plenty of SCSI bandwidth. In addition to the hardware configuration, Tivoli Storage Manager or other industry-recognized storage software provides highly automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed environments. The SAN fabric is managed by the StorWatch SAN Fibre Channel switch Specialist and the StorWatch SAN Data Gateway Specialist.



Figure 5-5 Remote vaulting for workgroup and small department: SCSI tape (diagram: up to 14 hosts and an IBM 2109 FC switch at the local site connect over up to 10 km to an IBM 2108-G07 gateway at the remote site, which attaches over SCSI to the six drives and SCSI Control Module of the IBM 3583 Ultrium Scalable tape library)

Platform support and system requirements
Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and availability dates for:
򐂰 RS/6000 servers that support FC host adapters
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters
http://www.storage.ibm.com/hardsoft/tape/3583/3583opn.html

Implementation notes
Here we discuss various implementation considerations:
򐂰 For disk systems that support FlashCopy or Instant copy, it is possible to eliminate or significantly reduce the backup window by creating a time zero copy and backing up from the copy.
򐂰 For performance reasons, we strongly recommend that you have a separate Fibre Channel host bus adapter and connection for the tape path from each server to the IBM 2109 Fibre Channel switch. This is because I/Os to tape are typically quite long and can conflict with the large number of short I/Os to disk associated with transactional processing.
򐂰 Tivoli Storage Manager, 3.7 or later, or an equivalent software product is required to manage the library. All the servers must share the same LAN and protocol.
򐂰 The IBM 2108-G07 SAN Data Gateway has four HVD SCSI buses. To ensure adequate bandwidth to each drive, configure no more than three drives per SCSI bus.



򐂰 In locations where dark fibre is expensive, the second link from the fibre switch can be eliminated. Performance may be adversely affected because the bandwidth between the two sites is halved. Availability is also reduced because the fibre and two FC ports become extra single points of failure.
򐂰 The maximum sustained data rate to each drive is 15 MB/sec (uncompressed), although the burst rate for HVD SCSI is 40 MB/sec. Therefore, it is preferable to have two Fibre Channel links between sites.
򐂰 Because there is no software, at this time, to provide dual path management for the 3583, there are single points of failure. If a host adapter or link fails, then one server loses its connection. If a whole switch fails, all the servers attached to that switch lose their connection, but the remaining servers still have access to all the tape drives. An optional second power supply in the switch reduces the probability of a complete switch failure.
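The bandwidth trade-off of dropping the second inter-site link can be gauged with the same simple arithmetic used earlier. In the sketch below the native figure comes from the notes above, while the 2:1 compression case is an assumption used only to show when a single link becomes the bottleneck.

# Effect of one versus two inter-site links on aggregate tape bandwidth.
DRIVES = 6
LINK_MB_S = 100   # nominal Fibre Channel link bandwidth

cases = (("native sustained, 15 MB/s per drive", 15),
         ("assumed 2:1 compressed workload, 30 MB/s per drive", 30))
for label, per_drive in cases:
    demand = DRIVES * per_drive
    for links in (1, 2):
        verdict = "fits" if demand <= links * LINK_MB_S else "link-limited"
        print(f"{label}: {demand} MB/s over {links} link(s) -> {verdict}")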

Sample configuration
Table 5-4 shows a sample configuration provided for guidance only and excludes rack, cables, connectors, media, and country or region specific information. Twelve servers (eight Intel servers running Windows NT or Windows 2000, two IBM RS/6000 running AIX, and two Sun servers running Solaris) share an IBM 3583 Ultrium Scalable Tape Library with six drives and uncompressed capacity of 7.2 TB. The servers are connected to the library through an IBM 2109 Fibre Channel switch and an IBM 2108 SAN Data Gateway over a distance of 7.5 km.

Table 5-4 Sample configuration: Workgroup vaulting

Product or feature code   Description                                     Qty
3583-L72                  Ultrium Scalable Tape Library                   1
8004                      LTO Ultrium HVD drive sled                      6
8008                      Redundant power module                          1
9211                      Attached to Sun system                          1
9212                      Attached to Windows system                      1
9600                      Attached to RS/6000 system                      1
2109-S16                  SAN Fibre Channel Switch                        1
2010                      Shortwave GBIC                                  10
2020                      Longwave GBIC                                   2
6106                      Additional power supply                         1
Included                  StorWatch SAN Fibre Channel switch Specialist   1
2108-G07                  SAN Data Gateway                                1
2214                      Dual SW Fibre switch port                       1
2319                      Additional single Fibre Channel longwave port   2
Included                  StorWatch SAN Data Gateway Specialist           1
5698-TSM                  Tivoli Storage Manager



Additional information
For the latest product and support information, refer to these Web sites:
򐂰 IBM 3583 Ultrium Scalable Tape Library
  http://www.ibm.com/storage/lto
򐂰 IBM 2109 Fibre Channel switch
  http://www-1.ibm.com/servers/storage/support/san/2109s16.html
򐂰 IBM 2108 SAN Data Gateway
  http://www-1.ibm.com/servers/storage/support/san/2108g07.html
򐂰 Tivoli Storage Manager
  http://www.tivoli.com/tsm

5.2.2 Large department and enterprise solution
This solution provides dynamic tape sharing and remote tape vaulting for up to 28 heterogeneous servers for fast, reliable, cost-effective backup, archive, and restore with up to 248 TB capacity. It provides both library sharing and tape pooling.

Description
The IBM 3584 UltraScalable Tape Library provides low-cost, industry standard and certified LTO storage for backup, restore, and archive. The IBM 3584 library is a significant breakthrough in open tape storage, providing enterprise-level reliability, performance, and capacity with up to 12 tape drives per frame, up to six frames, and up to a total uncompressed capacity of 248 TB. Figure 5-6 shows the configuration with HVD SCSI drives connected through the IBM 2108-G07 SAN Data Gateway.

Figure 5-6 Remote vaulting for departments and small enterprises: SCSI tape (diagram: up to 28 hosts and two IBM 2109 FC switches at the local site connect over up to 10 km to two IBM 2108-G07 gateways at the remote site, which attach over SCSI to the 12 drives and library controller of the IBM 3584 UltraScalable Tape Library)




Figure 5-7 shows the same configuration with direct Fibre Channel attached tape drives. Notice that the two gateways are replaced by a switch. Figure 5-7 also shows the option to replace two of the inter-site links with a short link between the two switches for locations where long distance fibre connections (dark fibre) are expensive. This does not affect availability, but can impact performance, so it is a trade-off between performance and cost.

Figure 5-7 Remote vaulting for departments and small enterprises: FC drives (diagram: up to 28 hosts and two IBM 2109 FC switches at the local site connect over up to 10 km to a third IBM 2109 switch at the remote site, which attaches directly to the 12 Fibre Channel drives and library controller of the IBM 3584 UltraScalable Tape Library)

In addition to the hardware configuration, Tivoli Storage Manager or other industry-recognized storage software provides highly automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed environments. The SAN fabric is managed by the StorWatch SAN Fibre Channel switch Specialist and the StorWatch SAN Data Gateway Specialist.

Platform support and system requirements
Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and availability dates for:
򐂰 RS/6000 servers that support FC host adapters
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters
http://www.storage.ibm.com/hardsoft/tape/3584/3584opn.html



Implementation notes
Here we discuss various implementation considerations:
򐂰 For disk systems that support FlashCopy or Instant copy, it is possible to eliminate or significantly reduce the backup window by creating a time zero copy and backing up from the copy.
򐂰 For performance reasons, we recommend that you have a separate Fibre Channel host bus adapter and connection for the tape path from each server to the IBM 2109 Fibre Channel switch. This is because I/Os to tape are typically quite long and can conflict with the large number of short I/Os to disk associated with transactional processing.
򐂰 Because there is no software, at this time, that provides automated dual path management for the 3584, there are single points of failure. If a host adapter or link fails, then one server loses its connection. If a whole switch fails, all the servers attached to that switch lose their connection, but the remaining servers still have access to all the tape drives. An optional second power supply in the switch reduces the probability of a complete switch failure. It is possible to configure the SAN with no single point of failure (other than the library controller and robotics) by adding an additional host adapter and switch connection to each server and duplicating switches and paths. However, failover is not automatic because, although the physical path exists, you have to manually reconfigure the logical path at each server.
򐂰 Tivoli Storage Manager, 3.7 or later, or an equivalent software product is required. All the servers must share the same LAN and protocol.
򐂰 The IBM 2108-G07 SAN Data Gateway has four HVD SCSI buses. To ensure adequate bandwidth to each drive, configure no more than three drives per SCSI bus.
򐂰 The maximum sustained data rate to each drive is 15 MB/sec (uncompressed), although the burst rate for HVD SCSI is 40 MB/sec, so it is preferable to have four Fibre Channel links between sites.

Sample configuration
Table 5-5 shows a sample configuration provided for guidance only and excludes rack, cables, connectors, media, and country or region specific information. Twenty-one servers (15 Intel servers running Windows NT or Windows 2000, 3 IBM RS/6000 running AIX, and 3 Sun servers running Solaris) share an IBM 3584 UltraScalable Tape Library with 12 drives and uncompressed capacity of 66.7 TB. The servers are connected to the library through two IBM 2109 switches to two IBM 2108 SAN Data Gateways over a distance of 7.5 km.

Table 5-5 Sample configuration: Enterprise vaulting

Product or feature code   Description                                     Qty
3584-L32                  Tape library base frame                         1
1455                      LTO Ultrium HVD drive sled                      12
1653                      Capacity expansion                              1
9002                      First expansion frame                           1
9211                      Attached to Sun system                          1
9212                      Attached to Windows system                      1
9600                      Attached to RS/6000 system                      1
3584-D32                  Tape library expansion frame                    1
9001                      Driveless frame
2109-S16                  SAN Fibre Channel Switch                        2
2010                      Shortwave GBIC                                  14
2020                      Longwave GBIC                                   4
6106                      Additional power supply                         2
Included                  StorWatch SAN Fibre Channel switch Specialist   1
2108-G07                  SAN Data Gateway                                2
2214                      Dual SW Fibre switch port                       2
2319                      Additional single Fibre Channel longwave port   4
Included                  StorWatch SAN Data Gateway Specialist           1
5698-TSM                  Tivoli Storage Manager

Additional information
For the latest product and support information, refer to these Web sites:
򐂰 IBM 3584 Ultrium UltraScalable Tape Library
  http://www.ibm.com/storage/lto
򐂰 IBM 2109 Fibre Channel switch
  http://www-1.ibm.com/servers/storage/support/san/2109s16.html
򐂰 IBM 2108 SAN Data Gateway
  http://www-1.ibm.com/servers/storage/support/san/2108g07.html
򐂰 SAN information
  http://www.ibm.com/san
򐂰 Tivoli Storage Manager
  http://www.tivoli.com/tsm

5.3 Disaster tolerant tape implementation
The previous section looked at solutions for remote tape vaulting. This section looks at disaster tolerance. The similarity between remote tape vaulting and disaster tolerance is extended distance; the main difference is that, for a fully disaster tolerant solution, two tape libraries are required. Unlike disk storage systems, most tape libraries do not have mirror or remote mirror capability. This means that the second copy needs to be initiated externally by a server. You can do this in many ways, depending on the total disaster tolerant implementation, which includes server, disk, tape, and software. A typical disaster tolerant implementation consists of duplicate servers in a cluster and duplicate disk storage with peer-to-peer remote copy. However, the demand to include backup and restore as part of a complete disaster tolerant solution is increasing.



Business value
This solution helps to keep your business running even in the event of a local disaster. It helps to protect and manage valuable business data efficiently and reliably at distances of up to 70 kilometers. Disaster tolerant tape libraries eliminate the need to manually transport tapes to a safe site and significantly reduce the risk of handling damage or loss. Availability may also increase because the tapes are always online, so there is no risk of the tape you need being in transit. The solution also helps you significantly reduce or eliminate the backup window by making a FlashCopy of the data in disk storage and doing intelligent backup to tape concurrent with production applications. It helps reduce costs and simplify storage management with commonly managed and shared tape storage. Storage management efficiency needs to improve by up to 60% a year to keep up with the increase of stored data. Storage management costs typically account for more than four times the initial purchase cost of the hardware.

5.3.1 Disaster tolerance for departments and small enterprises
This solution provides a high-performance, disaster tolerant storage solution for up to 12 servers running Windows NT, Windows 2000, or UNIX.

Description
This IBM SAN solution is designed to provide a seamless disk and tape disaster tolerant solution. Disk storage is included because it is an integral part of a disaster tolerant solution. Up to 12 Intel or UNIX servers can share up to 4.3 TB (or multiples of 4.3 TB) of disk storage. FlashCopy provides almost instantaneous copies for production, development and testing, as well as intelligent server-less backup with little or no disruption to production.

The IBM 2106 Modular Storage Server (MSS) provides high-function, versatile storage for midrange and small enterprises. Storage capacity is expandable up to 4.3 TB in a variety of different RAID formats (0, 1, 0+1, and 3/5), which can be intermixed. High availability is achieved through redundant power and cooling, dual controllers with automated failover and up to 512 MB battery-backed cache per controller, and multiple RAID levels. MSS Peer-to-Peer Remote Copy (PPRC) provides data replication over distances up to 10 km with standard Fibre Channel, and up to 70 km with additional extender components. Synchronous PPRC provides data consistency across the two sites by writing data at both sites before the operation completes. You can connect up to 12 servers to the MSS at each site, with each server having a clustered partner at the other site. This provides the advantage of storage consolidation and disaster tolerance. You can add an additional MSS with PPRC to the SAN at each site so each site can run productive applications and provide disaster tolerance for the other.

SAN connectivity is through IBM 2109 Fibre Channel Switches. Each IBM 2109 can have up to 16 Fibre Channel ports. The configuration is designed to avoid single points of failure with two switches and two paths from each server with the MSS Data Path Optimizer (DPO). Note that there is no equivalent to DPO for the Ultrium tape. Although there is physically an alternate path, it needs to be logically and manually assigned.

Backup and archive can be performed with minimum or no disruption to production by creating a FlashCopy and assigning it to a backup server running any compatible open server storage management software. The IBM 3584 Ultrium tape library provides up to 248 TB of



very high reliability tape storage. Up to 72 tape drives can perform backup and restore operations simultaneously. Storage can be managed and allocated locally or remotely with the Web-based StorWatch MSS Specialist and the MSS PPRC Manager.

There are many different backup scenarios that can be applied to this physical configuration. Here are two:
򐂰 In the first scenario (see Figure 5-8), there are two completely independent backup servers, one at each site. Backup is performed individually at each site. This has the advantage that backup data does not have to be transferred between sites, since each is independent. It is also useful if not all the data at each site is being mirrored. For example, only critical data at the primary site may be being mirrored to the second site. In this case, a tape from Library 2 cannot restore a file for a user on Library 1 because the backups are different. Additional protection, such as using the Tivoli Storage Manager disaster recovery manager, is needed to protect against a media failure by creating a local copy.
򐂰 In the second scenario, the two sites are maintained in synchronization (servers and data). There is only one backup server at the master site. This backup server performs the backup and then makes a mirror of the backup data and the backup database in the remote library. Again, the Tivoli Storage Manager Disaster Recovery Manager can do this. The disadvantage with this scenario is that the backup data must be transferred over the remote link, but the advantage is that both sets of tapes are identical. A passive backup server at the remote site can be used in the event of a catastrophic failure at the primary site. It needs to be initialized with the mirrored copy of the backup database.

Figure 5-8 Disaster tolerance for department and small enterprise (diagram: at each of the two sites, up to 12 hosts connect through dual IBM 2109 FC switches to an IBM 2106 disk storage system and an IBM 3584 UltraScalable Tape Library; the sites are linked over up to 10 km, or 70 km with additional fabric components)



Platform support and system requirements
This solution is supported on Windows NT, RS/6000 and pSeries servers, and Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters and an appropriate level of Solaris. For a current list of MSS supported servers and operating systems, see:
http://www-1.ibm.com/servers/storage/support/disk/index.html
For the IBM 3584, see:
http://www.storage.ibm.com/hardsoft/tape/3584/3584opn.html

Implementation notes
Here we discuss various implementation considerations:
򐂰 Each server needs the appropriate cluster capability and software installed (for example, Microsoft Cluster Services or Veritas Cluster Server).
򐂰 UNIX and Windows NT servers cannot be mixed in the same MSS PPRC configuration.
򐂰 Dual path connections to servers running Windows NT or Sun Solaris require the MSS DPO software.
򐂰 For Windows NT, the maximum number of LUNs is 16. Dual pathing requires two LUNs per server. Therefore, with DPO, the maximum number of Windows NT servers per site is 8 or, without DPO, 12 per site.
򐂰 Carefully evaluate performance when attaching multiple servers. In particular, understand the total number of writes, because each write must be copied to the remote site. You can add additional remote fibre links to increase the bandwidth.
򐂰 The maximum number of PPRC storage sets is 12 per MSS pair, which may limit the number of servers that can be connected to each MSS.
򐂰 FlashCopy for server-less backup can only be done at the target (remote) site. The FlashCopy Manager does not manage FlashCopy of PPRC storage sets. This means that server-less backup using FlashCopy as the data source cannot be performed on the same data at both sites.
򐂰 Distances greater than 10 km require the IBM 2109 extended distance features and appropriate extended fabric components, such as the IBM 2029 Fibre Saver or the Finisar Optilinx 2000 (formerly known as FLX-2000). See 1.4.3, “Extended distance fabrics” on page 13, for more information.
򐂰 Tivoli Storage Manager, 3.7 or later, or an equivalent software product is required. All the servers must share the same LAN and protocol.
򐂰 You can achieve dual active-active disaster tolerance by adding an additional MSS at each site connected to the same SAN. This enables each site to do productive work and act as the disaster tolerant site for the other.
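The write-rate consideration above can be checked with a quick calculation: the sum of the write rates of all attached servers is the load that synchronous PPRC places on the inter-site links. The per-server figures below are purely hypothetical; measure your own workload.

# Rough check of synchronous PPRC inter-site bandwidth (illustrative only).
server_write_mb_s = {"nt1": 4, "nt2": 6, "nt3": 8}   # hypothetical write rates
LINK_MB_S = 100                                      # one Fibre Channel link

total = sum(server_write_mb_s.values())
print(f"Total write rate: {total} MB/s "
      f"({total / LINK_MB_S:.0%} of one inter-site link)")
# Remember that synchronous PPRC also adds round-trip latency to every write,
# so distance affects response time even when bandwidth is ample.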

Sample configuration
Table 5-6 shows a sample configuration provided for guidance only and excludes rack, cables, connectors, media, and country or region specific information. This configuration describes one site. The configuration is completely duplicated at the remote site. There are three Intel servers running Windows NT, each with dual paths connected through two IBM 2109 switches with no single point of failure to one IBM MSS with 720 GB capacity. An IBM Ultrium Tape Library is attached to the SAN for backup, archive, and restore. Table 5-6 shows the configuration for one site. Quantities must be doubled for a two-site configuration. Note that, although three IBM 2109 switches are used, there are only two entries in the table. This is because two of them are identically configured, so the configurator automatically doubles the quantities.

Table 5-6 Sample configuration: Departmental disaster tolerance

Product or feature code   Description                                               Qty
3584-L32                  Tape library base frame                                   1
1456                      LTO Ultrium FC-AL drive sled                              12
1653                      Capacity expansion                                        1
9002                      First expansion frame                                     1
9211                      Attached to Sun system                                    1
9212                      Attached to Windows system                                1
2109-S16                  SAN Fibre Channel Switch                                  1
2010                      Shortwave GBIC                                            9
6106                      Additional power supply
Included                  StorWatch SAN Fibre Channel switch Specialist             1
2109-S16                  SAN Fibre Channel Switch                                  2
2010                      Shortwave GBIC                                            6
2020                      Longwave GBIC                                             2
6106                      Additional power supply                                   2
2106-200                  Modular Storage Server Controller Enclosure               1
4001                      Additional 512 MB cache                                   1
7201                      FC PCI host bus adapter for Windows NT and Windows 2000   2
5648-E15                  MSS Control Module (PPRC)                                 1
5648-E19                  MSS PPRC manager for Windows NT                           1
5648-E17                  MSS Data Path Optimizer for Microsoft Windows NT          1
Included                  StorWatch MSS Specialist                                  1
2106-D20                  MSS 10-Bay Dual Bus Disk Drive Enclosure                  1
3072                      72 GB 10000 RPM Disk Module                               10

5.3.2 Disaster tolerance for medium and large enterprise
This solution provides a high-performance, disaster tolerant, and massively scalable storage solution for up to 64 servers running Windows NT or Windows 2000, or UNIX.

Description
This SAN solution is designed to provide a seamless disk and tape disaster tolerant solution. Up to 64 Intel or UNIX servers can share up to 11 TB (or multiples of 11 TB) of disk storage at each site. You can use server mirroring to provide data duplication between the two sites.



The IBM 2105 Enterprise Storage Server (ESS) is designed to provide superior storage sharing for multiple heterogeneous servers. High availability is achieved at each site with no single point of failure. The ESS has two dual active, powerful four-way RISC SMP processors, battery-protected mirrored cache, and RAID 5 protected serial disk, together designed to avoid single points of failure. A second, duplicate site for full disaster tolerance can be situated at up to 10 km with standard Fibre Channel, or up to 70 km with additional extender components.

Backup and archive can be performed with little or no disruption to production by creating a FlashCopy and assigning it to a backup server running open server storage management software and a tape library such as the IBM 3584 Ultrium tape library, which provides up to 248 TB of very high reliability tape storage. Up to 72 tape drives can perform backup and restore operations simultaneously. Storage and SAN resource management can be performed locally or remotely with the IBM StorWatch Enterprise Storage Server Specialist and the Enterprise Fabric Connectivity Manager.

SAN connectivity is through IBM 2109 Fibre Channel Switches. Each IBM 2109 can have up to 16 Fibre Channel ports. Multiple switches and duplicate paths are designed to avoid single points of failure. The ESS DPO manages and load balances the duplicate paths. Note that there is no equivalent to DPO for the 3584 tape. Therefore, although there is physically an alternate path, you must assign it logically and manually. Figure 5-9 shows a configuration with 11 servers at each site. You can add additional switches to increase connectivity for up to 64 servers at each site.

There are many different backup scenarios that you can apply to this physical configuration. Here are two:
򐂰 In the first backup scenario, there are two completely independent backup servers, one at each site. Backup is performed individually at each site. This has the advantage that backup data does not have to be transferred between sites, since each is independent. It is also useful if not all the data at each site is being mirrored. For example, only critical data at the primary site may be being mirrored to the second site. In this case, you cannot use a tape from Library 2 to restore a file to a user on Library 1 because the backups are different. You need additional protection, such as Tivoli Storage Manager, to protect against a media failure by creating a local copy.
򐂰 In the second backup scenario, the two sites are maintained in synchronization (servers and data). There is only one backup server at the master site. This backup server performs the backup and then makes a mirror of the backup data and the backup database on the remote library. Again, the Tivoli Storage Manager Disaster Recovery Manager can do this. The disadvantage with this scenario is that the backup data has to be transferred over the remote link. The advantage is that both sets of tapes are identical. A passive backup server at the remote site can be used in the event of a catastrophic failure at the primary site. It must be initialized with the mirrored copy of the backup database.



Figure 5-9 Disaster tolerance for medium and large enterprise (diagram: at each of the two sites, the hosts connect through dual IBM 2109 FC switches to an IBM 2105 Enterprise Storage Server and an IBM 3584 UltraScalable Tape Library; the sites are linked over up to 10 km, or 70 km with additional fabric components)

Platform support and system requirements
This solution is supported on Windows NT and Sun SPARC, UltraSPARC, and Ultra/Enterprise servers with supported host adapters running appropriate levels of Solaris. Refer to the following Web site for a current list of ESS supported servers and operating systems:
http://www-1.ibm.com/servers/storage/support/disk/index.html
For the IBM 3584, see:
http://www.storage.ibm.com/hardsoft/tape/3584/3584opn.html

Implementation notes
Here we discuss various implementation considerations:
򐂰 The number of servers that can be attached depends on the number of paths per server (single or dual) and the number of fibre connections to the ESS (up to 16).
򐂰 Carefully evaluate performance when attaching multiple servers. In particular, examine the total number of writes because each write must be copied to the remote site. You can add additional remote fibre links to increase the bandwidth.
򐂰 Appropriate clustering software may be required for each server in a warm start disaster tolerant configuration.
򐂰 Distances greater than 10 km require the IBM 2109 extended distance features and the appropriate extended fabric components such as the IBM 2029 Fibre Saver or the Finisar Optilinx 2000 (formerly known as FLX-2000).
򐂰 Tivoli Storage Manager, 3.7 or later, or other equivalent software for library sharing and backup is required. All the servers must share the same LAN and protocol.
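The first consideration above is essentially a fan-in calculation: the host ports behind the switches share the Fibre Channel connections on the ESS. A minimal sketch, using the maximum figures quoted in this section, follows.

# Host-to-ESS fan-in through the SAN switches (illustrative arithmetic only).
ESS_PORTS = 16          # maximum Fibre Channel connections to the ESS
SERVERS = 64            # maximum servers quoted for this solution
PATHS_PER_SERVER = 2    # dual pathing with DPO

host_ports = SERVERS * PATHS_PER_SERVER
print(f"{host_ports} host ports share {ESS_PORTS} ESS ports "
      f"-> fan-in ratio {host_ports // ESS_PORTS}:1")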



Sample configuration
Table 5-7 shows a sample configuration provided for guidance only and excludes rack, cables, connectors, media, and country or region specific information. This configuration describes one site. It is completely duplicated at the remote site. Eleven servers at each site (seven Sun servers running Solaris, two Intel servers running Windows NT, and two pSeries running AIX), each with dual paths connected through two IBM 2109 Fibre Channel switches with no single point of failure, connect to an IBM 2105 ESS with 840 GB capacity. An IBM 3584 Ultrium tape library with 12 Fibre Channel tape drives provides backup and restore. Note that although there are three IBM 2109 switches in this configuration, there are only two entries in the table. This is because two of them are identically configured.

Table 5-7 Sample configuration: Enterprise disaster tolerance

Product or feature code   Description                                              Qty
3584-L32                  Tape library base frame                                  1
1456                      LTO Ultrium FC-AL drive sled                             12
1653                      Capacity expansion                                       1
9002                      First expansion frame                                    1
9211                      Attached to Sun system                                   1
9212                      Attached to Windows system                               1
9600                      Attached to RS/6000                                      1
2109-S16                  SAN Fibre Channel Switch                                 1
2010                      Shortwave GBIC                                           10
6106                      Additional power supply                                  1
Included                  StorWatch SAN Fibre Channel switch Specialist software   1
2109-S16                  SAN Fibre Channel Switch                                 2
2010                      Shortwave GBIC                                           22
2020                      Longwave GBIC                                            2
6106                      Additional power supply                                  2
2105-F20                  IBM TotalStorage Enterprise Storage Server               1
2122                      18.2 GB Disk 8-pack                                      8
1832                      FlashCopy for up to 2 TB                                 1
3022                      Shortwave Fibre Channel host adapter                     4
4004                      16 GB Cache                                              1
9301                      Modem country (region) group M01                         1
9623                      840 GB high performance                                  1
Included                  StorWatch ESS Specialist software                        1
5648-SWV                  StorWatch ESS Expert software                            1


5.4 Sharing S/390 tape library resources with open servers
This solution enables library sharing (partitioning) for open servers and a zSeries. It provides high performance tape pooling with up to 748 TB capacity, giving up to 10 heterogeneous servers access to an existing IBM 3494 tape library connected to a zSeries or S/390 server.

Business value
This solution helps you to protect and manage valuable business data efficiently. According to Datamation, recreating just 20 MB of data can cost between $15,000 and $100,000 U.S. It eliminates or reduces the number of different tape formats and backup strategies by consolidating to the enterprise class IBM 3494 Tape Library Dataserver. You can reduce costs with shared tape library robotics and efficient storage management. Storage management efficiency needs to improve by up to 60% a year to keep up with the increase in the amount of stored data. Storage management costs typically account for more than four times the initial purchase cost of the hardware. You can also reduce backup time by transferring only the changed data over high speed SAN links. Up to 60% of LAN traffic can be due to data housekeeping.

Description
This solution (see Figure 5-10) adds open server support to an existing IBM 3494 connected to a zSeries server. In this case, the IBM 3494 tape library is partitioned so that the open server tape drives and slots are completely independent of the zSeries drives and slots. The tape drives in the open server partition are pooled so that any server has access to any drive through the SAN.

The IBM 3494 Tape Library Dataserver provides the library controls and tape handling robotics for native Fibre Channel connected IBM 3590 tape drives. The IBM Magstar 3494 Tape Library is highly scalable, expanding from 1 to 16 library frames and space for up to 6,240 tapes, giving 748 TB capacity. Different models of tape drives can be combined in the same library so that open SAN connected servers and zSeries can share the same library. Library control commands are passed over a LAN or RS232 interface, while data is transferred to or from the tape drives over directly attached high speed 100 MB/sec Fibre Channel links. Multiple servers can be connected to different tape drives in the same library with the IBM 2109 SAN Fibre Channel Switch. Optionally, the IBM Magstar 3494 High Availability Unit (Model HA1) eliminates the most common single points of failure by providing a second library manager, a second cartridge accessor, and communication between the library managers.

StorWatch Expert provides storage administrators with valuable storage asset, capacity, and performance information needed to effectively manage multiple and remote IBM TotalStorage Enterprise Storage Servers and IBM Magstar 3494 Tape Libraries. The Expert consists of two components: the StorWatch Enterprise Storage Server (ESS) Expert and the StorWatch Enterprise Tape Library (ETL) Expert. Tivoli Storage Manager provides highly automated, centrally scheduled, policy-managed backup and archive facilities to protect data in distributed enterprises.



Figure 5-10 IBM 3494 partitioned library with open tape pooling (diagram: up to 10 hosts on a LAN connect through two IBM 2109 FC switches to six Fibre Channel 3590 tape drives in the 3494 tape library; a separate library control path runs over the LAN)

Platform support and system requirements
Refer to the following Web site for the latest list of supported servers, host adapters, operating system levels, and Tivoli Storage Manager support for:
򐂰 zSeries
򐂰 pSeries with supported host adapters running AIX
򐂰 Intel servers with supported adapters running Microsoft Windows NT and Windows 2000
򐂰 Sun SPARC, UltraSPARC, and Ultra Enterprise servers with supported host adapters
http://www.storage.ibm.com/hardsoft/tape/3590/3590opn.html

Implementation notes
Here we discuss various implementation considerations:
򐂰 You must partition the IBM 3494 between OS/390 and the open server environment.
򐂰 Tivoli Storage Manager (3.7 or higher) or equivalent software is needed to provide library resource management for the open servers.
򐂰 For performance reasons, we recommend that you have a dedicated Fibre Channel host bus adapter and connection for the tape path from each server to the IBM 2109 Fibre Channel switch.
򐂰 AIX provides an alternate path capability for the Fibre Channel IBM 3590 tape drives for automated failover. A 3590 drive has two interfaces, one of which is active. The AIX device driver can reroute traffic across a separate path to the second interface if necessary because of a failure in some part of the path to the first drive interface.
򐂰 An IBM 3590-A60 controller or Virtual Tape Server (VTS) is required to attach to S/390 via Enterprise System Connection (ESCON). A 3590-A60 is required for FICON attachment.



Sample configuration
Table 5-8 shows a sample configuration, provided for guidance only and excluding racks, cables, connectors, tape cartridges, and country or region specific information. It assumes that an IBM 3494 Tape Library Control Unit is already installed and connected to a zSeries. Six additional servers (four Intel servers running Windows NT or Windows 2000, one pSeries running AIX, and one Sun server running Solaris) are added, together with one new drive frame and six new IBM 3590 tape drives. The servers are connected to the library through two IBM 2109 switches. Each 3590 tape drive has two Fibre Channel ports, and each port is connected to a different switch. Each server has a dedicated Fibre Channel adapter, which is connected to one of the switches. This provides any-to-any connectivity between the servers and tape drives.

Table 5-8 Sample configuration: Sharing S/390 tape with open servers

Product or feature code   Description                                     Qty
3494-L14 MES              Tape library Dataserver MES                     1
9780                      Extended media support                          1
9200                      Open system device drivers                      1
9106                      Attach to pSeries processor                     1
9003                      Drive Unit Attach                               1
9211                      Sun Attachment                                  1
9212                      Attachment to Windows NT                        1
5220                      Ethernet adapter                                1
5228                      Tape control unit expansion                     1
Included                  StorWatch 3494 Tape Library Specialist          1
3494-D12                  Tape Library Dataserver (Tape Drive Unit)       1
9780                      Extended media support                          1
3511                      Install Fibre Channel drive                     6
9510                      Factory install fibre drive                     6
9106                      pSeries or RS/6000 attach                       6
3590-E1A                  Magstar 3590 tape drive                         6
9510                      Fibre Channel attachment                        6
9600                      pSeries or RS/6000 attach                       6
9663                      Plant install drive                             6
5648-SWV                  IBM StorWatch Enterprise Tape Library Expert    1
2109                      SAN Fibre Channel switch                        2
2010                      Shortwave GBIC                                  5
Included                  StorWatch SAN Fibre Channel switch Specialist   1





Chapter 6. FICON attachment of IBM tapes

The Fibre Channel protocol is unique because the two upper layers can carry other protocols. Protocols such as Small Computer Systems Interface (SCSI) and Internet Protocol (IP) can be mapped to the Fibre Channel transport service level. Therefore, the SCSI protocol, and thus SCSI commands, can be transported across the Fibre Channel infrastructure. A newer SCSI-3 protocol has been developed to allow for some of the special functions available with the Fibre Channel standard. IBM has used the physical transport layers of Fibre Channel together with a new protocol known as FICON (Fiber Connection) as a replacement for Enterprise System Connection (ESCON) connections between S/390 servers and control units.

This chapter compares FICON and ESCON. It looks at the elements of a FICON infrastructure. It specifically looks at using FICON to attach 3590 tape drives to a S/390 server. You can find more details about FICON in the following books:
򐂰 FICON (FCV Mode) Planning Guide, SG24-5445
򐂰 Introduction to IBM System/390 FICON, SG24-5176
򐂰 IBM S/390 FICON Implementation Guide, SG24-5169
򐂰 Fiber Optic Link Planning (ESCON, FICON, Coupling Links, and Open System Adapters), GA23-0367



6.1 FICON overview
The S/390 FICON architecture addresses the constraints in the existing ESCON architecture. It improves utilization of the existing fiber infrastructure:
򐂰 The S/390 architecture only allows 256 channels, and some installations have processors that are at or close to that limit. Consolidating multiple ESCON channels on to a single FICON channel provides relief and allows the input/output (I/O) bandwidth for the server to continue to grow.
򐂰 The S/390 architecture allows a maximum of 1024 addresses per channel. The FICON architecture supports up to 16,384 addresses on a channel. This is important for control units, such as the IBM TotalStorage Enterprise Storage Server, that provide large numbers of addresses.
򐂰 Data rate droops for ESCON distances in excess of 9 km. Enhanced-distance connectivity is enabled by the fact that the FICON channel can span up to 100 kilometers before significant data rate droop takes effect. However, such an implementation requires repeaters.
򐂰 Higher data rate can be obtained. When compared with ESCON, FICON reduces protocol overhead. This capability, combined with technology advances, allows a fiber channel link to perform at up to 100 MB/sec full duplex. In normal operation, this gives a sustained throughput of 60 to 70 MB/sec. This approximately equates to up to five concurrent ESCON bulk-data I/O operations, or up to eight concurrent ESCON transaction I/O operations (depending on the channel utilization of the ESCON channels), since ESCON provides 17 MB/sec, half duplex. (A worked example appears at the end of this section.)
򐂰 Higher data rate means less fiber for a given rate. In some countries or regions, fiber is very expensive. By allowing aggregation of paths on to a single fiber, FICON may be very cost-effective.
򐂰 Utilization of fiber infrastructure is improved. The same amount of fiber and repeaters may be used to carry significantly more traffic, offsetting a possible future cost of additional fiber and repeaters needed to carry more data.
򐂰 Greater distance can be obtained with dark fiber. In some countries or regions, dark fiber is normally used without retransmission via repeaters. The use of FICON can provide significantly greater distance (10 km or, with an RPQ, 20 km) compared to ESCON LED limits of 3 km for 62.5 micron fiber or 2 km for 50 micron fiber.
FICON channels can use existing 9 micron single-mode fiber, or 50 or 62.5 micron multi-mode fiber. Any existing fiber in use by ESCON can be used for FICON. Note that 50 micron and 62.5 micron fiber are only supported up to 550 meters. Nine micron is supported up to 10 kilometers (or 20 kilometers with an RPQ).
FICON is the foundation for high-performance channels. The architecture and implementation allows for:
򐂰 Initial implementation to ESCON devices via the IBM 9032 Model 5 ESCON Director acting as a bridge
򐂰 Point-to-point to I/O subsystems with S/390 FICON interfaces



򐂰 Switched point-to-point via a FICON Director to I/O subsystems with S/390 FICON interfaces
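As a worked example of the consolidation argument above, the sketch below divides the sustained FICON throughput by the ESCON channel rate. It assumes the ESCON channels are fully driven at 17 MB/sec, so the result is lower than the five-to-eight channel consolidation quoted earlier, which allows for typical (lower) ESCON channel utilization.

# ESCON channels that one FICON channel can replace (fully driven case).
ESCON_MB_S = 17                       # ESCON, half duplex
FICON_SUSTAINED_MB_S = (60, 70)       # sustained FICON throughput quoted above

for ficon in FICON_SUSTAINED_MB_S:
    print(f"{ficon} MB/s FICON ~ {ficon / ESCON_MB_S:.1f} fully driven ESCON channels")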

6.2 FICON infrastructure
A FICON infrastructure may be point-to-point or switched, using ESCON Directors with FICON bridge cards or FICON Directors to provide connections between channels and control units. This section looks at each of the components that you may find in a FICON environment.

6.2.1 Channels
FICON channels are available on these IBM processors:
򐂰 9672 G5, up to 24 FICON channel cards
򐂰 9672 G6, up to 36 FICON channel cards
򐂰 zSeries 900, up to 96 FICON channel cards
The FICON channel cards in the 9672 G5 and G6 processors provide a single FICON port. They give you the ability to attach to two different types of fiber:
򐂰 Feature 2314, longwave, single mode with distances reaching 10 to 20 km
򐂰 Feature 2316, shortwave, multi-mode with distances up to 550 m
The FICON channel cards in the zSeries 900 processors provide two FICON ports and give you the ability to attach to two different types of fiber:
򐂰 Feature 2315, longwave, single mode with distances reaching 10 to 20 km
򐂰 Feature 2318, shortwave, multi-mode with distances up to 550 m
FICON channels operate in one of two modes:
򐂰 FCV (FICON conversion) mode, using a bridge to convert to ESCON protocols
򐂰 FC (FICON native) mode, where Fibre Channel protocols are used all the way to a native FICON control unit

6.2.2 Fiber connections
FICON uses Fibre Channel transport protocols and the same physical fiber. You can choose either of the following options:
򐂰 Single mode 9 micron fiber, providing distances up to 100 km when used with fiber hops of 10 km (20 km by RPQ) between switches or repeaters
򐂰 Multi-mode using either 50 or 62.5 micron fiber, providing maximum distances of 550 m
Note: Mode conditioning cables are also available to enable reuse of current multimode cables while single mode infrastructure is rolled out. Contact your IBM Support Center or service representative for details.
RPQ 8P1984 approves support for a maximum distance of 20 km rather than 10 km. To apply for the RPQ, you must provide the following information:
򐂰 The requested distance in kilometers
򐂰 The link budget in decibels (dB); a rough worked example follows this list



򐂰 The complete link configuration, including type of device attached at each end of the link, location and length of all jumper and trunk cables, number of optical connectors and splices, and location of all patch panels
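For orientation, a link budget is simply the sum of the losses along the path. The attenuation and connector figures in the sketch below are typical planning values, not IBM specifications; the RPQ process uses the measured values for your actual cable plant.

# Rough single-mode fiber link-budget estimate (typical planning values only).
DISTANCE_KM = 20
FIBER_LOSS_DB_PER_KM = 0.4    # assumed 9 micron fiber loss at 1300 nm
CONNECTORS, CONNECTOR_LOSS_DB = 4, 0.5
SPLICES, SPLICE_LOSS_DB = 2, 0.1

total_loss = (DISTANCE_KM * FIBER_LOSS_DB_PER_KM
              + CONNECTORS * CONNECTOR_LOSS_DB
              + SPLICES * SPLICE_LOSS_DB)
print(f"Estimated link loss over {DISTANCE_KM} km: {total_loss:.1f} dB")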

6.2.3 Directors
A FICON or ESCON infrastructure is likely to be used to connect many control units to several processors. Point-to-point links do not provide the flexibility or resilience needed for a high-availability system. Therefore, it is normal to use a Director to ensure single points of failure are removed from the topology.

IBM 9032-5 ESCON Director with FICON bridge
The 9032 Model 5 ESCON Directors with FICON bridge adapters provide a bridge between ESCON and FICON environments. They can be used to attach several ESCON control units to a Director and then, through the bridge adapter, across a FICON channel to the S/390 server, as shown in Figure 6-2. A 9032-5 Director may have up to 16 bridge adapters. Some older 9032-5 Directors require an upgrade (known as the FICON enablement feature), which can be installed nondisruptively if the 9032 has high availability features. A bridge adapter is configured in place of an 8-port ESCON card and has one FICON port. It is hot-pluggable and may be plugged into any open slot. Each bridge adapter has eight internal link controllers, each of which manages one I/O operation between the FICON channel and a link to an ESCON Director control unit port. There is no affinity between an internal link controller and any ESCON port. The location of the bridge adapter is shown in Figure 6-1.

Figure 6-1 Bridge adapter location (the FICON bridge card occupies one port card slot, alongside the LED port cards, in the 9032-5)


The bridge adapter receives commands and data over the FICON channel. One of the internal link controllers within the FICON bridge takes this information and transfers it using ESCON protocols to the appropriate ESCON control unit. In this configuration, the FICON channel is said to be running in conversion or FCV mode.

Figure 6-2 shows a simple example of the use of a FICON bridge adapter. Only a single ESCON Director is shown. Additional Directors can be configured for added resilience, or between the Director shown and some of the ESCON control units for configuration flexibility or distance extension.

Figure 6-2 ESCON Director with FICON bridge card (an S/390 FICON channel runs at 100 MB/sec full duplex, over a 10 to 20 km maximum unrepeated distance, to the FICON bridge in the ESCON Director, which attaches unmodified ESCON control units such as a 3494 VTS, a 3590 CU, and a 3592 CU over 17 MB/sec ESCON links)
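As Figure 6-2 suggests, the attraction of the bridge is that several unmodified ESCON control units can sit behind a single FICON channel. A hedged, minimal IOCP-style sketch of that kind of definition follows; the CHPID number, switch ID, link addresses, control unit numbers, and device types are illustrative only:

* One FICON channel running in conversion (FCV) mode through the
* ESCON Director that holds the bridge adapter
CHPID PATH=(40),TYPE=FCV,SWITCH=01
* Two ESCON control units reached through different Director ports
CNTLUNIT CUNUMBR=0100,PATH=(40),LINK=(C0),UNITADD=((00,16)),UNIT=3490
CNTLUNIT CUNUMBR=0200,PATH=(40),LINK=(C4),UNITADD=((00,16)),UNIT=3590

Each LINK value is simply the ESCON Director port to which the control unit is cabled, so one FCV channel aggregates paths that would otherwise each need their own ESCON channel.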

This configuration allows several control units to be aggregated on a single FICON channel path. However, it does not offer any performance improvement for the data being transferred between the S/390 server and the control units. To gain performance improvements, we must use FICON along the entire data transfer path. To do this, we need native FICON capability at the server, the storage control unit, and throughout the fabric. In most situations, this means that we need a FICON Director that is capable of switching FICON connections.

A Director is often defined to be a switch with these additional characteristics:

- High availability
- High port count

Although FICON shares the lower, physical protocol levels with Fibre Channel, only three switches (all Director-class, with high availability features) are announced for use in FICON environments: the IBM 2032 Models 1 and 064, and the IBM 2042 Model 1. All share these features:

- Longwave and shortwave attachment supported
- Hot-pluggable port cards
- Concurrent microcode upgrade
- CUP feature supported to allow host management
- Ethernet port for management

If you use these FICON switches, you have a configuration similar to the one shown in Figure 6-3.


Figure 6-3 FICON Director (an S/390 FICON channel connects at 100 MB/sec full duplex, over a 10 to 20 km maximum unrepeated distance, through the FICON switch to native FICON control units and to FICON/ESCON control units)

IBM 2032-1 Director

The IBM 2032-1 is a McDATA ED-5000 Director with a maximum of 32 ports. It provides concurrent switching for both FICON and Fibre Channel.

The 2032-1 Director provides a wide range of configuration options. One port card provides four Fibre Channel ports, with each port capable of delivering up to 100 MB/sec, full-duplex data transfer. The 3,200 MB/sec transmission bandwidth supports full nonblocking 32-port switch performance. Other configuration options include shortwave laser cards (transmission distance up to 500 meters), longwave laser cards (transmission distance up to 10 kilometers), or combination cards (one longwave port and three shortwave ports). You can combine cards for a maximum of 32 ports.

The Director is based on the design of the ESCON Director, featuring redundant power supplies and cooling fans, along with redundant active components that support automatic failover. Other high-availability capabilities include hot swapping for all field-replaceable units, nondisruptive microcode updates, and automatic fault detection and isolation. In addition, call-home and e-mail capabilities automatically alert support and maintenance personnel to accelerate problem resolution. Directors with the High Availability Option provide redundancy for all active electronic components, helping to enable continuous data access and high performance in the event of a single component failure. Together, these features are designed to help enable nondisruptive maintenance and upgrades without loss of data access.

Note: The 2032-1 does not support FL ports for direct attachment of Fibre Channel Arbitrated Loop (FC-AL) devices such as most tape drives. This means that you need to attach tape drives to an intermediate device, such as the IBM 3534-1RU managed hub, an IBM 2109 switch, or an IBM 2031-L00 McDATA loop switch, which can support FL ports.

IBM 2032-064 Director

The IBM 2032-064 is a McDATA ED-6064, a more recent Director than the ED-5000. However, it shares many management and availability characteristics with the ED-5000.


The smallest configuration has 24 Fibre Channel ports. The number of ports can be increased to a maximum of 64 by adding 4-port modules, which are available in three forms:

- Four shortwave ports for distances up to 500 m
- Four longwave ports for distances up to 10 km
- Three shortwave ports and one longwave port

You can intermix the modules in any combination. The Director is housed in an IBM 2032-C36 (McDATA FC-512) cabinet, which can hold up to four Directors.

IBM 2042-1 Director

The IBM 2042-1 is an INRANGE FC/9000 with 64 ports, upgradable to 128. It may act as a FICON Director and as a Fibre Channel Director concurrently.

The Director provides a wide range of configuration options. A single I/O module provides eight Fibre Channel ports, with each port capable of delivering up to 100 MB/sec, full-duplex data transfer. I/O modules are available with:

- Eight shortwave ports
- Eight longwave ports
- Four shortwave ports and four longwave ports

The 6,400 MB/sec transmission bandwidth supports full nonblocking 64-port switch performance. Other configuration options include shortwave laser Gigabit Interface Converters (GBICs), enabling a transmission distance of up to 500 meters, and longwave laser GBICs, enabling a transmission distance of up to 10 kilometers, for a maximum of 64 GBICs. One or two Directors can reside in a single cabinet, which supports up to 128 ports in a floor space of six square feet.

The Director features redundant power supplies and cooling fans, along with redundant active components that support automatic failover. Other high-availability capabilities include hot swapping for all field-replaceable units, nondisruptive microcode updates, and automatic fault detection and isolation. In addition, call-home and e-mail capabilities automatically alert support and maintenance personnel to accelerate problem resolution. This Director supports direct FC-AL attachment using FL ports.

6.2.4 FICON control units

Control units may be attached by using direct fiber connection between the control unit and FICON channel, or through Directors, either FICON Directors or ESCON Directors with FICON bridge cards. The use of Directors provides greater flexibility for configuration and reconfiguration. The Directors also extend distance by redriving the optical signal. We have already seen attachment using Directors. Figure 6-4 completes the configuration choices by showing a direct, point-to-point connection between a FICON channel and a native FICON control unit.


Figure 6-4 FICON point-to-point connection (an S/390 FICON channel attaches directly to the native FICON interface on the control unit at 100 MB/sec full duplex, over a 10 to 20 km maximum unrepeated distance)

The first IBM FICON control unit is the IBM TotalStorage 3590-A60. The IBM TotalStorage Virtual Tape Server (VTS) Models B10 and B20 can also be configured with FICON channels. The IBM Infoprint® Color 130 Plus printer, the IBM 3170 Model 5, may also attach to FICON channels. This was announced in October 2000 with general availability in May 2001. Support for up to 16 FICON host adapters (shortwave, longwave, or both) in an IBM TotalStorage Enterprise Storage Server was announced in August 2001 with general availability in September 2001.

6.2.5 FICON management

System Automation for OS/390 (program number 5645-006) provides tools for managing ESCON and FICON infrastructures. It replaces the ESCON Manager, adds new function, and adds support for FICON.

The management functions provided by the I/O operations component allow you to make coordinated changes to your I/O configuration. I/O operations performs its control operations across multiple systems in a centralized manner. You can input a change on one system, and the I/O operations instance running on that system automatically communicates the change to copies of I/O operations or ESCON Manager on other systems that are potentially affected. Each I/O operations instance that is participating in the operation then uses system interfaces to coordinate the I/O configuration change with each operating system image. If the coordination is not successful (for example, if one OS/390 image rejects a request because it would remove connectivity to an allocated or reserved device), then the operation fails. This change processing is performed using the two-phase commit model of database systems, so that I/O operations can attempt to back out of a failed operation. For example, if you change ESCON Director connectivity, I/O operations identifies the affected paths and varies them appropriately, across multiple system images as one operation (and attempts to back out if the entire operation is not successful).

I/O operations itself collects the information it displays (no separate configuration data file is necessary) and updates it dynamically as the configuration changes. Information functions are also multisystem, like control functions. The resources it reports on range from system images to channels, ESCON Directors and their internal connectivity configuration, control units, devices, converters, and coupling facilities.


For more details, refer to System Automation for OS/390 Planning and Installation, SC33-7038, or go to the following Web site: http://www.s390.ibm.com/products/sa/

6.2.6 Sharing with storage area networks

A FICON environment needs fiber connections and, usually, Directors as well. You almost certainly will use fiber connections for your storage area network (SAN), and you may well decide to use switches or Directors. This raises the question of whether you can or should share these parts of the SAN fabric. It may also influence your choice of switch or Director.

First, we should make it clear that FICON requires a FICON-capable server and software, and a FICON-capable control unit. Although we can provide a fiber path between a FICON control unit and a Windows or UNIX server, it is of no value because the two use different I/O command protocols and cannot communicate.

There is also one significant difference between FICON and native Fibre Channel. Because FICON (also known as FC-SB-2, for Fibre Channel Single Byte 2) uses only one byte for the switch port address and Fibre Channel can use three bytes, there can be valid SAN fabrics that cannot be addressed by FICON. Therefore, to prevent this, FICON configurations enforce a restriction that FICON Directors cannot be cascaded.

The SCSI attachments and the ESCON attachments share the 3494 library, but no other part of the infrastructure. The FICON and Fibre Channel attachments share the fiber infrastructure and the Director, even though there is no communication between the FICON and Fibre Channel components. There is also entirely separate management of the FICON and Fibre Channel environments.
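As a purely illustrative example of that addressing difference (the values here are hypothetical): a full Fibre Channel N_Port ID is three bytes, such as domain 61, switch port 2D, and port or loop identifier 13, whereas a FICON channel of this generation identifies its destination with only the one-byte switch port address (2D, the value that appears as LINK=(2D) in the I/O definitions). Because the domain byte that would identify a second Director cannot be expressed, a path cannot hop from one Director to another, which is why cascading is excluded.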

Table 6-1 Feature comparison: A60 versus J70

Physical size
  3590 A60: 19-inch rack mount, 9 EIA units
  3592 J70: 19-inch rack mount, 4 EIA units

Supported drives
  3590 A60: 3590 B, E, H (SCSI and Fibre); 3592 J1A Fibre, 12 maximum
  3592 J70: Same as A60; maximum will increase later

Supported internal fibre switches
  3590 A60: 2109 S16, 2109 F16
  3592 J70: Same as A60; lower cost options and more ports later

Expansion drawer for bandwidth or more drive support
  3590 A60: No
  3592 J70: Not today; has RIO-G hardware to expand

Bandwidth maximum
  3590 A60: 200 MB/sec (with fibre drives and FICON)
  3592 J70: 400 MB/sec+ (measurement not yet done -- projected)

Host adapters
  3590 A60: 2 FICON (2 Gb) and 4 ESCON; 1 FICON and 6 ESCON; 8 ESCON
  3592 J70: 4 FICON (2 Gb); 3 FICON and 2 ESCON; 2 FICON and 4 ESCON; 1 FICON and 6 ESCON; 8 ESCON

High availability properties
  3590 A60: Dual hot-swap power; nondisruptive drive code update via library
  3592 J70: Dual hot-swap power; hot swap of dead adapters; nondisruptive drive code update via library; future CA

6.3 3590-A60 control unit

The IBM TotalStorage 3590 Model A60 tape controller is the follow-on control unit for the 3590 Model A50 controller and its predecessor, the 3590 Model A00. It provides S/390 attachment for up to 12 Model B11, B1A, E11, E1A, H11, H1A, J11, or J1A tape drives. The intermix of different drive models behind the same control unit is not allowed. The 3590 drives are attached to the 3590-A60 controller using either UltraSCSI or Fibre Channel interfaces. See Figure 6-5.

Figure 6-5 3590-A60 control unit, rear view

The IBM TotalStorage 3590 Model A60 tape controller is designed for attaching IBM TotalStorage 3590 drives to ESCON channels on ES/3090, ES/9000, or S/390 systems and FICON channels on 9672 G5, G6, and zSeries 900 systems. FICON support became available in December 2000. There is no support for attachment to parallel channels.

The 3590 Model A60 controller provides multiple data transfer paths. It provides a path from one to four ESCON channel adapters, each with two ESCON channel attachments, for a maximum of eight ESCON channel attachments. Or it can provide a path from one or two FICON channel adapters taking the place of ESCON channel adapters. See Table 6-2.

Table 6-2 3590-A60 host channel adapters

ESCON adapters | ESCON channels | FICON adapters
0              | 0              | 1 or 2
1              | 2              | 1 or 2
2              | 4              | 1 or 2
3              | 6              | 1
4              | 8              | 0

An ESCON adapter has two physical attachments while a FICON adapter has one. The maximum number of FICON adapters on one A60 controller is two. The FICON adapters may be both shortwave, both longwave, or one of each. The 3590-A60 supports up to 12 3590 Model B, E, H, or 3592 J1A tape drives. All SCSI 3590 Model H drives have UltraSCSI interfaces with a peak data rate of 40 MB/sec. Some 3590 Model B drives also have an UltraSCSI interface, while earlier drives have a 20 MB/sec SCSI interface. Fibre Channel 3590Es and 3590Hs have a peak data rate of 100 MB/sec. Intermixing of drive types is not permitted.


Figure 6-6 shows the internal structure of the 3590 Model A60. The control unit function is performed in a single RISC processor that has a hard disk drive for storing microcode and diagnostic information, and two UltraSCSI ports or Fibre Channel ports, which are used to attach the 3590 drives. A 3590-A60 may have either Fibre Channel or UltraSCSI attachment to drives. Intermixing Fibre Channel and SCSI is not permitted. For large block sizes or highly compressible data, the higher interface speed with Fibre Channel drive attachment to the 3590-A60 can increase throughput.

Figure 6-6 3590-A60 internal structure (the RISC control unit has ESCON and FICON host adapters, an RSF CE modem terminal, and either UltraSCSI ports to the 3590 drives or Fibre Channel ports to a 2109 switch)

The controller can be at a maximum channel distance of 43 kilometers (27 miles) from the host when using fiber-optic cable between ESCON Directors. Or the controller can be up to 100 km (63 miles) when using 9 micron fiber in a FICON environment together with repeaters. Figure 6-7 shows how drives are attached with a SCSI back end for the A60 control unit. Each drive has two SCSI ports and each port is used. Two cables from the 3590 Model A60 go to two ports on the first drive. Two further cables connect into the plug on the first set of cables to continue the SCSI bus to the next drive. This sequence of connections repeats until the last drive is reached, when SCSI terminators are plugged into the plugs at the end of the last two cables.

Figure 6-7 3590 SCSI drive attachment (the S/390 host attaches to the 3590-A60, whose two SCSI buses daisy-chain through the 3590 drives)

Figure 6-8 shows how drives are attached with a Fibre Channel back-end for the A60 control unit. The drives are attached through a 2109-S16 switch to two Fibre Channel interfaces within the 3590-A60. A maximum of 12 drives may be attached, depending on how the drives are housed. Only one of the two Fibre Channel ports on the 3590E drives is used. The other port is unused and may not be attached to anything else.


Figure 6-8 3590 Fibre Channel drive attachment (the S/390 host attaches to the 3590-A60, which connects through a 2109 switch to up to 12 Fibre Channel 3590E drives)

The switch must be a 2109 Model S16, not a Model S08, and must be ordered with enough shortwave GBICs (feature number #2010) for two ports for the 3590-A60 and one port for each 3590E drive. The switch is ordered separately from the 3590-A60. One power supply is standard for the switch. You may want to install feature #6103 to provide a second, redundant power supply.

One IBM TotalStorage 3590 Model A60 tape controller may be installed in a 3494 D14 library frame. Due to its physical size, an IBM TotalStorage 3590 Model A60 tape controller cannot be installed in an L14 frame. The IBM TotalStorage 3590 Model A60 tape controller may also be installed in a standard 19-inch rack such as the 9309 Model 002 Rack Enclosure.

There is no field upgrade to a 3590 Model A60. Upgrades from 3590 Model A50 or Model A00 controllers are done by replacement of the machine. When installed inside the IBM 3494 Tape Library, the A60 controller offers a choice of attachment to the Library Manager. Either the RS-422 attachment or the Attachment Concentrator Feature can be configured.

- Attachment support: Mixing of 3590 B model, E model, and H model or 3592 J model tape drives on the same 3590-A60 controller is not supported.
- L14 Frame support: Because of its larger size, the 3590 Model A60 cannot be installed in a 3494 Model L14 Control Unit Frame.
- Seascape building block: The IBM TotalStorage 3590 Model A60 tape controller consists of a standard RS/6000 model with PowerPC microprocessor, PCI bus, and AIX operating system. This is the latest generation of IBM TotalStorage Tape Controller, succeeding the A00 and A50 controllers which preceded it. The IBM TotalStorage 3590 Model A60 tape controller is physically larger than the Model A50 controller, and internally contains a later model RS/6000 processor package. Figure 6-5 shows a view of the IBM TotalStorage 3590 Model A60 tape controller.
- Hardware redundancy: The A60 controller has several enhancements that provide increased availability. There are two pairs of cooling fans. The A60 has redundant power supplies with automatic failover and hot-swappable service capability. The A60 is designed so that almost all service functions can be performed without interruption to data access. Tape drive attachment provides redundant paths, so all drives can be used even if a back-end hardware failure occurs.
- Call Home support: To help prevent outages, the IBM Call Home function automatically opens a service alert when a problem occurs with an IBM TotalStorage 3590 A60 Control Unit. A service representative can be sent to the installation and status information can be transmitted to the IBM Support Center for evaluation. In addition, the Call Home function can send alerts to a pager service so that multiple people can be notified. Call Home is supported on the IBM TotalStorage 3590 A60 Control Unit in stand-alone, Silo, and IBM TotalStorage 3494 environments. This feature initiates a call directly to the IBM Support Center. By using the remote support attachment of the 3494, the modem attached to the 3494 is activated and, if the customer allows, the IBM Support Center is dialed. Actions that can be initiated include the ability to:
  - Open a Problem Management Record (PMR)
  - Dispatch a service representative
  - Search the RETAIN database
  - Download error logs
  - Alert IBM support personnel

- Engineered for enhancement: The A60 is the base for enhancements in performance and attachment flexibility. The IBM TotalStorage 3590 tape drives with UltraSCSI attachment have native data transfer speeds that exceed the throughput capability of the A50 controller, and Fibre Channel attachment increases maximum throughput still further. The A60, with its design and host attachment capability, allows the exploitation of this performance capability.
- FICON attachment: Each FICON longwave or shortwave adapter provides up to 64 logical paths, used when a Director switches the device to different host channels. A drive may be addressed simultaneously through an ESCON and a FICON path, even from the same system. The same device address must be used on each path in this case. FICON offers these improvements compared to ESCON for 3590 attachment:
  - Maximum 100 MB/sec data transfer rate: A FICON channel has a rated maximum speed of 100 MB/sec. This means that the channel will not be a performance bottleneck for the transfer of highly compressible data. Also, we can expect higher sustained throughput from FICON attachment than from ESCON attachment. For FICON adapter feature codes and compatibility, see Table 6-3.
  - Maximum 100 km: Using 9 micron fiber between repeaters or switches and 10 km links (20 km by RPQ), the maximum distance between channel and control unit is 100 km. Using 50 micron or 62.5 micron fiber, the maximum distance is 550 m (1800 feet).


  - Greater tolerance to extended distance: At extended distances, ESCON suffers data rate droop, where the data rate reduces substantially. FICON suffers much less and is much more suitable than ESCON for remote vaulting applications.
- ESCON attachment: Many of the functions of previous IBM tape control units have been moved into each IBM TotalStorage 3590 tape subsystem tape drive unit. Thus the primary function of the IBM TotalStorage 3590 Model A60 tape controller is to provide an interface from the SCSI interfaces on the IBM TotalStorage 3590 tape drives to the S/390 ESCON and FICON channels. To achieve this, the IBM TotalStorage 3590 Model A60 tape controller has an ESCON adapter and a SCSI adapter that provide attachment from the ESCON host channels to the IBM TotalStorage 3590 tape subsystem SCSI drive units. Optional feature #3412 provides the first two ESCON channel adapters and SCSI adapter. A second #3412, which is optional, provides the third and fourth ESCON channel adapter and SCSI adapter for increased configurability, availability, and throughput.
- Performance: The IBM TotalStorage 3590 Model A60 tape controller is capable of achieving unconstrained data transfer through the ESCON channel. Performance varies with the application and configuration used, but in general, the data transfer capacity of the controller with two ESCON paths is approximately double that of the predecessor A50 controller. With four ESCON paths, the potential now exists to further improve this data transfer speed. In an unconstrained configuration, over 120 MB/sec can be expected. Multiple hosts can access the IBM TotalStorage 3590 Model A60 tape controller through an IBM 9032 or 9033 ESCON Director or the ESCON Multiple Image Facility (EMIF).

  Note: The IBM TotalStorage 3590 tape subsystem does not support attachment to parallel channels.

  - 17 MB/sec: The IBM TotalStorage 3590 Model A60 tape controller supports the maximum speed, 17 MB/sec, of the ES/9000 or S/390 Parallel Server ESCON channels. Together with IBMLZ1 compression and the improved IBM TotalStorage 3590 tape subsystem drive data transfer rate of 14 MB/sec, this speed can considerably improve performance for long sustained tape I/Os.
  - Maximum 43 km: By using ESCON LED fiber or ESCON laser fiber cables, and 9032 or 9033 ESCON Directors or the 9036 ESCON Remote Channel Extender, the maximum distance between the ES/9000 or S/390 host and the tape subsystem attached using the IBM TotalStorage 3590 Model A60 tape controller can be up to 43 km with minimum loss of performance.
- 256 logical paths: The number of logical paths for each physical path is 64. A total of 256 logical paths are available with an IBM TotalStorage 3590 Model A60 tape controller.
- All paths to each drive: All the physical paths on the S/390 host are available to each of the tape devices. All attached tape devices in the subsystem can use any path.


- Up to 12 3590 tape drives attached: An IBM TotalStorage 3590 Model A60 tape controller can attach up to 12 3590-B1A, E1A, H1A, or 3592 J1A IBM TotalStorage tape drives, but the models cannot be intermixed behind the same controller. Unlike the 3480, 3490, and 3490E, such tape control operations as forward space file, back space file, and locate record do not tie up control unit function. These operations are controlled by the drive itself, and the control unit function in the 3590-A60 becomes free.
- Device type 3590 or 3490E: The IBM TotalStorage 3590 Model A60 tape controller has two device type options. It can run in native 3590 mode, and it provides a 3490E emulation feature that enables attached IBM TotalStorage 3590 tape drives to appear to the host system as 3490E tape drives. The 3490E emulation feature can be used with IBM TotalStorage 3590 tape drives attached to an A50 or A60 controller installed in a 3590 Model A14 stand-alone solution, or in rack-mounted subsystem solutions.

  Note: This feature is not available when the 3590 Model A50 or A60 is installed in an IBM TotalStorage 3494 tape library. The choice of 3490E emulation or native 3590s is set by your service representative.
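As a hedged illustration of how this choice surfaces in the host I/O definitions (the device numbers and control unit numbers here are illustrative, not taken from any particular configuration), drives behind a controller running in native mode are defined to the host as 3590 devices, while the same kind of drives behind a controller set to 3490E emulation would be defined as 3490 devices:

* Controller running in native 3590 mode
IODEVICE ADDRESS=(0700,004),CUNUMBR=(0700),UNITADD=0,UNIT=3590
* Controller set to 3490E emulation by the service representative
IODEVICE ADDRESS=(0800,004),CUNUMBR=(0800),UNITADD=0,UNIT=3490

The controller setting and the software definitions must agree; the emulation mode itself is selected at the controller, as noted above.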

6.3.1 Installation options

The 3590 Model A60 may be installed in a number of ways, as explained in the following sections.

Stand-alone tape subsystem

The 3590-A60 may be installed in a 3590-A14 frame or a standard 19-inch rack together with up to four 3590-B11, 3590-E11, or 3590-H11 drives. For SCSI drives with multiframe support, you can attach up to eight 3590 drives to a 3590-A60, requiring two A14 frames or two 1.6 m or 1.8 m racks. For Fibre Channel drives with multiframe support, you can attach up to 12 3590 drives to a 3590-A60, requiring three 3590-A14 frames or three 1.8 m racks. For 1.6 m racks, the maximum is reduced for two frames, and to 10 drives for three frames, because of the space needed to mount the 2109-S16 switch.

Multiframe support requires additional features on the 3590-A14 frame and 3590-A60 to provide the correct mountings and connectors. The frame or rack housing the 3590-A60 must have four drives installed before you install any multiframe features.

Installation in a 3494 library

The 3590-A60 may be installed in a 3494-D14 frame. Unlike earlier control units, it may not be installed in an L14 frame because it is too big. The D14 frame can also contain up to four 3590 drives. See Figure 6-9.

Adjacent frame support allows the attachment of up to 10 3590-B1A, 3590-E1A, or 3590-H1A drives to a 3590-A60, with four in the D14 frame with the A60 and either six in an adjacent 3494-D12 frame, or two in an adjacent 3494-L12 or L14 frame. The maximum number of drives with adjacent frame support is 10, whether SCSI or Fibre Channel drives. There is only one adjacent frame.


Figure 6-9 3494 adjacent frame support (an L12 frame with the Library Manager, D14 frames holding drives and a control unit, and a D12 frame holding additional drives)

When installed inside the IBM 3494 Tape Library, the A60 controller offers a choice of attachment to the Library Manager. Either the RS-422 attachment or the Attachment Concentrator Feature can be configured. The 3590-A60 is not supported in a 3495 library.

Installation in an STK silo-compatible frame

One or two 3590-A60s may be installed in a 3590-C10 frame. With SCSI drives, this allows you to attach up to eight 3590-B1A, 3590-E1A, or 3590-H1A drives, mounted in two separate 3590-C12 frames, with four drives per frame. With Fibre Channel drives, this allows you to attach up to 12 3590-H1A drives that are mounted in three separate 3590-C12 frames, four drives per frame. With Fibre Channel attachment, the maximum distance between the C10 frame and a C12 frame is increased from 25 m to 475 m. If power sequencing is used, the maximum distance remains at 25 m.

6.3.2 Features

Table 6-3 shows the feature codes for the features applicable to the 3590-A60 control unit.

Table 6-3 3590-A60 features

Description                                        | Feature Code | Notes
FICON long wave to ESCON 50 micron patch cable     | 0103         |
FICON long wave to ESCON 62.5 micron patch cable   | 0106         |
FICON short wave to ESCON 62.5 micron patch cable  | 0109         |
Remote support facility                            | 2710         | One of 2710, 2711, or 2712
Remote support switch                              | 2711         | One of 2710, 2711, or 2712
Remote support attachment                          | 2712         | One of 2710, 2711, or 2712
Dual ESCON/SCSI adapter                            | 3412         | One - four
FICON longwave adapter                             | 3432         | Up to two
FICON shortwave adapter                            | 3433         | Up to two
Fibre Channel drive attach                         | 3463         | Zero or one
Fibre drive rack attachment                        | 3465         |
Multiframe SCSI rack attachment                    | 4065         |
Install A60 in rack                                | 4641         |
Field replace A50 with A60                         | 4655         |
Replace A00/A50 with A60                           | 4660         |
Field install of A60                               | 4665         |
Attached to ES/3090, ES/9000, or S/390             | 9000         | One
3590 drive attached                                | 9059         | One - four
Multiframe SCSI 3590-A14 attached                  | 9060         |
Field merge A60 in 3494                            | 9665         | One optional
Plant install Model A60 in a 3494 library          | 9666         | One optional

The new features for FICON attachment are:

- Feature 0103: This provides one conditioner patch cable that allows ESCON 50 micron trunk fiber to be reused for FICON. This 2 m (6.5 feet) cable has an SC duplex connector on one end and an ESCON connector on the other end, as shown in Figure 6-10. A patch cable is needed at each end of the ESCON fiber that is being reused (the control unit end and the channel end). Two sets are needed for a Fibre Channel switch.

Figure 6-10 Fiber connections as used in a mode conditioning cable (an SC duplex connector on one end and an ESCON duplex connector on the other)

- Feature 0106: This provides a 62.5 micron conditioner patch cable for the same use as feature 0103, but for 62.5 micron ESCON cables.
- Feature 0109: This provides a 62.5 micron conditioner patch cable for the same use as feature 0103, but for 62.5 micron ESCON cables and shortwave FICON adapters.
- Feature 3432: This provides a fiber attachment for the A60. You can install up to two of this feature, each providing one longwave fiber port with an SC duplex connector. This is a longwave attachment suitable for attachment to 9 micron single mode fiber. A 31 m (100 feet) 9 micron fiber cable is provided unless either feature 0103 or 0106 is also specified, in which case a patch cable is supplied instead.
- Feature 3433: This provides a fiber attachment for the A60. You can install up to two of this feature, each providing one shortwave fiber port with an SC duplex connector. This is a shortwave attachment suitable for attachment to 50/62.5 micron multi-mode fiber.
- Feature 3463: This provides two Fibre Channel ports for the attachment of 3590E drives through a 2109-S16 switch. The UltraSCSI adapters are removed as part of an upgrade.

6.3.3 Software support for FICON 3590-A60s

FICON-attached 3590s are supported as shown in Table 6-4.

Table 6-4 Software support for 3590-A60 FICON attachment

Operating system | Minimum level                                                                              | Notes
OS/390           | OS/390 2.6 and later with DFSMS/MVS 1.3 and later plus program temporary fix (PTF) for OW43574; EREP 3.5 plus PTF |
VM/ESA guests    | VM/ESA 2.2 (PTFs for VM62090 needed for 2.2 and 2.3); EREP 3.5 plus PTF                    | Applies to guests with native FICON support
VM/ESA native    | VM/ESA 2.3 plus PTF for VM62710; EREP 3.5 plus PTF                                         |
VSE/ESA          | 2.3 and later                                                                              |
TPF              | 4.1 and later                                                                              | Only for 3590 B models

In addition, the PTF for APAR OW44564 is needed for the Hardware Configuration Dialog (HCD) and IR42956 for the Hardware Configuration Manager (HCM). This information is provided for initial guidance. Contact your IBM Support Center for the latest information about PTFs.

6.3.4 Defining devices

You define the 3590 drive device addresses to OS/390 by using HCD. We show I/O configuration program (IOCP) statements in this section for the sake of brevity, although you cannot use IOCP for the definitions. Example 6-1 shows how you define a 3590-A60 with up to 12 devices. The full address range of 16 devices should be defined.


Example 6-1 Defining 3590s without FICON Directors

CHPID    PATH=(42,43),SHARED,
         PARTITION=((LPAR1),(LPAR2)),TYPE=FC
CNTLUNIT CUNUMBR=0650,PATH=(42,43),
         UNITADD=((00,16)),UNIT=3590
IODEVICE ADDRESS=(7040,016),
         UNITADD=0,CUNUMBR=(0650),
         STADET=Y,UNIT=3590

Example 6-2 shows how you would define a 3590-A60 using a FICON Director with the configuration illustrated in Figure 6-11.

Figure 6-11 3590-A60 configuration with FICON Director (OS/390 host CHPIDs 42 and 43, defined with switch IDs 01 and 02, connect through the FICON Director to the 3590-A60 at link addresses D2 and D3; the devices are 7040-704F)

Example 6-2 Defining 3590s through FICON Directors

CHPID    PATH=42,TYPE=FC,SWITCH=01
CHPID    PATH=43,TYPE=FC,SWITCH=02
CNTLUNIT CUNUMBR=0660,PATH=(42,43),
         UNIT=2032,STADET=Y,LINK=(FE)
CNTLUNIT CUNUMBR=0650,PATH=(42,43),
         LINK=(D2,D3),UNITADD=((00,16)),
         UNIT=3590
IODEVICE ADDRESS=(7040,016),
         UNITADD=0,CUNUMBR=(0650),
         STADET=Y,UNIT=3590

6.3.5 Performance

FICON channels have a peak data rate of 100 MB/sec, compared to a peak data rate of 17 MB/sec for ESCON channels. The figures in this section show the performance of the 3590-A60 rather than earlier products. Under many circumstances, the data transfer speed of the 3590-A60 is double that of the predecessor 3590-A50 controller.

All measurements in this section were conducted in a point-to-point configuration and run on an IBM G6 processor. Drives were attached to the 3590-A60 using UltraSCSI interfaces. FICON attachments used longwave adapters, although the performance of shortwave adapters at equal distances should be the same. Two programs were used that differed primarily in the access method they employed to write or read data: either QSAM or EXCP.

For large block sizes or highly compressible data, the higher interface speed with Fibre Channel drive attachment to the 3590-A60 can increase throughput. This occurs when the drive can deliver data at a rate faster than the 40 MB/sec allowed by an UltraSCSI interface.

Note: Some of these measurements use tape block sizes in excess of 32 KB. Before OS/390 2.10, you can get block sizes up to 32 KB using QSAM, and up to 64 KB using EXCP. DFSMSdss typically uses block sizes greater than 32 KB. With OS/390 2.10, you may use block sizes up to 2 GB, subject to device limitations. OS/390 large block size support has a number of implications, which are described in DFSMS Release 10 Technical Update, SG24-6120.

Figure 6-12 shows that the maximum native data rate of 14 MB/sec for a 3590E drive is achieved at a 16 KB block size when FICON-attached. ESCON attachment does not permit the maximum data rate to be achieved until the block size is roughly 64 KB. Compaction was turned off for these measurements to see the native data rate without any compaction effects.


Figure 6-12 Single 3590E drive, FICON and ESCON attachment (data rate in MB/sec, with compaction off, plotted against block sizes from 4 KB to 128 KB for ESCON and FICON; the FICON curve reaches the 14 MB/sec native drive rate at a 16 KB block size)

DFSMShsm™ uses a 16 KB block size for incremental backup and for migration tapes. QSAM block sizes are limited to 32 KB. In each case, FICON attachment provides greater throughput at the drive level. Figure 6-13 again shows measurements for a single drive. In this case, the data used for the measurements is compressible at a compression ratio of three to one (3:1), a ratio that is typical of much production data. The chart shows that a 3590E out-performs a 3590B for higher block sizes. However, the key point in the chart is that a single 3590 drive with data compressing at 3:1 can exceed ESCON throughput at a block size as low as 16 KB.

Figure 6-13 Single 3590 drive with 3:1 compression (one drive writing 3:1 compressible data; data rate in MB/sec against block sizes from 16 KB to 128 KB for the 3590B and 3590E)

The 3590E at a 128 KB block size writes data at about double the sustained capacity of an ESCON channel.


Remember that the 3590 drives are attached to the A60 controller using an UltraSCSI interface which is capable of 40 MB/sec. The peak data rate from the drive depends upon the compression ratio of the data. For large block sizes, we can fully use the SCSI bus when the compression ratio reaches four to one (4:1). However, we would not normally want to configure single drives on a channel. The next measurements look at a 3590-A60 with four drives attached. Figure 6-14 shows a comparison between four 3590Es attached through a 3590-A60 to one ESCON channel and one FICON channel. The compression ratio for the data used was 3:1. In this case, the channel attachment is the variable.

Figure 6-14 Four 3590E drives, ESCON and FICON (data rate in MB/sec against block sizes of 4 KB, 16 KB, and 32 KB for four drives writing 3:1 compressible data over one ESCON channel and over one FICON channel)

In the ESCON case, even with four drives and a 32 KB block size, we cannot saturate one ESCON channel and can only achieve a data rate of about 11 MB/sec. In contrast, with FICON, we see considerably higher data rates for all block sizes. At a 32 KB block size, we effectively reached the sustained throughput limit for a FICON channel. This diagram shows how we can use one FICON channel to replace several ESCON channels. In this case, we could replace 4-6 ESCON channels with one FICON channel. Figure 6-15 shows a more specific case of consolidating channels. We no longer compare the throughput on one ESCON channel to one FICON channel but four ESCON to one FICON. This chart supports the consolidation of four ESCON channels to one FICON channel more clearly.
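As a rough worked check on that consolidation ratio, using the numbers quoted for these measurements: four drives writing 3:1 compressible data sustain about 11 MB/sec over a single ESCON channel at a 32 KB block size, while the same workload over a single FICON channel reaches roughly 60 MB/sec, the sustained limit suggested by the chart. Dividing the two, 60 / 11 is approximately 5.5, which is consistent with the statement that one FICON channel can take the place of four to six ESCON channels for this kind of workload.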


Figure 6-15 Channel consolidation, four to one (data rate in MB/sec against block sizes of 4 KB, 16 KB, and 32 KB for four 3590E drives writing 3:1 compressible data, comparing four ESCON channels with one FICON channel)

Figure 6-16 shows that there is a difference between the data rates that you will achieve for reads and for writes. For both FICON and ESCON cases, you get a higher data rate for writes than for reads.

Figure 6-16 Read data rate comparison (data rate in MB/sec against block sizes of 4 KB, 16 KB, and 32 KB for a single drive reading and writing, shown separately for ESCON and FICON attachment)

Note: Any IBM performance data contained herein was obtained in a controlled environment based on the use of specific data. Actual results that may be obtained in other operating environments may vary significantly. These values do not constitute a guarantee of performance. Customers attempting to adapt these techniques to their own environments do so at their own risk.


6.4 IBM 3592-J70 control unit

The IBM TotalStorage 3592 Model J70 tape controller is the next step in IBM tape. It provides S/390 attachment for up to 12 3592 Model J1A tape drives. The drives are attached to the controller using either UltraSCSI or Fibre Channel interfaces.

The IBM TotalStorage 3592 Model J70 tape controller is designed to attach IBM TotalStorage 3592 drives to ESCON channels on ES/3090, ES/9000, or S/390 systems and FICON channels on 9672 G5, G6, and zSeries 900 systems. There is no support for attachment to parallel channels.

The 3592 Model J70 controller provides multiple data transfer paths (see Table 6-5):

- From zero to four ESCON channel adapters, each with two ESCON channel attachments, for a maximum of eight ESCON channel attachments
- From zero to four FICON channel adapters taking the place of ESCON channel adapters

Table 6-5 3592-J70 host channel adapters

ESCON adapters | ESCON channels | FICON adapters
0              | 0              | up to 4
1              | 2              | up to 3
2              | 4              | up to 2
3              | 6              | 1
4              | 8              | 0

Note that an ESCON adapter has two physical attachments, while a FICON adapter has one. The maximum number of FICON adapters on one J70 controller is four. The FICON adapters may be both shortwave, both longwave, or two of each.

The 3592-J70 supports up to 12 3590 Model B, E, H, or 3592-J1A tape drives. Note that all SCSI 3590 Model H drives have UltraSCSI interfaces with a peak data rate of 40 MB/sec. Some 3590 Model B drives also have an UltraSCSI interface, while earlier drives have a 20 MB/sec SCSI interface. Fibre Channel 3590Es, 3590Hs, and 3592s have a peak data rate of 100 MB/sec. Also note that intermixing drive types is not permitted. As with the A60, the controller may have either Fibre Channel or UltraSCSI attachment to drives; intermixing Fibre Channel and SCSI is not permitted. The J70 is configured with four fibre lower interface channels or four SCSI buses, two additional fibre lower channels over the A60.

New Reliability, Availability, and Serviceability (RAS) function added in the 3494 Library Manager for the service of the 3592 J1A drives, in addition to the controller service interface, includes the capability of:

- Downloading drive code
- Installing drive code
- Getting the VPD data
- Changing drive VPD data
- Setting the real time clock in both the controller and the drives

The controller can be at a maximum channel distance of 43 kilometers (27 miles) from the host when using fiber-optic cable between ESCON Directors or up to 100 km (63 miles) when using 9 micron fiber in a FICON environment together with repeaters.


An IBM TotalStorage 3592 Model J70 tape controller cannot be installed in an L14 frame. The frame must first be converted to a D24. The IBM TotalStorage 3592 Model J70 tape controller may also be installed in a standard 19-inch rack such as the 9309 Model 002 Rack Enclosure.

There is no field upgrade to a 3592 Model J70. Upgrades from 3590 Model A60 or 3590 Model A50 or A00 controllers are done by replacement of the machine. When installed inside the IBM 3494 Tape Library, the J70 controller offers a choice of attachment to the Library Manager. Either the RS-422 attachment or the Attachment Concentrator Feature can be configured.

- Attachment support: Mixing of 3590 B model, E model, and H model or 3592 J model tape drives on the same controller is not supported.
- Seascape building block: The 3592 Model J70 continues the line of Seascape architecture.
- Hardware redundancy: The controller has several enhancements that provide increased availability. There are two pairs of cooling fans. It has redundant power supplies with automatic failover and hot-swappable service capability, and it is designed so that almost all service functions can be performed without interruption to data access. Tape drive attachment provides redundant paths, so all drives can be used even if a back-end hardware failure occurs.
- Call Home support: To help prevent outages, the IBM Call Home function automatically opens a service alert when a problem occurs with the control unit. A service representative can be sent to the installation, and status information can be transmitted to the IBM Support Center for evaluation. In addition, the Call Home function can send alerts to a pager service so that multiple people can be notified. Call Home is supported in stand-alone, Silo, and IBM TotalStorage 3494 environments. This feature initiates a call directly to the IBM Support Center. By using the remote support attachment of the 3494, the modem attached to the 3494 is activated and, if the customer allows, the IBM Support Center is dialed. Actions that can be initiated include the ability to:
  - Open a Problem Management Record (PMR)
  - Dispatch a service representative
  - Search the RETAIN database
  - Download error logs
  - Alert IBM support personnel

- Engineered for enhancement: The J70 is the base for further enhancements in performance and attachment flexibility. The IBM TotalStorage 3590 tape drives with UltraSCSI attachment have native data transfer speeds that exceed the throughput capability of the A50 controller, and Fibre Channel attachment increases maximum throughput still further. The controller's design and host attachment capability allow the exploitation of this performance capability.
- FICON attachment: Each FICON longwave or shortwave adapter provides up to 64 logical paths, used when a Director switches the device to different host channels.


  A drive may be addressed simultaneously through an ESCON and a FICON path, even from the same system. The same device address must be used on each path in this case. FICON offers these improvements compared to ESCON for 3590 attachment:
  - Maximum 100 MB/sec data transfer rate: A FICON channel has a rated maximum speed of 100 MB/sec. This means that the channel is not a performance bottleneck for the transfer of highly compressible data. Also, you can expect higher sustained throughput from FICON attachment than from ESCON attachment. For FICON adapter feature codes and compatibility, see Table 6-3.
  - Maximum 100 km: Using 9 micron fiber between repeaters or switches and 10 km links (20 km by RPQ), the maximum distance between channel and control unit is 100 km. Using 50 micron or 62.5 micron fiber, the maximum distance is 550 m (1800 feet).
  - Greater tolerance to extended distance: At extended distances, ESCON suffers data rate droop, where the data rate reduces substantially. FICON suffers much less and, therefore, is much more suitable than ESCON for remote vaulting applications.
- ESCON attachment: Many of the functions of previous IBM tape control units were moved into each IBM TotalStorage 3590 tape subsystem tape drive unit. Thus the primary function of the controller is to provide an interface from the SCSI interfaces on the IBM TotalStorage 3590 tape drives to the S/390 ESCON and FICON channels. To achieve this, the controller has an ESCON adapter and a SCSI adapter that provide attachment from the ESCON host channels to the IBM TotalStorage 3590 tape subsystem SCSI drive units. Optional feature #3412 provides the first two ESCON channel adapters and SCSI adapter. A second #3412, which is optional, provides the third and fourth ESCON channel adapter and SCSI adapter for increased configurability, availability, and throughput.
- Performance: The controller is capable of achieving unconstrained data transfer through the ESCON channel. Performance varies with the application and configuration used, but in general, the data transfer capacity of the controller with two ESCON paths is approximately double that of the predecessor A50 controller. With four ESCON paths, the potential now exists to further improve this data transfer speed. In an unconstrained configuration, over 120 MB/sec can be expected. Multiple hosts can access the controller through an IBM 9032 or 9033 ESCON Director or the ESCON Multiple Image Facility (EMIF).

  Note: The IBM TotalStorage 3590 tape subsystem does not support attachment to parallel channels.

  - 17 MB/sec: The controller supports the maximum speed, 17 MB/sec, of the ES/9000 or S/390 Parallel Server ESCON channels. Together with IBMLZ1 compression and the improved IBM TotalStorage 3590 tape subsystem drive data transfer rate of 14 MB/sec, this speed can considerably improve performance for long sustained tape I/Os.


  - Maximum 43 km: By using ESCON LED fiber or ESCON laser fiber cables, and 9032 or 9033 ESCON Directors or the 9036 ESCON Remote Channel Extender, the maximum distance between the ES/9000 or S/390 host and the tape subsystem attached through the controller can be up to 43 km with minimum loss of performance.
- 256 logical paths: The number of logical paths for each physical path is 64. A total of 256 logical paths are available with the controller.
- All paths to each drive: All the physical paths on the S/390 host are available to each of the tape devices. All attached tape devices in the subsystem can use any path.
- Up to 12 3590 tape drives attached: The controller can attach up to 12 3590-B1A, E1A, H1A, or 3592 J1A IBM TotalStorage tape drives, but the models cannot be intermixed behind the same controller. Unlike the 3480, 3490, and 3490E, such tape control operations as forward space file, back space file, and locate record do not hinder control unit function. These operations are controlled by the drive itself, and the control unit function in the controller becomes free.
- Device type 3590 or 3490E: The controller has two device type options. It can run in native 3590 mode and can provide a 3490E emulation feature that enables attached IBM TotalStorage 3590 tape drives to appear to the host system as 3490E tape drives. The 3490E emulation feature can be used with IBM TotalStorage 3590 tape drives attached to an A50 or A60 controller installed in a 3590 Model A14 stand-alone solution, or in rack-mounted subsystem solutions.

  Note: This feature is not available when the 3590 Model A50 or A60 is installed in an IBM TotalStorage 3494 tape library. The choice of 3490E emulation or native 3590s is set by your service representative.

6.4.1 Installation options

The 3592 Model J70 may be installed in a number of ways, as explained in the following sections.

Stand-alone tape subsystem

The stand-alone 19-inch rack supports:

- Single cartridge solution (no ACL support)
- Up to 16 drives in a 2 m rack without a controller
- Up to 12 drives in a 2 m rack with a controller
- Up to 12 drives in a 1.8 m or 1.6 m rack without a controller
- Up to 8 drives in a 1.8 m or 1.6 m rack with a controller

Cradle features must be ordered on the 3592 J1A for this support. The cradle contains power supplies, shelves, and so on. Support is for stand-alone racks such as the 7014 T00 and T42. No support is planned with the 3590 A14 frame, and no support for ...


Installation in a 3494 library

The 3592-J70 may be installed in a 3494-D24 frame. The D24 frame can also contain up to eight 3592 drives. See Figure 6-17.

Adjacent frame support allows the attachment of up to 10 3590-B1A, 3590-E1A, or 3590-H1A drives to a 3592-J70, with four in the D24 frame with the J70 and either six in an adjacent 3494-D22 frame, or two in an adjacent 3494-L22 or L24 frame. The 3592-J70 supports up to 12 3592-J1A drives spread between its D24 frame and the adjacent frame with adjacent drive support. The maximum number of drives with adjacent frame support is 10 with 3590-x1A drives and 12 with 3592-J1A drives, whether SCSI or Fibre Channel drives. There is only one adjacent frame.

Figure 6-17 3494 adjacent frame support (an L22 frame with the Library Manager, D24 frames holding drives and control units, and a D22 frame holding additional drives)

When installed inside the IBM 3494 Tape Library, the J70 controller offers a choice of attachment to the Library Manager. Either the RS-422 attachment or the Attachment Concentrator Feature can be configured. The 3592-J70 is not supported in a 3495 library.

Installation in an STK silo-compatible frame

One or two 3592-J70s may be installed in a 3592-C20 frame. Up to 20 3592-J1A drives can exist within a single C20 frame, with up to 80 in a single Library Storage Module (LSM). Cradle features are also required for the 3592-J1A drive support. The 3592-J1A cannot be installed into a C12 or C14 frame.

6.4.2 Features

Table 6-6 shows the feature codes for the features applicable to the 3592-J70 control unit.


Table 6-6 3592-J70 features

Description                                        | Feature Code | Notes
FICON long wave to ESCON 50 micron patch cable     | 0103         |
FICON long wave to ESCON 62.5 micron patch cable   | 0106         |
FICON short wave to ESCON 62.5 micron patch cable  | 0109         |
Remote support facility                            | 2710         | One of 2710, 2711, or 2712
Remote support switch                              | 2711         | One of 2710, 2711, or 2712
Remote support attachment                          | 2712         | One of 2710, 2711, or 2712
Dual ESCON/SCSI adapter                            | 3412         | One - four
FICON longwave adapter                             | 3432         | Up to two
FICON shortwave adapter                            | 3433         | Up to two
Fibre Channel drive attach                         | 3463         | Zero or one
Fibre drive rack attachment                        | 3465         |
Multiframe SCSI rack attachment                    | 4065         |
Install A60 in rack                                | 4641         |
Field replace A50 with A60                         | 4655         |
Replace A00/A50 with A60                           | 4660         |
Field install of A60                               | 4665         |
Attached to ES/3090, ES/9000, or S/390             | 9000         | One
3590 drive attached                                | 9059         | One - four
Multiframe SCSI 3590-A14 attached                  | 9060         |
Field merge A60 in 3494                            | 9665         | One optional
Plant install Model A60 in the 3494 library        | 9666         | One optional

The new features for FICON attachment are:

- Feature 0103: This provides one conditioner patch cable that allows ESCON 50 micron trunk fiber to be reused for FICON. This 2 m (6.5 feet) cable has an SC duplex connector on one end and an ESCON connector on the other, as shown in Figure 6-18. A patch cable is needed at each end of the ESCON fiber that is being reused (the control unit end and the channel end). Two sets are needed for a Fibre Channel switch.

Figure 6-18 Fiber connections as used in a mode conditioning cable (an SC duplex connector on one end and an ESCON duplex connector on the other)


- Feature 0106: This provides a 62.5 micron conditioner patch cable for the same use as feature 0103, but for 62.5 micron ESCON cables.
- Feature 0109: This provides a 62.5 micron conditioner patch cable for the same use as feature 0103, but for 62.5 micron ESCON cables and shortwave FICON adapters.
- Feature 3432: This provides a fiber attachment for the J70. You can install up to two of this feature, each providing one longwave fiber port with an SC duplex connector. This is a longwave attachment suitable for attachment to 9 micron single mode fiber. A 31 m (100 feet) 9 micron fiber cable is provided unless either feature 0103 or 0106 is also specified, in which case a patch cable is supplied instead.
- Feature 3433: This provides a fiber attachment for the J70. You can install up to two of this feature, each providing one shortwave fiber port with an SC duplex connector. This is a shortwave attachment suitable for attachment to 50/62.5 micron multi-mode fiber.
- Feature 3463: This provides two Fibre Channel ports for the attachment of 3590E drives through a 2109-S16 switch. The UltraSCSI adapters are removed as part of an upgrade.

6.4.3 Software support for FICON 3592-J70s

FICON-attached 3592-J70s are supported as shown in Table 6-7.

Table 6-7 Software support for 3592-J70 FICON attachment

Operating system | Minimum level
OS/390           | OS/390 2.10
VM/ESA guests    |
VM/ESA native    |
VSE/ESA          |
TPF              |

In addition, the PTF for APAR OW44564 is needed for the Hardware Configuration Dialog (HCD) and IR42956 for the Hardware Configuration Manager (HCM). This information is provided for initial guidance. Contact your IBM Support Center for the latest information about PTFs.

6.4.4 Defining devices

You define the 3592 drive device addresses to OS/390 by using HCD. We show IOCP statements in this section for the sake of brevity, even though you cannot use IOCP for the definitions. Example 6-3 shows how you define a 3592-J70 with up to 12 devices. The full address range of 16 devices should be defined.

Example 6-3 Defining 3592-J70 without FICON Directors

CHPID    PATH=(42,43),SHARED,
         PARTITION=((LPAR1),(LPAR2)),TYPE=FC
CNTLUNIT CUNUMBR=0650,PATH=(42,43),
         UNITADD=((00,16)),UNIT=3590
IODEVICE ADDRESS=(7040,016),
         UNITADD=0,CUNUMBR=(0650),
         STADET=Y,UNIT=3590

Similarly, Example 6-4 shows how you would define a 3592-J70 through a FICON Director.

Example 6-4 Defining 3590s through FICON Directors

CHPID    PATH=42,TYPE=FC,SWITCH=01
CHPID    PATH=43,TYPE=FC,SWITCH=02
CNTLUNIT CUNUMBR=0660,PATH=(42,43),
         UNIT=2032,STADET=Y,LINK=(FE)
CNTLUNIT CUNUMBR=0650,PATH=(42,43),
         LINK=(D2,D3),UNITADD=((00,16)),
         UNIT=3590
IODEVICE ADDRESS=(7040,016),
         UNITADD=0,CUNUMBR=(0650),
         STADET=Y,UNIT=3590

6.5 IBM 3494 Virtual Tape Server

For VTS Models B20 or B10 (Figure 6-19), a range of connectivity options is available, including FICON. For a Model B18, either eight ESCON or four SCSI attachments are available. The Peer-to-Peer VTS also supports either FICON or ESCON adapters.

Figure 6-19 IBM TotalStorage Virtual Tape Server


You can attach a VTS to a FICON host by using the FICON bridge card in an ESCON Director, as shown in Figure 6-2 on page 141. Alternatively, you can use native FICON in a point-to-point configuration or in a switched configuration through a FICON Director (Figure 6-20).

Figure 6-20 VTS FICON data flow in a switched configuration (zSeries server, FICON/FC Director, and FICON control units CU-A and CU-B exchanging frames over full-duplex FICON channels)

6.5.1 Connectivity

A range of interface options is available to enable the VTS to connect to an array of hosts. Figure 6-21 shows the latest of these options, which is the VTS FICON adapter card.

Figure 6-21 VTS FICON adapter card

With FICON channels connected to your VTS, you have the ability to:

򐂰 Reduce infrastructure, by replacing multiple ESCON channels with a single FICON channel, because there are:
– Fewer channels to manage
– Fewer Director ports to purchase
– Fewer channel addresses used on the server


򐂰 Increase performance and distance:
– The nominal transfer rate for FICON is 100 MB/sec versus 17 MB/sec for ESCON (see the sketch after Table 6-8 for a rough illustration of the consolidation arithmetic).
– It enhances remote vaulting applications, since sites can now be up to 100 km away with minimal propagation delay.

Table 6-8 shows the intermix capabilities for both the VTS Model B10 and B20.

Table 6-8 VTS adapter intermix matrix for Models B10/B20. The columns of the matrix are: Model; F/C 3412 (two ESCON ports per feature); F/C 3418 (enables the second ESCON port); F/C 3422 (SCSI); F/C 3415/3416 (FICON); adapter slots; ESCON adapter cards; ESCON attached fibers; FICON adapter cards; FICON attached fibers; SCSI adapter cards; SCSI cable ports. The rows list the supported adapter combinations for the B10 and the B20.
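As a rough illustration of the consolidation arithmetic behind these benefits, the following sketch compares the nominal channel bandwidths quoted above. The workload figures are assumed example values for illustration only, not measured VTS numbers.

import math

ESCON_MBPS = 17          # nominal ESCON data rate quoted above
FICON_MBPS = 100         # nominal FICON data rate quoted above

active_drives = 6        # assumed number of concurrently active drives (example only)
per_drive_mbps = 10      # assumed sustained throughput per drive (example only)
workload_mbps = active_drives * per_drive_mbps

# Number of channels needed to carry the workload, rounded up
escon_needed = math.ceil(workload_mbps / ESCON_MBPS)
ficon_needed = math.ceil(workload_mbps / FICON_MBPS)

print(f"A {workload_mbps} MB/sec workload needs about {escon_needed} ESCON "
      f"channels, but only {ficon_needed} FICON channel(s).")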

Note: While the VTS has a range of model-dependent configuration options, where FICON is selected for use, it cannot be intermixed with SCSI interfaces.

As for the 3590-A60 controller and drives, changes are required to the IOCP to make the VTS capable of using the FICON adapters. Figure 6-22 shows a list of potential IOCP statements.

CHPID    PATH=0A,TYPE=FC,SWITCH=01
CHPID    PATH=0B,TYPE=FC,SWITCH=01
CNTLUNIT CUNUMBR=500,PATH=(0A,0B),
         UNIT=3490,UNITADD=((00,16)),
         LINK=(29,39),CUADD=0,
         LIBRARY-ID=11111,LIBPORT-ID=01
IODEVICE ADDRESS=(200,16),UNIT=3490,
         CUNUMBR=(500),UNITADD=00
...
CNTLUNIT CUNUMBR=600,PATH=(0A,0B),
         UNIT=3490,UNITADD=((00,16)),
         LINK=(09,19),CUADD=0,
         LIBRARY-ID=22222,LIBPORT-ID=01
IODEVICE ADDRESS=(400,16),UNIT=3490,
         CUNUMBR=(600),UNITADD=00

Figure 6-22 Potential IOCP statements for a VTS with FICON adapters


6.5.2 Software support

As with any hardware, a certain level of software is required to support the functionality of the VTS with FICON. Table 6-9 shows the minimum software configuration levels.

Table 6-9 Minimum software levels for VTS support of FICON channel architecture

Operating system | Minimum level | Comments
OS/390 | V2.9 (PTFs required) |
z/OS | V1.1 (OW47844 recommended) |
z/VM® native | V3.1 (VM62710 required) | RSU tape shipped with z/VM includes VM62710
z/VM guests | V3.1 | z/VM with guests that support native FICON
VM/ESA native | V2.4 (VM62710 required) |
VM/ESA guests | V2.4 | VM/ESA with guests that support native FICON
VSE/ESA | V2.4 |
TPF | V4.1 |

6.5.3 Advanced functions with the VTS

As discussed in 4.4.6, “IBM TotalStorage 3494 Virtual Tape Server” on page 102, the VTS is available with certain advanced functions. They enable the storage administrator to configure a range of outboard policy definitions. With these features, the VTS can:

򐂰 Control the grouping of logical volumes onto a specific pool of 3590 media.
򐂰 Create a second copy of your more important logical volumes, which enhances the high availability options.
򐂰 Control the copy mode of logical volumes within a peer-to-peer environment. In this way, critical volumes can be selected for IMMEDIATE copy, while other, less critical volumes can be set to DEFERRED copy.
򐂰 Control the residency of logical volumes within the Tape Volume Cache (TVC). With feature code 4000, you can only control the cache residency from a z/OS host via the DFSMS storage class definitions. With feature codes 4001-4004, this same behavior can be controlled at the VTS level through definitions supplied either at the Library Manager (LM) or through the 3494 Specialist.

A conceptual sketch of these policy definitions follows this list. You can find greater detail about these features in the redbook IBM TotalStorage Virtual Tape Server: Planning, Implementing, and Monitoring, SG24-2229.
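As a purely conceptual illustration of how such outboard policy definitions might be organized, the following sketch maps construct names to pool, copy mode, and cache residency settings. The names and values are invented for illustration; actual definitions are made at the Library Manager or through the 3494 Specialist, not through code like this.

policies = {
    # Hypothetical construct names mapped to example outboard policy settings
    "PAYROLL": {"media_pool": "POOL1", "copy_mode": "IMMEDIATE", "tvc_residency": "PREFER_KEEP"},
    "ARCHIVE": {"media_pool": "POOL2", "copy_mode": "DEFERRED", "tvc_residency": "PREFER_REMOVE"},
}

def policy_for(construct_name):
    # Fall back to a default policy when a construct has no explicit definition
    return policies.get(construct_name,
                        {"media_pool": "POOL1", "copy_mode": "DEFERRED",
                         "tvc_residency": "PREFER_REMOVE"})

print(policy_for("PAYROLL"))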

6.5.4 Feature codes

As with any VTS, several feature codes are supplied to enable the correct configuration to be ordered. Table 6-10 and Table 6-11 list some of the FICON and advanced function related codes. For more information, see IBM TotalStorage Virtual Tape Server: Planning, Implementing, and Monitoring, SG24-2229.


Table 6-10 VTS: FICON feature codes

Description | Model | Feature code
9 Micron LC/LC 31 meter | B10/B20 | 0201
9 Micron LC/SC 31 meter | B10/B20 | 0202
50 Micron LC/LC 31 meter | B10/B20 | 0203
50 Micron LC/SC 31 meter | B10/B20 | 0204
62.5 Micron LC/LC 31 meter | B10/B20 | 0205
62.5 Micron LC/SC 31 meter | B10/B20 | 0206
FICON enablement | B10/B20 | 3000
FICON attach - Long wavelength | B10/B20 | 3415
FICON attach - Short wavelength | B10/B20 | 3416
No FICON cable from plant | B10/B20 | 9700

Table 6-11 Some VTS advanced function feature codes

Description | Model | Feature code
Advanced Functions | B18/B10/B20 | 4000
Advanced Policy Management | B18/B10/B20 | 4001
Advanced Policy Management > 250 GB | B18/B10/B20 | 4002
Advanced Policy Management > 500 GB | B18/B20 | 4003
Advanced Policy Management > 1000 GB | B18/B20 | 4004


Appendix A. Fibre Channel discussion

Today, the industry considers Fibre Channel (FC) the architecture on which most storage area network (SAN) implementations are built. Fibre Channel is a technology standard that allows data to be transferred from one network node to another at very high speeds. Current implementations transfer data at 100 MB/sec. The 200 MB/sec and 400 MB/sec data rates have already been tested. This standard is backed by a consortium of industry vendors and is accredited by the American National Standards Institute (ANSI). Many products are now on the market that take advantage of Fibre Channel’s high-speed, high-availability characteristics.

Note: The word fibre in Fibre Channel uses the French spelling rather than the American-English spelling. This is because the interconnections between nodes are not necessarily based on fiber optics, but can also be based on copper cables.

Some people refer to Fibre Channel architecture as the fibre version of Small Computer Systems Interface (SCSI). Fibre Channel is an architecture used to carry IPI traffic, Internet Protocol (IP) traffic, FICON traffic, FCP (SCSI) traffic, and possibly traffic using other protocols, all at the same level on the standard FC transport. An analogy is Ethernet, where IP, NetBIOS, and Systems Network Architecture (SNA) traffic run simultaneously over a single Ethernet adapter because they all have mappings to Ethernet. Similarly, many protocols are mapped onto FC. FICON is expected to be the standard protocol for S/390, and FCP is the expected standard protocol for the non-S/390 systems, both using Fibre Channel architecture to carry the traffic.

The following sections introduce some basic Fibre Channel concepts. They begin with the physical layer and proceed to define the services offered.


Layers

Fibre Channel is structured in independent layers, as are other networking protocols. There are five layers, where 0 is the lowest layer. The physical layers are 0 to 2, and the upper layers are 3 and 4.

Lower layers

The lower layers are as explained here:

򐂰 FC-0 defines physical media and transmission rates. These include cables and connectors, drivers, transmitters, and receivers.
򐂰 FC-1 defines encoding schemes. These are used to synchronize data for transmission.
򐂰 FC-2 defines the framing protocol and flow control. This protocol is self-configuring and supports point-to-point, arbitrated loop, and switched topologies.

Upper layers

Fibre Channel is a transport service that moves data fast and reliably between nodes. The two upper layers enhance the functionality of Fibre Channel and provide common implementations for interoperability.

򐂰 FC-3 defines common services for nodes. One defined service under development is multicast, to deliver one transmission to multiple destinations.
򐂰 FC-4 defines upper layer protocol mapping. Protocols such as FCP (SCSI), FICON, and IP can be mapped to the Fibre Channel transport service.

Topologies

Fibre Channel interconnects nodes using three physical topologies that can have variants. Topologies include:

򐂰 Point-to-point: This topology consists of a single connection between two nodes. The entire bandwidth is dedicated for these two nodes.
򐂰 Loop: In this topology, the bandwidth is shared between all the nodes connected to the loop. The loop can be wired node-to-node. However, if a node fails or is not powered on, the loop is out of operation. This is overcome by using a hub. A hub opens the loop when a new node is connected and closes it when a node disconnects.
򐂰 Switched: A switch allows multiple concurrent connections between nodes. There can be two types of switches, circuit switches and frame switches. Circuit switches establish a dedicated connection between two nodes. Frame switches route frames between nodes and establish the connection only when needed. A switch can handle all protocols, since it does not look at the Fibre Channel layer FC-4.
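The main practical difference between the loop and switched topologies described above is how bandwidth is shared. The following small calculation illustrates this; the link speed and node count are assumed example values only.

LINK_MBPS = 100   # nominal Fibre Channel link rate used as an example
nodes = 8         # assumed number of nodes on the loop (example only)

# In an arbitrated loop, all nodes share the bandwidth of one link.
loop_share_mbps = LINK_MBPS / nodes

# In a switched topology, each concurrent connection can use the full link rate.
switched_share_mbps = LINK_MBPS

print(f"Loop: about {loop_share_mbps:.1f} MB/sec per node when all {nodes} nodes are active")
print(f"Switched: up to {switched_share_mbps} MB/sec per concurrent connection")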


Classes of service

Fibre Channel provides a logical system of communication called classes of service, which are allocated by various login protocols. The five classes of service are:

򐂰 Class 1 - Acknowledged Connection Service: This class dedicates connections through the fabric, equivalent to a dedicated physical link, and delivers frames with acknowledgment in the same order as they were transmitted.
򐂰 Class 2 - Acknowledged Connectionless Service: Multiplexes frames from multiple sources with acknowledgment. The frame order is not guaranteed.
򐂰 Class 3 - Unacknowledged Connectionless Service: This is the same as Class 2, but without frame acknowledgment. The flow must be controlled at buffer level.
򐂰 Class 4 - Fractional Bandwidth Connection Oriented Service: This is the same as Class 1, but with only a minimum of bandwidth guaranteed. If sufficient bandwidth is available, Class 2 and 3 frames share connections.
򐂰 Class 6 - Simplex Connection Service: This is the same as Class 1, but it also provides multicast and pre-emption.


Appendix B. Terminology and other basics

This appendix explains a few terms used in the book. It also provides explanations for storage area network (SAN)-related topics.


StorWatch

StorWatch, an IBM Enterprise Storage Resource Management (ESRM) solution, is a growing software family whose goal is to enable storage administrators to efficiently manage storage resources from any location within an enterprise. Widely dispersed, disparate storage resources can ultimately be viewed and managed through a single, cohesive control point.

Agent

In the client-server model, the agent is the part of the system that performs information preparation and exchange on behalf of a client or server application. An example is the agent software installed on the SAN Data Gateway to talk to the StorWatch Data Gateway Specialist.

StorWatch Expert

StorWatch Expert is the application that employs a three-tiered architecture. It includes a management interface, a StorWatch manager, and agents that run on the storage resource or resources being managed. Expert products employ a StorWatch database that you can use for saving key management data (for example, capacity or performance metrics). Expert products use the agents, as well as analysis of storage data saved in the database, to perform higher value functions. Such functions include reporting of capacity, performance, and so on, over time (trends), configuration of multiple devices based on policies, monitoring of capacity and performance, automated responses to events or conditions, and storage-related data mining.

StorWatch Reporter

StorWatch Reporter reduces disk management costs and increases the productivity of storage administrators. It creates an inventory and collects disk capacity and utilization data for heterogeneous PC, UNIX, and OS/390 servers that reside across your network. You can then generate customized reports that help you to:

򐂰 Understand your present disk capacity and utilization
򐂰 Identify trends in your storage usage
򐂰 Anticipate and avoid outages due to out-of-storage conditions
򐂰 Plan for growth in your enterprise

StorWatch Reporter also locates and launches Web-based storage software applications for centralized administration of storage software and hardware.

StorWatch Specialist

StorWatch Specialist is an interface for managing an individual Fibre Channel device or a limited number of like devices (that can be viewed as a single group). It typically provides simple, point-in-time management functions such as configuration, reporting on asset and status information, simple device and event monitoring, and perhaps some service utilities.

SCSI

Small Computer Systems Interface (SCSI) is a set of evolving American National Standards Institute (ANSI) standard electronic interfaces. It allows PCs to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners, faster and more flexibly than previous interfaces. Table B-1 identifies the major characteristics of the different SCSI versions.


Table B-1 Characteristics of different SCSI versions

SCSI version | Signaling rate (MHz) | Bus width (bits) | Maximum data rate (MB/sec) | Maximum number of devices | Maximum cable length
SCSI-1 | 5 | 8 | 5 | 7 | 6 m
SCSI-2 | 5 | 8 | 5 | 7 | 6 m
Wide SCSI-2 | 5 | 16 | 10 | 15 | 6 m
Fast SCSI-2 | 10 | 8 | 10 | 7 | 6 m
Fast Wide SCSI-2 | 10 | 16 | 20 | 15 | 25 m
UltraSCSI | 20 | 8 | 20 | 7 | 1.5 m
UltraSCSI-2 | 20 | 16 | 40 | 7 | 25 m
Ultra2 LVD SCSI | 40 | 16 | 80 | 15 | 12 m
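A minimal sketch of the arithmetic that relates these columns: for parallel SCSI, the maximum data rate in MB/sec is approximately the signaling rate (in megatransfers per second) multiplied by the bus width in bytes. This is an approximation for illustration only.

def scsi_data_rate_mb(signaling_rate_mhz, bus_width_bits):
    # Approximate peak rate: transfers per second times bytes per transfer
    return signaling_rate_mhz * bus_width_bits // 8

print(scsi_data_rate_mb(10, 16))   # Fast Wide SCSI-2: 20 MB/sec
print(scsi_data_rate_mb(20, 16))   # UltraSCSI-2: 40 MB/sec
print(scsi_data_rate_mb(40, 16))   # Ultra2 LVD SCSI: 80 MB/sec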

SCSI-3

SCSI-3 consists of a set of primary commands and additional specialized command sets to meet the needs of specific device types. The SCSI-3 command sets are used for the SCSI-3 parallel interface and for additional parallel and serial protocols. Such protocols include Fibre Channel, Serial Bus Protocol (used with the IEEE 1394 FireWire physical protocol), and the Serial Storage Protocol (SSP).

SES

SCSI Enclosure Services (SES) is an ANSI SCSI-3 proposal that defines a command set for soliciting basic device status (temperature, fan speed, power supply status, and so on) from a storage enclosure.

Third-party copy

Third-party copy refers to an implementation of the SCSI Extended Copy command. The command causes data to be copied from one set of devices to another set (or the same set) of devices, the target devices. A copy manager is responsible for copying the data and reporting status back to the caller. The Extended Copy command is actually addressed to the copy manager. The copy manager is not responsible for preparing the target device by, for example, mounting a tape or rewinding a tape. The command is passed parameters that describe:

򐂰 The data that is to be copied: This describes inline data embedded in the command, or the address of the device holding the data and the amount of data to be copied.
򐂰 The location to where the data is to be copied: This is the address of the device receiving the data.
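The following sketch is a conceptual model of the information an Extended Copy parameter list carries, not the actual SCSI descriptor byte formats; the device names are hypothetical examples.

from dataclasses import dataclass

@dataclass
class TargetDescriptor:
    device: str          # a device that holds or receives the data

@dataclass
class SegmentDescriptor:
    source_index: int    # index of the target descriptor supplying the data
    dest_index: int      # index of the target descriptor receiving the data
    blocks: int          # amount of data to move

# The copy manager that the command is addressed to interprets the descriptors,
# moves the data between the devices, and reports status back to the caller.
targets = [TargetDescriptor("source-disk-lun"), TargetDescriptor("destination-tape-drive")]
segments = [SegmentDescriptor(source_index=0, dest_index=1, blocks=4096)]
print(len(targets), "targets,", len(segments), "segment(s)")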

Fibre Channel terms

The Fibre Channel terms used in this redbook are defined here:

򐂰 B-Port (bridge port): Similar to an E-Port and only connects to E-Ports.
򐂰 E-Port (expansion port): A port on a switch used to link multiple switches together into a Fibre Channel switch fabric.
򐂰 F-Node (fabric node): A fabric attached node.


򐂰 F-Port (fabric port): A port used to attach a Node Port (N_Port) to a switch fabric.
򐂰 FL-Port (fabric loop port): The access point of the fabric for physically connecting the user's Node Loop Port (NL_Port).
򐂰 G-Port (generic port): A generic switch port that is either a Fabric Port (F_Port) or an Expansion Port (E_Port). The function is automatically determined during login.
򐂰 L-Port (loop port): A node or fabric port capable of performing arbitrated loop functions and protocols. NL_Ports and FL_Ports are loop-capable ports.
򐂰 N-Port (node port): A Fibre Channel-defined hardware entity at the end of a link which provides the mechanisms necessary to transport information units to or from another node.
򐂰 NL-Port (node loop port): A node port that supports arbitrated loop devices.
򐂰 Point-to-point topology: An interconnection structure in which each point has physical links to only one neighbor resulting in a closed circuit. In point-to-point topology, the available bandwidth is dedicated.
򐂰 Zoning: In Fibre Channel environments, the grouping together of multiple ports to form a virtual private storage network. Ports that are members of a group or zone can communicate with each other but are isolated from ports in other zones (see the sketch after this list).
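The following sketch models the zoning rule just described: two ports can communicate only if they share membership in at least one zone. The zone and port names are invented examples.

zones = {
    "tape_zone": {"host_a_hba", "tape_drive_port"},
    "disk_zone": {"host_b_hba", "disk_array_port"},
}

def can_communicate(port1, port2):
    # Ports communicate only when some zone contains both of them
    return any(port1 in members and port2 in members for members in zones.values())

print(can_communicate("host_a_hba", "tape_drive_port"))   # True: same zone
print(can_communicate("host_a_hba", "disk_array_port"))   # False: isolated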


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information about ordering these publications, see “How to get IBM Redbooks” on page 182.

򐂰 IBM TotalStorage Virtual Tape Server: Planning, Implementing, and Monitoring, SG24-2229
򐂰 IBM TotalStorage Enterprise Tape: A Practical Guide, SG24-4632
򐂰 Magstar MP 3575 Tape Library Dataserver: Multiplatform Implementation, SG24-4983
򐂰 IBM S/390 FICON Implementation Guide, SG24-5169
򐂰 Introduction to IBM System/390 FICON, SG24-5176
򐂰 Netfinity Tape Solutions, SG24-5218
򐂰 FICON (FCV Mode) Planning Guide, SG24-5445
򐂰 Introduction to Storage Area Networks, SG24-5470
򐂰 Designing an IBM Storage Area Network, SG24-5758
򐂰 The IBM LTO Ultrium Tape Libraries Guide, SG24-5946
򐂰 IBM TotalStorage Peer to Peer Virtual Tape Server: Planning and Implementation Guide, SG24-6115
򐂰 Implementing an Open IBM SAN, SG24-6116
򐂰 DFSMS Release 10 Technical Update, SG24-6120
򐂰 IBM SAN Survival Guide, SG24-6143
򐂰 Tivoli Storage Manager Version 5.1 Technical Guide, SG24-6554

Other resources

These publications are also relevant as further information sources:

򐂰 IBM 3494 Tape Library Dataserver Introduction and Planning Guide, GA32-0279
򐂰 Fiber Optic Link Planning (ESCON, FICON, Coupling Links, and Open System Adapters), GA23-0367
򐂰 7337 Setup and Operator Guide, GA32-0399
򐂰 IBM TotalStorage Tape Device Drivers Installation and User's Guide, GC35-0154
򐂰 z/OS DFSMS Software Support for IBM TotalStorage Enterprise Tape System 3592, SC26-7514
򐂰 System Automation for OS/390 Planning and Installation, SC33-7038


Referenced Web sites

These Web sites are also relevant as further information sources:

򐂰 IBM Tape Storage publications
http://www.storage.ibm.com/hardsoft/tape/pubs/prodpubs.html

򐂰 Tivoli Storage Manager http://www.tivoli.com/products/index/storage_mgr/

򐂰 Tivoli Storage Manager publications http://www.tivoli.com/support/public/Prodman/public_manuals/storage_mgr/admanual.html

򐂰 IBM SAN Data Gateway http://www.storage.ibm.com/hardsoft/products/sangateway/gatewayspec.htm

򐂰 IBM TotalStorage SAN Switch S08 and S16 http://www.storage.ibm.com/hardsoft/products/fcswitch/fcswitch.htm

򐂰 Software Engineering of America (SEA) http://www.seasoft.com

򐂰 Betasystems http://www.betasystems.com

򐂰 Computer Associates products http://www.cai.com/products/ca1.htm

򐂰 OpenTech Systems Tape/Copy http://www.opentechsystems.com/Tape-Copy.htm

򐂰 Innovation Data Processing http://www.innovationdp.com

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads: ibm.com/support

IBM Global Services: ibm.com/services




Back cover

IBM Tape Solutions for Storage Area Networks and FICON

Exploit storage area networks (SANs) for your tape subsystem workload
Improve your tape performance with FICON
Share your tape libraries more effectively

The explosive growth of stored data, the increasing value of the data, and the fact that it is often distributed over multiple heterogeneous servers have created significant problems for backing up and archiving data. Also, the increased pressure for more productive IT time and less time for administrative tasks means that there is more data to back up in less time.

This IBM Redbook explains how tape drives and tape libraries can use storage area networks (SANs) to solve these problems. It explains how you can exploit SANs to attach, share, and exploit IBM tape subsystems and tape libraries. The ability to share tape libraries across many hosts creates a tremendous financial advantage that can be an immediate benefit of implementing SANs in your enterprise. You can often achieve significant cost savings and increase data security by implementing the tape sharing and extended distance capabilities of SAN.

This redbook also includes a practical description of the products and components that were made available with the IBM SAN product rollout. For a definitive guide to SANs and their implementation, refer to the redbook Designing an IBM Storage Area Network, SG24-5758. Although the primary focus in this redbook is on SAN tape solutions, you must also understand advances in SAN-attached disk storage, advances in copy functions, and storage management software.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks SG24-5474-03

ISBN 0738499145