Linux with zSeries and ESS: Essentials (ISBN 9780738499987)


English, 102 pages, 2003

Front cover

Linux with zSeries and ESS: Essentials

Enterprise-class solutions with Linux and IBM TotalStorage
Focus on zSeries and ESS

FCP support for Linux on zSeries

Bertrand Dufrasne Don Skilton Jonathan Wright

ibm.com/redbooks

International Technical Support Organization Linux with zSeries and ESS: Essentials September 2003

SG24-7025-00

Note: Before using this information and the product it supports, read the information in “Notices” on page ix.

First Edition (September 2003)

© Copyright International Business Machines Corporation 2003. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . v
Tables . . . . . . . . . . . . . . . . . . . . . . . . . vii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this redbook . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . xii
Comments welcome . . . . . . . . . . . . . . . . . . . . xii

Chapter 1. Introduction to Linux . . . . . . . . . . . . . 1
1.1 The foundation for Linux . . . . . . . . . . . . . . . 2
1.2 The GNU General Public License . . . . . . . . . . . . 2
1.3 Open source . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Linux: The operating system . . . . . . . . . . . . . . 3
1.5 Where Linux fits in . . . . . . . . . . . . . . . . . . 4
1.5.1 Working with other operating systems . . . . . . . . 4
1.5.2 Infrastructure . . . . . . . . . . . . . . . . . . . 5
1.6 Linux distributions . . . . . . . . . . . . . . . . . . 5
1.6.1 Red Hat Enterprise Linux . . . . . . . . . . . . . . 5
1.6.2 SuSE Linux Enterprise Edition . . . . . . . . . . . . 6
1.6.3 UnitedLinux . . . . . . . . . . . . . . . . . . . . . 6

Chapter 2. IBM and Linux . . . . . . . . . . . . . . . . . 9
2.1 IBM's commitment to Linux . . . . . . . . . . . . . . 10
2.1.1 The IBM Linux Technology Centers . . . . . . . . . . 11
2.2 Linux services and application solutions . . . . . . . 11
2.2.1 IBM Services . . . . . . . . . . . . . . . . . . . . 12
2.2.2 IBM Global Services . . . . . . . . . . . . . . . . 12
2.2.3 IBM Support Line . . . . . . . . . . . . . . . . . . 12
2.2.4 Linux consultancy . . . . . . . . . . . . . . . . . 13
2.3 IBM services and software . . . . . . . . . . . . . . 14
2.3.1 WebSphere . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 DB2 for Linux . . . . . . . . . . . . . . . . . . . 15
2.3.3 Tivoli . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.4 Lotus . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Platform services . . . . . . . . . . . . . . . . . . 18
2.4.1 zSeries services . . . . . . . . . . . . . . . . . . 19
2.4.2 iSeries services . . . . . . . . . . . . . . . . . . 19
2.4.3 pSeries services . . . . . . . . . . . . . . . . . . 20
2.4.4 xSeries services . . . . . . . . . . . . . . . . . . 20
2.4.5 Consolidated platform matrix . . . . . . . . . . . . 21

Chapter 3. zSeries and Linux . . . . . . . . . . . . . . . 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . 24
3.1.1 Ways to run Linux on zSeries . . . . . . . . . . . . 24
3.1.2 S/390 and zSeries architecture . . . . . . . . . . . 25

3.1.3 Further information . . . . . . . . . . . . . . . . 26
3.2 Background to z/VM . . . . . . . . . . . . . . . . . . 27
3.2.1 Virtual networking with z/VM . . . . . . . . . . . . 28
3.2.2 Systems management functions built-in to z/VM . . . 29
3.3 Consolidation and scalability . . . . . . . . . . . . 29
3.3.1 Scalability . . . . . . . . . . . . . . . . . . . . 29
3.3.2 Consolidation . . . . . . . . . . . . . . . . . . . 29
3.3.3 Some examples and further information . . . . . . . 31
3.4 Linux application solutions . . . . . . . . . . . . . 31

Chapter 4. IBM TotalStorage and ESS . . . . . . . . . . . 35
4.1 IBM TotalStorage . . . . . . . . . . . . . . . . . . . 36
4.1.1 Value proposition . . . . . . . . . . . . . . . . . 36
4.2 IBM TotalStorage Proven™ . . . . . . . . . . . . . . . 37
4.3 Enterprise Storage Server . . . . . . . . . . . . . . 38
4.3.1 ESS overview . . . . . . . . . . . . . . . . . . . . 38
4.3.2 ESS features and benefits . . . . . . . . . . . . . 39
4.3.3 ESS Specialist . . . . . . . . . . . . . . . . . . . 44

Chapter 5. FCP for zSeries with Linux . . . . . . . . . . 47
5.1 zSeries and FCP . . . . . . . . . . . . . . . . . . . 48
5.1.1 zSeries specific storage attachments . . . . . . . . 48
5.1.2 Distributed storage attachments . . . . . . . . . . 49
5.1.3 zSeries-specific versus distributed storage controllers . . . 50
5.2 Connecting ESS to Linux on zSeries using FCP . . . . . 52
5.2.1 Hardware and software requirements . . . . . . . . . 53
5.2.2 An implementation overview . . . . . . . . . . . . . 53

Chapter 6. ESS Copy Services and Linux . . . . . . . . . . 61
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . 62
6.2 Peer-to-Peer Remote Copy (PPRC) . . . . . . . . . . . 63
6.3 PPRC Extended Distance (PPRC-XD) . . . . . . . . . . . 65
6.4 FlashCopy . . . . . . . . . . . . . . . . . . . . . . 68
6.5 ESS Copy Services Web User Interface . . . . . . . . . 71
6.6 Copy Services command line interface . . . . . . . . . 76
6.7 ESS Copy Services available with Linux on zSeries . . 77

Related publications . . . . . . . . . . . . . . . . . . . 79
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . 79
Other publications . . . . . . . . . . . . . . . . . . . . 79
Online resources . . . . . . . . . . . . . . . . . . . . . 79
How to get IBM Redbooks . . . . . . . . . . . . . . . . . 80
Help from IBM . . . . . . . . . . . . . . . . . . . . . . 80

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


Figures

3-1  IBM eServer zSeries . . . . . . . . . . . . . . . . . 24
3-2  Running Linux on zSeries in an LPAR or as a z/VM guest . . . 25
3-3  The relationship between IFLs and LPARs . . . . . . . 26
3-4  Configuration of CPs for a zSeries running z/OS and Linux under z/VM . . . 26
3-5  z/VM virtualization . . . . . . . . . . . . . . . . . 28
3-6  Server farm in a box . . . . . . . . . . . . . . . . . 30
3-7  Horizontal server growth . . . . . . . . . . . . . . . 30
3-8  Samba file/print scenario - distributed solution . . . 32
3-9  Samba file/print scenario - zSeries solution . . . . . 33
4-1  IBM's Seascape® architecture - ESS Model 800 . . . . . 38
4-2  ESS Model 800 major components . . . . . . . . . . . . 39
4-3  IBM TotalStorage Enterprise Storage Server for storage consolidation . . . 40
4-4  Enterprise Storage Server Model 800 . . . . . . . . . 41
4-5  Disaster recovery and availability . . . . . . . . . . 42
4-6  Storage area network (SAN) . . . . . . . . . . . . . . 44
4-7  ESS welcome screen . . . . . . . . . . . . . . . . . . 45
5-1  Typical zSeries storage attachment concept . . . . . . 48
5-2  SCSI addressing . . . . . . . . . . . . . . . . . . . 50
5-3  World Wide Names . . . . . . . . . . . . . . . . . . . 50
5-4  DASD addressing scheme . . . . . . . . . . . . . . . . 51
5-5  DASD access translations FICON vs. FCP . . . . . . . . 52
5-6  Storage Allocation panel . . . . . . . . . . . . . . . 54
5-7  Open Systems Storage panel . . . . . . . . . . . . . . 54
5-8  Modify Host Systems panel - add the Linux on zSeries host . . . 55
5-9  Configure Host Adapter ports panel . . . . . . . . . . 56
5-10 Fixed Block Storage panel . . . . . . . . . . . . . . 57
5-11 Add volumes (panel 1 of 2) . . . . . . . . . . . . . . 58
5-12 Add volumes (panel 2 of 2) . . . . . . . . . . . . . . 58
6-1  ESS Copy Services for zSeries . . . . . . . . . . . . 62
6-2  Synchronous volume copy PPRC . . . . . . . . . . . . . 63
6-3  PPRC volume states - synchronous mode of operation . . 65
6-4  PPRC-XD operation . . . . . . . . . . . . . . . . . . 66
6-5  Duplex-pending XD volume state . . . . . . . . . . . . 67
6-6  PPRC-XD Basic operation . . . . . . . . . . . . . . . 68
6-7  FlashCopy point-in-time copy . . . . . . . . . . . . . 70
6-8  Incremental FlashCopy . . . . . . . . . . . . . . . . 71
6-9  ESS Welcome screen . . . . . . . . . . . . . . . . . . 72
6-10 Main menu ESS Copy Services Web User Interface . . . . 72
6-11 Volumes panel . . . . . . . . . . . . . . . . . . . . 73
6-12 Logical Subsystems panel . . . . . . . . . . . . . . . 74
6-13 Paths entry panel . . . . . . . . . . . . . . . . . . 75
6-14 Tasks panel . . . . . . . . . . . . . . . . . . . . . 76


Tables

2-1  WebSphere for Linux product services matrix . . . . . 14
2-2  IBM Tivoli Security Management product matrix for Linux . . . 16
2-3  IBM Tivoli Storage Management product matrix for Linux . . . 16
2-4  IBM Tivoli Configuration Manager product matrix for Linux . . . 16
2-5  Tivoli Workload Scheduler product matrix for Linux . . 16
2-6  IBM Tivoli Performance and Availability product matrix for Linux . . . 17
2-7  Tivoli Enterprise Console product matrix for Linux . . 17
2-8  IBM Tivoli NetView product matrix for Linux . . . . . 17
2-9  IBM Tivoli Service Level Advisor for Linux product matrix . . . 18
2-10 Lotus support matrix for Linux . . . . . . . . . . . . 18
2-11 zSeries Linux information matrix . . . . . . . . . . . 19
2-12 iSeries Linux information matrix . . . . . . . . . . . 20
2-13 xSeries Linux information matrix . . . . . . . . . . . 20
2-14 IBM Web site matrix for each Linux platform . . . . . 21
4-1  IBM Storage for Linux summary . . . . . . . . . . . . 37
6-1  Managing PPRC, PPRC-XD, and FlashCopy on Linux for zSeries . . . 77


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

IBM™, e-business on demand™, eServer™, ibm.com®, iSeries™, pSeries™, xSeries®, z/Architecture™, z/OS®, z/VM®, zSeries®, BladeCenter™, CICS®, Domino™, DB2®, Enterprise Storage Server®, ESCON®, FlashCopy®, FICON™, HiperSockets™, IBM TotalStorage Proven™, IBM®, Lotus Notes®, Lotus®, Multiprise®, MQSeries®, NetView®, Notes®, NUMA-Q®, OS/390®, Parallel Sysplex®, PowerPC®, Redbooks™, S/390®, Seascape®, System/360™, Tivoli Enterprise™, Tivoli Enterprise Console®, Tivoli®, TotalStorage®, WebSphere®, Redbooks (logo)™

The following terms are trademarks of other companies:

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.


Preface

IBM® was committed to Linux long before it became trendy. This early recognition of the potential of Linux explains why today IBM offers the widest range of platforms and products for deploying solutions on Linux, along with support and services.

IBM TotalStorage® products are known for their high quality and reliability, and they work well with Linux. As part of a well designed, Linux-based e-business infrastructure, they can help you cut costs, consolidate infrastructure, and position you for the new on-demand world.

This IBM Redbook presents high-level information on Linux in conjunction with IBM servers and TotalStorage products, giving proof points that these products can be deployed together to provide enterprise-class solutions. In particular, this book looks at Linux with the zSeries® servers and the IBM TotalStorage Enterprise Storage Server® (ESS) disk products. The book describes the Fibre Channel Protocol (FCP) support now available when connecting the ESS to a zSeries host running Linux. The last chapter describes the ESS Copy Services available for Linux on zSeries.

This redbook is intended as a starting point and reference for IBM representatives, Business Partners, or clients who are planning Linux-based solutions with IBM zSeries servers and ESS storage products.

Most of the information contained in this book is a compilation of material from the Linux Handbook, SG24-7000, and Implementing Linux with IBM Disk Storage, SG24-6261-01. We encourage the reader to refer to these redbooks for more complete information or for implementation details.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Bertrand Dufrasne is a Certified Consulting I/T Specialist and Project Leader for Disk Storage Systems at the International Technical Support Organization, San Jose Center. He has worked at IBM for 21 years in many IT areas. Before joining the ITSO, he worked for IBM Global Services in the US as an IT Architect. He holds a degree in Electrical Engineering.

Don Skilton is a Storage Systems Specialist with IBM Global Services, Australia. He has a degree in Mathematics, and joined IBM Australia in 1985. He has been working with mainframe storage software and hardware for the last 15 years. Areas of specialization in recent years include Tivoli® Storage Manager (especially for Linux on zSeries clients), storage management process engineering, conversion of third-party tape management systems to IBM's DFSMSrmm, and data center relocations.

Jonathan Wright is a Technical Specialist in New Zealand. He has 10 years of experience in the Intel server and storage field. His areas of expertise include xSeries® hardware, Linux, clustering, and FAStT Storage.

Thanks to the following people for their contributions to this project:

Cristina Zabeu, IBM Linux and Storage Solutions Market Leader

© Copyright IBM Corp. 2003. All rights reserved.

xi

Mary T. Morris, IBM WW Linux Sales Leader

Nick Harris, Ralph Cooley, Cameron Hunt, Randy Kuseke, Dan Lacine, Tomomi Takada, Bob Waite, Dirk Webbeler, and Alexander Zaretsky, original authors of the Linux Handbook: A Guide to IBM Linux Solutions and Resources, SG24-7000.

Ronald Annuss, James Goodwin, Paul McWatt, and Arwed Tschoeke, original authors of the IBM Redbook Implementing Linux with IBM Disk Storage, SG24-6261-01.

Mark S. Fleming, David Fosmire, IBM SSG Tucson

Gustavo Castets, Maritza M. Dubec, International Technical Support Organization, San Jose Center

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at:
  ibm.com/redbooks
- Send your comments in an Internet note to:
  [email protected]
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. QXXE Building 80-E2
  650 Harry Road
  San Jose, California 95120-6099


Chapter 1. Introduction to Linux

Linux is a UNIX-like open source operating system. It was the original creation of Linus Torvalds from Helsinki, Finland in 1991. He wrote the first kernel, which is the underlying program that interfaces with and runs the computer hardware.

Since the Linux source code is freely available, several companies have developed different distributions of Linux. A distribution is a complete system. The key component is the Linux kernel. Other utilities, services, and various applications can be included as well, depending on the distribution and the intended use. There is no standard distribution. Each distribution that is available has unique advantages.

IBM was early to recognize the value of Linux by:

- Investing in Linux-related product development
- Forming alliances with key Linux distributors
- Contributing to the open-source community
- Aggressively supporting the platform

IBM believes this investment will benefit its customers as they continue to exploit Linux for their IT infrastructures and e-business.

This chapter provides a brief introduction to Linux, and a historical perspective on its origins and relationship to open source and the GNU General Public License. The chapter continues with a high-level description of the Linux operating system and its components, where Linux fits in the IT world, and Linux distributions.


1.1 The foundation for Linux

We know that Linux refers to a UNIX-like operating system, so let us begin with a brief overview of the development of portable operating systems and open source to see the context in which Linux evolved.

1.2 The GNU General Public License In 1969, several AT&T Bell Labs employees1 began work on an operating system (OS) that would be called UNIX. A significant novelty of their development was that the OS was portable across hardware platforms. At the time, AT&T, for various legal reasons, was permitting free academic access to the source code to UNIX while charging over $20,000 (in 1976 dollars!) for commercial or government access. AT&T then halted publication of the source code in university texts as this revealed proprietary Bell Labs code. The era of collaborative programming had arrived. In the US, the academic variant of interest to UNIX became the Berkeley Systems Distribution (BSD),2 where virtual memory and networking were added. These advancements permitted large collaborative projects with contributors scattered throughout the world. Lawsuits eventually ensued among AT&T, the Regents of the University of California, and other parties over access to and distribution of the OS source code. Such constraints on intellectual property rights to code provided strong motivation for one researcher from the Artificial Intelligence Laboratories at the Massachusetts Institute of Technology to write an operating system that was both portable, and also would be licensed in a manner that would prevent its eventual constraint by intellectual property claims. The new OS was to be named GNU, a recursive acronym for “Gnu’s Not UNIX.” This work would be “copylefted” instead of copyrighted, licensed under the GNU General Public License (GPL), which stipulates that all programs run under or derived from GNU must have their source code published freely, relinquishing rights of control while retaining rights of ownership. This was the birth of free (as in freedom) software, in contrast to software in the public domain. 
By this time, vendors such as Sun, Hewlett-Packard, and IBM had proprietary commercial offerings derived from licensed AT&T UNIX that were gaining popularity with corporate customers. The nascent GNU development effort began by making tools such as editors, compilers, and file utilities available in source form that could be compiled and executed on any platform, standardizing and improving upon those offered by commercial vendors. Around 1990, programmers had contributed a nearly complete operating environment to GNU, with the exception of a kernel. The GNU kernel was to be based on a microkernel architecture for improved portability.

In the meantime, a small-scale UNIX-like skeleton of an operating system[3] called Minix was published in a text to be used as a teaching tool. It is here that Linus Torvalds enters the story. He decided to write a UNIX-like OS with improved functionality over that of Minix to run on readily available personal computers. He and colleague Lars Wirzenius published their source code under the GPL on the Internet for public comment, and Linux was born.

Linux was a kernel without utilities, GNU was an operating environment lacking a finished kernel, and unencumbered non-kernel BSD pieces were available to complete the picture. In short order the components were combined with installation and maintenance tools and made available by distributors; the first of serious note was Slackware in 1993, followed by many others, making the GNU/Linux (or simply Linux, as it has since come to be known) combination readily available. In only a few years, a worldwide Linux community evolved, comprised of programmers and users attracted by the reliability and flexibility of this "free" operating system. The term "open source" began to replace the term "free software" as the commercial adoption of GNU/Linux grew. There is in fact a difference, upon which hinges the fate of commercial ventures in this arena.

[1] Ken Thompson, Dennis Ritchie, and J.F. Ossanna
[2] Twenty Years of Berkeley UNIX: From AT&T-Owned to Freely Redistributable, Marshall Kirk McKusick, in Open Sources: Voices from the Open Source Revolution, O'Reilly, 1999, ISBN 1-56592-582-3
[3] Dr Andrew S. Tannenbaum, Vrije Universiteit, Amsterdam, The Netherlands

1.3 Open source

Linux was developed under GNU, which has articulated a philosophy that defines "free code" in terms of the user's right to use the code, rather than defining what users cannot do, which is the case with proprietary software. The GPL allows the user to alter, distribute, and even sell the code covered under the license, as long as they allow those to whom they distribute the code to do the same. The General Public License promotes free code on the GNU Web page. It also provides protection for the developer and prevents a user from altering the code and then asserting proprietorship over it. This does not mean the code cannot be sold. According to the GNU Web site, "free software" allows a user to run, copy, distribute, study, change, and improve the software. It must also be available for commercial use.

Standards enable communication among heterogeneous software and systems. Open source can be viewed as a manifestation of this process, and the process itself as a necessity for extending the development of inter-networking. When there is a need for new software or hardware support, or a defect is discovered and reported, the software (creation or correction) can be swiftly written by the user who requires the changes (or by the original author), with no need for a design review, cost analysis, or other impositions of a centralized development structure. This is made possible by open source code. IBM has recognized the efficacy of this community and sees the benefit of the rapid and innovative development of robust and stable code to provide the enabling layer for e-business applications.

As a result of the evolutionary development of Linux, pieces of the code are located on various Web sites. Without some integration, it is difficult to install and upgrade the product, keep track of module dependencies, and acquire drivers for the hardware.
Distributions provide coherent bundles of software in one location that furnish the capabilities needed for a usable server or desktop. Generally, the products of different distributors have much in common, but a given distributor may not offer particular features or functions that you require.

1.4 Linux: The operating system

Linux is a fully functional operating system similar to a UNIX system, with all the standard features of enterprise UNIX systems. Management of the command structure is enabled through shells. We mention two of the many advanced shells that are available for Linux: the Bourne Again Shell (bash) and the tcsh shell. Both shells support a complete shell-based programming language that you can use to create shell scripts.

There are four fundamental components of Linux:

- Kernel: Provides program and hardware device management (such as disks and printers)

Chapter 1. Introduction to Linux


- Shell: Receives commands from the user and sends them to the kernel for execution through a standard user interface
- File structure: Files are organized into directories on the disk hardware; each directory may contain any number of subdirectories, each holding files. This structure is similar to PC operating system file structures
- Utilities

In some solutions, typically with clusters, the Linux server does not need the traditional PC hardware BIOS; the hardware is directly controlled by Linux. This provides phenomenal boot-up times (three seconds is the current record).

Linux has the same multiuser and multitasking capabilities as large UNIX servers, and it provides the same level of system administration that you find on standard UNIX systems. Users can run several programs concurrently. You can create user accounts for different users and define their access rights to files and system components. Installation of new devices, and network connection and control, are also provided as standard in the Linux operating system.

As a development environment, Linux has a powerful set of development tools for creating industrial-strength applications. The development toolset includes the GNU C Compiler. The Linux structure gives programmers access to the hardware of the computer and the networks to which it is connected. This is achieved through a hardware abstraction layer where programs can take advantage of hardware features through a standard application programming interface (API). Linux programs can be portable to other versions of UNIX systems. Linux can use ANSI C, combined with one of several portable graphical user interface (GUI) toolkits, so that programs can be written for both UNIX systems and Windows servers.
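As a small illustration of the shell-based programming language mentioned above, the following bash script sketch uses a variable, a function, and a loop (the component names echoed here simply restate the list in this section):

```shell
#!/bin/bash
# Demonstrate bash scripting basics: a variable, a function, and a loop.

components="kernel shell file-structure utilities"

describe() {
    # $1 is the component name passed as the first argument
    echo "Linux component: $1"
}

for c in $components; do
    describe "$c"
done
```

Saved as a file and marked executable with `chmod +x`, the script prints one line per component; the same constructs scale up to full system administration scripts.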

1.5 Where Linux fits in

The adoption of Linux has been a rapid and changing process, and it can be difficult to determine where Linux fits into the modern IT world. Linux plays a role in almost every area and is quickly making inroads where traditional operating systems ruled for years.

1.5.1 Working with other operating systems

Linux is not used only as the single operating system on a given computer system. It is also frequently used to populate virtual partitions on larger, non-Intel based systems such as the IBM iSeries™, pSeries™, and zSeries servers. Virtual or logical partitions provide:

- Server consolidation: An organization with existing Linux or UNIX servers can save hardware and personnel costs by consolidating those servers into virtual partitions on an existing IBM eServer platform.
- High availability: Multiple virtual partitions can provide a level of availability and application-based load balancing while retaining the management benefits of consolidated hardware.
- Infrastructure: Linux virtual partitions can provide security and infrastructure services to other partitions that share the same virtual network.


1.5.2 Infrastructure

Linux enters mainstream markets by providing critical infrastructure services.

Web serving

The combination of Linux and Apache offers an attractive package for customers. It provides a low-cost, flexible solution for Web servers, with over 30% of the world's Web sites running this combination. The demand is now moving toward high-capacity Web sites that users can interact with and that support high transaction rates.

File and print serving

A basic Linux implementation is the provision of inexpensive facilities such as file and print services. Linux offers a rapid return on investment (ROI) in this part of the infrastructure space, and its management capabilities and low cost make this an easy solution to justify. Also, while this is an important environment, it does not typically have the operational importance of line-of-business applications, so it is a relatively safe place for businesses to test this new technology.

Domain name server (DNS) and DHCP

As a UNIX clone, Linux is well proven at hosting Berkeley Internet Name Daemon (BIND) name servers and Dynamic Host Configuration Protocol (DHCP) services.
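For illustration, a minimal BIND zone file for a hypothetical domain might look like the sketch below; every name, address, and timer value here is an invented example, not part of any real configuration:

```
$TTL 86400
@    IN  SOA  ns1.example.com. admin.example.com. (
         2003091501 ; serial
         7200       ; refresh
         3600       ; retry
         1209600    ; expire
         86400 )    ; negative caching TTL
@    IN  NS   ns1.example.com.
ns1  IN  A    192.0.2.1
www  IN  A    192.0.2.10
```

A zone file such as this is referenced from the server's named.conf and defines the authoritative records BIND serves for the domain.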

Router

Linux is capable of advanced routing using inexpensive commodity hardware. Also, some router vendors have chosen Linux as their embedded operating system.
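The basic step that turns a Linux host into a router is enabling IP forwarding in the kernel. A minimal sketch of the usual /etc/sysctl.conf entry follows; interface addressing and routing policy are configured separately:

```
# /etc/sysctl.conf - enable forwarding of packets between interfaces
net.ipv4.ip_forward = 1
```

The setting takes effect at boot, or immediately when loaded with the `sysctl -p` command as root.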

Firewall and Intrusion Detection Services (IDS)

Linux has been a popular provider of firewall and IDS services. Because of the advanced configuration and customization options, along with a small memory footprint, Linux has been an ideal solution for many organizations that want to avoid proprietary solutions.
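As an example of that configurability, a minimal packet-filtering policy in iptables-restore format might look like the following sketch; the allowed ports are arbitrary examples, and a real policy would be site-specific:

```
# Minimal iptables ruleset: drop inbound traffic by default,
# allow loopback, established connections, SSH, and HTTP.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```

A file in this format is loaded atomically with `iptables-restore`, which is how most distributions apply firewall rules at boot.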

1.6 Linux distributions

Linux is available in many configurations from a variety of distributors, and Linux advocates have strong preferences for one distribution over another. Distributions from, for example, Red Hat, SuSE, Mandrake, and Debian are presently available, each offering its own advantages in package management, installation, development environment, and ease of use of various tools. In this book we highlight only two Linux distributors. IBM does not favor any specific Linux distribution; instead, IBM pushes for the standardization of Linux in general.

1.6.1 Red Hat Enterprise Linux

- Red Hat Enterprise Linux AS: Formerly known as Red Hat Linux Advanced Server, this is the core operating system and infrastructure enterprise Linux solution from Red Hat. It supports up to 8 CPUs and 16 GB of main memory, and is certified by the US Defense Information Systems Agency (DISA) as Common Operating Environment (COE) compliant. It features high-availability clustering and IP load balancing capabilities, asynchronous I/O support, Linux Standard Base interface conformity, improved SMP integration, and reduced I/O memory copy overhead.


- Red Hat Enterprise Linux ES: Provides the core operating system and networking infrastructure especially suited for medium-scale, departmental applications.
- Red Hat Enterprise Linux WS: The desktop/client counterpart for either Enterprise Linux ES or AS.

Red Hat offers support and maintenance services for their enterprise distributions. For more information regarding Red Hat Linux, visit:

http://www.redhat.com

1.6.2 SuSE Linux Enterprise Edition

Founded in 1992, SuSE Linux is based in Nuremberg, Germany, and is the most widely used distribution in Europe. The latest versions have a new graphical installation tool to simplify the installation. Advanced features include hardware acceleration for 3-D graphics chips and a journaling file system. SuSE also develops and supports a series of X servers for newer, high-end graphics cards. The installation is available in German, English, Italian, and French. SuSE Linux comes with a large group of utilities and applications.

SuSE is available in business versions. These include SuSE Linux Enterprise Server (SLES); servers for zSeries, xSeries, pSeries, and iSeries; and versions that support Alpha, PowerPC®, and Intel platforms. SLES is a package distribution of UnitedLinux (see 1.6.3, "UnitedLinux") intended for server applications. For more information regarding SuSE Linux, visit:

http://www.suse.com

1.6.3 UnitedLinux

UnitedLinux is a consortium formed in May 2002, established to combine the development efforts of several distributors. The vision is simple: partners combine the best server operating system technology from leading distributors into a robust system with a single code base. In January 2003 IBM joined UnitedLinux as a technology partner.

The following UnitedLinux consortium members have identical UnitedLinux distributions:

- SuSE Linux Enterprise Server 8 (SLES 8)
- Turbolinux Enterprise Server 8 (TLES 8)
- Conectiva Linux Enterprise Edition Powered by UnitedLinux

All three companies provide distributions for Intel (xSeries), iSeries, pSeries, and zSeries. Turbolinux and Conectiva resell the SuSE distribution. Turbolinux markets primarily in Asia Pacific, while Conectiva markets primarily in Latin America. Turbolinux and Conectiva provide bug fixes to their customers within 30 days of their release by SuSE. SuSE is the lead UnitedLinux developer.

Key elements of the UnitedLinux distribution include POSIX standard asynchronous I/O (AIO); raw I/O enhancements that provide high-bandwidth, low-overhead SCSI disk I/O; and direct I/O, which moves data directly between the user space buffer and the device performing the I/O, avoiding expensive copy operations and bypassing the operating system's page cache.
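The effect of direct I/O can be observed from the command line with `dd`, whose `oflag=direct` option requests O_DIRECT writes that bypass the page cache. This is only a sketch: the file name and sizes are arbitrary, and some filesystems do not support O_DIRECT, so the example falls back to buffered I/O in that case:

```shell
# Write 8 MB using direct I/O, bypassing the page cache;
# fall back to ordinary buffered I/O if O_DIRECT is unsupported here.
dd if=/dev/zero of=/tmp/directio.test bs=1M count=8 oflag=direct 2>/dev/null ||
    dd if=/dev/zero of=/tmp/directio.test bs=1M count=8
ls -l /tmp/directio.test
rm -f /tmp/directio.test
```

With direct I/O the transfer rate reported by `dd` reflects the device itself rather than the speed of copying into the page cache, which is why databases and benchmark tools favor it.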


Other functionality in focus includes:

- Hyper-Threading, to enable multi-threaded server software applications to execute threads in parallel within each individual server processor
- Large memory support, which takes advantage of the Intel Physical Address Extension to support up to 64 GB of physical RAM and the full 4 GB of virtual address space per process
- Internet Protocol Version 6 (IPv6), the next-generation protocol designed by the IETF to replace the current version of the Internet Protocol
- LDAPv3, the latest available directory services protocol, for better user base management and application integration for mail servers and authentication servers


Chapter 2. IBM and Linux

This chapter discusses IBM's commitment to the Linux and open source community, and reviews available IBM services that help Linux customers gain the greatest advantage from a Linux-powered IBM solution. It conveys the scope of the tremendous acceptance and commitment Linux enjoys today. The resources portrayed here provide a good, consolidated starting point to more detailed information. We recommend that you read and learn about these resources to help you stay current with Linux commitments and new services, which continue to grow rapidly.

The information in this chapter was correct at the time this book was written. However, some of the information may change due to normal variations in the life cycle of each product. This is true for products other than Linux as well. This means that IBM may not support a specific product even though it may appear to be supported in this chapter. Therefore, we encourage you to investigate topics such as end-of-service dates, product withdrawals, and other support restrictions while gathering information from this chapter.

Important: The services portrayed here are for IBM customers only, and may not be available in some specific countries or regions. Contact your local IBM Global Services (IGS) representative for details.


2.1 IBM's commitment to Linux

The history of IBM's involvement with Linux is too large to cover fully in this work. IBM is not a distributor of Linux. IBM is a contributor to Linux, a supplier of servers that run Linux, and a provider of support for customers who choose to use Linux. IBM's commitment to Linux may best be illustrated with some of the many notable highlights over the last several years:

- As early as March 1999, IBM had announced key alliances, products, and global support for Linux, involving distributors such as SuSE Holding AG (which later co-founded UnitedLinux along with distributor Conectiva S.A.) and Red Hat Software Inc. The IBM ThinkPad 600 became the first notebook computer listed as supported hardware by Red Hat.
- Linux made its official debut on S/390® servers in May 2000, with commercial deployment by a major ISP for server consolidation following shortly thereafter. In June, SuSE announced Enterprise Linux for the RS/6000.
- In 2001, IBM won top honors at LinuxWorld for zSeries and iSeries systems running Linux, completing the availability of Linux across the entire eServer™ product line. Complementing this was the announcement of a broad range of storage options for Linux servers, including the Enterprise Storage Server (ESS) and Fibre Array Storage Technology (FAStT) external storage arrays. IBM's Linux supercomputer systems were deployed in research and in oil and gas exploration, and prepackaged Linux clusters for e-business made their debut.
- In 2002, continued emphasis on robust computing for small and medium businesses, server consolidation, retail, network infrastructure, advertising, telecom, and life sciences strengthened IBM's position as the most committed Linux player across all market segments. By December 2002 IBM had made the eServer p630 available, the first pSeries system dedicated to Linux support. Internally, IBM began migrating its own business-critical services to Linux, demonstrating both an indisputable commitment to, and the scalability of, Linux e-business solutions.
- In 2003, IBM regularly announced new Linux hardware, ports of additional existing software to Linux, and new Linux services and support options.

IBM industry leadership supporting Linux

IBM has invested over $1 billion in the Linux community. The IBM Linux Technology Center, IBM Research, the Open Source Development Labs, and IBM Global Services consulting and support all contribute to the Linux and open source community. Linux support at IBM is growing across the entire product line, with support for TotalStorage disk and tape. IBM also supports middleware solutions such as DB2®, WebSphere, Lotus® Domino™, MQSeries®, Tivoli Storage Manager, and more, and continues to work closely with many leading Linux distributors.

Worldwide IBM Linux commitment

IBM has committed $200 million Linux initiatives in each of the Europe and Asia Pacific markets, including the creation of Linux Solution Porting Centers. These centers are located around the world, including Paris and Montpellier in France; Greenock and Hursley in the UK; Boeblingen in Germany; Warsaw in Poland; Budapest in Hungary; Tokyo, Shanghai, Beijing, Taipei, Seoul, and Bangalore in Asia Pacific; and U.S. centers in San Mateo, CA; Chicago; and Waltham, MA. The centers allow customers and key ISVs to transition their applications to Linux. IBM supports application development on all IBM servers.


2.1.1 The IBM Linux Technology Centers

The IBM Linux Technology Center (LTC), whose mission is to work directly with the Linux development community with a shared vision of making Linux succeed, has one of the industry's most comprehensive Linux development teams. The LTC is a proven, accepted part of the open source Linux community, successfully bringing enterprise features into Linux and thereby providing Linux technical expertise to the community.

The LTC has a team of more than 250 members, located in centers around the world: Adelaide, Austin, Bangalore, Beaverton, Beijing, Bethesda, Boeblingen, Boston, Boulder, Cambridge, Canberra, Chicago, Endicott, Hawthorne, Hursley, Kirkland, Mt. Laurel, New York City, Portland, Poughkeepsie, Raleigh, Rochester, San Francisco, Somers, Urbana-Champaign, Yamato, and Yorktown. The Linux Technology Center can be found on the Web at:

http://www.ibm.com/linux/ltc

IBM contributes JFS to Linux

IBM's journaled file system (JFS) technology, currently used in IBM enterprise servers, was developed for transaction-oriented, high-performance, high-throughput server environments, and is key to running intranet and other high-performance e-business file servers. IBM is contributing this technology to the Linux open source community in the hope that some or all of it will be useful in bringing the best of journaling capabilities to the Linux operating system.

JFS is a log-based, byte-level file system. It is scalable and robust, and provides fast file system restarts in the event of a system crash. JFS currently ships with many Linux distributions, including Red Hat and SuSE Linux. More information about JFS for Linux can be found at:

http://www-124.ibm.com/jfs/

EVMS

The LTC is also working on volume management technology in the form of the Enterprise Volume Management System (EVMS), an effort aimed at providing more volume management capability and interoperability in the kernel. For information on EVMS, refer to:

http://evms.sourceforge.net/

2.2 Linux services and application solutions

Although the general opinion is that Linux has a limited number of enterprise-level applications available, the number of Linux applications featured in the IBM Global Solutions Directory has grown to more than 2,900. IBM has ten porting centers around the world and is continually enhancing this portfolio. For more in-depth information regarding Linux application solutions, refer to Linux Handbook: A Guide to IBM Linux Solutions and Resources, SG24-7000-00.


2.2.1 IBM Services

IBM's main focus is the creation, development, and manufacturing of the industry's most advanced information technologies, including computer systems, software, networking systems, storage devices, and microelectronics. IBM Global Services translates these advanced technologies into business value for customers and helps make information technology (IT) easy to acquire and manage. To learn more about IBM Global Services, see:

http://www.ibm.com/services/

2.2.2 IBM Global Services

IBM helps customers optimize the business value of Linux-based solutions through cost-effectiveness, choice, and competitive advantage. IBM has experienced Linux consultants and architects engaged in thousands of open source projects worldwide. With this, IBM can help customers in all industries enhance the resiliency, scalability, stability, and security of their Linux deployments. IBM Global Services is the largest part of IBM, with over 140,000 employees in 164 countries or regions. It is widely recognized as the largest service company in the world.

2.2.3 IBM Support Line

IBM offers enterprise-grade support for Linux through the Internet, and voice support 24 hours a day, 7 days a week. This support ranges from answering usage questions to identifying problems, all revolving around the customer's needs. Recent analyst studies have shown that, among the most important concerns related to Linux implementations, support is an important consideration for both solution and vendor selection. IBM offers world-class support for Linux as a standard offering, with several options, including the ability to customize support according to the skills and experience of the customer's current IT staff.

Support Line is IBM's support offering for e-infrastructure software. It is an established, worldwide, proven service that covers IBM and non-IBM software products. Support Line offers:

- Years of experience in worldwide support for IBM and non-IBM operating systems
- A worldwide support structure
- The right tools
- The right people
- The right alliances

IBM Operational Support Services: Support Line for Linux

This fee-based service provides remote Linux technical assistance to answer usage, installation, and technical questions, to address product compatibility and interoperability concerns, and to isolate and diagnose problems. IBM Linux engineers are available 24 hours a day to provide defect support (fixes). Support Line provides consistent, cross-platform Linux support for IBM platforms and Intel/AMD OEM hardware. With this service, customers receive telephone and electronic access to IBM services specialists.


IBM Operational Support Services: Support Line for Linux Clusters

This fee-based service provides remote Linux cluster technical assistance to answer usage, installation, and technical questions, to address product compatibility and interoperability concerns, and to isolate and diagnose Linux cluster problems. It provides specialized Linux cluster support for open source and OEM components, and offers remote problem identification and technical support for your Linux clustered environments. This service complements your in-house skills by providing comprehensive knowledge of Linux cluster environments.

IBM Operational Support Services: Advanced Support

IBM's Advanced Support provides the highest level of remote support available through IBM. The customer receives 24x7 coverage, 365 days a year. This service is tailored to meet unique customer needs for continuous, business-critical Linux system operation.

IBM Operational Support Services: Custom Linux Support

IBM offers remote Linux engineering support for creating modified versions of Linux. This service provides full remote support once you implement a custom distribution throughout your enterprise or customer base. By leveraging IBM's vast Linux industry knowledge, IBM helps you avoid serious pitfalls that can occur when you create your own distribution, and helps you understand the necessary interdependent packages and required modules.

IBM has the critical mass to deliver support teams in multiple worldwide locations. It has used some of its best talent in multiple locations to create the Change Team, and can draw on the skills of over 200 key members of the Linux Technology Center found in more than 20 locations worldwide. You can learn more about the Support Line at:

http://www.ibm.com/services/its/us/supportline.html

2.2.4 Linux consultancy

Linux acceptance in the marketplace is rapidly moving forward. In a relatively short time, Linux has gained credibility and is being deployed widely for Web serving and for scientific and technical computing clusters. It is now moving as well into mission-critical areas such as workload consolidation, retail point-of-sale, and financial branch automation. IBM is committed to providing the same level of support normally associated with enterprise computing environments as Linux continues to move into those key business and industry areas.

IBM's Linux service offerings are designed to help Linux customers achieve better flexibility and cost-to-benefit ratios. IBM Global Services offers one of the industry's most comprehensive portfolios of Linux consultative and support offerings, from planning and design to implementation and technical support. IGS has offered a full portfolio of Linux services since February 2001. Over 300 IBM consultants skilled in Linux are available worldwide to help customers design, build, and enhance their Linux solutions. For information about IBM's services and support for Linux, refer to the IBM Linux services and support Web site at:

http://www.ibm.com/services/its/us


2.3 IBM services and software

IBM is rolling out services to help customers rapidly deploy and optimize Linux-based e-business environments. IBM services for Linux cover Software Group products such as WebSphere®, Data Management (DB2), Lotus, and Tivoli. IBM is also delivering migration services to help customers port applications to a wide variety of IBM and non-IBM platforms running Linux.

2.3.1 WebSphere

Many customers need WebSphere as a front end to their legacy applications. Customers need to access legacy applications (for example, CICS®) through a browser interface to provide users (internal or external) with Web-enabled access to multiple existing applications. They also want solutions that do not require a long development cycle and that support an open-standards-compliant integrated infrastructure. Customers want to leverage existing transactional applications, database assets, and existing investment in hardware platforms with superior scalability characteristics on which to run their new applications. Every customer with an IBM mainframe capable of having the Integrated Facility for Linux (IFL) presents a good fit for WebSphere for Linux. IFL-capable systems are the 9672 G Series (G5 and G6), Multiprise® 3000, zSeries 800, and zSeries 900 servers.

IBM provides a variety of services for WebSphere, from migration to specific training in all WebSphere products. For WebSphere Software Services, see:

http://www.ibm.com/software/ad/vaws-services/websphere.html

For more information about WebSphere training and technical enablement, see: http://www.software.ibm.com/wsdd/education/enablement/

IBM also provides support for the wide range of WebSphere products made to fit the Linux environment. Table 2-1 lists references where you can find support for WebSphere products that run on Linux.

Table 2-1 WebSphere for Linux product services matrix

- WebSphere Application Server (Advanced Edition 4.0 for Linux on zSeries):
  http://www.ibmlink.ibm.com/cgi-bin/master?xh=a3KRKds8D5usfr1USenGnN9332&request=announcements&parms=H%5f201%2d366&xfr=N#div19
- WebSphere Edge Server (IBM WebSphere Edge Server V2.0 for Linux multiplatform):
  http://www.ibm.com/software/webservers/edgeserver/support.html
- WebSphere Commerce Suite (WCS Business Edition for Linux Intel and Linux on zSeries):
  http://www.ibm.com/software/webservers/commerce/wc_be/support.html
- WebSphere Commerce Suite (WCS Professional Edition for Linux Intel):
  http://www.ibm.com/software/webservers/commerce/wc_pe/support.html
- WebSphere Personalization (WebSphere Personalization 4.0 for Linux on zSeries):
  http://www.ibm.com/software/webservers/personalization/support.html
- WebSphere MQ, formerly MQSeries (MQ V5.2 for Linux on Intel and zSeries):
  http://www.ibm.com/software/ts/mqseries/support/index.html
- WebSphere Portal (WebSphere Portal 4.1 for Linux Intel, Red Hat or SuSE 7.1):
  http://www.ibm.com/software/webservers/portal/support.html
- WebSphere Studio Application Developer (WSAD V4.0 for Red Hat/SuSE 7.1 or higher on Intel):
  http://www.ibm.com/software/ad/studioappdev/support/
- WebSphere Transcoding Publisher (Red Hat Linux V7.1 or SuSE 6.4 on Intel):
  http://www.ibm.com/software/webservers/transcoding/support.html

2.3.2 DB2 for Linux

DB2 for Linux Beta Version 8.1 is now available. You can learn more about this release on the IBM DB2 for Linux Web site at:

http://www.ibm.com/software/data/db2/linux/validate/

IBM offers the following services related to DB2 for Linux:

- Remote DB2 installation: This service is supported for DB2 for Linux running under a logical partition (LPAR), a guest system (VM), or a stand-alone Linux. For more information, see:

  http://www.ibm.com/services/its/us/drmklm12.html

  IBM can also help customers migrate to DB2 for Linux. Learn about a variety of options and find helpful information on the Web at:

- DB2 for Linux support: For technical support, product information, and all other DB2 for Linux support-related issues, go to:

  http://www.ibm.com/software/data/db2/udb/support.html

For information about DB2 consulting and other services, refer to:

http://www.ibm.com/services/

2.3.3 Tivoli

Customers’ growing use of Linux systems within their Tivoli-managed environments has extended IBM’s commitment to scale Linux services and support to Tivoli products. IBM has announced Linux enablement for its Tivoli security software and Web management products since the summer of 2002. For more information, see:

http://www.ibm.com/software/tivoli/solutions/linux/

Linux enablement was also extended to Tivoli Storage Management, configuration, operations, performance, and availability products. For details on the Tivoli products that currently support Linux, refer to:

http://www.ibm.com/software/tivoli/

Services for Linux-supported Tivoli products, including consulting, training, and certification, are available on the Tivoli services Web site at:

http://www.ibm.com/software/tivoli/services/

Tivoli’s customer support is quite extensive and complete. Some services are available only for registered users. You can find Tivoli Customer Support on the Web at:

Chapter 2. IBM and Linux


http://www.ibm.com/software/sysmgmt/products/support/

Product support matrices for Tivoli

Table 2-2 through Table 2-9 on page 18 describe the Tivoli product support matrices for Linux for each area of Tivoli.

IBM Tivoli Security Management

Table 2-2 lists product detail for Tivoli Security Management.

Table 2-2 IBM Tivoli Security Management product matrix for Linux

  Tivoli Access Manager for e-business:
  Tivoli Access Manager for SuSE SLES7 for zSeries

IBM Tivoli Storage Management

Table 2-3 lists product details for Tivoli Storage Management.

Table 2-3 IBM Tivoli Storage Management product matrix for Linux

  Tivoli Storage Manager:
  TSM 5.15 Server for Linux on the x86 platform
  TSM 5.15 Clients for Red Hat 7.1 and 7.2 x86
  TSM 5.15 Clients for SuSE 7.1, 7.2, and 7.3 x86

IBM Tivoli configuration and operations

Table 2-4 lists product details for Tivoli configuration and operations.

Table 2-4 IBM Tivoli Configuration Manager product matrix for Linux

  IBM Tivoli Configuration Manager:
  TCM Server for SuSE 7.2 on x86
  TCM Server for Red Hat on zSeries
  TCM Gateway for SuSE 7.2 on x86
  TCM Gateway for SuSE 7.0 and Red Hat on zSeries
  TCM Web Gateway for Red Hat 7.1 on x86
  TCM Endpoint on Red Hat 7.1, 7.2, and SuSE 7.2 on x86
  TCM Endpoint on Red Hat and SuSE 7.0 on zSeries

IBM Tivoli Workload Scheduler

Table 2-5 lists product details for the Tivoli Workload Scheduler.

Table 2-5 Tivoli Workload Scheduler product matrix for Linux

  IBM Tivoli Workload Scheduler:
  TWS V8.1 Job Scheduler Console on Red Hat 7.1 for x86
  TWS V8.1 Job Scheduler Connector on Red Hat 7.1 for x86


IBM Tivoli Performance and Availability Management products

IBM Tivoli Performance and Availability Management products for Linux consist of:

- IBM Tivoli Monitoring
- IBM Tivoli Enterprise™ Console
- IBM Tivoli NetView®
- IBM Tivoli Service Level Advisor

The details for each product are provided in the following tables. Table 2-6 lists product details for Tivoli Monitoring.

Table 2-6 IBM Tivoli Performance and Availability product matrix for Linux

  IBM Tivoli Monitoring:
  IBM Tivoli Monitoring Server, Gateway, and Endpoint for Red Hat 7.0 and 7.1 on Intel
  IBM Tivoli Monitoring Server, Gateway, and Endpoint for TurboLinux 6.1 and 6.5 on Intel
  IBM Tivoli Monitoring Server, Gateway, and Endpoint for SuSE Linux 7.1 and 7.2 on Intel

IBM’s Tivoli Enterprise Console® consists of the following components:

- Event Server
- Gateway
- Endpoint Adapter
- UI Server
- Event Console

Table 2-7 outlines the support availability for each Tivoli Enterprise Console component on Linux.

Table 2-7 Tivoli Enterprise Console product matrix for Linux

  Red Hat Linux for Intel 7.1 and 7.2:
  Event Server, Gateway, Endpoint Adapters, UI Server, Event Console

  SuSE Linux for Intel 7.0 and 7.1:
  Event Server, Gateway, Endpoint Adapters, UI Server, Event Console

  Turbo Linux for Intel 7.0:
  Event Server, Gateway, Endpoint Adapters, UI Server, Event Console

Table 2-8 lists product details for IBM Tivoli NetView.

Table 2-8 IBM Tivoli NetView product matrix for Linux

  Tivoli NetView:
  Tivoli NetView 7.1.2 for Red Hat or SuSE 7.1 on Intel


Table 2-9 lists product details for IBM Tivoli Service Level Advisor.

Table 2-9 IBM Tivoli Service Level Advisor product matrix for Linux

  Tivoli Service Level Advisor:
  Service Level Advisor SLM component for Linux Red Hat or SuSE 7.1 on Intel
  Service Level Advisor Reports Server for Linux Red Hat or SuSE 7.1 on Intel

2.3.4 Lotus

To support and enhance your Notes® and Domino environment, Lotus offers a full range of professional services, including consulting, education, and customer support. See IBM’s Lotus Software site for more information:

http://www.lotus.com/lotus/products.nsf/fa_prohomepage

Table 2-10 outlines the Lotus family services and support matrix for Linux.

Table 2-10 Lotus support matrix for Linux

  Lotus Notes® Domino:
  Lotus Notes Domino V6 on Red Hat Linux 7.2 or SuSE Linux 8.0 on Intel
  Service and support: http://www.lotus.com/products/rnext.nsf/873769A79D9C5B2285256A0800720B96/478B755CB7C8CB9C85256C400005EF2A?OpenDocument&cwesite=domfamserv

2.4 Platform services

The xSeries, zSeries, iSeries, and pSeries platforms all run Linux as an operating system today, alongside the other operating systems in the marketplace. Just as software-based services for Linux are needed, so is a well-founded, platform-based service and support structure, which is still growing. The IBM Linux Support Line provides Operational Support Services and a premier remote technical support service. For more information, see:

http://www-1.ibm.com/services/its/us/supportline.html

IBM provides technical support for the major distributions of the Linux operating system running on the xSeries, zSeries, iSeries, and pSeries platforms, as well as for some non-IBM applications that operate in a Linux environment. IBM helps answer how-to questions, performs problem source determination, and provides mechanisms for a solution. In addition, by leveraging partnerships with the key distributors of the Linux operating system, IBM provides defect-level support for the Linux operating system. Remote assistance is available through toll-free telephone or electronic access, depending on the country or region. IBM provides services for all currently supported xSeries (including BladeCenter™), zSeries, iSeries, and pSeries platforms, in varying degrees of depth and complexity, through its Linux portal. For more information, see:

http://www.ibm.com/linux/

As Linux continues to grow, so will IBM’s commitment to enhance current services or provide new ones.


IBM also provides Learning Services for customers who are interested in knowing more about Linux, and making it their key operating environment for e-business. IBM’s Linux education and training services offer a full curriculum and help to establish a personal Linux education plan. For details about IBM’s Linux Learning services and Linux Training, you can visit: http://www.ibm.com/services/learning/us/catalog/linux/

2.4.1 zSeries services

The IBM Managed Hosting for Linux on the zSeries service is intended to provide cost-effective managed server capacity on demand. If you are new to e-business on demand™, you can find more information on the Web at:

http://www.ibm.com/services/ondemand/thinking.html

With IBM Managed Hosting - Linux virtual services, the customer can tap into managed server capacity without the up-front expense of buying the physical hardware. Instead of the physical Web, database, and application servers that businesses currently rely on, virtual servers on the zSeries running Linux can be leveraged. This means that availability and reliability gain a boost while IT infrastructure is greatly simplified. If you are interested in finding out more about this service, refer to: http://www.ibm.com/services/e-business/hosting/mgdhosting/linux.html

Table 2-11 lists various resources for platform-based information about services that are available for the zSeries server.

Table 2-11 zSeries Linux information matrix

  Linux for zSeries applications:
  http://www.ibm.com/servers/eserver/zseries/solutions/s390da/linuxisv.html

  Linux for zSeries operating system download:
  http://www.ibm.com/servers/eserver/zseries/os/linux/dist.html

  Linux for zSeries development tools:
  http://www.ibm.com/servers/eserver/zseries/os/linux/ldt/

  Linux for zSeries library:
  http://www.ibm.com/servers/eserver/zseries/os/linux/press.html

  Linux for zSeries education:
  http://www.ibm.com/servers/eserver/zseries/os/linux/ed.html

2.4.2 iSeries services

IBM provides a variety of services for customers interested in consolidating their servers on the iSeries server. To learn more about these services, see:

http://www.ibm.com/servers/eserver/iseries/linux/advs.html

Table 2-12 lists various references for platform-based information about services that are available for the iSeries server.


Table 2-12 iSeries Linux information matrix

  Linux for iSeries applications:
  http://www.ibm.com/servers/eserver/iseries/linux/apps.html

  Linux for iSeries education:
  http://www.ibm.com/servers/eserver/iseries/linux/educ.html

  Linux for iSeries planning:
  http://www.ibm.com/servers/eserver/iseries/linux/plan.html

  Linux for iSeries developer resources:
  http://www.ibm.com/servers/eserver/iseries/linux/resc.html

  Linux for iSeries reference library:
  http://www.ibm.com/servers/eserver/iseries/linux/library.html

  Linux for iSeries support:
  http://www.ibm.com/servers/eserver/iseries/linux/support.html

2.4.3 pSeries services

The pSeries server makes a reliable foundation for fast 64-bit native Linux servers. IBM provides hardware and software support service for the new pSeries 630 6C4 and 630 6E4 Linux-ready express configurations that are now available. For information about these models, see:

http://www.ibm.com/content/home/store_IBMPublicUSA/en_US/eServer/pSeries/entry/6306C4.html
http://www.ibm.com/content/home/store_IBMPublicUSA/en_US/eServer/pSeries/entry/6306E4.html

IBM will provide more services for Linux for pSeries in response to growing customer interest in using Linux as the native operating system on this platform. For more information, see the IBM Linux for pSeries Web site at:

http://www.ibm.com/servers/eserver/pseries/linux/index.html

2.4.4 xSeries services

IBM is leveraging enterprise scalability and availability at attractive prices through xSeries Linux enablement. IBM’s combination of Linux with the xSeries has proven successful, and many resources are available to users who are interested in making Linux on the xSeries their key environment. Table 2-13 lists various references that provide platform-based information about services that are available for the xSeries server.

Table 2-13 xSeries Linux information matrix


  Linux for the xSeries information:
  http://www.pc.ibm.com/us/eserver/xseries/index.html

  eServer BladeCenter:
  http://www.ibm.com/servers/eserver/blades/

  Linux clustering on xSeries:
  http://www.ibm.com/servers/eserver/clusters/

  Linux for xSeries ISV resources:
  http://www.ibm.com/developerworks/offers/linux-speed-start/isv.html

2.4.5 Consolidated platform matrix

For more valuable information about Linux on all IBM eServer platforms, as well as the latest updates and newly available services, refer to the dedicated IBM Linux Web site for each platform, as listed in Table 2-14.

Table 2-14 IBM Web site matrix for each Linux platform

  zSeries: http://www.ibm.com/servers/eserver/zseries/os/linux/index.html
  iSeries: http://www.ibm.com/servers/eserver/iseries/linux/index.html
  xSeries: http://www.ibm.com/servers/eserver/xseries/linux/index.html
  pSeries: http://www.ibm.com/servers/eserver/pseries/linux/index.html


Chapter 3. zSeries and Linux

This chapter provides a brief overview of the IBM zSeries, with a focus on how to run Linux on the zSeries architecture.

Since Linux can run on zSeries as a z/VM® guest, this chapter contains background information on z/VM for the reader to get an understanding of the features and functions that their Linux installation would inherit in that case. Finally, this chapter discusses scenarios for scalability and consolidation of Linux solutions on zSeries.

© Copyright IBM Corp. 2003. All rights reserved.


3.1 Introduction

The IBM zSeries is the successor of the S/390 series and IBM’s premier server for e-business transaction workloads. The zSeries has a long history of running commercial applications with unmatched reliability and availability. In 1999, Linux was ported to S/390 and became available in early 2000. Since then, many new features and device support have been added, and more and more customers are using Linux for S/390 and zSeries, either to run new workloads or to consolidate servers. Linux for zSeries is supported on all zSeries machines, and on the predecessor S/390 models G5, G6, and MP3000. These are shown in Figure 3-1.

Figure 3-1 IBM eServer zSeries

3.1.1 Ways to run Linux on zSeries

There are three ways to run Linux on the zSeries:

- Basic or native mode: Linux uses the entire machine, without any other operating system. In basic mode, you can run a single Linux image on the zSeries “bare metal.”

- Logical partition (LPAR): You can logically partition the zSeries hardware into a maximum of 15 separate LPARs. A single zSeries, for example, can host z/OS® applications in one partition, VM in another, and Linux applications in additional partitions.

- z/VM: A customer can run Linux as a virtual machine, using z/VM as a hypervisor. z/VM provides virtualization of processors, input/output (I/O) subsystems, and memory. A customer running z/VM can run hundreds of Linux systems on a single zSeries. With z/VM, for instance, a customer can offer a complete Linux server environment to each application developer and host production systems, all on the same zSeries server.

The decision on how to run Linux on zSeries is typically made based on the expected workload:

- Few (if any) workloads require running Linux in basic mode.

- Linux workloads with large resource requirements (such as memory footprint and processing power) may be best suited to running in an LPAR. This is particularly true if the Linux systems will be running at peak workloads for extended periods, so that there is limited opportunity for resource sharing. Use of LPARs may also be appropriate when only a small number of images are required, since in this case the additional resource and support requirements of z/VM may not be justified.

- Running Linux as a z/VM guest offers maximum flexibility. The number of Linux guests running under z/VM can be adjusted to meet workload requirements; only the hardware resources limit the number of concurrently running instances. Most customers choose this option to maximize their investment.

The last two options are depicted in Figure 3-2. Note that this figure is only illustrative; in a real production environment you would also expect to have an LPAR running z/OS, as depicted in Figure 3-4 on page 26.
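To make the z/VM option concrete, each Linux guest is defined by an entry in the z/VM user directory. The sketch below is hypothetical: the user ID, password, storage sizes, volume label, and device numbers are all invented, and the exact statements required vary by z/VM level and installation, so treat it as an outline of the idea rather than a working definition.

```
* Hypothetical z/VM directory entry for one Linux guest
* (user ID, password, sizes, and device numbers are invented)
USER LINUX01 LNXPASS 256M 1G G
 MACHINE ESA
 IPL 0201
 CONSOLE 0009 3215
 SPOOL 000C 2540 READER *
 SPOOL 000D 2540 PUNCH A
 SPOOL 000E 1403 A
* Minidisk at virtual address 0201, carved from volume LXVOL1,
* holding the Linux root file system
 MDISK 0201 3390 0001 3338 LXVOL1 MR
```

Adding another guest is then largely a matter of copying such an entry with a new user ID and a new minidisk allocation, which is part of what makes growing a z/VM server farm so fast.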

Figure 3-2 Running Linux on zSeries in an LPAR or as a z/VM guest

3.1.2 S/390 and zSeries architecture

The heart of the zSeries and S/390 architecture is the multichip module (MCM), which contains up to 20 processing units (PUs). These are commonly referred to as central processing units (CPUs) or engines. All PUs are identical, but can take on different roles in the system. Each PU can be one of:

- Central Processor (CP): Used by the operating system for executing customer work.

- System Assist Processor (SAP): A PU that runs the channel subsystem Licensed Internal Code to control I/O operations.

- Integrated Facility for Linux (IFL): An IFL engine is restricted to running Linux or z/VM.

- Integrated Coupling Facility (ICF): This engine runs the Coupling Facility function for use in a Parallel Sysplex®.

The IFL engine

An IFL is a processor reserved for Linux, or Linux under VM. The significance is that it cannot be used to run other operating systems, and its existence is not reflected in the system model numbers, MIPS rating, or other power ratings. These power rating methods have significant implications for software costs: adding an IFL does not affect them, permitting the use of Linux without impacting other software costs.
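From inside Linux, the engines an image has been given (CPs and IFLs look the same to the operating system) can be counted from /proc/cpuinfo. The sample text below mimics the /proc/cpuinfo layout of 2.4-era s390 kernels and is an assumption rather than captured output; on a real system you would read the file directly instead of using a sample string.

```shell
# Sample /proc/cpuinfo content in the s390 layout (assumed format);
# each online engine appears as a "processor N:" line.
cpuinfo='vendor_id       : IBM/S390
# processors    : 2
processor 0: version = FF,  identification = 0466F5,  machine = 2064
processor 1: version = FF,  identification = 0466F5,  machine = 2064'

# Count the per-processor lines to see how many engines this image can use.
count=$(printf '%s\n' "$cpuinfo" | grep -c '^processor [0-9]')
echo "engines visible to this image: $count"
```

For the sample above, the snippet prints 2, matching the "# processors" line.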


Figure 3-3 on page 26 shows how IFLs may be allocated to logical partitions (LPARs).

Figure 3-3 The relationship between IFLs and LPARs

Note: IFLs must be dedicated to LPARs running Linux or z/VM. They can be shared by multiple LPARs, provided those LPARs are running Linux or z/VM. Also, an LPAR cannot contain a mix of IFLs and CPs.

Figure 3-4 provides an example of the disposition of CPs for a zSeries server running Linux under z/VM and other workloads.

Figure 3-4 Configuration of CPs for a zSeries running z/OS and Linux under z/VM

3.1.3 Further information

Here are a few useful links to additional information.

Linux for IBM eServer zSeries:
http://www.ibm.com/servers/eserver/zseries/os/linux

A discussion group for Linux on zSeries can be found at:
http://www2.marist.edu/htbin/wlvindex?linux-390

The official home page of the Linux/390 project, which aims to provide a central source of Linux/390 information and software, can be found at:
http://www.linuxvm.org

zSeries mainframe servers:
http://www.ibm.com/servers/eserver/zseries

3.2 Background to z/VM

You can run tens to hundreds of virtual Linux systems under z/VM, and when running Linux as a z/VM guest, you inherit all the features of z/VM. This section highlights some of these features.

Virtual Machine (VM) technology was introduced with the IBM System/360™, the original mainframe, back in the mid-1960s. At that time, VM enabled users of S/360 systems to run multiple copies of operating systems on a single mainframe. Being able to run multiple operating systems made it possible to migrate to new releases and versions on the same mainframe system: a user could run a production workload on the old level of software while, at the same time, performing the work required to migrate the environment to the new operating system.

Over the course of 30-plus years, IBM made investments in hardware, architecture, and microcode, as well as in the VM product itself, to enhance the virtualization technology available with each successive line of mainframe computers. This “total system” investment is why the IBM zSeries line of servers offers the industry’s best-of-breed virtualization technology.

The virtualization technology found in z/VM is unique. Some “virtual server” solutions consist of running multiple applications on a single operating system, using virtual storage techniques to meet memory demands. z/VM, on the other hand, uses the real resources at its disposal (processors, memory, I/O, network, and so on) to create virtual machines, each able to run its own independent copy of any operating system that supports the zSeries architecture (which includes Linux). An operating system running in a virtual machine does so as if it were executing on real hardware. Hardware architecture and facilities are presented to the guest system (that is, virtualized) by z/VM in such a way that the guest operating system believes it is using a real system, not a virtual one.
Figure 3-5 shows an example with multiple virtual machines, including three Linux machines and a z/OS. The physical hardware is not seen by the guests; instead, each guest sees the virtual machine presented by z/VM. z/VM is even able to create a virtual machine environment that is functionally richer than a “real” environment. Examples of this include:

- Using data-in-memory techniques to transparently enhance guest system performance

- Simulating device and networking facilities that do not exist in the real mainframe system

- Sharing a single copy of a kernel among several guest systems


Figure 3-5 z/VM virtualization

3.2.1 Virtual networking with z/VM

Virtual networks use z/VM virtualization to provide connectivity between guests without needing real hardware. They enable fast and cost-effective TCP/IP communication between z/VM Linux guests. Note that connectivity in a z/VM virtual network is limited to guests running in a single z/VM image; virtual networks cannot be used for inter-LPAR communication.

There are three types of virtual networks available to Linux guests:

- Inter-user communication vehicle (IUCV): Point-to-point TCP/IP connections between Linux guests can be established using IUCV. The Linux for S/390 and zSeries kernels include an IUCV driver, allowing you to connect two Linux virtual machines. Linux treats IUCV connections like any other TCP/IP network connection.

- Virtual channel-to-channel (vCTC): vCTC networks provide point-to-point connectivity between guests without real channel allocation. z/VM provides the ability to define virtual channel-to-channel adapters so that users can connect virtual machines using the CTCA protocol. On z/VM, this is useful for connecting Linux virtual machines to other virtual machines that do not support IUCV, such as z/OS. vCTC networks can also connect Linux virtual machines to each other.

- VM Guest LAN: IUCV and vCTC support only point-to-point connections, which can be cumbersome to manage if you plan to connect a server image to a large number of virtual machines. z/VM 4.2 introduced a capability called Guest LAN support. A server image running under z/VM connects to a Guest LAN using the HiperSockets protocol. Virtual HiperSockets look just like the real thing to a Linux server, or to any software that supports real HiperSockets™. There is no limit to the number of virtual HiperSockets connections that can be defined in a z/VM environment, and there is no predefined limit on the number of Guest LANs that can be defined.
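As an outline of how a Guest LAN is set up, the CP command sequence below follows z/VM 4.2-era command names (DEFINE LAN, DEFINE NIC, COUPLE). The LAN name and virtual device numbers are invented, and operand details differ between z/VM levels, so treat this as a sketch of the flow rather than a verified recipe.

```
* Create a system-owned Guest LAN (issued from an authorized user)
DEFINE LAN LINLAN OWNERID SYSTEM

* In each Linux guest: create a simulated NIC at virtual device 0500
* and couple it to the Guest LAN
DEFINE NIC 0500 HIPERSOCKETS
COUPLE 0500 TO SYSTEM LINLAN
```

Once coupled, the Linux guest configures the device like a real HiperSockets interface; no physical adapter or cabling is involved.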


3.2.2 Systems management functions built into z/VM

There are many built-in systems management functions in z/VM; we describe just a few of them.

Customers wanting to exercise command and control over their “real” discrete server farms typically have to purchase additional servers to run command-and-control software products, oftentimes also requiring client code to be installed on each server image being managed. This can become expensive in terms of software license fees (for the servers and the client code), plus additional hardware and networking expenses for the servers, not to mention the added complexity. z/VM includes a great deal of command-and-control function built into the product.

Resource utilization controls include the ability to allocate processor capacity on a per-image basis with a high degree of granularity. Adding more resources, such as memory, disk space, and data-in-memory support, can be done quickly and easily, though depending on the resource being added, a reboot of the affected Linux images is sometimes required. VM offers tools such as REXX, the Programmable Operator (PROP), and Pipelines to build automation routines that can help minimize the manual intervention required to manage a server farm. For example, you could capture console traffic from Linux servers and programmatically perform system operations based on the console data.
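The automation idea can be sketched with ordinary Linux text tools, independently of PROP itself (PROP runs under CMS with its own rules table). The console messages and the matching rule below are invented for illustration only.

```shell
# Hypothetical captured console traffic from several Linux guests.
console='LINUX01: DASD 0201 is 97% full
LINUX02: user ftpadm logged on
LINUX03: DASD 0305 is 45% full'

# Rule: flag any disk that is 90% full or more. A real automation
# routine would trigger an alert or a cleanup job instead of printing.
printf '%s\n' "$console" | awk '/DASD .* is 9[0-9]% full/ { print "ALERT: " $0 }'
# prints: ALERT: LINUX01: DASD 0201 is 97% full
```

The same scan-and-react pattern is what a PROP rules table expresses declaratively: a message pattern on one side, an action on the other.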

3.3 Consolidation and scalability

In this section, we explain the advantages that z/VM offers when you need to grow the capacity of your solution (scalability) or make more efficient use of disparate server resources (consolidation).

3.3.1 Scalability

The zSeries platform has long been recognized for its ability to scale to support the consolidation of diverse workloads. Processors and disks (such as the Enterprise Storage Server) can be incrementally upgraded to meet growing demands. Once upgraded, the zSeries server can then scale or grow its Linux workload in two ways:

- Vertical growth: More system resources are added to an existing Linux image. For example, a Linux virtual machine could be given more processor capacity, more virtual memory, more I/O devices, or more virtual networking bandwidth.

- Horizontal growth: Additional Linux systems (typically z/VM guests) are added. This is illustrated in Figure 3-7.

3.3.2 Consolidation

Consolidation of real distributed servers onto a zSeries platform, and in particular as guests under z/VM (as illustrated in Figure 3-6 on page 30), can offer many advantages. Virtual servers on z/VM share total system resources, such as memory and processor capacity, thus optimizing resource utilization. Disk space can also be shared amongst z/VM guests; in particular, read-only files (such as application code and operating system software) can be placed on one disk and made accessible to all the virtual Linux servers, thus minimizing maintenance workload and software costs. z/VM also provides virtual networking and built-in systems management functions, as described in 3.2, “Background to z/VM” on page 27.

Lightly to moderately loaded servers that do not peak concurrently are good candidates for consolidation on a zSeries server running Linux.


Figure 3-6 Server farm in a box (a traditional server farm of many single Intel, Sun, and HP servers versus Linux images under z/VM on zSeries)

Some key applications are usually deployed in a three-tier model: a database tier, an application tier, and a presentation tier. In a hybrid z/OS and Linux for zSeries environment, the database server resides on a z/OS image, while the application server runs on a Linux for zSeries instance. Both reside on the same hardware, enabling the customer to remove a layer of hardware and maintenance effort while still meeting the performance requirements that the solution demands. In addition, with the application and database now hosted on the one system, the real networks between them can be replaced by the networking features of zSeries (such as HiperSockets).

When servers have been consolidated onto a zSeries server, and in particular when running as z/VM guests, the horizontal growth of your server farm can be accomplished with minimal time (typically minutes), and without requiring additional hardware or real estate, by simply adding another Linux virtual machine. This is shown in Figure 3-7. Moreover, you are still able to maintain the distributed-environment paradigm of “one server, one application.”

Figure 3-7 Horizontal server growth (a typical UNIX® environment with one server per application versus Linux images on z/VM sharing processors, memory, network, and disks)

A number of the features and benefits discussed above are illustrated in “Typical file/print serving scenario with Samba” on page 31, where we outline a scenario in which multiple distributed file/print servers, using the Samba application, are consolidated into multiple Linux guests under z/VM.

3.3.3 Some examples and further information

Some customer experiences with running multiple virtual Linux systems under z/VM can be found at:

http://www.ibm.com/servers/eserver/zseries/news/pressreleases/2002
http://www.ibm.com/servers/eserver/zseries/os/linux/stories.html

Some recent IBM white papers detailing experiences with multiple virtual Linux systems under z/VM, and using an IBM TotalStorage Enterprise Storage Server, can be found at:

http://www.vm.ibm.com/linux
http://www.ibm.com/servers/eserver/zseries/library/whitepapers

Also see the Redpaper Server Consolidation with Linux for zSeries, REDP0222, and the Web site:

http://www.ibm.com/server/eservers/zseries/os/linux/consolidate.html

An IBM portal for information on z/VM and Linux resources can be found at:

http://www.vm.ibm.com/linux

3.4 Linux application solutions

Linux distributions for zSeries contain such notable open source applications as Apache, Samba, IMAP, OpenLDAP, Perl, Sendmail MTA, Squid, and PostgreSQL. Linux applications are sourced from many areas: open source applications are available for download from the Internet, and applications can also be developed for the zSeries, with compilers available in the Linux for zSeries distributions.

Solution providers have announced products for Linux on the zSeries server. These include infrastructure applications, line-of-business applications, and Web-enablement applications. There are more than 200 applications from more than 120 vendors available for Linux on zSeries. For a detailed product list, see:

http://www.ibm.com/servers/eserver/zseries/solutions/s390da/linuxproduct.html

Typical file/print serving scenario with Samba

We outline a typical consolidation of distributed file/print servers onto a zSeries. Other scenarios are discussed in Linux Handbook, SG24-7000. This scenario describes providing file/print services using the Samba file/print application. Samba is high quality open source code and is integrated in most Linux distributions.

File/print serving does not place great demand on the compute resource of the processor. However, it exercises the I/O capabilities of the solution as users upload and download files or route them to printers. Servers spend a significant amount of time waiting for users to request print or file services.
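For orientation, the Samba side of such a file/print server is driven by a small smb.conf file. The fragment below is a minimal sketch: the workgroup name, spool path, and option choices are assumptions, and the options available depend on the Samba level shipped with your distribution.

```
# /etc/samba/smb.conf - illustrative fragment (names and paths invented)
[global]
   workgroup = ITSOGRP        ; hypothetical Windows workgroup
   security = user            ; authenticate each connecting user

[homes]
   comment = Home directories
   browseable = no
   read only = no

[printers]
   comment = All printers
   path = /var/spool/samba    ; spool directory (invented path)
   printable = yes
```

Each consolidated Linux guest can run its own copy of such a configuration, with only user data and a few per-server settings differing between the virtual servers.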


The distributed solution consists of 150 Intel 2-way 1000 MHz servers, as depicted in Figure 3-8. Note that the servers are connected to a physical network that requires hardware. Also note that each server is running a separate copy of the operating system and application code; this is duplicated for each of the 150 servers. Each server requires disk space, and if additional disk storage is required for a server, the alternatives are to purchase an additional entire disk volume and install it, or to purchase a higher capacity disk, move the required software and data to it, and install it.


Figure 3-8 Samba file/print scenario - distributed solution

Maintenance must be performed on each server individually, and some type of operator automation is required to monitor the 150 separate systems and respond to messages when necessary. Backup is done by a dedicated backup server, which communicates with client code on each system.

The zSeries solution for this scenario is shown in Figure 3-9. Notice that the inter-server communication takes place within the machine, which allows the Linux instances to communicate at memory speeds, as opposed to physical networking speeds with their usual latency. In addition to improving inter-server communication, this avoids much of the cost of external networking hardware and software.


Linux with zSeries and ESS - Essentials


Figure 3-9 Samba file/print scenario - zSeries solution

Disk storage for the operating system and application code is shared by all 150 virtual servers, whereas it must be duplicated on every server in the distributed case. Sharing the code controls the cost of disk storage and eases maintenance: since only one copy of the code exists, it need only be serviced in a single place, which results in savings in system management software and operation.

z/VM can also help you share the available disk space amongst the servers. In the distributed model, increasing disk space meant buying new disks; free disk space on one server could not be moved to another server that needed it. In the zSeries solution, z/VM allows you to partition disks into minidisks that you can easily allocate to a server requiring additional disk space.

As in the distributed case, the 150 servers in operation will issue messages which must be monitored, and in some cases responded to. The PROP (programmable operator) function in z/VM is designed for that purpose, avoiding the cost of additional software to perform the task. Backup is performed by Tivoli Storage Manager (TSM). A TSM server is installed either on z/VM (as depicted in the figure) or in a Linux virtual instance (available with TSM version 5.2), and TSM clients are installed on each of the Linux guests.
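The storage-sharing argument can be made concrete with a back-of-envelope calculation. The image and minidisk sizes below are illustrative assumptions, not figures from the scenario:

```python
def duplicated_os_storage_gb(servers: int, os_image_gb: float) -> float:
    """Distributed model: every server carries a private copy of the
    operating system and application code."""
    return servers * os_image_gb

def shared_os_storage_gb(servers: int, os_image_gb: float,
                         per_guest_minidisk_gb: float) -> float:
    """z/VM model: one shared read-only OS/application image, plus a small
    read/write minidisk of system variables for each Linux guest."""
    return os_image_gb + servers * per_guest_minidisk_gb

# 150 servers, an assumed 4 GB OS image, 0.25 GB of per-guest variables:
print(duplicated_os_storage_gb(150, 4.0))      # 600.0
print(shared_os_storage_gb(150, 4.0, 0.25))    # 41.5
```

Beyond the raw capacity saving, the single shared image is the only copy that needs servicing, which is where the maintenance saving described above comes from.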


Chapter 4. IBM TotalStorage and ESS

This chapter starts with an overview of the major features and benefits of the IBM TotalStorage offering, and its support for Linux solutions. It continues with a more specific description of the IBM TotalStorage Enterprise Storage Server, otherwise known as the ESS. The entire product family is discussed in the redbook The IBM TotalStorage Solutions Handbook, SG24-5250. Further details of IBM's TotalStorage products, software, and solutions, including Linux support, can be found at the following Web sites:
http://www.storage.ibm.com
http://www.storage.ibm.com/linux

© Copyright IBM Corp. 2003. All rights reserved.


4.1 IBM TotalStorage

Linux-enabled storage is a requirement for all servers running Linux, regardless of the hardware infrastructure they rely on. As Linux moves from supporting infrastructure workloads such as Web serving, networking, and file and print operations into the mission-critical applications at the core of the data center, more stringent requirements are placed on the storage infrastructure to deliver superior reliability, scalability, security, and performance. These are not new requirements for IBM's storage portfolio, which has comprehensive Linux support today and a commitment to make it even stronger with new products and solutions. IBM continues to support Linux across all its server platforms and leads the industry in storage networking based on open, industry standards for heterogeneous platforms. IBM delivers world-class disk and tape systems, storage management software, services, and integrated solutions that incorporate the following:
򐂰 Highly scalable storage - Enables administrators to manage growth and quickly respond to changing business needs, with ease of deployment and configuration
򐂰 High availability and fault-tolerant storage - Provides continuous and reliable access to data using technologies like RAID and clustering
򐂰 Improved data management - Helps administrators better control the security and growth of their data
򐂰 Increased storage utilization - Allocates storage through a centrally managed pool of storage
򐂰 Platform independence - Enables sharing of data and possible simplification of backup procedures
򐂰 Reduced administrative costs - Manage additional storage without having to add staff and, with IBM storage management tools, perform quicker problem resolution

4.1.1 Value proposition

IBM provides a comprehensive Linux strategy that includes all the elements (hardware, software, and management tools) needed to provide a complete infrastructure for the major IT workloads. IBM storage on Linux provides pre-tested combinations of disk (high-end and mid-range), tape, and network storage with the major Linux distributions, and a clear strategy for deploying Linux in storage. IBM storage on Linux also provides alignment with IBM servers and software to offer the most appropriate combinations for a wide variety of customer needs, all exploiting the benefits of open source and the heterogeneous environments made possible by Linux.


Table 4-1 IBM Storage for Linux summary

IBM Storage for Linux | What it does | What it means
Built on open standards, demonstrating IBM's intention to support Linux on all its products | Allows IBM to bring a full array of products to the Linux environment | Variety of choice in selecting the most appropriate storage for your needs while limiting investment in proprietary infrastructures
Pre-tested, documented, and supported Linux storage configurations: disk, tape, and networked storage | Expedites implementation of IT infrastructures with superior reliability and support | Takes the "guesswork" out of choosing storage for your IT needs
Wide variety of choices of Linux distributions and storage attachments | Protects investment in IT infrastructure | Freedom of choice for your preferred Linux distribution

IBM TotalStorage's major advantage is its ability to offer complete storage solutions in a Linux environment. IBM's statement of support is also more robust than that of the competition, encompassing the IBM eServer, software, and storage products. Linux is part of IBM storage's core development, and Linux support for IBM's advanced functions is growing.

4.2 IBM TotalStorage Proven™

IBM has introduced the TotalStorage Proven program to help customers identify storage solutions and configurations that have been pre-tested for interoperability. This program is also intended to ease the burden of independent interoperability testing on IBM Business Partners, independent software vendors (ISVs), and other storage solution providers. The IBM TotalStorage Proven program builds on IBM's already extensive interoperability efforts to develop and deliver products and solutions that work together with third-party products. Under the program, IBM will continue its work with hardware vendors, ISVs, and solution developers to test their products on IBM's extensive line of storage products. The intended customer value includes:
򐂰 Lower risk by investing in proven solutions
򐂰 Lower integration costs
򐂰 Faster solution deployment

More information about the TotalStorage Proven program can be found at:
http://www.ibm.com/totalstorage/proven

You can find an updated list of companies that have tested their products with IBM storage in a Linux environment at:
http://www.ibm.com/totalstorage/proven/solutions.htm


4.3 Enterprise Storage Server

This section introduces the Enterprise Storage Server and discusses the benefits it provides. The features discussed here are not specific to Linux; for the most part they remain available in a Linux environment as well. For more detailed information, refer to the redbook IBM TotalStorage Enterprise Storage Server Model 800, SG24-6424.

4.3.1 ESS overview

Figure 4-1 summarizes the ESS architecture and features.


Figure 4-1 IBM’s Seascape® architecture - ESS Model 800

The ESS is IBM's most powerful disk storage server, developed using the IBM Seascape architecture. The ESS provides unmatched functions for the whole IBM server family, and also for non-IBM (that is, Intel-based and UNIX-based) servers. Across all of these environments, the ESS features unique capabilities that allow it to meet the most demanding requirements of performance, capacity, and data availability that the computing business may require.

The Seascape architecture is the key to the development of IBM's storage products. Seascape allows IBM to integrate best-of-breed technologies, both from IBM and from the general industry, producing flexible and upgradeable storage solutions. This design has allowed the ESS to evolve from the initial E models to the succeeding F models, and to the more recent 800 models, each featuring new, more powerful hardware and functional enhancements. Server is part of the ESS product name because one of the most significant facets of the Seascape architecture strategy is the use of server technology within the storage system. The integration of actual server computers (SMPs) into the ESS helps reduce the need for expensive custom components, and also increases reliability through the use of proven server and operating system technologies.

The diagram in Figure 4-2 depicts an ESS Model 800 and all of its major components. At the top of the frame are the disk drive modules (DDMs), and immediately under them are the processor drawers that hold the cluster symmetrical multiprocessors (SMPs). Just below the processor drawers are the I/O drawers that hold the SSA device adapters that connect the SSA loops, and below those are the host adapter bays that hold the host adapters. At the bottom of the frame are the AC power supplies and batteries. Between the two cages of DDMs is an operator panel that includes an emergency power switch, a local power switch, power indicator lights, and message/error indicator lights. For larger configurations, the ESS base enclosure attaches to an Expansion Enclosure rack that is the same size as the base ESS and stands next to it.


Figure 4-2 ESS Model 800 major components

4.3.2 ESS features and benefits

The ESS set a new standard for storage servers when it first became available in 1999, and since then it has evolved into the third-generation ESS Model 800. The IBM TotalStorage Enterprise Storage Server Model 800 introduces important changes that dramatically improve the overall value of the ESS and provide a strong base for strategic Storage Area Network (SAN) initiatives.


Storage consolidation

The ESS attachment versatility and large capacity enable data from different platforms to be consolidated onto a single high-performance, high-availability box. Storage consolidation can be the first step towards server consolidation, reducing the number of boxes you have to manage and allowing you to flexibly add or assign capacity when needed. The IBM TotalStorage Enterprise Storage Server supports all the major operating system platforms, from the complete set of IBM eServer series and IBM NUMA-Q®, to non-IBM Intel-based servers and the different variants of UNIX, including Linux, as shown in Figure 4-3.


Figure 4-3 IBM TotalStorage Enterprise Storage Server for storage consolidation

With a total capacity of up to 55 TB and a diversified host attachment capability (SCSI, ESCON®, and Fibre Channel/FICON™), the ESS Model 800 provides outstanding performance while consolidating the storage demands of the heterogeneous set of server platforms that must be dealt with nowadays.

Performance

Key features that characterize the performance enhancements of the ESS Model 800 are:
򐂰 The ESS Model 800, with the optional Turbo feature for the SMP processors, is capable of delivering 2.5 times the throughput of its predecessor, the Model F20.
򐂰 A 64 GB cache supports much larger system configurations and increases hit ratios, driving down response times.
򐂰 Double the internal bandwidth provides high sequential throughput for digital media, business intelligence, data warehousing, and life science applications.


򐂰 Larger non-volatile storage (NVS) of 2 GB, with twice the bandwidth, allows for greater scalability for write-intensive applications.
򐂰 Third-generation hardware, such as more powerful SSA device adapters, provides response time improvements of up to 40% for important database applications.
򐂰 2 Gb Fibre Channel/FICON host adapters double the sustained and instantaneous throughput for both open systems and zSeries.
򐂰 RAID 10 can provide up to 75% greater throughput for selected database workloads compared to equal physical capacity configured as RAID 5. While most typical workloads will experience excellent response times with RAID 5, some cache-unfriendly applications and some applications with high random write content can benefit from the performance offered by RAID 10.
򐂰 15,000 rpm drives provide up to 80% greater throughput and 40% better response time compared to 10,000 rpm drives. This allows driving workloads to significantly higher access densities while still improving response times.

Figure 4-4 summarizes many of the ESS features mentioned above.

The ESS Model 800 is a third-generation intelligent storage server. Its base enclosure characteristics include:
򐂰 Two active cluster processors with faster symmetrical multiprocessors (SMPs) and an optional Turbo feature
򐂰 8, 16, 24, 32, or 64 GB cache options
򐂰 2 GB non-volatile storage (NVS)
򐂰 8 x 160 MBps SSA loops
򐂰 18.2, 36.4, and 72.8 GB disk capacity options, with disk capacity intermix
򐂰 10,000 rpm (18.2/36.4/72.8 GB) and 15,000 rpm (18.2/36.4 GB) disk options
򐂰 Up to 55 TB of physical capacity
򐂰 Up to 16 Fibre Channel/FICON ports, up to 32 ESCON ports, and up to 32 SCSI ports
򐂰 Connectivity to a Storage Area Network

Figure 4-4 Enterprise Storage Server Model 800

Efficient cache management and a powerful back-end

The ESS is designed to provide the highest performance for different types of workloads, even when mixing dissimilar workload demands. For example, zSeries servers and open systems put very different demands on the storage subsystem. A server like the zSeries typically has an I/O profile that is very cache-friendly and takes advantage of cache efficiency. An open systems server, on the other hand, can generate very cache-unfriendly I/O, because most of the hits are already satisfied in the host server's buffers. For the zSeries type of workload, the ESS offers a large cache (up to 64 GB) and, most importantly, efficient cache algorithms. For cache-unfriendly workloads, the ESS has a powerful back-end, with high-performance SSA disk adapters providing high I/O parallelism and throughput for the ever-evolving high-performance hard disk drives.


Data protection and Remote Copy functions

Figure 4-5 illustrates the ESS disaster recovery and availability features.


Figure 4-5 Disaster recovery and availability

Many design characteristics and advanced functions of the IBM TotalStorage Enterprise Storage Server Model 800 help protect data effectively.

Fault-tolerant design and concurrent maintenance

The ESS incorporates a fault-tolerant hardware design in which major hardware elements such as disk drives, data paths, logic components, power supplies, and cache are redundant. For example, the NVS for cluster 1 is located in cluster 2, and the NVS for cluster 2 is located in cluster 1, so that in the event of a cluster failure the write data for the failed cluster is preserved in the NVS of the other cluster. The ESS also has built-in call home facilities for reactive IBM support center response and periodic proactive verification of ESS operational status. The architecture of the ESS Model 800 allows nearly all maintenance actions, such as repairs, code upgrades, and capacity upgrades, to be done concurrently with ESS online operation. Changes to the ESS configuration, such as defining logical disk volumes and assigning them to a host, can be made dynamically by the customer at any time.

RAID 5 or RAID 10 data protection

With the ESS Model 800 there is now the additional option of configuring the disk arrays in a RAID 10 disposition (mirroring plus striping) in addition to the RAID 5 arrangement, which gives more flexibility when selecting the redundancy technique for protecting user data. A RAID 5 array consists of six or seven disks for user data, plus a parity disk. A RAID 10 array consists of a set of disks for user data plus their mirrored disk counterparts.
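The capacity trade-off between the two arrangements can be quantified. The sketch below applies the layouts just described (RAID 5: one disk's worth of parity; RAID 10: every data disk mirrored) to a hypothetical eight-disk array; it illustrates usable capacity only, not ESS internals:

```python
def raid5_usable_gb(disks: int, disk_gb: float) -> float:
    """RAID 5: the equivalent of one disk holds parity, the rest holds
    user data (for example, 7 data disks + 1 parity disk)."""
    if disks < 3:
        raise ValueError("RAID 5 needs at least three disks")
    return (disks - 1) * disk_gb

def raid10_usable_gb(disks: int, disk_gb: float) -> float:
    """RAID 10: each user-data disk has a mirrored counterpart, so half
    of the physical capacity is usable."""
    if disks % 2:
        raise ValueError("RAID 10 needs an even number of disks")
    return (disks // 2) * disk_gb

# Eight 36.4 GB drives (one of the drive capacities the ESS offers):
print(round(raid5_usable_gb(8, 36.4), 1))    # 254.8
print(round(raid10_usable_gb(8, 36.4), 1))   # 145.6
```

Note the capacity cost of RAID 10: in this example it gives up roughly 109 GB of usable space relative to RAID 5, in exchange for the throughput advantages cited earlier.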


Remote copy

The ESS Model 800 supports two types of Remote Copy: Peer-to-Peer Remote Copy (PPRC) and Extended Remote Copy (XRC). PPRC is a hardware-based solution for mirroring logical volumes from a primary site (the application site) onto the volumes of a secondary site (the recovery site), using either a synchronous or a non-synchronous process. PPRC is a Remote Copy solution for open systems servers (such as xSeries) as well as for zSeries servers. PPRC V2 was recently announced, and it provides cascading functions. PPRC is discussed more fully in 6.2, “Peer-to-Peer Remote Copy (PPRC)” on page 63.

Note: XRC is only available for the z/OS and OS/390® environments, and not for Linux on zSeries or open systems.

Point-in-time Copy function

Users still need to take backups to protect data from logical errors and disasters. In all environments, taking backups of user data traditionally takes a considerable amount of time. Usually, backups are taken outside of prime shift because of their duration and the consequent impact on normal operations: databases must be closed to create consistency and data integrity, and online systems are normally shut down. With the ESS Model 800, the backup window can be reduced to a minimal amount of time by using the FlashCopy® function. FlashCopy creates an instant point-in-time copy of data and makes it possible to access both the source and target volumes immediately, thus allowing applications to resume with minimal disruption. FlashCopy V2 was recently announced, providing multiple relationship and incremental FlashCopy functions. FlashCopy is discussed more fully in 6.4, “FlashCopy” on page 68.
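To see why a point-in-time copy can be made available immediately, consider a copy-on-write scheme, sketched below. This is a generic illustration of the technique, not a description of the ESS FlashCopy microcode: the copy initially shares all blocks with the source, and a block is physically copied only when the source is about to overwrite it.

```python
class PointInTimeCopy:
    """Minimal copy-on-write sketch of a point-in-time copy."""

    def __init__(self, source: list):
        self.source = source     # the live volume, updated by applications
        self.saved = {}          # block index -> content as of copy time

    def write_source(self, index: int, value) -> None:
        # Before the first overwrite of a block, preserve its
        # as-of-copy-time content; then let the update proceed.
        if index not in self.saved:
            self.saved[index] = self.source[index]
        self.source[index] = value

    def read_copy(self, index: int):
        # Blocks never overwritten are read straight from the source.
        return self.saved.get(index, self.source[index])

volume = ["a", "b", "c"]
snap = PointInTimeCopy(volume)   # the "copy" exists at once: no data moved
snap.write_source(1, "B")        # the application keeps updating the source
print(volume)                                  # ['a', 'B', 'c']
print([snap.read_copy(i) for i in range(3)])   # ['a', 'b', 'c']
```

In a real subsystem the storage controller intercepts the writes itself; routing them through write_source here simply stands in for that interception.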

Note: The IBM TotalStorage Enterprise Storage Server Model 800 also provides a point-in-time copy function called Concurrent Copy, but this function is only available for the z/OS and OS/390 operating systems, and not for Linux.

Standby capacity on demand

To make the ESS Model 800 more scalable and better able to accommodate dynamic and unpredictable workload requirements, the Standby Capacity on Demand feature has been announced. When your storage requirements increase, you can activate the additional capacity without any intervention from IBM and without any interruption to your work. When the standby capacity is activated, the ESS automatically reports the increase in capacity usage. Capacity on demand helps control costs while avoiding the acquisition delays that can result when the timing of capacity growth cannot be accurately forecast.

Storage Area Network (SAN)

The SAN strategy is to connect any server to any storage, as shown in Figure 4-6 on page 44. As SANs migrate to 2 Gb technology, storage subsystems must exploit this greater bandwidth. Keeping pace with the evolution of SAN technology, the Enterprise Storage Server Model 800 introduces new 2 Gb Fibre Channel/FICON host adapters for native server connectivity and SAN integration. These adapters, which double the bandwidth and instantaneous data rate of the previous adapters available with the F models, have one port with an LC connector for full-duplex data transfer over long-wave or short-wave fiber links. They support the SCSI-FCP (Fibre Channel Protocol) and FICON upper-level protocols. Each Fibre Channel port can be shared by multiple heterogeneous hosts and HBAs, potentially reducing the number of ports needed, and LUN masking is included to support SAN security.


Figure 4-6 Storage area network (SAN)

The extensive connectivity capabilities make the ESS an outstanding choice when planning a SAN solution. For the complete list of ESS fabric support, refer to:
http://www.storage.ibm.com/disk/ess/supserver.htm

For a description of the IBM TotalStorage SAN products, please refer to:

http://www.storage.ibm.com/ibmsan/products/sanfabric.html

4.3.3 ESS Specialist

The TotalStorage Enterprise Storage Server includes the ESS Specialist, a Web-based interface located within the ESS that allows the storage administrator to monitor and manage storage from the ESS Master Console or from a remote workstation using a Web browser. The ESS Specialist interface is a set of Java applets: programs that are dynamically loaded by, and execute within, your browser. When you request a change to the ESS configuration, the Java applets communicate with the microcode running on the ESS clusters to retrieve the current configuration data, submit the requested configuration change, and display the outcome of the request. For further details, refer to IBM TotalStorage Enterprise Storage Server Web Interface User’s Guide, SC26-7448. When you open a Web browser and log on to your ESS Specialist, you get a welcome screen, as shown in Figure 4-7.


Figure 4-7 ESS welcome screen

Note the two buttons on the left-hand side:
򐂰 The ESS Specialist button displays the ESS Specialist main screen, from where you can set up and view the storage segmentation in your ESS (we use this function in Chapter 5, “FCP for zSeries with Linux” on page 47).
򐂰 The Copy Services button leads to the ESS Copy Services main screen, from where you can invoke and manage the various Copy Services functions available on the ESS (we use this in Chapter 6, “ESS Copy Services and Linux” on page 61).


Chapter 5. FCP for zSeries with Linux

This chapter gives details about the Fibre Channel Protocol (FCP) support for attaching the ESS to a zSeries server running Linux. Note that it remains possible to connect the Enterprise Storage Server to Linux on zSeries using traditional ESCON and FICON channels; this was in fact the only supported method when Linux for zSeries became available in early 2000. The Enterprise Storage Server can be configured as CKD disk, and generally speaking all current models of the Enterprise Storage Server are interoperable with all current models of the zSeries and all current levels of z/VM and Linux for zSeries, using ESCON or FICON channels. For specific details, refer to the ESS Interoperability Matrix and the Host Systems Attachment Guide, SC26-7446, located at:
http://www.storage.ibm.com/disk/ess/supserver.htm


5.1 zSeries and FCP

In February 2003, support for Fibre Channel attached devices was added to zSeries. (This support is only for Linux on zSeries, not for z/OS or z/VM.) Using this new zSeries Fibre Channel Protocol (FCP) support, Linux for zSeries can access standard SCSI storage devices and distributed storage in a SAN. This enables customers to preserve their existing investment in FCP/SCSI storage devices while consolidating distributed servers onto zSeries. A complete list of devices supported for attachment to zSeries systems using FCP can be found at:
http://www-1.ibm.com/servers/eserver/zseries/connectivity/#fcp
http://www-1.ibm.com/servers/eserver/zseries/connectivity/fcp.html

The Enterprise Storage Server has been successfully connected to Linux for zSeries using FCP, as discussed in 5.2, “Connecting ESS to Linux on zSeries using FCP” on page 52. In this section we give a brief overview of the traditional zSeries I/O architecture versus the zSeries Fibre Channel architecture. This overview is based on the redbook Implementing Linux with IBM Disk Storage, SG24-6261-01. Additional details can be found in that redbook, in the redpiece Getting Started with zSeries Fibre Channel Protocol, REDP0205, and in the redbook IBM eServer zSeries Connectivity Handbook, SG24-5444.

5.1.1 zSeries-specific storage attachments

The I/O component of z/Architecture™ is based on channels, control units, and devices. Channels provide a well-defined interface between a server and the attached control units. Channels were originally implemented on parallel copper media, but today's channels, such as ESCON and FICON, use serial optical cables. Also, the parallel channel was a multi-drop interface, while both ESCON and FICON are switched point-to-point connections, as depicted in Figure 5-1.


Figure 5-1 Typical zSeries storage attachment concept

ESCON channels use a unique physical interface and transmission protocol. FICON, on the other hand, is based on industry-standard Fibre Channel lower-level protocols. In both cases, the higher-level I/O protocol used by software is based on channel programs consisting of channel command words (CCWs), and is unique to mainframes adhering to z/Architecture or its predecessors (such as the ESA/390 architecture). For access to disk storage, a specific set of CCWs is used, defined by the CKD protocol. This protocol describes not only the commands and responses exchanged across the channel, but also the format of the data as it is recorded on the disk storage medium. For tape, similar CCW-based command protocols exist, but there are no associated access protocols for media such as DVDs or scanners, because the necessary software drivers and control units have never been provided.

zSeries and storage device addressing

With z/Architecture, software addresses a storage device using a device number, which uniquely identifies the device. The device number is defined in the IOCDS, and is an arbitrary 16-bit number expressed as four hexadecimal digits; it is informally known as an address. A device number is mapped to one or more physical paths to the device by the IOCDS. Such a physical path is typically described by a quadruple of hardware addresses consisting of:
򐂰 Channel path identifier (CHPID): Identifies the channel that provides a path to the device
򐂰 Physical link address: Identifies a route through a switch or director
򐂰 Control unit address: Specifies the control unit
򐂰 Unit address: Identifies the device

Refer to the left side of Figure 5-4 on page 51. There is typically more than one physical path to a device, for example to allow for redundancy. While these paths are specified using different address quadruples, software still uses a single device number to address the device, independent of the path chosen for any particular I/O.
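The relationship between the single device number seen by software and the multiple physical paths behind it can be sketched as a small data model. The path values below are invented for illustration; only the shape (a 16-bit device number mapping to CHPID/link/control-unit/unit-address quadruples) comes from the description above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelPath:
    """One physical path as the IOCDS describes it: CHPID, link address
    through the switch, control unit address, and unit address."""
    chpid: int
    link_address: int
    control_unit: int
    unit_address: int

def format_devno(devno: int) -> str:
    """A device number is an arbitrary 16-bit value, conventionally
    written as four hexadecimal digits."""
    if not 0 <= devno <= 0xFFFF:
        raise ValueError("device number must fit in 16 bits")
    return f"{devno:04x}"

# One device number mapped to two redundant physical paths:
device_paths = {
    format_devno(0x0198): [
        ChannelPath(chpid=0x10, link_address=0x2A, control_unit=0x00, unit_address=0x98),
        ChannelPath(chpid=0x11, link_address=0x3B, control_unit=0x00, unit_address=0x98),
    ],
}
print(list(device_paths))           # ['0198'] - software sees one address
print(len(device_paths["0198"]))    # 2 - hardware can reach it two ways
```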

5.1.2 Distributed storage attachments

Traditionally, storage controllers in the distributed world have been attached through parallel SCSI cabling, which allows for only very limited distances between the server and the controller. The SCSI standard uses a control-block-based command and status protocol. Today, there is a clear distinction between the physical level and the command/status level of the SCSI protocol.

FCP advantages

Based on the higher-level SCSI command/status protocol, a new standard has been defined called FCP, which stands for Fibre Channel Protocol for SCSI. FCP employs the SCSI command/status protocol on top of an underlying Fibre Channel transmission protocol. Due to the superiority of the optical Fibre Channel connection in terms of speed, distance, and reliability, storage attachments through FCP have the highest growth rate in today's distributed world.

FCP storage addressing

Traditional SCSI devices have a simple addressing scheme. Originally, the device address, known as the target address, was simply a number in the range 0-7 (extended to 0-15 in the newer SCSI architecture), with address 7 usually reserved for the host adapter. As shown in Figure 5-2, SCSI also supports the definition of Logical Unit Numbers (LUNs), where each LUN represents a partition of the target (or controller), such as a disk drive.



Figure 5-2 SCSI addressing

For FCP, a new addressing scheme based on World Wide Names (WWNs) was developed. Each node, and each port on a node (such as an FCP port), is assigned a WWN, respectively known as the World Wide Node Name (WWNN) and the World Wide Port Name (WWPN). In addition, addressable units within a node (such as a disk drive) are assigned a LUN. This is illustrated in Figure 5-3.


Figure 5-3 World Wide Names

5.1.3 zSeries-specific versus distributed storage controllers

Two types of storage controllers with Fibre Channel interfaces are currently available for zSeries:

- Those supporting the z/Architecture-specific FICON interface based on CCW architecture, with the CKD command set for disk storage and similar sets of commands for tapes.
- Those supporting the SCSI-based FCP protocol, with SCSI command sets specific to the device type, such as disk or tape.

Both types of controllers are normally based on the same hardware building blocks. Their individual characteristics are achieved by different firmware handling the host interface. For example, the Fibre Channel/FCP host adapter on the ESS can be configured for either FICON or FCP protocols. The way the DASD addressing or device mapping is handled is illustrated in Figure 5-4, which summarizes the previous discussion.


Linux with zSeries and ESS - Essentials

[Figure: classical zSeries I/O versus zSeries FCP (SCSI) I/O. In classical I/O, the host program issues I/O against a devnum, which the IOCP (CHPID, CU, UA) resolves through a FICON/ESCON channel and SAN to the device; the devnum identifies the I/O device. In FCP I/O, the host program supplies devnum, WWPN, and LUN, and the requests flow through QDIO queues to the FCP channel; the devnum only identifies the communication path to the FCP channel, the WWPN (8 bytes, World Wide Port Name) identifies the controller/port (or SCSI bridge), and the LUN (8 bytes, Logical Unit Number) identifies the I/O device.]

Figure 5-4 DASD addressing scheme
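The contrast drawn in Figure 5-4 can be sketched in a few lines of code. This is a conceptual illustration only; the device numbers, WWPN, and LUN below are made-up placeholders, not values from a real IOCP or ESS configuration.

```python
# Hypothetical illustration of Figure 5-4: in classical zSeries I/O the
# device number resolves (via the IOCP definitions) all the way to a
# device; with FCP, the device number only selects the path to the FCP
# channel, and the host program itself supplies the 8-byte WWPN and
# 8-byte LUN that name the target.

# Classical: devnum -> (control unit, unit address) -> device
classical_iocp = {0x0200: ("CU-1", 0x00)}

# FCP: devnum -> FCP channel only; the target is named explicitly
fcp_channels = {0x0600: "FCP CHPID 50"}

def classical_io(devnum):
    cu, ua = classical_iocp[devnum]
    return f"I/O to device {ua:#04x} on {cu}"

def fcp_io(devnum, wwpn, lun):
    channel = fcp_channels[devnum]
    return f"I/O via {channel} to WWPN {wwpn:#018x}, LUN {lun:#018x}"

print(classical_io(0x0200))
print(fcp_io(0x0600, 0x5005076300C18154, 0x5301000000000000))
```

The key difference the sketch captures: with FCP, the addressing burden moves from the I/O configuration into the host program.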

zSeries and storage: Summary

To summarize, classical zSeries operating systems such as z/OS and z/VM were designed for use only with storage controllers that supported protocols defined by z/Architecture. This changed with the advent of Linux for zSeries, since its storage I/O component is oriented toward SCSI protocols. Because SCSI-based storage devices could not initially be accessed on a zSeries server, specific support had to be added to Linux for zSeries to enable it to function in a CCW-based zSeries I/O environment. However, this additional layer in Linux for zSeries is unique to zSeries, and does not allow it to exploit storage subsystems and applications that depend on a SCSI attachment.

For this reason, an FCP attachment capability has been added to the zSeries systems, allowing the attachment of SCSI-based storage controllers and enabling Linux for zSeries to access these controllers in the Linux-standard manner. Figure 5-5 shows the differences in format translation between the two access methods. The overhead of accessing block-oriented devices from a block-oriented device layer through a conversion of the requests to CKD format has been removed.


[Figure: DASD access translations. With FICON, requests pass from the Linux buffer cache through the DASD driver, which performs block <-> ECKD format translation, to ECKD-format DASD on the ESS. With Open FCP, requests pass from the Linux buffer cache through the SCSI driver directly to block-format DASD on the ESS, with no format translation.]

Figure 5-5 DASD access translations FICON vs. FCP

Supported devices

A complete list of devices supported for attachment to the zSeries systems using FCP can be found at:

http://www-1.ibm.com/servers/eserver/zseries/connectivity/#fcp
http://www-1.ibm.com/servers/eserver/zseries/connectivity/fcp.html

Current information on FCP channel support for Linux on zSeries, and the corresponding support in z/VM, can be found by selecting the June 2003 stream at:

http://www10.software.ibm.com/developerworks/opensource/linux390/index.shtml

5.2 Connecting ESS to Linux on zSeries using FCP

The ESS Model 800 has been successfully connected through FCP to Linux on zSeries, and the results are described in the redbook Implementing Linux with IBM Disk Storage, SG24-6261. A customer's experiences connecting an ESS Model F20 to Linux on zSeries through FCP, during an Early Support Program, can be found at:

http://www-1.ibm.com/servers/eserver/zseries/connectivity/ficon_resources.html

Some channel performance measurements for an ESS Model F20 attached to Linux on zSeries through FCP are located at: http://oss.software.ibm.com/linux390/perf_hints_tips.shtml

For full details of supported configurations of the Enterprise Storage Server and Linux on zSeries and FCP channels, please refer to the ESS Interoperability Matrix and the ESS Host Systems Attachment Guide, SC26-7446, located at: http://www.storage.ibm.com/disk/ess/supserver.htm

Note that Fibre Channel attachment of the Enterprise Storage Server to open systems hosts has been available for a few years, even though it only recently became available for zSeries hosts. For details, see the redbooks Implementing Fibre Channel Attachment on the ESS, SG24-6113, and IBM TotalStorage Enterprise Storage Server Model 800, SG24-6424, and the white paper Enterprise Storage Server Fibre Channel Attachment located at:

http://www.storage.ibm.com/disk/ess/whitepaper.htm

5.2.1 Hardware and software requirements

The hardware and software features required for the zSeries FCP function for Linux are described in the Redpaper Getting Started with zSeries Fibre Channel Protocol, REDP-0205. We provide an overview here.

zSeries FCP channels require a FICON card (feature 2315 or 2318) or a FICON Express card (feature 2319 or 2320). The type of firmware to be loaded into the FICON/FICON Express card, turning it into either an FCP channel or a FICON channel, is controlled by the definition of the channel type for that particular channel in the IOCP (the CHPID statement). Thus, by defining FCP type channels in the IOCP, the total number of FICON type channels that can be configured is reduced accordingly.

The Linux for zSeries distribution required for FCP support is SLES8. FCP and SCSI controllers and devices can be accessed by Linux for zSeries with the appropriate I/O driver. Linux may run either natively in a logical partition, or as a guest operating system under z/VM.

Note: z/VM Version 4 Release 3 is required to support FCP for Linux guests. However, z/VM itself does not support FCP devices; it supports FCP only to the extent of passing it through to guest systems. Linux on zSeries cannot boot (Initial Program Load, or IPL) from FCP-attached disk. Hence, a certain amount of CKD disk is required for Linux on zSeries to hold the operating system.

5.2.2 An implementation overview

As mentioned earlier, the redbook Implementing Linux with IBM Disk Storage, SG24-6261-01, describes an implementation of SuSE Linux (SLES8) on zSeries (both as a z/VM guest and running natively in an LPAR), and the FCP attachment of an ESS Model 800 to these systems. We briefly review here the ESS Specialist activities involved in that implementation; for further details, such as the specific hardware/software used, the steps to perform the installation of SuSE, the Linux host setup tasks, and so on, refer to the redbook. The ESS Specialist activities in this implementation involved defining a Linux on zSeries host, configuring ESS host adapters, configuring RAID arrays, and allocating storage.

ESS Specialist

We introduced the ESS Specialist in 4.3.3, "ESS Specialist" on page 44. Using your Web browser, you first display the welcome screen shown in Figure 4-7 on page 45 and then click the ESS Specialist button to display the ESS Specialist main screen. From there, clicking the Storage Allocation button enables you to set up and view the storage segmentation in your ESS, as shown in Figure 5-6.


Figure 5-6 Storage Allocation panel

Since we are attaching Fibre Channel disk (fixed-block, or FB, format) to Linux on zSeries, we chose the Open Systems Storage button this time; the panel shown in Figure 5-7 then appears.

Figure 5-7 Open Systems Storage panel

We then perform the configuration actions described below by clicking the respective buttons at the bottom of the Open Systems Storage panel.


Define the Linux on zSeries host to the ESS

Unlike hosts accessing CKD disks in an ESS, hosts which access FB-formatted disks must be defined to the ESS. In order to define our Linux on zSeries host, we clicked the Modify Host Systems button of the Open System Storage panel, which displays the panel shown in Figure 5-8. The fields which have to be completed here are:

- Nickname - Unique identifier up to 29 characters long
- Host type - We chose Linux (zSeries) for any Linux on zSeries instance (either LPAR or running under z/VM)
- Host Attachment - Since we are configuring storage in FB format, the only option that will be available for Linux hosts is Fibre Channel attached, which is what we chose

- Hostname/IP Address
- World-Wide Port-Name - The WWPN of the FCP port on your zSeries
- Fibre-channel ports - Select the FC port(s) on the ESS which the Linux on zSeries host should use to access the ESS

After you have filled in all the information, click Add, and then Perform Configuration Update. You will be returned to the Open Systems Storage panel.

Figure 5-8 Modify Host Systems panel - add the Linux on zSeries host

Configure the ESS host adapters

The Fibre Channel adapters on an ESS have a single port. Each FC port (or adapter) that the Linux on zSeries host will use to access the ESS needs to be configured. To do this, select the Configure Host Adapter Ports button on the Open System Storage panel, and then click the icon for the Fibre Channel adapter to be configured. Figure 5-9 shows an example of the fields that are enabled once this has been done. Check that the value for Fibre Channel Topology is Point to Point (Switched Fabric), as this is the only topology supported for a Linux on zSeries environment. Repeat this for each of the Fibre Channel adapters that the Linux host will use to access the ESS, and then click Perform Configuration Update. You will be returned to the Open Systems Storage panel.


Figure 5-9 Configure Host Adapter ports panel

Configure RAID arrays

Before any logical volumes can be defined and assigned to our Linux host, some of the available disk space on the ESS has to be formatted with a fixed-block track format, and a storage type of either RAID-5 or RAID-10 selected. Selecting the Configure Disk Groups button on the Open System Storage panel takes us to the Fixed Block Storage panel, shown in Figure 5-10. From the Available Storage table, select a disk group and select a Storage Type of either RAID-5 or RAID-10. You will see entries appear in the Modification column of the table. Repeat this step as required, and then click Perform Configuration Update to apply the changes. You will be returned to the Open Systems Storage panel.


Figure 5-10 Fixed Block Storage panel

Allocate storage for Linux

In order to create fixed-block formatted volumes (LUNs) for use by the Linux on zSeries host, we clicked Add Volumes on the Open Systems Storage panel. This leads to the panel shown in Figure 5-11. From this panel we selected the Linux host and the FC port to use, which highlights the connection between them. Then click Next. A second screen appears (Figure 5-12) where you can choose from the available arrays and add volumes of different sizes. To add a volume, select the array, the desired size of the volume, and the number of such volumes, and then click Add. Click Perform Configuration Update and the volumes will be created. Note that a message window will inform you that the actual formatting of the volumes is performed in the background (this formatting may take some time). You have to wait until the formatting is complete before accessing the new volumes.


Figure 5-11 Add volumes (panel 1 of 2)

Figure 5-12 Add volumes (panel 2 of 2)

Note that although LUNs must initially be defined to a particular host and port, a volume can subsequently be added to additional ports. This is one of the actions you can perform by clicking the Modify Volume Assignments button on the Open Systems Storage panel. The volume is then accessible by any host configured to any port to which the volume has been added. When a volume is defined to a host through a Fibre Channel port, it becomes visible on all the ESS Fibre Channel ports that were defined for that host.


Further details on many of the panels and processes presented in this chapter can be found in IBM TotalStorage Enterprise Storage Server: Implementing the ESS in your Environment, SG24-5420.


Chapter 6. ESS Copy Services and Linux

In this chapter we provide an overview of the Copy Services functions available with the IBM TotalStorage Enterprise Storage Server, with particular reference to Linux on zSeries. For full details of these functions the reader should refer to the redbooks Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680, and Implementing Copy Services in an Open Environment, SG24-5757.

© Copyright IBM Corp. 2003. All rights reserved.


6.1 Introduction

The Enterprise Storage Server Copy Services are enterprise-level functions that give you leading-edge solutions to meet your needs for disaster recovery, data migration, and data duplication. The set of products which make up the ESS Copy Services suite are:

- Peer-to-Peer Remote Copy (synchronous PPRC)
- Peer-to-Peer Remote Copy Extended Distance (non-synchronous PPRC-XD)
- FlashCopy
- Extended Remote Copy (XRC)
- Concurrent Copy (CC)

Note: Two of the ESS Copy Services, Extended Remote Copy and Concurrent Copy, are only available for the z/OS and OS/390 operating systems, and are not available for Linux on zSeries. We therefore provide very little detail on these copy functions in this book.

FlashCopy and Peer-to-Peer Remote Copy (both synchronous PPRC and non-synchronous PPRC-XD) are hardware-based functions of the Enterprise Storage Server, and can be used with open servers (UNIX and Intel based) as well as with zSeries and iSeries servers. Figure 6-1 illustrates these ESS Copy Services.

[Figure: the ESS Copy Services. Concurrent Copy provides a local point-in-time copy via a data mover and sidefile; XRC provides asynchronous remote copy over unlimited distances via a data mover; PPRC provides synchronous remote copy up to 103 km; PPRC-XD provides non-synchronous remote copy over continental distances; FlashCopy provides a local point-in-time copy.]

Figure 6-1 ESS Copy Services for zSeries

With the ESS, we have copy functions for copying data either locally, that is, within the same on-site ESS, or remotely, which is copying data to a separate remote-site ESS.

- The ESS copy functions that fall within the remote copy category are:
  - Extended Remote Copy (XRC)
  - Peer-to-Peer Remote Copy (PPRC)
  - Peer-to-Peer Remote Copy Extended Distance (PPRC-XD)


- The ESS copy functions that fall within the local copy category are:
  - FlashCopy
  - Concurrent Copy

The ESS Copy Services copy functions can also be classified as either dynamic or point-in-time. Dynamic copy functions constantly update the secondary copy as applications make changes to the primary data source. Point-in-time copy functions provide an instantaneous copy or view of what the original data looked like at a specific point in time. For example, the remote copy functions listed above are all dynamic, while the local copy functions are both point-in-time.

6.2 Peer-to-Peer Remote Copy (PPRC)

Peer-to-Peer Remote Copy (PPRC) is a synchronous remote data mirroring technology used primarily as part of a business continuance solution for protection against disk subsystem loss or complete site failure. It is also used for remote migration of data and application workloads, and for off-site backups. It is a real-time copy technique that synchronously mirrors a primary set of volumes (that are being used by applications) onto a secondary set of volumes (see Figure 6-2). Typically, the secondary volumes will be on a different ESS located at a remote location some distance away from the application site. Mirroring is done at a logical volume level.

[Figure: synchronous remote copy mirroring. PPRC mirrored pairs are established at a logical volume level; connections between ESS systems use ESCON links; ESCON Directors, DWDM with dark fiber, and channel extenders are supported; the supported distance is 103 km (synchronous mode). PPRC is supported on open systems and zSeries servers. Management is through the ESS Copy Services Web user interface and the ESS Copy Services Command Line Interface; additionally, for CKD environments, TSO commands (z/OS only), ICKDSF (excluding PPRC-XD), and the ANTRQST macro (z/OS only). Numbered steps 1-4 show a write to the primary (source) ESS being mirrored over ESCON to the secondary (target) ESS.]

Figure 6-2 Synchronous volume copy PPRC

PPRC is a hardware solution; thus it is application independent, and the relevant hardware feature needs to be available on both the local and the remote ESS. Because the copy function occurs at the storage subsystem level, the application does not need to know of its existence. PPRC guarantees that the secondary copy is up-to-date by ensuring that the primary volume update will be successfully completed only when the primary ESS receives acknowledgement that the secondary copy has been successfully updated. The sequence when updating records is as follows (refer to Figure 6-2):


1. Write to primary volume (to primary ESS cache and non-volatile storage (NVS)). The application writes data to a primary volume on an ESS at the application site, and a cache hit occurs.
2. Write to secondary (to secondary ESS cache and NVS). The application site ESS then initiates an I/O channel program to the recovery site ESS, to transfer the updated data to the recovery site cache and NVS.
3. Signal write complete on the secondary. The recovery site ESS signals write complete to the application site ESS when the updated data is in its cache and NVS.
4. Post I/O complete. When the application site ESS (primary ESS) receives the write complete from the recovery site ESS, it returns I/O complete status to the application system.

Destage from cache to the back-end disk drives on both the application site ESS and the recovery site ESS is performed asynchronously. The synchronous technique of PPRC ensures that application-dependent writes will be applied in the same sequence on the secondary volumes, thus providing application consistency at every moment. PPRC has tailorable parameters (CRIT and consistency groups) which identify what should be done if there are errors or delays in updating the secondary copy.

From a PPRC perspective, the volumes within an ESS can be in one of the following states (refer to Figure 6-3):

- Simplex: The initial state of a volume. A PPRC volume pair relationship has not yet been established between the primary and the secondary volumes.
- Pending: The initial state of a defined PPRC volume pair relationship, when the initial copy of the primary volume to the secondary volume is happening. This state is also found when a PPRC volume pair is re-synchronized after it was suspended. During the pending period, the volume pair is not in synchronization, and PPRC is copying tracks from the primary to the secondary volume.
- Duplex: The state of a PPRC volume pair after PPRC has fully completed the copy operation of the primary volume onto the secondary volume. At this moment the volume pair is in synchronization, and all write updates to the primary volume are synchronously applied onto the secondary volume.
- Suspended: In this state of the PPRC pair, the writes to the primary volume are not mirrored onto the secondary volume. The secondary volume becomes out of synchronization. During this time PPRC keeps a bitmap record of the changed tracks in the primary volume. Later, the volume pair can be re-synchronized, and then only the tracks that were updated will be copied. A PPRC volume pair will automatically go into the suspended state, for instance, when the primary ESS cannot complete a write operation to the recovery ESS.
- Duplex pending-XD: This state is found when a volume pair is established in a non-synchronous PPRC-XD relationship. This PPRC volume state is explained in 6.3, "PPRC Extended Distance (PPRC-XD)" on page 65.
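The synchronous write sequence described above can be sketched in a few lines: the host's write returns only after the secondary acknowledges. This is a conceptual model only, not ESS microcode, and the class and track names are invented for illustration.

```python
# Conceptual sketch of a synchronous PPRC write (cf. Figure 6-2):
# the write completes to the application only after the secondary
# ESS has acknowledged that the update is in its cache/NVS.

class ESS:
    def __init__(self, name, secondary=None):
        self.name = name
        self.cache = {}            # stands in for cache + NVS
        self.secondary = secondary

    def write(self, track, data):
        self.cache[track] = data                 # 1. write to primary
        if self.secondary is not None:
            self.secondary.write(track, data)    # 2. send to secondary
            # 3. the secondary returning here is the "write complete" signal
        return "I/O complete"                    # 4. post to application

remote = ESS("recovery-site")
local = ESS("application-site", secondary=remote)

assert local.write("track-17", b"payload") == "I/O complete"
# Both copies match the instant the application sees completion:
assert remote.cache["track-17"] == local.cache["track-17"]
```

The cost of this guarantee is that every write waits on the inter-site link, which is why the synchronous distance is bounded.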


[Figure: PPRC volume states in synchronous mode, shown as data copy flows between primary volumes at the application site and secondary volumes at the recovery site. Simplex: volumes have no PPRC relationship. Duplex pending: not in sync, but the PPRC pair is established. Duplex: PPRC pair is established, with volumes in full synchronization (in sync). Suspended duplex: PPRC pair is suspended, volumes are no longer in sync, and a bitmap tracks changed tracks.]

Figure 6-3 PPRC volume states - synchronous mode of operation

Copying PPRC secondary volumes

A PPRC secondary volume can be a FlashCopy primary volume. This allows you, for example, to produce a consistent point-in-time tertiary copy of the mirror volume while it is suspended. With PPRC V2 you can perform Asynchronous Cascading PPRC, whereby a (synchronous) PPRC secondary volume can simultaneously serve as the PPRC primary volume for a PPRC-XD relationship to a remote site. This provides the capability for a three-site PPRC configuration.

6.3 PPRC Extended Distance (PPRC-XD)

Peer-to-Peer Remote Copy Extended Distance (PPRC-XD) is a non-synchronous long-distance copy option for both open systems and zSeries servers. PPRC-XD can operate at distances well beyond the 103 km maximum supported distance for synchronous PPRC. It is an excellent solution for:

- Data copy
- Data migration
- Off-site backup
- Transmission of database logs
- Application recovery solutions based on periodic point-in-time copies of data

In PPRC-XD, the primary volume updates are mirrored to the secondary volumes in a non-synchronous operation while the application is running. Due to the non-synchronous transmission of the updates, the write operations at the primary site (and thus the application's response time) are not affected by any transmission delay. When PPRC-XD starts, a bitmap is built for each primary volume to keep a record of updated primary volume tracks. The first phase of the copy is a sequential copy of the primary volume tracks, which are sent in batches to the secondary volume. When all tracks have been copied, the initial copy is done; PPRC-XD then starts to periodically cycle through the bitmap for any updated tracks, sending them in batches of updates to the secondary volumes. This is a very efficient, throughput-oriented way of non-synchronous mirroring.

Figure 6-4 shows a simplified sequence of a PPRC-XD operation, illustrating its non-synchronous characteristics. The sequence of the operation is as follows:

1. The host writes data to the primary volume (to primary ESS cache and NVS). Channel End is returned to the host.
2. The primary volume bitmap is updated to reflect the update for later transmission to the secondary ESS. The ESS returns a Device End to the host, so the I/O operation is complete for the application.
3. The primary ESS mirrors the updates onto the secondary ESS volumes. The updates are grouped in batches for efficient transmission.
4. The secondary ESS returns write completed to the primary ESS when the update is secured in the secondary ESS cache and NVS. The primary volume bitmap is updated to show that the primary update has been secured in secondary ESS cache and NVS.
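The bitmap mechanics just described can be sketched minimally: host writes complete locally and only mark a track as dirty, and a background cycle later drains the dirty tracks in a batch. This is purely illustrative; the class and track names are invented.

```python
# Illustrative PPRC-XD model: non-synchronous mirroring driven by a
# per-volume bitmap of updated tracks (cf. Figure 6-4).

class XDPair:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.dirty = set()   # bitmap of tracks updated since the last cycle

    def host_write(self, track, data):
        self.primary[track] = data    # steps 1-2: local write, mark bitmap
        self.dirty.add(track)
        return "write complete"       # the host never waits on the link

    def background_cycle(self):
        """Steps 3-4: ship the current batch of updated tracks."""
        batch, self.dirty = self.dirty, set()
        for track in batch:
            self.secondary[track] = self.primary[track]
        return len(batch)

pair = XDPair()
pair.host_write("t1", b"a")
pair.host_write("t2", b"b")
assert pair.secondary == {}           # fuzzy: the secondary lags
assert pair.background_cycle() == 2   # one cycle ships both updated tracks
assert pair.secondary == pair.primary
```

Between cycles the secondary is out of date (the "fuzzy" copy discussed later), which is exactly the trade the non-synchronous mode makes for distance and throughput.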

[Figure: non-synchronous mode. (1) The application writes data to the primary volume and (2) immediately gets "write completed"; (3) the application site ESS later writes the data over fiber links to the secondary volume at the recovery site ESS, which (4) reports "write completed" back to the primary. For the application, the write operation is complete once the updates are done to the primary volume.]

Figure 6-4 PPRC-XD operation

With PPRC-XD there is an addition to the traditional PPRC volume states (refer to Figure 6-3). The additional state is the duplex-pending XD state (refer to Figure 6-5). While in this duplex-pending XD state, PPRC is doing non-synchronous mirroring of primary volume updates to the secondary site. PPRC-XD will periodically cycle through the bitmap of each volume for updated tracks and place them in a batch for copying to the secondary. The pair will stay in this state until a command is issued to go into synchronous mode (duplex state), or a command is issued to suspend the pair (suspended state), or a command is issued to delete the PPRC-XD pair (simplex state).

[Figure: the duplex-pending XD volume state. Primary volumes at the application site, with their track bitmaps, are mirrored non-synchronously to secondary volumes at the recovery site; the pair is not in sync, and the secondary copy is fuzzy.]

Figure 6-5 Duplex-pending XD volume state

Catch-up operation (go-to-SYNC)

PPRC-XD catch-up is the name of the transition that occurs when a PPRC-XD pair goes from its normal out-of-sync condition to a full synchronous condition. At the end of this transition, primary and secondary volumes are fully synchronized. The catch-up transition can be accomplished by commanding PPRC to go-to-SYNC, so the volume pair leaves the duplex-pending XD state and reaches the duplex state. From this moment on, if the pair is not immediately suspended, primary write updates are synchronously transmitted to the recovery site. The catch-up transition can also be accomplished by temporarily quiescing the application writes to the primary volumes and waiting for PPRC-XD to finish synchronization.

When doing the catch-up operation by commanding PPRC to go-to-SYNC, you will not want any synchronous copy operations to occur if the volumes being mirrored are separated by long distances, beyond the 103 km. For this there is a copy option that allows you to ask PPRC to suspend the pair as soon as synchronization is established. Figure 6-6 shows the basics of PPRC-XD.


[Figure: PPRC-XD basic operation. A pair moves from the simplex (or suspended) state through an initial copy or incremental resynchronization with the XD option, then remains in the duplex-pending XD state while only changed tracks are transferred non-synchronously; via a go-to-SYNC catch-up it can reach full duplex, be suspended, or have the pair deleted (back to simplex). Non-synchronous mirroring frees application writes from synchronous overheads; PPRC-XD periodically cycles through the volumes for updated tracks and schedules them for subsequent transfer to the secondaries; the fuzzy secondary copy is made consistent with an adequate catch-up; volumes remain in the duplex-pending XD state until commanded to exit.]

Figure 6-6 PPRC-XD Basic operation

Data consistency

While volume pairs are in the duplex-pending XD state, application writes to the primary volumes are copied to the secondary volumes non-synchronously. This means that the secondary volumes hold a fuzzy copy of the data: application-dependent writes are not assured to be applied in the same sequence on the secondary volumes as they were written on the primary volumes. Moreover, at any time there will be a certain amount of application-updated data that is not reflected at the secondary volumes; this data corresponds to the tracks that were updated since the last volume bitmap scan was done.

The catch-up transition will bring the pairs back to synchronous, either by setting them to duplex or by stopping write updates to the primary. When reaching the duplex state, the pairs can be temporarily suspended to flash the secondary before resuming the PPRC-XD relation between the pairs. The FlashCopy will be a consistent point-in-time tertiary copy. Or, if it is a database application and the archive logs are being mirrored, then once the pairs catch up and are suspended, the logs can be applied on the shadow database.

6.4 FlashCopy

Today, data processing centers increasingly require applications to be available 24 hours per day, 7 days per week, with minimal application downtime for data backups or copies. FlashCopy allows the user to implement such solutions. FlashCopy provides an instantaneous point-in-time copy of data, also called a T0 copy. FlashCopy enables applications that use either the source copy or the target copy to have them available almost immediately. Only a minimal interruption to the application is required for the FlashCopy relationship to be established. The copy is then created under the covers by the IBM TotalStorage Enterprise Storage Server with minimal impact on other ESS activities. FlashCopy may be used in conjunction with either local or remote copies of data created by PPRC.

Note that a new version of FlashCopy (FlashCopy V2) has recently been announced, which eliminates some restrictions of FlashCopy V1 and adds some new functionality. The following brief overview of FlashCopy is largely worded in terms of FlashCopy V1, for simplicity, but we will specifically identify when additional FlashCopy features are available with FlashCopy V2. Additional details on FlashCopy V2 can be found in the redbook Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680.

FlashCopy operates at the logical volume level (FlashCopy V2 may be invoked at the level of volume extents, or even datasets). It has the following restrictions:

- The source and target volumes must have the same track format.
- The target volume must be at least as large as the source volume.
- The source and target volumes must be within the same ESS logical subsystem (LSS). With FlashCopy V2 this condition is relaxed so that the source and target only need to be within the same ESS.
- A source and target volume can only be involved in one FlashCopy relationship at a time. With FlashCopy V2, it is possible for a volume to have multiple FlashCopy relationships.

When FlashCopy is invoked, the ESS establishes a FlashCopy relationship between the target volume and source volume. This relationship exists from the time you initiate a FlashCopy operation until the ESS copies all data from the source volume to the target volume. You may optionally request FlashCopy not to execute the background copy (the NOCOPY option), in which case the relationship must be specifically withdrawn in order to terminate it. During the establish phase of the FlashCopy relationship, a bitmap is created for the relationship. This bitmap is used by the ESS microcode to map source and target volumes as they were at the time when the FlashCopy was requested (the T0 copy), as well as to manage subsequent reads and updates to the source and target volumes. As soon as the relationship is established, both volumes become available for application use, and the optional physical copy phase begins. (Note that applications do not have to be stopped for FlashCopy to be initiated. However, consider that only data on the physical disk is copied; data held in application buffers is not. For this reason, you will probably need to quiesce the applications while the FlashCopy relationship is being established.)

FlashCopy background copy mode (default)

The background copy mode causes all tracks in the source volume to be physically copied to the target volume. The background copy task runs at a lower priority than normal I/O on the ESS, thus ensuring minimal impact on normal application I/O processing. Using the bitmap, the ESS keeps track of which data has been copied from source to target, and manages the integrity of both copies. If an application wants to read some data from the target that has not yet been copied, the data is read from the source; otherwise, the read is satisfied from the target volume. Before updating a not-yet-copied track on the source volume, the ESS does an on-demand copy of the track to the target volume, thus preserving the T0 copy. Subsequent reads of this track on the target volume will be satisfied from the target volume, and subsequent updates to this track on the source volume are ignored by FlashCopy, thus preserving the T0 copy. Before updating a not-yet-copied track on the target volume, the ESS will likewise perform an on-demand copy of this track to the target volume. After some time, all tracks will have been copied to the target volume, and the FlashCopy relationship will automatically end. Figure 6-7 illustrates how FlashCopy works when background copy is requested.
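The bitmap-driven behavior described above can be sketched conceptually: reads of the target fall through to the source until a track has been copied, and a source update first forces an on-demand copy so the T0 image is preserved. This is a simplified model, not the ESS implementation, and all names in it are invented.

```python
# Conceptual FlashCopy model (cf. Figure 6-7): 'copied' plays the role
# of the bitmap tracking which tracks have reached the target.

class FlashCopy:
    def __init__(self, source):
        self.source = source            # dict: track -> data (T0 state)
        self.target = {}
        self.copied = set()             # bitmap: tracks already on target

    def read_target(self, track):
        if track in self.copied:
            return self.target[track]
        return self.source[track]       # not yet copied: read the source

    def write_source(self, track, data):
        if track not in self.copied:    # on-demand copy preserves T0
            self.target[track] = self.source[track]
            self.copied.add(track)
        self.source[track] = data

src = {"t1": "old1", "t2": "old2"}
fc = FlashCopy(src)
fc.write_source("t1", "new1")          # triggers an on-demand copy of t1
assert fc.read_target("t1") == "old1"  # target still shows the T0 image
assert fc.read_target("t2") == "old2"  # uncopied track is read from source
assert src["t1"] == "new1"
```

A background copy task would simply walk all tracks, moving each uncopied one to the target and setting its bit, until the relationship ends on its own.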

69

Figure 6-7 FlashCopy point-in-time copy (when the FlashCopy command is issued, the copy is immediately available; reads and writes to both source and target are possible; when the copy is complete, the relationship between source and target ends)
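As a rough illustration of the bitmap-driven copy-on-write behavior described above, the following toy model shows how reads and writes are redirected while tracks remain uncopied. This is not ESS microcode; the class, names, and track granularity are invented purely for illustration.

```python
# Toy model of a FlashCopy relationship: each bitmap bit records whether
# a track has been copied to the target yet (hypothetical, for illustration).
class FlashCopyRelation:
    def __init__(self, source, target):
        self.source = source                 # track contents at T0
        self.target = target                 # target volume, undefined until copied
        self.copied = [False] * len(source)  # the bitmap built at establish time

    def read_target(self, track):
        # A read of a not-yet-copied target track is satisfied from the source.
        if self.copied[track]:
            return self.target[track]
        return self.source[track]

    def write_source(self, track, data):
        # Before updating a not-yet-copied source track, the old (T0) data
        # is copied on demand to the target, preserving the T0 copy.
        if not self.copied[track]:
            self.target[track] = self.source[track]
            self.copied[track] = True
        self.source[track] = data

    def background_copy_step(self, track):
        # The low-priority background task copies tracks not yet copied.
        if not self.copied[track]:
            self.target[track] = self.source[track]
            self.copied[track] = True

    def complete(self):
        # When every track is copied, the relationship ends automatically.
        return all(self.copied)
```

After `write_source(0, ...)`, a read of track 0 on the target still returns the T0 contents, which is the essential property of the point-in-time copy.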

FlashCopy no-background copy mode (NOCOPY)
The NOCOPY option causes the FlashCopy relationship to be established without initiating a background copy: only the source tracks that are updated are copied to the target. This may be useful if you need the copy only for a short time, such as making a backup to tape. While this relationship exists, before a track is updated on the source volume, the ESS does an on-demand copy of the track to the target volume, thus preserving the T0 copy. Similarly, before updating a track on the target volume, the ESS performs an on-demand copy of this track to the target volume. A FlashCopy relationship established in no-background copy mode remains active until one of the following occurs:
- A FlashCopy withdraw is issued to remove the relationship.
- All source device tracks have been copied onto the target device because they were updated.
- All target device tracks have been updated by user applications.
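The three termination conditions for a NOCOPY relationship can be summarized in a small helper. This function and its parameter names are invented for illustration; it is not an ESS interface.

```python
# Hypothetical helper evaluating the NOCOPY termination conditions.
def nocopy_relation_ended(withdrawn, source_tracks_copied,
                          target_tracks_written, total_tracks):
    """A NOCOPY relationship ends when it is explicitly withdrawn, when
    every source track has been copied on demand (because it was updated),
    or when every target track has been overwritten by applications."""
    return (withdrawn
            or source_tracks_copied == total_tracks
            or target_tracks_written == total_tracks)
```

For example, a relationship with only 10 of 100 source tracks copied and no withdraw issued is still active.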

Incremental FlashCopy
FlashCopy V2 has introduced a new function known as Incremental FlashCopy, which provides the capability to refresh a FlashCopy relationship. With Incremental FlashCopy, the initial FlashCopy relationship between a source and a target is maintained. When a subsequent FlashCopy establish is initiated, only the data required to bring the target current to the source's newly established point-in-time is copied. Incremental FlashCopy helps reduce the background copy completion time when only a subset of data on either the source or target has changed. Figure 6-8 gives a brief overview of how Incremental FlashCopy works.

Figure 6-8 Incremental FlashCopy (FlashCopy is initiated from volA to volB and change recording starts on both volumes while the background copy task copies data from volA to volB; once FlashCopy is established, independent updates can proceed to volA and to volB; when an Incremental FlashCopy is later initiated from volA to volB, the background copy task copies only the changed tracks, and change recording is reset to track new changes, ready for the next incremental FlashCopy request)
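The effect of change recording on a refresh can be sketched as follows. This is a simplified model: the real ESS keeps per-track change-recording bitmaps in microcode, and the set-based representation here is an assumption made for clarity.

```python
# Toy model of Incremental FlashCopy change recording (illustrative only).
def tracks_to_recopy(source_changed, target_changed):
    """On an incremental re-establish, only tracks changed on the source
    or overwritten on the target since the last establish need copying to
    bring the target to the new point-in-time."""
    return sorted(source_changed | target_changed)

# After the initial establish, change recording starts on both volumes:
source_changed = {3, 7}   # tracks updated on volA since the last FlashCopy
target_changed = {7, 9}   # tracks updated on volB since the last FlashCopy

# The refresh copies only the union of the changes, then recording resets:
assert tracks_to_recopy(source_changed, target_changed) == [3, 7, 9]
```

With only three of possibly thousands of tracks to re-copy, the background copy of the refresh completes far sooner than a full establish would.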

When FlashCopy is established for a large number of volumes, there is a finite amount of time between the first and the last establish, so the copies may not be created at a consistent point-in-time. FlashCopy V2 introduces a Consistency Group function, which can be used to create a consistent point-in-time copy across multiple volumes.

6.5 ESS Copy Services Web User Interface
The ESS Copy Services code that runs in the ESS provides a Web interface that can be used to invoke and manage the ESS copy functions. First, one ESS has to be defined as the ESS Copy Services Server (CSS); this ESS holds all ESS Copy Services related information. Optionally, a second ESS can be defined as the backup ESS Copy Services Server. The recommendation is to define the Copy Services Server at the remote (recovery) site and the backup CSS at the primary (application) site. Setting up the CSS in the ESS is done by an IBM Service Support Representative.

To access the Copy Services Web User Interface (WUI), you first open the ESS Welcome screen, discussed earlier in 4.3.3, "ESS Specialist" on page 44, and shown in Figure 6-9.

Chapter 6. ESS Copy Services and Linux


Figure 6-9 ESS Welcome screen

You then select Copy Services, which will connect your browser to the ESS that is specified as the Copy Services Server. The main menu of the ESS Copy Services Web User Interface will be displayed (refer to Figure 6-10).

Figure 6-10 Main menu ESS Copy Services Web User Interface


From the Copy Services main menu, you can access all ESS Copy Services panels by selecting one of the buttons on the left-hand side. We will briefly look at each of these. For further details please refer to IBM TotalStorage Enterprise Storage Server Web Interface User’s Guide, SC26-7448, and Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680.

The Volumes panel
The Volumes panel is shown in Figure 6-11. From this panel you can:
- View information and the status of volumes defined in a logical subsystem (LSS) of the ESS
- Select source and target volumes for a PPRC or FlashCopy task
- Filter the output of the volume display to a selected range
- Search for a specific volume based on its unique volume ID
- Establish, terminate, and suspend PPRC copy pairs, and optionally save the task
- Establish and withdraw FlashCopy pairs, and optionally save the task
- Enter the multiple selection mode for PPRC and FlashCopy

Figure 6-11 Volumes panel

The Logical Subsystems panel
The Logical Subsystems panel is shown in Figure 6-12. This panel displays the ESSs and the logical subsystems within the ESS storage network; the storage network includes all ESSs that are configured to use the same ESS Copy Services Server. From the Logical Subsystems panel you can:
- View all ESSs within the storage network
- View all logical subsystems (LSSs) within the storage network
- Get information about a logical subsystem and its status
- View and modify the copy properties of a logical subsystem
- Filter the output to a selected range
- Search for a specific logical subsystem based on its unique address
- Establish, terminate, and suspend PPRC copy pairs, and optionally save the task
- Perform operations on consistency groups

Figure 6-12 Logical Subsystems panel

The Paths panel
A path is used to send data between the primary and secondary volumes of PPRC pairs. The physical path consists of the ESCON connection between two ESSs, while a logical path describes the connection of the PPRC source and targets. Note that before you can create PPRC copy pair relationships, you must use the Paths panel to establish logical paths between the source and target LSSs. The Paths panel is shown in Figure 6-13. From this panel you can:
- Establish PPRC paths
- Remove PPRC paths
- Create a PPRC consistency group
- View information about paths

Figure 6-13 Paths entry panel

The Tasks panel
By using the Volumes, Logical Subsystems, and Paths panels you can save the setup of any data copy action as a task. A task can be any kind of FlashCopy, PPRC, or path operation. Using the Tasks panel you can run and modify saved tasks. You can also group multiple tasks into a single task group; this could be the case if multiple FlashCopy pairs from different LSSs have to be established at the same time in order to do a backup. The group of tasks can then be run by selecting the task group and running it. The Tasks panel is shown in Figure 6-14.


Figure 6-14 Tasks panel

6.6 Copy Services command line interface
Using the ESS Copy Services command line interface (CLI), users of selected open systems servers can communicate with the ESS Copy Services Server from the host's command line to manage PPRC and FlashCopy. For example, any of the tasks you have saved using the Web User Interface can be invoked from the host through a command such as rsExecuteTask. The CLI is a Java-based application that runs on the host server. The host does not necessarily need to be connected to storage assigned to one of the host adapter ports of the ESS; the only requirement is that the host from which you want to invoke the commands is connected through the network to the ESS that is defined as the primary ESS Copy Services Server. Detailed information on the ESS Copy Services command line interface can be found in IBM TotalStorage Enterprise Storage Server Command Line Interface User’s Guide, SC26-7494. Details of which open systems servers support the CLI can be found in the "ESS Interoperability Matrix" located at the Web site below. Supported servers include Red Hat Linux and SuSE Linux.
http://www.storage.ibm.com/disk/ess/supserver.htm
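As a sketch only, a saved task might be invoked from a host-side script built along these lines. The script name rsExecuteTask.sh and the option letters shown here are assumptions based on typical CLI conventions, not the documented syntax; consult SC26-7494 for the exact command format.

```python
# Hypothetical sketch: composing an ESS CLI invocation to run a saved task.
# Script name and flags (-u, -p, -s) are assumptions; see SC26-7494.
def build_execute_task(task_name, css_server, user, password):
    return [
        "rsExecuteTask.sh",   # CLI script assumed to ship with the CLI package
        "-u", user,           # Copy Services user name (assumed flag)
        "-p", password,       # password (assumed flag)
        "-s", css_server,     # primary Copy Services Server (assumed flag)
        task_name,            # task saved earlier through the WUI
    ]

cmd = build_execute_task("flash_db_lss10", "css.example.com", "admin", "secret")
```

A host-side scheduler could pass such a command line to the shell, for example to refresh a FlashCopy-based backup nightly.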


6.7 ESS Copy Services available with Linux on zSeries
There are several ways of invoking and managing the various ESS Copy Services functions in the different operating environments. For PPRC and FlashCopy, the most commonly used are the Copy Services Web User Interface (WUI) and the Copy Services command line interface (CLI). The only exception is the z/OS environment, where TSO commands provide a convenient management mechanism. Table 6-1 summarizes the facilities available for invoking and managing PPRC, PPRC-XD, and FlashCopy for Linux on zSeries. Some further details for the items listed in that table are:
- WUI: Web User Interface (refer to 6.5, "ESS Copy Services Web User Interface" on page 71)
- CLI: command line interface (refer to 6.6, "Copy Services command line interface" on page 76). Note that the CLI is run on a supported open systems server connected to the ESS, and not on the Linux for zSeries server itself.
- VM ICKDSF: In a VM environment, some PPRC functions can be performed using the ICKDSF utility. For full details see Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680.
- VM CP flashcopy: In a VM environment, FlashCopy is supported using the CP command flashcopy. For full details see Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680.

Table 6-1 Managing PPRC, PPRC-XD, and FlashCopy on Linux for zSeries

                                    Linux running in LPAR             Linux running under VM
  ESS connection via ESCON/FICON    WUI and CLI (SuSE SLES 8 only)    WUI and CLI; VM ICKDSF (for PPRC); VM CP flashcopy (for FlashCopy)
  ESS connection via FCP            WUI and CLI (SuSE SLES 8 only)    not formally supported

Note that theoretically the WUI and CLI can be used for invoking and managing ESS Copy Services functions for any Linux on zSeries system, regardless of the kind of attachment (ESCON/FICON or FCP), and regardless of whether Linux is running as a VM guest or natively in an LPAR. This is because the WUI and CLI interface directly with the ESS, without reference to the host operating environment. However, IBM has only tested these functions (and only provides official support for them) as per the details in Table 6-1.

Note that if the Linux for zSeries volumes are CKD (connected by ESCON or FICON), and they are defined to a z/OS system via the HCD, then copy services functions can be invoked and managed for those volumes using the z/OS system. (For details of z/OS facilities for managing copy services see Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680.) Obviously, this approach introduces operational complexities, particularly for the z/OS system, and especially for functions that need to be performed on a regular basis. For this approach, it would be desirable to format the Linux volumes using the Linux on S/390 Compatible Disk Layout (CDL).


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see "How to get IBM Redbooks" on page 80. Note that some of the documents referenced here may be available in softcopy only:
- Linux Handbook: A Guide to IBM Linux Solutions and Resources, SG24-7000
- Implementing Linux with IBM Disk Storage, SG24-6261-01
- IBM TotalStorage Enterprise Storage Server Model 800, SG24-6424
- Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680-01
- IBM TotalStorage Enterprise Storage Server: Implementing the ESS in Your Environment, SG24-5420
- IBM TotalStorage Enterprise Storage Server Web Interface User's Guide, SC26-7346
- Linux for S/390, SG24-4987
- Linux for IBM zSeries and S/390: Distributions, SG24-6264
- Linux on IBM zSeries and S/390: Large Scale Linux Deployment, SG24-6824

Other publications
These publications are also relevant as further information sources:
- IBM TotalStorage Enterprise Storage Server Web Interface User's Guide, SC26-7448
- ESS Interoperability Matrix and Host Systems Attachment Guide, SC26-7446
- IBM TotalStorage Enterprise Storage Server Command Line Interface User's Guide, SC26-7494

Online resources
These Web sites and URLs are also relevant as further information sources:
- IBM Linux Web site: http://www.ibm.com/linux/
- The Linux Technology Center: http://www.ibm.com/linux/ltc
- The IBM TotalStorage Web site: http://www.storage.ibm.com/
- The IBM TotalStorage SAN fabric Web site: http://www.storage.ibm.com/ibmsan/products/sanfabric.html
- The IBM eServer Web site: http://www.ibm.com/eserver

How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM
IBM Support and downloads: ibm.com/support
IBM Global Services: ibm.com/services


Index A addressing storage 49–50 applications 31

B background copy mode 69 BIND (Berkeley Internet Name Daemon) 5 Bourne Again Shell (Bash) 3 BSD (Berkeley Systems Distribution) 2

C cache 41 Cascading PPRC 65 CCW 48, 50–51 channel 48 CHPID (channel path identifier) 49, 53 CKD 48, 51, 53, 55 cluster 13, 36 command line interface (CLI) 77 Concurrent Copy 43 Consistency Group 71 consolidation 29, 40 storage 40 control unit 49 copy functions 42, 63 dynamic 63 point-in-time 63 Copy Services 45, 61–62, 71 Concurrent Copy 43 Extended Remote Copy 43 FlashCopy 43, 68 Peer-to-Peer Remote Copy 43, 63 Peer-to-Peer Remote Copy Extended Distance 65

D data management services for Linux 15 DB2 for Linux Migration Services 15 support for Linux 15 DDM (disk drive module) 39 device number 49 DHCP 5 distribution 1, 3, 10, 13 domain name server (DNS) 5 drawer 39 dynamic 63

E

ESS 10, 38 Specialist 44, 53 Extended Remote Copy 43

F FAStT 10 FB (fixed-block) 54 FCP 48–50, 52 Fibre Channel Protocol. See FCP FICON 40–41, 48, 53 file and print serving 5 file structure 4 firewall 5 FlashCopy 43, 68 incremental 70

G GNU 1–3 GNU C Compiler (gcc) 4 GPL (General Public License) 1–2 guest 25, 27, 29 guest LAN 28

H high availability 4 Hipersocket 28, 30 host adapters 55 Host type 55

I IBM Global Services 12 Linux Consultancy 13 services for Linux 9 Services introduction 12 IBM Global Services 12 IBM Support Line 12 IBM TotalStorage Proven 37 ICKDSF 77 IDS (Intrusion Detection Services) 5 IFL.See Integrated Facility for Linux infrastructure 4 Integrated Facility for Linux 25 IOCP 53 iSeries iSeries Linux information matrix 20 iSeries services for Linux 19 Linux services for the iSeries 19 IUCV (Inter-user communication vehicle) 28

enclosure 39 ESCON 40, 48

© Copyright IBM Corp. 2003. All rights reserved.

81

J

router 5

JFS (Journaled File System) 11

S K kernel 1–3, 27

L legacy applications 14 Linux applications 31 cluster 13 distribution 1, 3, 13 guest 25 Integrated Feature for 14 Web site platform matrix 21 Linux Technology Center (LTC) 11 LinuxWorld 10 logical subsystems 73 Lotus IBM’s Linux services for 18 LPAR 24, 53 LUN 49–50, 57

M Minix 2 mirroring 63

N NVS 64

O on demand capacity 43 open source 1, 3, 11

T TCSH shell 3 Technology Center 10 Technology Partner 6 Tivoli 15–16, 18, 33 Linux supported Tivoli products 15

U UnitedLinux 6 UNIX 2–3

V

P path 74 Peer-to-Peer Remote Copy 43, 63 Peer-to-Peer Remote Copy Extended Distance 65 personal Linux education plan 19 point-in-time 63 point-in-time copy 70 PPRC path 74 PPRC Extended Distance 64 PPRC. See Peer-to-Peer Remote Copy PPRC-XD. See PPRC Extended Distance PROP 33 Proven 37 pSeries Linux for pSeries offerings 20

R RAID 36, 41–42, 56 Redbooks Web site 80 Contact us xii

82

Samba 31 SAN (Storage Area Network) 43 scalability 29 SCSI 49 Seascape 38 server consolidation 4, 40 services and IBM Software 14 IBM Managed Hosting 19 IBM’s Linux Learning services 19 shell 3–4 Bash 3 TSCH 3 SSA 39, 41 loop 39 storage consolidation 40 support IBM’s Linux Support Line 18 Support Line 13 systems management 29

Linux with zSeries and ESS - Essentials

vCTC (Virtual channel-to-channel) 28 Virtual Machine 27 virtual network 28 VM. See Virtual Machine volume 57, 63 primary 64, 68 secondary 64, 68 source 69 target 69

W Web serving 5 WebSphere product services matrix for Linux 14 services for Linux 14 World Wide Name.See WWN World Wide Node Name. See WWNN World Wide Port Name. See WWPN WWN 50 WWNN 50

WWPN 55

X xSeries xSeries Linux information matrix 20 xSeries services for Linux 20

Z z/VM 24, 26–27 guest 27, 29 zSeries zSeries Linux information matrix 19 zSeries services for Linux 19

Index

83

84

Linux with zSeries and ESS - Essentials


Back cover


Linux with zSeries and ESS: Essentials Enterprise-class solutions with Linux and IBM TotalStorage Focus on zSeries and ESS FCP support for Linux on zSeries

This IBM Redbook presents high-level information on Linux in conjunction with IBM eServer and TotalStorage products, giving proof points that these products can be deployed together to provide enterprise-class solutions. In particular, this redbook looks at Linux with the zSeries servers and the IBM TotalStorage Enterprise Storage Server (ESS) disk products. The book describes the Fibre Channel Protocol (FCP) support now available when connecting the ESS to a zSeries host running Linux. The last chapter describes the ESS Copy Services available for Linux on zSeries. This book is intended as a starting point and reference for IBM representatives, Business Partners, or clients who are planning Linux-based solutions with IBM zSeries servers and ESS storage products. Most of the information contained in this book is a compilation of material from the Linux Handbook, SG24-7000, and Implementing Linux with IBM Disk Storage, SG24-6261-01. We encourage the reader to refer to those IBM Redbooks for more complete information, or for implementation details.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

SG24-7025-00

ISBN 0738499986