Migration to IBM Tivoli Configuration Manager Version 4.2 9780738428307


Front cover

Migration to IBM Tivoli Configuration Manager Version 4.2

Learn the business reasons for migration to ITCM

Software Distribution, Inventory and Framework migration

Covers best practices for migration

Vasfi Gucer Omer Alti Chris Chan Murtuza Choilawala Ntokozo Xaba Tina Yee

ibm.com/redbooks

International Technical Support Organization

Migration to IBM Tivoli Configuration Manager Version 4.2

December 2002

SG24-6616-00

Note: Before using this information and the product it supports, read the information in “Notices” on page xiii.

First Edition (December 2002)

This edition applies to Tivoli Software Distribution 3.6, Tivoli Inventory 3.6, Tivoli Framework 3.6 and IBM Tivoli Configuration Manager 4.2.

© Copyright International Business Machines Corporation 2002. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures  vii
Tables  ix
Examples  xi
Notices  xiii
Trademarks  xiv
Preface  xv
The team that wrote this redbook  xv
Become a published author  xvii
Comments welcome  xvii

Chapter 1. New features and the business value of migration  1
1.1 Business at faster pace while driving costs down  2
1.2 Factors driving success in business environments  3
1.3 The ITCM 4.2 solutions in business environment  6
1.4 New since Software Distribution 3.6, Inventory 3.6  7
1.4.1 Software Distribution 4.0  7
1.4.2 Inventory 4.0  11
1.4.3 Software Distribution 4.1  13
1.5 New since Software Distribution 4.1, Inventory 4.0  15
1.5.1 Major Features of IBM Tivoli Configuration Manager 4.2  16
1.6 Variances of ITCM and Software Distribution 3.6.X  22
1.6.1 General  22
1.6.2 Software packages  23
1.6.3 Source host component  23
1.6.4 Software distribution targets  24
1.7 Summary  24

Chapter 2. All about MDist2 and a comparison with MDist1  25
2.1 Introduction to Multiplexed Distribution  26
2.2 MDist and distribution hierarchy  26
2.2.1 Repeater configuration and placement  28
2.2.2 Using Always and WAN options for interconnected TMRs  28
2.2.3 Repeater tuning in MDist1  33
2.2.4 Timeouts and repeaters  35
2.2.5 Active distributions  39
2.3 MDist2 components and functionalities  40
2.3.1 What is MDist2?  40
2.3.2 MDist2 components  41
2.3.3 What is new in MDist2?  42
2.3.4 Asynchronous delivery  43
2.3.5 Priority queues  46
2.3.6 Total resource limits per repeater  50
2.3.7 Data depots  60
2.3.8 Checkpoint restart  71
2.3.9 Assured delivery  72
2.3.10 Disconnected endpoint support  76
2.3.11 Mobile computing support  77
2.3.12 Distribution control and status  78

Chapter 3. Planning and methodology for upgrading  83
3.1 Introduction  84
3.2 Strategy and planning  85
3.3 Compatibility matrix - operating systems, databases  85
3.4 Plans for back-level systems  86
3.4.1 A "home style" approach  86
3.4.2 A "one that stayed behind" approach  87
3.4.3 A "decommission first, upgrade later" approach  87
3.5 Testing in a lab - Tivoli Database component  88
3.6 Considerations for upgrading  90
3.6.1 Interoperability of components during the upgrade  90
3.6.2 Inventory  90
3.6.3 Software Distribution  91
3.6.4 Schema changes  93
3.6.5 Query libraries  93
3.6.6 Inventory issues  93
3.6.7 Software Distribution issues  95
3.6.8 Integrated upgrade issues  95
3.7 Tivoli Management Framework  96
3.8 Upgrading Inventory  96
3.9 Upgrading Software Distribution  97
3.9.1 From Version 3.6.2 to Version 4.2  97
3.9.2 From Version 4.0 to Version 4.2  98
3.9.3 From Version 4.1 to Version 4.2  99
3.10 Migration scenarios  101
3.10.1 Scenario 1 - Shadow migration  101
3.10.2 Scenario 2 - Migration from all 3.6.x applications to 4.2  104

Chapter 4. Upgrade Tivoli Management Framework to 4.1  111


4.1 Is this an upgrade or a migration?  112
4.2 Overview of the TME Infrastructure upgrade  112
4.3 Considerations for upgrading in phases  113
4.4 Recommendations before the upgrade  115
4.5 Upgrading to Framework 4.1  117
4.5.1 Upgrade from 3.6.X of Tivoli Management Framework  117
4.5.2 Upgrade from 3.6.5 of Tivoli Management Framework  119
4.5.3 Upgrade from 3.7B and 3.7.1 of Tivoli Management Framework  121
4.6 Post upgrade tasks  122

Chapter 5. Package migration  123
5.1 Deciding on the migration of packages  124
5.2 Planning for the package migration  124
5.2.1 Setup the migration environment  124
5.3 Mapping package differences  127
5.3.1 Software package attribute mapping  127
5.3.2 Commands translation  128
5.3.3 Package name translation  128
5.3.4 Destination name translation  128
5.3.5 Software Distribution actions  128
5.4 Migration  129

Abbreviations and acronyms  133

Related publications  135
IBM Redbooks  135
Other resources  135
Referenced Web sites  135
How to get IBM Redbooks  136
IBM Redbooks collections  136

Index  137


Figures

1-1 Web Interface access control with WebSEAL  17
1-2 Web Interface  18
1-3 Resource Manager  19
1-4 Directory query library and Directory queries  21
1-5 Tivoli Desktop showing some of the new features of ITCM  23
2-1 Always flag setting example  29
2-2 Using WAN option for interconnected TMRs  30
2-3 Net_load is distributed between connections of a single distribution  38
2-4 MDist2 components  41
2-5 Asynchronous delivery concept  44
2-6 Synchronous delivery concept  44
2-7 MDist2 repeater queue  46
2-8 Available connections for each priority  47
2-9 Software Distribution GUI - Install Software Package: setting priority  49
2-10 MDist2 maximum concurrent connections  51
2-11 MDist1 max_conn configuration - multiple distribution scenario  53
2-12 MDist2 net_load / target_netload concepts  54
2-13 MDist1 positive net_load - multiple distribution scenario  56
2-14 MDist1 negative net_load - multiple distribution scenario  57
2-15 MDist2 configuration - mem_max and disk_max  58
2-16 MDist1 configuration - mem_max, disk_max in multiple distribution  59
2-17 Depot concepts  61
2-18 Depots between repeaters  61
2-19 Software Distribution GUI, Install Software Package with depot  66
2-20 Depot directory  68
2-21 An example of a large-scale distribution environment  68
2-22 Software Distribution scenario using a depot  70
2-23 Implementation of checkpoint restart  71
2-24 Retry option - gateway repeater and endpoint  73
2-25 Retry option - endpoint gateway and another repeater  74
2-26 Software Distribution GUI - Time-out Settings  75
2-27 Automated Software Distribution scenario at power-on  77
2-28 Distribution statuses  80
3-1 Scenario 2  104
4-1 Tivoli Management Framework upgrade paths  113
5-1 Managed Resources  125
5-2 Managed Resources menu  126
5-3 Select Create->ProfileManager  126
5-4 Create Profile Manager  127
5-5 The migrated software package properties  131
5-6 Launch Software Package Editor  132

Tables

2-1 Repeater flags  28
2-2 Timeout for distributions  37
2-3 Differences between gateway and managed node repeaters  42
2-4 MDist1 and MDist2 - comparison  43
2-5 Relationship between the MDist2 net_load and the connections  54
2-6 Relationship of MDist2 net_load, target_net_load, and the connection  55
2-7 Relation of disposable option and permanent_storage configuration  67
5-1 Software Distribution mapping  129

Examples

2-1 Information related to repeater configuration in the gatelog  26
2-2 Repeater configuration for interconnected TMRs  30
2-3 Endpoint ranges with wrptr -ge  31
2-4 Endpoint ranges with wrpt -e  31
2-5 Finding distribution route for endpoints  31
2-6 Finding route to an endpoint in an interconnected TMR  32
2-7 Repeater parameters for aix-tmr1b  33
2-8 Changing the swap directory for NT managed nodes  34
2-9 Global Timeout parameter  36
2-10 Setting timeout parameter for Inventory profiles  36
2-11 Changing the distribution packet size  38
2-12 Active distributions  39
2-13 Distribution log file  45
2-14 Changing priority  47
2-15 The .toc and .dat files  62
2-16 Depot configuration  63
2-17 wldsp command  64
2-18 wdepot command  65
5-1 wfptosp  129

Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, AS/400®, DB2®, IBM®, IBM eServer™, Informix®, iSeries™, NetView®, OS/2®, OS/400®, Perform™, Redbooks (logo)™, Sequent®, SP™, ThinkPad®, Tivoli®, Tivoli Enterprise™, Tivoli Enterprise Console®, Tivoli Management Environment®, TME®, TME 10™, WebSphere®, z/OS™

The following terms are trademarks of International Business Machines Corporation and Lotus Development Corporation in the United States, other countries, or both: Notes®

The following terms are trademarks of other companies:

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.


Preface

Tivoli Inventory and Tivoli Software Distribution have evolved to become smarter, faster, and more efficient since the earlier 3.6.X versions. IBM Tivoli Configuration Manager Version 4.2 uses all the best features of these post-3.6 versions and also adds new features and enhancements to create a powerful deployment, change, and asset management suite.

This IBM Redbook explains both the business reasons and the technical implementation details for migrating from Software Distribution and Inventory 3.6.X to IBM Tivoli Configuration Manager Version 4.2. The topics include:

- Business reasons for migration
- Functional and architectural differences between IBM Tivoli Configuration Manager and 3.6.X versions of Software Distribution and Inventory
- Planning and methodology of migration
- Framework migration
- Migration scenarios
- Package migration

This redbook will help you in all aspects of migration from Software Distribution and Inventory 3.6.X to IBM Tivoli Configuration Manager Version 4.2.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Vasfi Gucer is an IBM Certified IT Specialist working at the ITSO, Austin Center. He worked with IBM in Turkey for 10 years, and has been with the ITSO since January 1999. He has more than eight years of experience in systems management, networking hardware, and distributed platform software. He has worked on various Tivoli customer projects as a Systems Architect in Turkey and the U.S. Vasfi is also a Certified Tivoli Consultant.

Omer Alti is a Management Systems Integrator working for Reuters in London, UK. Reuters is the largest global information, news, and technology group, serving 151 countries. He has five years of experience in the Systems and Network Management area, working as a Design Architect and Project Leader in Turkey and a Developer in the UK. He holds a degree and postgraduate diploma in Electronics and Computing. His areas of expertise include Tivoli Framework, Software Distribution, Enterprise Console, Distributed Monitoring, NetView, and TIBCO.


Chris Chan is a Senior Software Engineer with Integrated Technology Services at IBM in Australia. He has worked for IBM for over eight years in various support roles and is currently with the Tivoli Support Team providing support to the Asia Pacific region. He has supported the Tivoli core products of Framework, Software Distribution, Inventory, Distributed Monitoring, Enterprise Console, Remote Control, and NetView since 1998.

Murtuza Choilawala is an Advisory Software Engineer in Level 2 Framework Support with IBM Tivoli. He has 10 years of experience in Information Technology and has provided technical support for Tivoli for the past two years. Before joining Tivoli, he worked with the IBM Advance Technical Support team in Rochester, Minnesota, providing technical support to IBM Business Partners and Software Developers on iSeries and AS/400 systems. He was also an AS/400 Product Manager while working with an IBM Business Partner in India.

Ntokozo Xaba is a technical team leader working for Nedcor Bank, Ltd., which has a large Tivoli deployment. Ntokozo is a Tivoli Certified Consultant for the Inventory and Software Distribution products and has over seven years of IT experience. He has spent the last five years on systems management project implementation. In addition to his Tivoli experience, he has experience with DB2 and MS SQL databases, Perl scripting, Microsoft SMS, and Novell ZENworks. He is also a CCNA (Cisco Certified Network Associate).

Tina Yee currently works with IBM Techline Tivoli as a Presales Support Specialist in Austin, TX. She interacts with Tivoli Sales Executives, Sales Engineers, and Business Partners as the first line of support for solutions consulting, marketing, proposals, demos, and technical issues. Her main area of concentration at Techline Tivoli is IBM Tivoli Configuration Manager and other configuration and operations products. She has been working for IBM since 1998.

Thanks to the following people for their contributions to this project:

Gabrielle Velez
International Technical Support Organization, Rochester Center

Jeff Achtermann, Debbie Bandera, Andrew Lucchesi
IBM USA

Peter Elliott
IBM UK

Antonella Paldera, Alessandro Raniolo
IBM Italy

The team also would like to thank the contributors of the following Field Guides:

- IBM Tivoli Configuration Manager Upgrade Scenarios
- Compatibility Guide for Tivoli Applications
- Tivoli Management Enterprise 3.7.1 Upgrade


Become a published author Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an Internet note to: [email protected]
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. JN9B Building 003 Internal Zip 2834, 11400 Burnet Road, Austin, Texas 78758-3493


Chapter 1. New features and the business value of migration

This chapter provides an in-depth look at IBM Tivoli Configuration Manager 4.2 and the value it can provide to businesses looking to raise the bar on service delivery. IBM Tivoli Configuration Manager 4.2 can help enterprises realize their ability to manage complexity in a dynamic business environment, maximize return on technology investments, and provide a secure, highly available infrastructure.

While this chapter is intended for audiences using Software Distribution 3.6 and Inventory 3.6, readers using other configuration management applications (or none at all) will find it very beneficial for understanding how IBM Tivoli Configuration Manager 4.2 will help realize the optimal return on investment.

We discuss these topics:

- "Business at faster pace while driving costs down" on page 2
- "The ITCM 4.2 solutions in business environment" on page 6
- "New since Software Distribution 3.6, Inventory 3.6" on page 7
- "New since Software Distribution 4.1, Inventory 4.0" on page 15


1.1 Business at faster pace while driving costs down The speeds at which businesses move and grow require them to be more competitive than ever before. They must get ahead of the others while striving to minimize costs. When these are put into perspective, it's not so simple. How can a company stay competitive while decreasing costs? What are the tasks or goals required to achieve this objective? From an IT perspective, applications to manage systems must be smarter, faster, scalable, efficient, and secure. While businesses are growing and using non-traditional systems, it becomes more complex to manage them especially when configuration management is involved. Deployment of critical applications becomes time-consuming for desktop technicians, asset management becomes inconsistent or information is lost, departmental systems must be consistent, handheld devices become the standard for all field employees, or security implementations become an inhibitor to configuration management. The following questions are addressed as challenges for the typical IT Manager:  How do I do more with less?  How do I efficiently deploy/manage IT assets in a heterogeneous e-business environment?  How do I manage a rapidly changing set of IT resources across a geographically dispersed workforce?  How do I address the rapid growth of unmanaged pervasive devices?  How do I ensure that systems are automatically configured? Using the IBM Tivoli Configuration Manager 4.2 solution, an organization can manage complexity in a dynamic e-business environment, maximize return on technology investments, and provide a secure, highly available business infrastructure. 
Although Software Distribution 3.6 and Inventory 3.6 managed a heterogeneous environment, synchronized deployments, scaled well, and included WAN-smart capabilities, IBM Tivoli Configuration Manager 4.2 offers significantly more to make organizations efficient, flexible, and able to keep up with evolving business requirements. Since Software Distribution 3.6 and Inventory 3.6, succeeding versions have introduced many new features and enhancements. IBM Tivoli Configuration Manager 4.2 rolls up the best features and enhancements of Versions 3.6, 4.0, and 4.1, and adds major features of its own, including:

 Integrated installation mechanism
 New Web GUI
 Resource manager
 Device management
 Integration with enterprise directories
 Native packaging support
 Multicast distribution

Distributed enterprises no longer use only traditional desktops or servers for end users. Enterprises are becoming more demanding and require faster and more diverse deployment, change, and asset management to achieve their goals. It is not uncommon for an end user to have a desktop, a laptop, a pervasive device, and a cellular phone all at once, while e-businesses run at an almost chaotic pace to support their business objectives. IBM Tivoli Configuration Manager 4.2 helps businesses keep up with these demanding deployment and asset issues. Sales information such as price lists can be sent to pervasive devices on demand, mobile users can scan their laptops for hardware and software information while downloading the Tivoli Management Agent (TMA), and the targets for a reference model can include an entire accounting department drawn from an enterprise directory. For speedier distributions, organizations can leverage multicast: a 20 MB distribution to 300 computers over a 500 KB/s link takes an average of about 40 seconds to reach all the endpoints with multicast, whereas unicast would take an average of 3.3 hours.
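The bandwidth arithmetic behind those figures can be checked with a quick calculation. The package size, endpoint count, and link speed below are taken from the text; the model is deliberately simplified and ignores protocol overhead and retransmissions.

```python
# Back-of-the-envelope comparison of unicast vs. multicast distribution time.

def transfer_seconds(size_kb, bandwidth_kb_per_s):
    """Time to push one copy of the package over the link."""
    return size_kb / bandwidth_kb_per_s

PACKAGE_KB = 20_000   # 20 MB package
ENDPOINTS = 300
BANDWIDTH = 500       # KB/s

one_copy = transfer_seconds(PACKAGE_KB, BANDWIDTH)   # 40 seconds

# Multicast: the source sends a single stream that all 300 endpoints
# receive simultaneously, so the total time is roughly one transfer.
multicast_total = one_copy

# Unicast: the source repeats the transfer once per endpoint.
unicast_total = one_copy * ENDPOINTS                 # 12,000 s, about 3.3 hours

print(f"one copy:  {one_copy:.0f} s")
print(f"multicast: {multicast_total:.0f} s")
print(f"unicast:   {unicast_total / 3600:.1f} h")
```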

1.2 Factors driving success in business environments

Here are the factors that are driving success in today’s business environments.

Competitive advantage demands higher employee productivity

Adding new employees to the company payroll is not going to give the business a competitive advantage. It is better to use existing staff resources more effectively to obtain higher value. When workloads increase while staffing numbers remain the same, it is advantageous to implement enabling technologies that streamline operational tasks. Here is a scenario that we’ll call Business Need A.

Business Need A

A major credit card company offers Visa and MasterCard cards and, with multiple offices across the U.S., runs several different types of operations in different locations. The Chicago, Illinois office is the headquarters, where the Accounting, Business Operations, IT, and Finance departments are located. The Miami, Florida office houses all Customer Support, and the New York office
contains all Marketing and Sales employees. There are smaller scattered offices around the United States as well as remote employees. All offices except the Miami office are in service from 8:00 AM to 6:00 PM in their time zones; the Miami office, being a customer support center, is open 24 hours a day, 7 days a week. Their requirements for deployment and asset management are to:

 Run a hardware and software inventory scan once a week
 Distribute updated company software at least once per week, preferably during off-peak hours
 Make efficient use of network bandwidth
 Leverage the company LDAP server
 Install a set of software based on each user’s department

How IBM Tivoli Configuration Manager 4.2 can help

The IBM Tivoli Configuration Manager 4.2 integrated installation mechanism cuts down the time and complexity of implementing Tivoli. With the flexibility and scanning performance of Inventory, collection of system hardware and software information can be tuned to optimize the network and the amount of data that is sent to the RIM database. For hardware scanning, you can select what information should be returned from a scan. For software scans, software signature matching can be performed at the endpoint instead of at the configuration repository.

Distribution of software packages can be done using the Activity Planner, and targets can be chosen from an LDAP server. For instance, if the LDAP server categorizes users by department, then endpoints can be chosen by department: an LDAP directory query can be created and run when choosing targets from the Activity Planner. Distributions can then be scheduled on a per-department basis. For example, the Chicago headquarters would receive distributions during its off-peak hours (approximately 3:00 PM), while the Miami customer support center would receive distributions during its off-peak hours (approximately 1:00 AM). To conserve network bandwidth, byte-level differencing will be used for updates, and multicast will be used for all distributions to decrease distribution time and network traffic.

Customers continue to raise the standard on service delivery

Whether data refers to dynamic information or applications, customers need access to data whenever possible and must be able to retrieve it quickly and securely. Here is a scenario that we’ll call Business Need B.


Business Need B

Consider a pharmaceutical company that requires each salesperson to be able to download sales presentations while on business travel. It is mandatory that the presentations be available only to salespeople and to no other employees in the company. The presentations can be pushed from the central console to the mobile endpoint, where the salesperson can accept or reject the download. If the salesperson rejects the download, he or she can still download it securely from a Web browser at a later time. Another requirement is for price lists to be pushed to pervasive devices such as Windows CE handhelds.

How IBM Tivoli Configuration Manager 4.2 can help

Using IBM Tivoli Configuration Manager 4.2 and the Web UI, it is possible to accomplish all these objectives quickly and securely while granting access only to the sales group. The endpoint/gateway proxy support allows Software Distribution and Inventory to manage devices that sit on the other side of a firewall from the Framework-based gateway. Endpoints in the DMZ can also pull packages from a secure Web browser, while IBM Tivoli Configuration Manager 4.2 leverages IBM Tivoli Access Manager (formerly Policy Director), IBM WebSphere, and the endpoint/gateway proxy support.

In Software Distribution 4.1, only mobile endpoints could see pop-up distribution notifications, which allowed them to accept or reject distributions. In IBM Tivoli Configuration Manager 4.2, all Windows users can receive notifications with information about a distribution and can make decisions about it. Distributions and inventory scans can also be run against pervasive devices, and a pervasive device can be part of a reference model in Change Manager. It is possible to have multiple reference models, such as a WinCE configuration for accounting employees, a WinCE configuration for top-level executives, and so on. Because IBM Tivoli Configuration Manager 4.2 utilizes the Tivoli Enterprise Data Warehouse, an example report would show operation results by subscriber.

Companies manage costs and maximize return on assets

Companies must effectively manage costs and maximize the return on existing assets. This is seen in Scenario C.

Scenario C

A state university has a total of 60,000 students, 2,000 faculty members, and 5,000 non-faculty employees, and cannot afford additional expenses because of funding limits and cost controls. The university would like a complete solution for deployment, change, and asset management, but does not want to purchase more systems to manage the large number of targets on campus.


There is a heterogeneous mix of systems, including Windows NT, Windows 2000, Solaris, Linux, and AIX machines (servers and workstations), used in the labs, in the IT department, and by faculty and staff. The university would like to distribute software efficiently and quickly using native packaging, and plans to run hardware and software inventory scans once every two weeks.

How IBM Tivoli Configuration Manager 4.2 can help

IBM Tivoli Configuration Manager 4.2 includes native packaging support for MSI (Microsoft Windows), installp (AIX), pkgadd and patchadd (Solaris), and RPM (Linux). The university can use multicast for distributing packages.

There is an option to map a package to one or more Inventory signatures, or to specify new signatures to represent the software package content. Such a signature can represent the software distributed by the software package. Within a software package definition (SPD) file, you can define which files are to be considered signatures. When the package is imported into the Tivoli environment and built, Software Distribution creates the new signature file entries in the configuration repository. This capability is especially useful when distributing in-house software for which Inventory has no signatures. Multiple signatures can also be logically bundled into a signature package, which helps to identify a single software application or a collection of applications. For example, the university can view a list of lab machines that should have the latest versions of a suite of applications such as a word processor, spreadsheet, database, Web browser, statistics package, drawing tool, and compiler.

The university can use the minimum hardware that fulfills the Tivoli software, hardware, and business requirements for a complete IBM Tivoli Configuration Manager 4.2 solution. There is no need to buy other technologies, since IBM Tivoli Configuration Manager 4.2 includes everything needed to get the Tivoli environment up and running and to start working on deployment, change, and asset management.

1.3 The ITCM 4.2 solutions in business environment

IBM Tivoli’s emphasis on change and configuration management has yielded improved efficiencies for thousands of enterprise customers. IBM Tivoli strives to evolve IBM Tivoli Configuration Manager as businesses change, while keeping it a best-of-breed solution in change and configuration management. The following sections trace the Software Distribution and Inventory evolution from Version 3.6 to the present.


1.4 New since Software Distribution 3.6, Inventory 3.6

This section summarizes new features available since Software Distribution 3.6 and Inventory 3.6.

1.4.1 Software Distribution 4.0

Here are the features in Software Distribution 4.0.

Software packaging format and Software Package Editor

The software package is the new universal packaging format for Tivoli Software Distribution 4.0. A software package contains a collection of actions to be performed on a workstation or server. Also included with this version is a Java-based software packaging facility called the Software Package Editor, which provides several ways to create and customize software packages, including AutoPack. It does not require the Tivoli Framework, so the people building software packages do not have to be Tivoli administrators. Before any change management operations can be performed on it, a software package has to be imported into the Tivoli environment.

The Software Package Editor window displays a graphical tree view of the software package and its contents. The left pane displays the software package icon, and any actions added to the software package are nested directly below the package icon in hierarchical form. The actions to be executed on a target machine can be divided into the following categories:

 Add object and remove object actions, which drive the engine to add the specified object to the system or remove it from the system
 System actions, such as checking disk space and restarting the target machine
 Program actions, such as executing a user-defined program; a configuration, installation, and distribution (CID) program; an InstallShield program; or a Microsoft Setup program

Variables

Variables can be used to express any attribute value of type string contained in the software package, making a software package more generic for use on different target systems. For example, it is not necessary to create several software packages for different platforms. You can substitute the platform-specific information with variables and use the same software package for distribution to multi-platform networks.


Conditions

You can set conditions on the actions contained in a software package or on the entire software package. Using conditions, you define the circumstances under which an action is executed. For example, you can specify which actions are to be executed on Windows NT targets with Service Pack 5 and which are to be executed only on Windows NT targets with Service Pack 6.

Third-party and native installation support

Common third-party packaging formats are supported. Existing content prepared in these native formats, and the associated installation utilities and response files, can be packaged within the software package. Alternatively, they can be referred to as external files if they are already accessible by the target systems. The supported formats include the following:

 Microsoft Setup
 InstallShield
 Configuration, installation, and distribution (CID) programs
 Microsoft Software Installer (MSI)

Automatic software package generation

AutoPack automatically generates a software package by employing scanning and differencing technology, comparing two successive “snapshots” of a preparation machine. AutoPack was already available in Software Distribution 3.6; in Software Distribution 4.0 it is integrated into the Software Package Editor. Before building the software package, check the AutoPack result in the Software Package Editor and remove any unwanted objects.

Import application definition files of a different format

The PDF importer tool leads you through the process of importing a Microsoft package definition file (PDF) and converting it into a software package. Tools for creating InstallShield and Microsoft Setup program objects that can be added to the software package also became available with this version.

Software package distributions and operations

Using the Tivoli Desktop, you can perform change management operations to fully automate the distribution and installation of software. New support features include the following:

 Automatic undo: You can choose to perform an undo operation on the software package so that, in the event of a fatal error during the installation process, the undo operation is triggered automatically and returns the system to its state prior to the execution of the last installation or removal operation.


 Verification of installed software packages: You can perform a verify operation on a software package to check that the target objects have been successfully installed on the target system.

 Locked file support: During a distribution, a locked file normally cannot be updated. Distributing the software package in transactional mode enables you to take advantage of the locked file support feature, which permits the updating of locked files by specifying that the target system is to be restarted at the time of installation or at the next regular system start up.

 Shared object support: Because many applications use the same resources, shared files and directories must be identified in the software package. Using a shared attribute on files and directories, you can share files and directories among multiple applications.

 File version support: You can set different properties on objects in the software package to manage their behavior. For example, if an object contained in a software package already exists on the target system, you can configure the object properties so that Tivoli Software Distribution replaces the existing copy only if the creation date of the copy is earlier than that of the file or directory contained in the software package.

 Source repair functionality: The repair functionality reduces network congestion by identifying which source and target objects have been modified or corrupted since the last successful installation of a software package, and rebuilding the software package with only these objects rather than installing the entire package again.
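The file-version behavior described above (replace an existing copy only when it is older than the packaged one) amounts to a small decision rule. The sketch below illustrates that rule; the policy names and date parameters are invented for illustration and are not Tivoli's own API.

```python
from datetime import date

def should_replace(existing_created, packaged_created, policy="replace_if_older"):
    # Mirrors the file-version policy described above: the object in the
    # package replaces the copy on the target only when the target's copy
    # was created earlier than the packaged one.
    if policy == "replace_always":
        return True
    if policy == "replace_if_older":
        return existing_created < packaged_created
    return False

print(should_replace(date(2001, 5, 1), date(2002, 3, 15)))  # True: target copy is older
print(should_replace(date(2002, 6, 1), date(2002, 3, 15)))  # False: target copy is newer
```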

Leveraging a more complete distribution engine (MDist 2)

Tivoli Software Distribution performs distributions of large amounts of data to multiple target systems. It relies on the Framework multiplexed distribution (MDist 2) service, which it uses to simultaneously distribute large amounts of data to targets in an enterprise. The benefits include the following:

 Asynchronous delivery: The MDist 2 service submits distributions and immediately communicates the status of each distribution on each target system without waiting until the distribution has completed for all targets.

 Assured delivery: MDist 2 distributions are cached in local files on each repeater. If a connection to a receiver cannot be established or is broken, the distribution is maintained in the cache until the connection is re-established and the distribution can be delivered. This feature includes the ability to resume distributions after network errors caused by machine restarts or power failures.

 Checkpoint restart: When a distribution is interrupted due to a network failure, machine restart, or power failure, the distribution is automatically
resumed from the point where the interruption occurred rather than resending the entire distribution from the source host. However, a distribution terminated by the administrator restarts from the beginning. Paused distributions also exploit the checkpoint restart feature.

 Distribution control: System administrators can use MDist 2 to access and control data distributions initiated by different applications. The graphical user interface (GUI) displays a list of all completed and pending distributions and their status on each endpoint (waiting, successful, failed, in progress, paused, interrupted, unavailable, cancelled, or expired). Administrators can pause, resume, and cancel distributions for some or all of the distribution destinations.

 Pending distribution queues: MDist 2 distributions can have three priority levels: low, medium, or high.

 Depoting: A depot is a directory on the repeater site that enables you to temporarily or permanently store data segments associated with distributions. This enables software distributions to be stored on nodes closer to the targets. The administrator can push software packages from the depot to target systems associated with that depot.

Operating system upgrades

Tivoli Software Distribution 4.0 supports operating system upgrades for the Windows operating system family.

Application configuration

By installing software packages, you can perform actions such as changing the registry, adding desktop icons, adding statements to system files, and creating folders and shortcuts. Tivoli Software Distribution Version 4.0 eliminates the need to create scripts and programs to configure an application: you can package the required actions in the software package.

Web Interface

The Web interface enables Web users on UNIX, Windows 95, and Windows NT endpoints to retrieve Tivoli Software Distribution profiles. This interface enables endpoints to do the following:

 Selectively download new software applications and data, or update existing software
 Download software packages that were not received at a prior distribution time


1.4.2 Inventory 4.0

Here are the features in Inventory 4.0.

The new Tivoli Inventory graphical user interface (GUI)

The new GUI provides the following features:

 A DMI scan configuration dialog box that enables you to configure DMI scans for each type of DMI layer in your enterprise. Tivoli Inventory also automatically generates an SQL script that you can use to create the appropriate tables in the configuration repository for DMI scan data.
 A window that enables you to add, edit, or delete software signatures.
 A window that enables you to create a custom filter for software scans.
 Dialog boxes that enable you to select which hardware components to scan for. Separate controls are provided for Windows and UNIX scans.
 Context-sensitive online help.

Improved hardware scanning

Here are the features for improved hardware scanning:

 A new UNIX hardware-scanning engine enables you to collect the following information from a UNIX scan: processor, operating system, IP address, partition, keyboard, memory, storage, network adapter, pointing device, and UNIX system parameters.

 For both PC and UNIX hardware scans, you can select the type of information to be returned from a hardware scan and reduce the amount of data generated by the scan. This can improve the speed with which the scan runs, reduce the network bandwidth required to transport the scan data, and reduce the amount of time needed to write the data into the Tivoli Inventory database.

 Tivoli Inventory can now collect data from endpoints using Windows Management Instrumentation (WMI). This feature is used automatically when it is available.

Improved software scanning

You can now perform software signature matching at the endpoint rather than in the configuration repository. Using this method, software signature scans return data only for files that match a software signature. Because only data for matching files is sent to the configuration repository, software signature scans now return less data through the collector hierarchy. Also, because each endpoint performs the processing necessary to compare the software signatures to the scanned files, this scan is more efficient than performing signature matching at the configuration repository.
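A minimal sketch of the endpoint-side matching idea: compare the files found by a local scan against a signature list and report only the matches. Signatures here are simplified to (file name, size) pairs, and the catalog entries are invented for illustration; Inventory's real signature format and catalog are richer.

```python
# Hypothetical signature catalog: (lowercased file name, size) -> product.
SIGNATURES = {
    ("winword.exe", 10_483_712): "Word processor (example entry)",
    ("netscape.exe", 2_883_584): "Web browser (example entry)",
}

def match_signatures(scanned_files, signatures=SIGNATURES):
    """scanned_files: iterable of (name, size) pairs from the local scan.
    Only matches travel up the collector hierarchy; everything else is
    discarded on the endpoint."""
    return sorted(
        signatures[(name.lower(), size)]
        for name, size in scanned_files
        if (name.lower(), size) in signatures
    )

scan = [("WINWORD.EXE", 10_483_712), ("readme.txt", 1_024)]
print(match_signatures(scan))  # only the matching product is reported
```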


Header scan

You can scan the headers of files on supported Windows systems. The header information usually includes the company name, product name, and product version.

Scanning Windows registry

On Windows systems, you can scan the Windows registry for information about installed products. On UNIX systems, you can scan the operating system for information about installed products and patches.

More UNIX software signatures

The number of UNIX software signatures has increased significantly in Tivoli Inventory Version 4.0.

Scanning checksum values

You can now generate checksum values for scanned files. The following checksum options are provided:

 The Quick CRC option uses the CRC-32 algorithm to produce a 32-bit value based on the first 1 KB of each file.
 The Full CRC option uses the CRC-32 algorithm to produce a 32-bit value based on the full contents of each file.
 The MD5 option produces a 128-bit value based on the full contents of each file.
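The three options map naturally onto standard library routines. The sketch below assumes whole files are read into memory, which is a simplification of how a scanner would stream large files.

```python
import hashlib
import zlib

def quick_crc(data):
    # Quick CRC: CRC-32 over the first 1 KB of the file only.
    return zlib.crc32(data[:1024]) & 0xFFFFFFFF

def full_crc(data):
    # Full CRC: CRC-32 over the entire file contents.
    return zlib.crc32(data) & 0xFFFFFFFF

def md5_checksum(data):
    # MD5: 128-bit digest of the entire file contents (32 hex characters).
    return hashlib.md5(data).hexdigest()

data = b"A" * 2048                 # a file larger than 1 KB
print(hex(quick_crc(data)))        # depends only on the first 1 KB
print(hex(full_crc(data)))         # differs, since the tail is included
print(md5_checksum(data))
```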

Desktop Management Interface capabilities on OS/2

The same capabilities that previous versions of Tivoli Inventory had for retrieval of Desktop Management Interface (DMI) data on Windows platforms are extended to supported OS/2 platforms.

Redesigned schema

The schema has been simplified and redesigned to more closely resemble the CIM standard for inventory data. The DATE data type is now supported.

History tracking

History tracking is an optional feature of Tivoli Inventory that enables you to track changes made to configuration repository tables. When enabled, this feature records every change or addition to inventory scan data. Previous versions of Tivoli Inventory stored only the last change.


Wake-on-LAN technology

Tivoli Inventory can use MDist 2 functionality to automatically power up, or “wake up,” machines that have a wake-on-LAN-enabled network card that uses the Intel Wired for Management specification.
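The wake-on-LAN "magic packet" has a well-known format: six 0xFF bytes followed by sixteen copies of the target's 6-byte MAC address, usually sent as a UDP broadcast. The sketch below illustrates that format and is independent of Tivoli's own implementation; the choice of port 9 is a common convention, not mandated.

```python
import socket

def magic_packet(mac):
    """Build a wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    sixteen repetitions of the target's 6-byte MAC address (102 bytes)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    # Send the packet as a UDP broadcast; port 7 is also widely used.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:A0:C9:12:34:56")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```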

Mobile endpoint support

You can create a profile for a customized inventory scan that can be pushed to a mobile or remote endpoint and scheduled to run while the endpoint is disconnected; the data is sent the next time the endpoint is reconnected to the network.

Endpoint-initiated scans

Tivoli Inventory enables you to initiate a scan from an endpoint instead of distributing the scan from an inventory profile. This feature has many uses. For example, you can use endpoint-initiated scans to perform the following actions:

 Scan a machine each time it is rebooted
 Scan a laptop system each time a mobile user logs in to the network
 Scan a machine that is disconnected from the network

1.4.3 Software Distribution 4.1

Here are new features and enhancements in Software Distribution 4.1.

Version checks

You can define a software package as versionable and specify whether it is a refresh package or a patch. Refreshes are not installed if a later version of the package is already installed. Patches are not installed unless the version to which the patch applies is already installed.
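The two rules can be expressed as small predicates. Version strings below are assumed to be dot-separated integers, which is a simplification of real version schemes; the function names are illustrative, not Tivoli's API.

```python
def parse(version):
    """'2.10' -> (2, 10), so comparisons are numeric per component."""
    return tuple(int(part) for part in version.split("."))

def refresh_allowed(refresh_version, installed_version):
    # A refresh is not installed if a later version is already present.
    if installed_version is None:
        return True
    return parse(installed_version) <= parse(refresh_version)

def patch_allowed(applies_to, installed_version):
    # A patch is not installed unless the version it applies to is present.
    return installed_version == applies_to

print(refresh_allowed("2.0", "1.5"))   # True: upgrades 1.5
print(refresh_allowed("2.0", "2.1"))   # False: 2.1 is newer
print(patch_allowed("1.5", "1.5"))     # True: base version present
print(patch_allowed("1.5", "1.4"))     # False: wrong base version
```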

Dependency

You can define an expression that makes the installation or removal of a software package dependent on meeting hardware and software prerequisites.

Byte-level differencing

With byte-level differencing, you create and distribute delta packages for installation, thereby significantly reducing network traffic. A delta package is created from the differences this technology detects between the software package to be installed (the version package) and the base package already installed on the target system; only the delta package is distributed.
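A toy illustration of the delta idea: record the changed byte regions between the base and version packages, ship only those regions, and reconstruct the new version on the target. Real delta formats, including Tivoli's, are far more sophisticated (they handle insertions and moved data, not just in-place changes).

```python
def make_delta(base, version):
    """Record (offset, replacement bytes) for every region that changed,
    plus the final length of the new version."""
    delta, i = [], 0
    while i < len(version):
        if i >= len(base) or base[i] != version[i]:
            j = i
            while j < len(version) and (j >= len(base) or base[j] != version[j]):
                j += 1
            delta.append((i, version[i:j]))
            i = j
        else:
            i += 1
    return {"length": len(version), "changes": delta}

def apply_delta(base, delta):
    out = bytearray(base[:delta["length"]].ljust(delta["length"], b"\x00"))
    for offset, chunk in delta["changes"]:
        out[offset:offset + len(chunk)] = chunk
    return bytes(out)

base = b"config=old value; feature=off; build=1000"
new = b"config=new value; feature=off; build=1001"
delta = make_delta(base, new)
assert apply_delta(base, delta) == new

# Only the changed bytes travel over the network:
shipped = sum(len(chunk) for _, chunk in delta["changes"])
print(shipped, "of", len(new), "bytes shipped")
```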


Data Moving

Data Moving provides commands to distribute, retrieve, and delete files from machines in a Tivoli Management environment. It supports ASCII/EBCDIC conversion and code page translation for text files.

Pristine Operating System install

This tool provides the capability to perform an unattended installation of an operating system (Windows 98 Second Edition, Windows NT 4.0 Workstation, and Windows 2000 Professional; OS/2 4.0 and OS/2 4.5) on a system with no operating system installed (even with unpartitioned hard disks) or on a system that is to be completely reformatted. The installation process includes assigning the pristine system to a Tivoli gateway and, optionally, initiating a Change Configuration Management process.

Windows 2000 Microsoft Installer support

This provides the capability to use Tivoli Software Distribution to create a software package that embeds a Microsoft Software Installer (MSI) package. Both Microsoft Installer products and patches are supported.

Mobile support

This feature enhances the capability to deliver software to mobile users. With mobile support, end users can decide which distributions to download or reject. While downloads are in progress, users can also pause and resume them.

Activity Planner

Activity Planner enables you to schedule the execution of a group of activities, submit them to be executed, and monitor and control them. Activities are single operations that are performed on a set of targets at specified times. Operations can include Tivoli Software Distribution Version 4.1 operations or Tivoli Management Framework tasks.

Change Management (CM)

CM provides the capability of managing an environment by defining a reference model in which you specify the combination of software packages that satisfies the business needs for a set of workstation users within an enterprise. It helps you bring the configuration of these machines from their current state to the specified target state. It also includes support for package versioning; software dependencies (prerequisites, co-requisites, and ex-requisites); hardware dependencies; and endpoint logins.
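At its core, the reference-model operation compares an endpoint's current software against the model and derives the operations needed to synchronize the two. That comparison can be sketched as a set difference; the package names are invented, and the real product additionally resolves versions and dependencies.

```python
def synchronize(reference_model, installed):
    """Derive the install and remove operations that bring a machine's
    current state to the desired state defined by the reference model."""
    to_install = sorted(set(reference_model) - set(installed))
    to_remove = sorted(set(installed) - set(reference_model))
    return to_install, to_remove

# Hypothetical model for one department and one endpoint's current state.
accounting_model = {"office-suite", "ledger-app", "antivirus"}
endpoint_state = {"antivirus", "old-game", "office-suite"}

installs, removals = synchronize(accounting_model, endpoint_state)
print("install:", installs)  # ['ledger-app']
print("remove: ", removals)  # ['old-game']
```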


Enhanced Tivoli Software Distribution Web Interface

This interface enables Web users on UNIX and Windows endpoints to download, install, subscribe to, and verify software packages; access and subscribe to reference models for endpoint synchronization; and manage locally installed software packages and the local software package block repository.

OS/400 endpoint support

The OS/400 endpoint support extends the capabilities of Tivoli Software Distribution to managing software packages for OS/400 target systems. Tivoli Software Distribution includes an OS/400 Software Package Editor (SPE). You use the SPE on a Windows NT system that has a TCP/IP connection to an OS/400 system, which is used as a software package preparation site. You can build software packages that include OS/400 objects, such as libraries, objects, and licensed programs, as well as changes to OS/400 system values.

Install from CD and file server

This feature enables you to specify that the software package to be installed is to be retrieved from a file server or a CD-ROM rather than from the source host.

1.5 New since Software Distribution 4.1, Inventory 4.0

New features have been added to IBM Tivoli Configuration Manager 4.2 to leverage the diverse distributed environment while retaining the robust features of Software Distribution, Inventory, and Framework. All Software Distribution and Inventory tools that were previously used are packaged into the IBM Tivoli Configuration Manager 4.2 product, with additional features for tighter integration and extensive options. Now, deployment, change, and asset management can span beyond traditional desktops and servers, while the functions of the products work together seamlessly for added value to the enterprise. IBM Tivoli Configuration Manager 4.2 should be considered for these additional needs:

 A simple and integrated installation of all components, which enables faster deployment
 WebSphere technology to provide "pull" software distribution capability to end users, as well as Web UI operations for non-TMA endpoints
 Management of servers, desktops, laptops, PocketPC, PalmOS, and Nokia Communicator devices with a single solution
 Leveraging user information from an enterprise directory to determine targets for software distribution or inventory operations


 Native packaging support for AIX, Solaris, and Linux
 Multicasting to ensure efficient bandwidth utilization

In addition to the new features listed above, enhancements have been made to the existing features of Software Distribution and Inventory. Most importantly, Software Distribution and Inventory are more tightly integrated in IBM Tivoli Configuration Manager 4.2: Change Manager and Activity Planner are now able to leverage Inventory. This support enables administrators to manage their Software Distribution and Inventory deployments through the same console and operational paradigm.

Note: There are too many enhancements to name them all in this chapter. We recommend that you read Chapter 1 of the redbook All About IBM Tivoli Configuration Manager 4.2, SG24-6022, for further details about both the new features and the enhancements.

1.5.1 Major Features of IBM Tivoli Configuration Manager 4.2

Here are the major features of IBM Tivoli Configuration Manager 4.2.

Integrated Installation mechanism

An interactive setup GUI is included to automate and speed the deployment of IBM Tivoli Configuration Manager 4.2. InstallShield MultiPlatform (ISMP) provides a graphical user interface (GUI) to collect user input for installation on multiple platforms. This allows the prerequisites to be installed before the selected components, and allows the user to enter pertinent information at the screen prompts. This simplified installation is advantageous to the end user, because not only are the infrastructure components hidden, but the extra installation steps, such as configuring files, databases, and so on, are also automated.

New Web User Interface

Whether configuration management operations need to be performed via the Web, or security limitations such as firewalls apply (with your Web server in the Demilitarized Zone, or DMZ), the Configuration Manager Web Interface allows you to manage a number of Web objects using a single URL. This support provides the ability to install software on a device that does not have the Tivoli agent (TMA) installed. This is achieved by downloading the appropriate software installation support to the device via a signed applet. The new Web User Interface (Web UI, or Web Interface) allows Software Distribution and Inventory to manage endpoints in the DMZ, via the proxy support, or to publish packages and profiles to the DMZ for endpoints in the Internet to access via a browser. This is important as network

16

Migration to IBM Tivoli Configuration Manager Version 4.2

environments are becoming more secure. The endpoints are still able to be managed while in the DMZ. The Software Distribution component, Inventory component, and Change Manager deployment service each have a Java plug-in that is registered with the Web Interface. These enable IBM Tivoli Configuration Manager operations to be performed using the Web Interface. The prerequisite for the Web Interface is the Web Gateway components with WebSEAL security turned on. The IBM Tivoli Access Manager WebSEAL application is used to control the access to the Web Interface (Figure 1-1).

Figure 1-1 Web Interface access control with WebSEAL

Software packages and Inventory scan profiles are published to the Web Gateway components as Web objects and are stored there until they are pulled by the Web User Interface user. You can control access to the published Web objects by using the wweb command when publishing them. Using the Web Interface (Figure 1-2), Web UI users can perform the following:

- Install and verify software packages
- Perform inventory scans
- View and synchronize reference models


Figure 1-2 Web Interface

Tivoli Resource Manager

Tivoli Resource Manager (also known as Resource Manager or TRM for short) is a service that runs on the Tivoli server and provides the functionality to manage resources, namely users and pervasive devices such as Palm devices and Nokia 9200 Communicator series devices. TRM and the Web Gateway components enable you to perform basic operations on pervasive devices, including distributing software and scanning and customizing devices. The resource gateway is installed at the endpoint level, and you can have multiple independent resource gateways connecting to a centrally installed TRM. This configuration adds a fourth tier of resources to the three-tier Tivoli architecture of Tivoli server, gateway, and endpoint. The one-to-one mapping of users to endpoints is stored in an enterprise directory server, such as a Lightweight Directory Access Protocol (LDAP) server. You can use Resource Manager and the Enterprise Directory Query Facility to view the associations, and to perform software distributions and inventory scans on the endpoints associated with the users.


To work with resources, you must create resource groups and make the resources members of them. You can define either dynamic or static resource groups. Profiles are distributed to resource groups. The Resource Manager uses a database created in the RDBMS, called the configuration repository, to keep track of the devices and endpoints that it manages. The same configuration repository is used by the Inventory and Software Distribution components. Figure 1-3 shows the Resource Type Table panel of the Tivoli Desktop when using the Resource Manager.

Figure 1-3 Resource Manager

Web Gateway

The Web Gateway consists of several components that work together to provide the functionality to manage resources, such as users and pervasive devices, and a Web-based tool that enhances the management capabilities of IBM Tivoli Configuration Manager. Software distributions and inventory scans can now be performed on pervasive devices and on endpoints associated with users. The Web Gateway components provide the means for Web users to initiate inventory scans and software distributions via the Web Interface.


The TRM and Web Gateway components work together to provide the functionality to manage resources, while the Web infrastructure combined with the Web Gateway components provides a browser-based deployment tool called the Web Interface. It is possible to have one machine perform both resource management and Web Interface functions, but we recommend that a separate machine is used for each function. The Web Gateway components consist mainly of the Web Gateway database, the Web Gateway server, and the Web infrastructure. It requires several applications to work efficiently. These applications include IBM DB2, IBM WebSphere Application Server with IBM HTTP Web Server, IBM Tivoli Access Manager, and the IBM Tivoli Access Manager WebSEAL server. The Web Gateway has its own resource database and uses standard SQL commands to query it. With regard to resource management, the Web Gateway database is subordinate to the master database, which is under the control of the Resource Manager. The Resource Manager accesses the master database via a RIM host. Resource Manager notifies the Web Gateway databases of any changes to the master database and vice versa. For example, when you update the Resource Manager database, Resource Manager notifies the Web Gateway to update its database. When a new device connects, it is automatically enrolled and the Web Gateway notifies Resource Manager to update its database.

Integration with Enterprise Directories

IBM Tivoli Configuration Manager 4.2 is now able to leverage enterprise directory information to address user-based operations inside the Tivoli management environment. LDAP is the protocol used to access the enterprise directory. Using the Enterprise Directory Query Facility and Resource Manager, IBM Tivoli Configuration Manager can:

- Add, delete, and modify associations between users and endpoints
- Group different users in object containers
- Query users using search filtering

The Enterprise Directory Query Facility is designed to allow a Configuration Change Manager (Change Manager) administrator to use information stored in enterprise directories to select a list of users as subscribers to a reference model, or as targets for software distribution or inventory scan operations. The Enterprise Directory Query Facility consists of Directory Query Libraries and Directory Queries. Directory Query Libraries reside in policy regions and are created to contain Directory Queries. Directory Queries enable you to find information about the users or the workstations defined in the enterprise directory server.


Figure 1-4 shows the Directory Query Library and Directory Queries.

Figure 1-4 Directory query library and Directory queries

Native packaging support

In addition to Microsoft Setup, MSI, InstallShield, and OS/2 CID packaging, IBM Tivoli Configuration Manager 4.2 supports three new native package formats for the most common UNIX flavors:

- Installp (for AIX)
- Pkgadd and Patchadd (for Solaris)
- RPM (for Linux)

This allows IBM Tivoli Configuration Manager 4.2 to:

- Provide built-in actions
- Use redirect and bundled installation
- Provide a wizard to wrap UNIX packages in a software package (SP)
- Make native packages manageable at an enterprise scale by leveraging Tivoli Software Distribution functionality
- Support all native actions where available
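As a rough illustration (not ITCM code), the platform-to-native-package mapping listed above can be expressed as a simple lookup keyed on the operating system name; the function name is our own invention for this sketch.

```shell
# Sketch: map an operating system name (as reported by `uname -s`)
# to the native packaging technology that ITCM 4.2 supports on it.
# native_package_tool is a hypothetical helper, not an ITCM command.

native_package_tool() {
  case $1 in
    AIX)   echo "installp" ;;
    SunOS) echo "pkgadd/patchadd" ;;
    Linux) echo "rpm" ;;
    *)     echo "unsupported" ;;
  esac
}

# Report the native package format for the current machine.
native_package_tool "$(uname -s)"
```

In practice the mapping is made for you by the native packaging wizard; the sketch only restates the table above in executable form.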


Multicast distribution

Multicast is a new feature of Tivoli Management Framework 4.1. It is an add-on to MDist2. Traditionally, MDist2 has a one-to-one TCP connection with each target. Therefore, if there are 50 targets, MDist2 must send the distribution data 50 times. Using multicast, the distribution data is sent only once, regardless of the number of receivers, because multicast uses UDP multicast packets and all the targets read from the same data stream. The benefits of using multicast are decreased distribution time and network traffic. This is very useful when sending data to multiple targets over satellite or slow network links.
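The bandwidth saving can be illustrated with a quick calculation; the package size and target count below are hypothetical figures, not measurements.

```shell
# Illustrative comparison of unicast (one TCP stream per target)
# versus multicast delivery, as described above. Figures are made up.

size_mb=100      # hypothetical size of the distribution
targets=50       # hypothetical number of endpoints

unicast_mb=$((size_mb * targets))   # data leaves the repeater once per target
multicast_mb=$size_mb               # data leaves the repeater once in total

echo "unicast:   ${unicast_mb} MB leave the repeater"
echo "multicast: ${multicast_mb} MB leave the repeater"
```

With 50 targets the source link carries 50 times less distribution data under multicast, which is why the feature pays off most over satellite and slow links.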

1.6 Variances of ITCM and Software Distribution 3.6.x

Apart from all these enhancements, here is a summary of the major architectural differences between IBM Tivoli Configuration Manager 4.2 and Software Distribution 3.6.x:

1.6.1 General

One of the major changes that users of Software Distribution 3.6.x will notice is the change to using software packages instead of file packages for software distribution. In IBM Tivoli Configuration Manager, there is now a tighter integration of the Inventory component with Software Distribution. IBM Tivoli Configuration Manager now makes use of MDist2 (instead of MDist1 in Software Distribution 3.6) for distribution, which provides a more efficient transfer of data to multiple targets. MDist1 and MDist2 differences are discussed in great detail in Chapter 2, “All about MDist2 and a comparison with MDist1” on page 25. The use of Java is more prevalent in IBM Tivoli Configuration Manager, and you need to be at the supported Java level before upgrading or installing IBM Tivoli Configuration Manager. There are several new icons introduced in the Tivoli Desktop which you need to be aware of. Some of these components are Java based, and you need to install the Java component on the remote workstation when using a remote Tivoli Desktop. Figure 1-5 of the Tivoli Desktop shows some of the new icons.


Figure 1-5 Tivoli Desktop showing some of the new features of ITCM

1.6.2 Software packages

IBM Tivoli Configuration Manager 4.2 uses software packages for software distribution and cannot use file packages created by Software Distribution 3.6.x. However, a migration command called wfptosp has been provided to convert Software Distribution Version 3.6 (or later) file packages to IBM Tivoli Configuration Manager Version 4.2 software packages. Refer to the Reference Manual for Software Distribution, SC23-4712, for more information about the wfptosp command and migrating file packages to software packages. You can create software packages with the Java-based Software Package Editor tool.

1.6.3 Source host component

The source host is the system that contains the files that are used to build the software package block, and where the built software package blocks are stored. The source host also contains the files referenced in the software package and software package definition file. The source host must be a gateway or standalone repeater, which can be any UNIX, OS/2, Windows NT, or Windows


2000 system with the Tivoli Software Distribution server or source host component installed. With Software Distribution 3.6.x, any machine in the Tivoli region could be used as the source for the files in the file package block, while in IBM Tivoli Configuration Manager 4.2 you need to install the Tivoli Software Distribution source host component on specific machines and then set them as source hosts. Only these source hosts can then be used as sources for files to be put into a software package block.

1.6.4 Software distribution targets

The software distribution targets for Software Distribution 3.6.x can be Tivoli managed nodes, PC managed nodes, or endpoints, but in IBM Tivoli Configuration Manager 4.2 (as in Software Distribution 4.0 and 4.1), only endpoints can be targets of distributions. For those managed nodes that were targets in 3.6.x, you must install the Tivoli endpoint (the TMA, or lcf) on them to enable them to be targets for IBM Tivoli Configuration Manager 4.2 distributions.

1.7 Summary

With the factors driving success in today’s business economy, IBM Tivoli Configuration Manager 4.2 will help the enterprise receive a higher return on investment by increasing employee productivity, making better use of existing resources, improving service delivery while using fewer resources, leveraging existing infrastructure, and allowing more focus on customers’ needs. The scenarios best illustrate how IBM Tivoli Configuration Manager 4.2 can be flexible for most business needs. Since 3.6, Inventory and Software Distribution have evolved to become smarter, faster, and more efficient. IBM Tivoli Configuration Manager 4.2 uses all the best features and enhancements of the post-3.6 releases and also adds new features and enhancements to create a powerful deployment, change, and asset management suite. For more information on IBM Tivoli Configuration Manager 4.2, refer to the redbook All About IBM Tivoli Configuration Manager Version 4.2, SG24-6612.


Chapter 2. All about MDist2 and a comparison with MDist1

In this chapter we review multiplexed distribution (MDist) with a special emphasis on the differences between MDist1 and MDist2. We discuss these topics:

- “Introduction to Multiplexed Distribution” on page 26
- “MDist and distribution hierarchy” on page 26
- “MDist2 components and functionalities” on page 40

© Copyright IBM Corp. 2002. All rights reserved.

25

2.1 Introduction to Multiplexed Distribution

Multiplexed Distribution (MDist) is a core Tivoli Framework service that provides distribution of large volumes of data to multiple recipients. To do such a distribution, a fan-out structure is used, where a node receives a data stream and forwards it on to multiple recipients, which in turn can forward it to other nodes. This fan-out node is called a repeater. Repeaters are used when the data being sent is greater than 16 kilobytes. There is only one case in which a repeater is never used in a distribution: when the distribution is less than 16 KB and goes to only one target. This then uses oserv-to-oserv communications. When the data is greater than 16 KB, the technique used for delivery is called Bulk Data Transfer (BDT), which in turn uses a data communication form called Inter-Object Messaging (IOM). There are currently two versions of MDist. In Framework 3.6.x and below, only MDist1 support is provided, while in Framework 3.7 and above there are two versions, MDist1 and MDist2. Tivoli Management Framework provides a new, improved distribution mechanism called MDist2. The following sections discuss MDist2 in detail, emphasizing the differences between MDist2 and the previous version of MDist. IBM Tivoli Configuration Manager components are written to use the MDist2 service.
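The delivery rule just described (a repeater and BDT are used unless the distribution is under 16 KB and has exactly one target) can be sketched as a small decision function; this is an illustration of the rule, not Framework code.

```shell
# Sketch of the MDist delivery decision described above:
# a repeater (BDT over IOM) is used unless the distribution is
# smaller than 16 KB and has exactly one target, in which case
# plain oserv-to-oserv communication is used.

delivery_path() {
  local size_kb=$1 ntargets=$2
  if [ "$size_kb" -lt 16 ] && [ "$ntargets" -eq 1 ]; then
    echo "oserv-to-oserv"
  else
    echo "repeater (BDT over IOM)"
  fi
}

delivery_path 8 1      # small distribution, single target
delivery_path 8 50     # small distribution, many targets
delivery_path 2048 1   # large distribution, single target
```

Only the first call takes the oserv-to-oserv path; everything else goes through a repeater.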

2.2 MDist and distribution hierarchy

A repeater functions by having a list of end targets, the name and arguments of a CORBA method to be executed on each end target, and an input stream. The data stream sent to each end target is exactly the same. The process called rpt performs this function. Status information for a distribution is only reported when the last end target has received its data stream; that is when the log and/or the Notices are updated. This is a synchronous distribution. Distribution flow is coordinated from the TMR by the repeater manager (rptm). This process also contains the configurations of all installed repeaters. Configuration of repeaters is done with a single command called wrpt. Data that comes into a repeater is handled on one thread (the In Spool thread), while data that is sent to the end targets is handled on other threads (the Out Spool threads). A gatelog at debug level 9 shows this relationship as well as other repeater information. The thread information is highlighted below, as well as other repeater-related information.

Example 2-1 Information related to repeater configuration in the gatelog

gwcache: hit key=
gwcache: hit key=


mdist: Registering repeater Manager: 1670943514.1.365
mdist: TMF_rptm_mgr::rpt_register called, tuning parms:
mdist: mem_max = 10000
mdist: disk_max = 50000
mdist: disk_hiwat = 50000                                  <- repeater tuning values
mdist: disk_time = 1
mdist: disk_dir = c:/Tivoli/db/win-inv01a.db/tmp/
mdist: net_load = 500
mdist: max_conn = 100
mdist: stat_intv = 180
mdist: Opening cache file: c:\Tivoli\db\win-inv01a.db\tmp\pmap2   <- location of swap file
sched: got a job
sched: got a job
sched: got a job
sched: got a job
mdist: in_spool_thread started: TID = 12272d0
mdist: out_spool_thread started: tid = 122de68 client = [1670943514.7.508+#TMF_endpoint::endpoint#]
mdist: out_spool_thread started: tid = 122dd08 client = [1670943514.8.508+#TMF_endpoint::endpoint#]
mdist: out_spool_thread started: tid = 122db50 client = [1670943514.9.508+#TMF_endpoint::endpoint#]
…..
new_session: 11a262c3, connecting to 172.24.1.15+8898...
reader_thread: received data: session=11a262c1, type=9, len=52
mdist: in_spool_thread finished: TID = 12272d0
reader_thread: received data: session=11a262c3, type=9, len=52
reader_thread: received data: session=11a262c1, type=5, len=116
destroying session 11a262c1
mdist: Finished out_spool to 1670943514.7.508+#TMF_endpoint::endpoint#             <- Results
mdist: Result length for 1670943514.7.508+#TMF_endpoint::endpoint# = 42
mdist: out_spool_thread finished: tid = 122de68 client = [1670943514.7.508+#TMF_endpoint::endpoint#]
mdist: results collected, ncomplete = 1 nactive = 2
reader_thread: received data: session=11a262c2, type=9, len=52
reader_thread: received data: session=11a262c3, type=5, len=116
destroying session 11a262c3
TRUNCATED...
destroying session 11a262c2
mdist: Finished out_spool to 1670943514.8.508+#TMF_endpoint::endpoint#
mdist: Result length for 1670943514.8.508+#TMF_endpoint::endpoint# = 42
mdist: out_spool_thread finished: tid = 122dd08 client = [1670943514.8.508+#TMF_endpoint::endpoint#]
mdist: results collected, ncomplete = 3 nactive = 0
mdist: Distribution (2) finished.

Chapter 2. All about MDist2 and a comparison with MDist1

27

2.2.1 Repeater configuration and placement

The location and configuration of repeaters should be based upon the network topology, in terms of network speeds and network hardware (repeaters, bridges, and so on), and the structure of the organization in terms of organizational units. When a TMR is installed, it automatically becomes the first repeater in the hierarchy of the organization. Running the command wrpt shows the following for a TMR called win-tmr01a:

win-tmr01a [1]   wd-   []

The first element is the hostname, followed by the dispatcher number in brackets, and then the repeater flags. In this case the w and d flags are set. Table 2-1 shows the repeater flags and what they represent. To set a flag, use the wrpt repeater_name flag command.

Table 2-1 Repeater flags

Flag         Description
Default (d)  This flag should be set for only one repeater per TMR. The repeater with the
             default flag is the one that services all repeaters that are not part of another
             repeater's range. The TMR server is the initial default repeater.
Nodefault    Turns off the default flag.
Wan (w)      When a repeater has this flag enabled, that repeater is the machine used for all
             TMR to TMR profile distributions. This setting is the wan entry point, not the
             wan exit point. It determines what machine receives distributions into that TMR,
             not what machine is used to exit that TMR.
Nowan        Turns off the wan flag.
Always (a)   When a repeater is set to always, it is always used during a distribution that
             goes to one of its targets. This flag is used in cases such as firewalls and WAN
             connections/slow links, where the repeater that has the always flag has tuning
             parameters designed to reduce network load.
Noalways     Turns off the always flag.
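The flag column of wrpt output can also be decoded mechanically. The following awk sketch (our own convenience helper, not a Tivoli tool) maps each flag character to its name, using sample lines modeled on the listings in this chapter.

```shell
# Sketch: decode the flag column of `wrpt` output
# (w = wan, d = default, a = always). decode_flags is a
# hypothetical helper, not part of the Framework.

decode_flags() {
  awk '{
    flags = ""
    if ($3 ~ /w/) flags = flags " wan"
    if ($3 ~ /d/) flags = flags " default"
    if ($3 ~ /a/) flags = flags " always"
    if (flags == "") flags = " (none)"
    printf "%s:%s\n", $1, flags
  }'
}

# Sample input mirrors the wrpt listings shown in this chapter.
printf '%s\n' \
  'win-tmr01a [1]  wd-  []' \
  'win-arch01a [3] ---  []' | decode_flags
```

On a live system you would pipe the real output in with `wrpt | decode_flags`.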
2.2.2 Using Always and WAN options for interconnected TMRs

As other repeaters are created, a hierarchical tree is constructed, where repeaters service other repeaters. This is done through the configuration of ranges. Figure 2-1 shows the wrpt command output for the Always option.


Figure 2-1 Always flag setting example

Figure 2-2 shows an example of repeater configuration for interconnected TMRs where the WAN option is implemented.


Figure 2-2 Using WAN option for interconnected TMRs

Example 2-2 shows the repeater configuration we used for the interconnected Windows 2000 and AIX TMRs in our scenarios. The command wlsconn shows that we have a two-way interconnection between the AIX and Windows 2000 TMRs.

Example 2-2 Repeater configuration for interconnected TMRs

# wlsconn
MODE  NAME               SERVER      REGION
      win-tmr02a-region  win-tmr02a  1831081883
# wrpt
aix-tmr1b [1]    wd-   []
aix-inv01b [23]  ---   []
itsodev1 [26]    ---   []
win-tmr02a [1]   wd-   []
3c056 [6]        ---   []
# wrpt -q aix-tmr1b aix-inv01b itsodev1 win-tmr02a 3c056
--[RPT:aix-tmr1b [1]]
 |--aix-inv01b [23]
 |--itsodev1 [26]
 |--[RPT:win-tmr02a [1]]
 |  |--win-tmr02a [1]
 |  |--3c056 [6]


For endpoints, there are a couple of commands that can be used to find information. The first is wrpt -ge. This lists the repeaters by their OIDs, and the endpoint ranges are also specified. For instance, in the above example the output for the endpoints would be like Example 2-3.

Example 2-3 Endpoint ranges with wrpt -ge

# wrpt -ge
1375372617.1.578 wan default ep_range=4-10,12-14,19-22,35,38-40,42,51-54,56,59,62-65,69
1375372617.23.7
1375372617.26.19 ep_range=68,70-71
1831081883.1.577 wan default ep_range=3-4,7,9-10
1831081883.6.21 ep_range=8
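The ep_range values above are compressed lists of endpoint dispatcher numbers. The following sketch (not a Tivoli tool) shows how such a range string expands into individual dispatcher numbers.

```shell
# Sketch: expand a compressed ep_range string (as printed by
# `wrpt -ge`) into individual endpoint dispatcher numbers.
# expand_ep_range is a hypothetical helper for illustration.

expand_ep_range() {
  local range=$1 part
  for part in $(echo "$range" | tr ',' ' '); do
    case $part in
      *-*) seq "${part%-*}" "${part#*-}" ;;   # a-b span: list every number
      *)   echo "$part" ;;                    # single dispatcher number
    esac
  done
}

expand_ep_range "68,70-71"
```

For the repeater 1375372617.26.19 above, the range 68,70-71 therefore covers the endpoints with dispatcher numbers 68, 70, and 71.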

An alternative view is Example 2-4. In this case the endpoint ranges are in the angled brackets.

Example 2-4 Endpoint ranges with wrpt -e

# wrpt -e
aix-tmr1b [1]    wd-   []
aix-inv01b [23]  ---   []
itsodev1 [26]    ---   []
win-tmr02a [1]   wd-   []
3c056 [6]        ---   []
#

The route to an endpoint can be found in much the same way as a managed node's route. For example, using the wrpt and wep commands, we can determine the distribution route for one or more endpoints at a time.

Example 2-5 Finding distribution route for endpoints

C:\Tivoli\bin\w32-ix86\bin>wrpt
win-tmr01a [1]    wd-   []
win-inv01a [2]    w--   []
win-arch01a [3]   ---   []
win-rptr01a [4]   ---   []
C:\Tivoli\bin\w32-ix86\bin>wep ls
G 1370748664.3.24 win-arch01a-gw
  1370748664.11.522+#TMF_endpoint::endpoint#   WIN-OME-A
G 1370748664.4.21 win-rptr01a-gw
  1370748664.12.522+#TMF_endpoint::endpoint#   WIN-NTK-A
  1370748664.14.522+#TMF_endpoint::endpoint#   WIN-CHRIS-A
  1370748664.17.522+#TMF_endpoint::endpoint#   windows2K01
  1370748664.20.522+#TMF_endpoint::endpoint#   3C055
  1370748664.21.522+#TMF_endpoint::endpoint#   ibmtiv8
  1370748664.6.522+#TMF_endpoint::endpoint#    WIN-ARCH01A
  1370748664.7.522+#TMF_endpoint::endpoint#    WIN-RPTR01A
  1370748664.8.522+#TMF_endpoint::endpoint#    WIN-TMR01A
  1370748664.9.522+#TMF_endpoint::endpoint#    WIN-TINA-A
G 1370748664.1.591 win-tmr01a-gw
  1370748664.19.522+#TMF_endpoint::endpoint#   ibmtiv10
  1370748664.22.522+#TMF_endpoint::endpoint#   FUTURISM
  1370748664.23.522+#TMF_endpoint::endpoint#   shaker
  1370748664.5.522+#TMF_endpoint::endpoint#    WIN-INV01A

C:\Tivoli\bin\w32-ix86\bin>wrpt -q @ManagedNode:win-arch01a @endpoint:WIN-ARCH01A @endpoint:WIN-RPTR01A @endpoint:WIN-INV01A @endpoint:WIN-OME-A
--[RPT:win-arch01a [3]]
 |--[RPT:win-arch01a [3]]
 |  |--[RPT:win-tmr01a [1]]
 |  |  |--WIN-INV01A [5]
 |  |
 |--[RPT:win-rptr01a [4]]
 |  |--WIN-ARCH01A [6]
 |  |--WIN-RPTR01A [7]
 |  |
 |--WIN-OME-A [11]

We can also exchange resources between TMRs with wupdate and find the route to an endpoint in a connected TMR.

Example 2-6 Finding route to an endpoint in an interconnected TMR

# wlsconn
MODE  NAME               SERVER      REGION
      win-tmr02a-region  win-tmr02a  1831081883
# wupdate -r endpoint win-tmr02a-region
TRUNCATED...
win-bkp01b    1375372617.14.522+#TMF_endpoint::endpoint#
win-bkp03b    1375372617.5.522+#TMF_endpoint::endpoint#
win-w200083b  1831081883.7.522+#TMF_endpoint::endpoint#
win-w200200a  1831081883.3.522+#TMF_endpoint::endpoint#
winarch01b    1375372617.7.522+#TMF_endpoint::endpoint#
# wrpt
aix-tmr1b [1]    wd-   []
aix-inv01b [23]  ---   []
itsodev1 [26]    ---   []
win-tmr02a [1]   wd-   []
3c056 [6]        ---   []
# wrpt -q aix-tmr1b @endpoint:win-w200083b
--[RPT:aix-tmr1b [1]]
 |--[RPT:win-tmr02a [1]]
 |  |--win-w200083b [7]
 |
#
# wgateway
Object            Name             Status
1375372617.1.578  aix-tmr1b-gw     u
1375372617.26.19  itsodev1-gw      u
1831081883.1.577  win-tmr02a-gw    u
1831081883.6.21   win-tmr02b-gw01  u

The wrpt -q aix-tmr1b @endpoint:win-w200083b command shows us the distribution route from the local TMR server aix-tmr1b to the endpoint win-w200083b in the interconnected TMR.

2.2.3 Repeater tuning in MDist1

There are a number of tuning parameters that are used to control the flow of the data stream from the source to the destination. Each repeater has to be configured correctly to avoid network and resource issues. To view the current settings of a repeater, use the following command:

wrpt -t RepeaterName

Example 2-7 Repeater parameters for aix-tmr1b

# wrpt -t aix-tmr1b
mem_max = 10000
disk_max = 50000
disk_hiwat = 50000
disk_time = 1
disk_dir = "/tmp"
net_load = 500
max_conn = 100
stat_intv = 180

The above output also shows the values that are used. These will need to be changed to be applicable to the Tivoli environment. Each tuning parameter is described below and in further sections.


- mem_max (kilobytes)
  This is where data is spooled first during a distribution. It uses real memory (RAM) and not swap memory. Use the default value that is supplied, and increase it only if the system resources are available.

- disk_max (kilobytes)
  This is the size of the paging or swap file that data will use once mem_max has been exhausted. The recommendation is that the size be up to 20% larger than the largest distribution that will go through the repeater.

- disk_hiwat (kilobytes)
  This represents the high-water mark for disk_max. Once the value of disk_hiwat has been reached, writing to disk is slowed down to one disk block allocation every disk_time seconds.

- disk_time (seconds)
  This is the delay to wait between disk block allocations once disk_hiwat has been reached. The default value of 1 is usually not changed.

- disk_dir
  This is the directory where the disk_max paging file is written, and it therefore must be larger than disk_max.

  Note: If more than one distribution is going on simultaneously, then the size of disk_dir must be able to handle such events. That is, if disk_max is set at 200 MB and there are 3 distributions using this value, then disk_dir must hold at least 3 x 200 MB = 600 MB. This is because MDist1 works per distribution. The need to save the data of a distribution is due to slower targets that may be present.

  By default under NT/Windows 2000, disk_dir is %DBDIR%/tmp, while under UNIX it is /tmp, which on many systems is used as swap space by applications and the operating system. So a dedicated directory or filesystem should be used for UNIX-based repeaters. Looking at the gatelog, there are actually two swap files created: one called swap.xxx, where xxx is a number, in the disk_dir directory; the other is called pmapx, where x represents the distribution ID. You can define the $TMP variable in the odadmin environment to point to a new location.
Example 2-8 Changing the swap directory for NT managed nodes

C:\Tivoli\db\win-inv01a.db>wrpt -t win-inv01a disk_dir=c:/temp
C:\Tivoli\db\win-inv01a.db>wrpt -t win-inv01a
mem_max = 10000


disk_max = 50000
disk_hiwat = 50000
disk_time = 1
disk_dir = "c:/temp"
net_load = 500
max_conn = 100
stat_intv = 180

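The disk_dir sizing rule from the note above (one disk_max spool per simultaneous distribution) is simple multiplication; the figures below are hypothetical, matching the worked numbers in the note.

```shell
# Sketch of the disk_dir sizing rule described above: disk_dir must
# be able to hold one disk_max-sized spool file per simultaneous
# distribution. Figures are hypothetical.

disk_max_mb=200        # disk_max, expressed in MB
concurrent_dists=3     # distributions running at the same time

required_mb=$((disk_max_mb * concurrent_dists))
echo "disk_dir needs at least ${required_mb} MB free"
```

Sizing the filesystem that holds disk_dir this way avoids a distribution failing simply because another distribution's spool file filled the directory first.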
If you rexec the oserv on the gateway and repeat the distribution, then the swap file will be written to the c:\temp directory.

- net_spacing (milliseconds)
  This is the time to wait between each 16 KB write to the network. By default it is 0, but you may want to change it if there is a slow network. Once it has been set, it will become visible through the wrpt command.

- stat_intv (seconds)
  This value is a timeout on how long it may take to send each Tivoli packet (normally 16 KB). If your network is too slow or saturated, stat_intv needs to be higher. If you are getting high-level TCP timeout errors, increase stat_intv, reduce max_conn, or set SLOW_LINK. stat_intv is not used when distributing from a gateway to a Tivoli Management Agent; the gateway's session timeout is used instead. More information on the configuration of timeouts is found in a later section.

- max_conn
  This parameter sets the maximum number of simultaneous connections the repeater opens to its targets during a distribution.

- net_load (kilobytes per second)
  This is the amount of network bandwidth used in a distribution. This should be tuned for the LAN and WAN links. This parameter is the reason you should have a repeater at the central site for each WAN type.

2.2.4 Timeouts and repeaters

You can use the wrpt command to change the repeater manager parameters.

Global Repeater Manager timeout

This is set to prevent a hanging condition if there is a loss of contact with a target during a distribution. By default it is set to 0, or never time out. Using the wrpt command with the -T option shows the current setting:


Example 2-9 Global Timeout parameter

C:\Tivoli\db\win-inv01a.db>wrpt -T
repeater manager timeout is infinite.

This timeout setting only applies to managed nodes, not endpoints; endpoints are serviced by set_session_timeout.

Gateway Session timeout

This is used exclusively for endpoint timeouts, and is equivalent to the Global Repeater Manager timeout explained earlier. It is set via the wgateway command, and the default is 300 seconds, or 5 minutes. It represents the maximum time a method downcall can take before an error is generated. Applications like Tivoli Software Distribution 3.6 and Tivoli Inventory 3.6 can adjust the timeout. In the case of Software Distribution, this is done through the progs_timeout variable in the file package definition file. The value is set for BARC (Before, After, Removal, and Commit) scripts to prevent a hanging condition. For a distribution to an endpoint, the gateway session timeout is set to the value of progs_timeout plus 10 seconds. The default value is 0, which is interpreted as 1 hour. For Tivoli Inventory, there is a profile setting which determines how long a scan can take on the target. The target in question is a PcManagedNode or an endpoint. The timeout value is set by using the wsetiprf command. The default values are as follows:

- Scanner timeout period: 2700 seconds
- Endpoint timeout period: 1800 seconds

Example 2-10 Setting timeout parameter for Inventory profiles

C:\>wsetiprf -t 50 @InventoryProfile:"HardwareScan"
Scanner timeout period: 2700 seconds
endpoint timeout period: 50 seconds
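The progs_timeout rule above (gateway session timeout = progs_timeout + 10 seconds, with 0 meaning the 1 hour default) can be sketched as follows; this is our reading of the rule as stated, not Framework code.

```shell
# Sketch of the effective gateway session timeout for a Software
# Distribution operation, per the rule above: progs_timeout + 10
# seconds, with progs_timeout = 0 interpreted as the 1 hour default.
# effective_session_timeout is a hypothetical helper for illustration.

effective_session_timeout() {
  local progs_timeout=$1
  if [ "$progs_timeout" -eq 0 ]; then
    echo 3600                       # default: 1 hour
  else
    echo $((progs_timeout + 10))    # progs_timeout plus 10 seconds
  fi
}

effective_session_timeout 0      # default case
effective_session_timeout 600    # 10-minute BARC script budget
```

A file package whose BARC scripts may legitimately run for 10 minutes thus yields a 610-second session timeout rather than triggering a hang-detection error.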

Stat_intv timeout
This timeout is used exclusively between managed nodes and represents the time allowed to send each packet. The error message associated with a failure is a "High level TCP timeout". Table 2-2 shows the relationship between distribution types and the timeouts.


Migration to IBM Tivoli Configuration Manager Version 4.2

Table 2-2 Timeout for distributions

Function  managed node -> managed node        gateway -> endpoint                  repeater -> repeater
Before    final_timeout (wrpt -T), stat_intv  progs_timeout, final timeout 1 hour
Send      stat_intv                           gateway session timeout
After     final timeout (wrpt -T)             progs_timeout, final timeout 1 hour  Infinite

Net load
The amount of bandwidth used is one of the parameters that requires careful planning, so that networks with restrictions in place (shared or slow links) are not overloaded with data. We suggest using no more than 25% of the bandwidth. For example, 25% of a 10 Mbps Ethernet LAN is 312 KB/second, and 50% would be 625 KB/second.

There are three types of net_load setting: a positive net_load (the default type), a negative net_load, and slow links, which change the packet size.

With a positive net_load, a distribution through the repeater uses a set amount of bandwidth, with each connection using part of the value. The amount depends on the max_conn setting. For example, with net_load set to 500, if there are only four connections and max_conn is set to 10, each connection gets 500/4 = 125 KB/second. If there are 10 connections and max_conn is set to 10, each connection gets 500/10 = 50 KB/second. If 20 connections are to be made, the speed per connection is still 50 KB/second, because the max_conn value prevents more than 10 connections from being made at once.
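The positive net_load division described above can be checked with a quick bash sketch (numbers taken from the example in the text):

```shell
# Per-connection bandwidth for a positive net_load of 500 KB/s with
# max_conn=10, as in the example above.
net_load=500
max_conn=10
for targets in 4 10 20; do
  conns=$(( targets < max_conn ? targets : max_conn ))   # max_conn caps connections
  echo "$targets targets: $(( net_load / conns )) KB/second per connection"
done
```

With 20 targets the per-connection speed stays at 50 KB/second because only 10 connections run at once; the remaining targets wait for a free connection.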


Figure 2-3 Net_load is distributed between connections of a single distribution

Important: If there is more than one distribution going through a repeater, the bandwidth used is multiplied by the number of distributions. That is, with four connections of 125 KB/second each and two distributions taking place, each connection carries 2 x 125 KB/second = 250 KB/second of bandwidth.

For a negative net_load, the bandwidth is per connection and not per distribution as described for a positive net_load. Therefore, if net_load is specified as net_load = -500, each endpoint connection uses a bandwidth of 500 KB/second, but note that with four connections the repeater itself will use 4 x 500 KB/second of bandwidth.

A slow link environment is used when there is limited bandwidth available. This variable is set through the odadmin command:

Example 2-11 Changing the distribution packet size
odadmin env get > env.txt
Edit env.txt and add a line with SLOW_LINK=TRUE
odadmin environ set < env.txt
Re-exec the oserv of the managed node

The effect is to have the packet size reduced from 16 KB to 512 bytes and the net_load setting from KB/second to bytes/second. The consequence is that the net_load value must be much higher.
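Returning to the positive and negative net_load examples above: because the setting is applied per distribution (and a negative value per connection), the aggregate bandwidth grows with the number of concurrent distributions. A bash sketch of that arithmetic:

```shell
# Aggregate bandwidth when two distributions run at once (numbers
# from the examples in the text).
net_load=500   # KB/second
conns=4
dists=2
# Positive net_load: each distribution gets net_load, shared by its connections.
echo "positive: $(( net_load * dists )) KB/second total"          # -> 1000 KB/second total
# Negative net_load (-500): every connection gets the full 500 KB/second.
echo "negative: $(( net_load * conns * dists )) KB/second total"  # -> 4000 KB/second total
```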


2.2.5 Active distributions
There are three options to pass to the wrpt command to manage active distributions:

 Syntax
wrpt -L
wrpt -R -k id
wrpt -A [-f] -k id

 Options

-L   Lists all active distributions in a four-column format. The first
     column is the unique active distribution number, the second column
     is the distribution name (a label chosen by the application), the
     third column is the distribution's start time, and the fourth column
     gives statistics for the distribution in the format
     in/est_size [out_min-out_max], where:
       in        Amount of data transmitted at that time
       est_size  File package size
       out_min   Smallest amount of data transferred
       out_max   Largest amount of data transferred

-R   Shows the repeater route.

-k   When used as an active distribution option (with the -R and -A
     options), -k specifies the target active process. Note: When used as
     a tuning option, -k causes configuration options to affect only the
     active distributions.

id   Specifies the unique process number of an active distribution,
     obtained with the -L argument and used with the -k argument.

-A   Aborts a distribution. The user is prompted "Are you sure?" unless
     the force argument (-f) is also given.

-f   Forces an operation (suppresses any confirmation prompt).

Example 2-12 shows two active distributions and their updating status.

Example 2-12 Active distributions
c:\>wrpt -L
2 fp_distribute 01 02 22:46:48    5136/0 [32-32]
3 fp_distribute 01 02 22:46:49    5136/0 [32-32]

c:\>wrpt -L
2 fp_distribute 01 02 22:46:48    5136/0 [4424-4424]
3 fp_distribute 01 02 22:46:49    5136/0 [4224-4224]

c:\>wrpt -L
2 fp_distribute 01 02 22:46:48    5136/0 [5136-5136]
3 fp_distribute 01 02 22:46:49    5136/0 [5136-5136]

2.3 MDist2 components and functionalities

Tivoli Management Framework V3.7 and higher provides a new, improved distribution mechanism called MDist2. All the IBM Tivoli Configuration Manager components use the functionalities provided by MDist2. The following sections discuss MDist2 in detail, emphasizing the differences between MDist2 and MDist (also referred to as MDist1).

2.3.1 What is MDist2?
MDist2 is a Tivoli Framework service that provides Tivoli applications with the functionalities required to perform data transfers through a hierarchy of repeaters. It provides utilities to fully control and automate the application profile distributions. Tivoli Management Framework V3.7 and higher still supports MDist1 repeaters for backward compatibility. MDist2, however, extends MDist1 capabilities to handle the large-scale distribution needs of Tivoli applications. The following section describes the primary MDist2 components and how they work together.


Figure 2-4 MDist2 components

2.3.2 MDist2 components
Figure 2-4 illustrates the primary MDist2 components:

 Repeater manager: The Tivoli object that maintains configuration data for all repeaters in the TMR. It also determines the distribution path. There is one repeater manager per TMR.

 Repeater site: The intermediate client that receives a single copy of data and sends it to another repeater site or target clients.

 Repeater depot: The storage site for MDist2 distributions. Every repeater has a depot. Therefore, data can be stored on any repeater in the Tivoli environment. This storage mechanism helps reduce network traffic for frequently distributed data sets.

 Repeater queue: The queuing mechanism for MDist2 distributions. Every repeater has a queue. The distribution is queued and its persistent information is kept as a local file. This queuing mechanism includes a retry function that enhances support for unreachable targets.


 Distribution manager: The Tivoli object that updates status in the database. There is one distribution manager per TMR. Therefore, each TMR keeps track of all distributions it launches.

 GUI: The Java interface used to view status and control distributions.

 RIM: Stands for the RDBMS Interface Module. It is a common interface that Tivoli applications can use to store and retrieve information from a relational database, and it is used to store MDist2 distribution data.

It is also important to note that there are two repeater types in MDist2:
 Gateway repeater (TMF/LCF/gateway)
 Managed node repeater (TAS/MANAGED_NODE/rpt2)

A gateway repeater is linked into the gateway, whereas a managed node repeater is a stand-alone binary that runs on a managed node. Although both types of repeaters use the same code base and are functionally very similar, there are some differences between them. Table 2-3 summarizes these differences.

Table 2-3 Differences between gateway and managed node repeaters

Repeater type          Log file        Targets                  Lifetime
Gateway repeater       $DBDIR/gatelog  Repeaters and endpoints  Always up as part of a gateway
Managed node repeater  $DBDIR/rptlog   Repeaters only           Exits if queue is empty after 20 minutes of inactivity

2.3.3 What is new in MDist2?
MDist2 provides the following new functions for distributions in large-scale environments:

 All new repeater functions:
– Asynchronous operation
– Priority queues
– Total resource limits (memory, disk, bandwidth)
– Data depots
– Checkpoint restart
– Persistent queues for assured delivery
– Queue data for disconnected endpoints

 GUI & CLIs:
– Real time status
– Distribution Control (cancel, pause/resume)

Table 2-4 provides a quick comparison of the differences between MDist2 and MDist1.

Table 2-4 MDist1 and MDist2 - comparison

                           MDist2 repeater          MDist1 repeater
To application             Asynchronous             Synchronous
Distribution Queue         Prioritized              Non-Prioritized
Resource Limit             Per repeater             Per distribution
Distribution Source        source host / depot      source host
Data Flow                  Store and forward        Pipeline
Interrupted Distribution   Kept in queue / Retry    Aborted
Restart from Interruption  Restart from checkpoint  Manual restart back to original configuration
Disconnected endpoint      Supported by Queue       Aborted
Distribution Status        Available                Result only
Distribution Control       Available                Limited ability to cancel

Note: MDist2 is a Tivoli Framework service, so any Tivoli application can use it. IBM Tivoli Configuration Manager components use all the functionalities provided by MDist2 to provide reliable, fast, and controllable distributions.

2.3.4 Asynchronous delivery
MDist2 provides an asynchronous interface to applications. Figure 2-5 explains this concept.


Figure 2-5 Asynchronous delivery concept

1. The application submits a delivery request, and immediately gets a return.
2. The delivery operation is in progress. It may take a long time.
3. The final exit status of each distribution is returned through a callback function.

The application does not need to wait until the delivery is completed for all destinations, as it does with MDist1, which uses a synchronous interface. Figure 2-6 explains the synchronous delivery concept in MDist1.

Figure 2-6 Synchronous delivery concept

1. The application requests a delivery.
2. The delivery operation is in progress. It may take a long time.
3. After all targets receive the distribution, the final exit status of each distribution is returned.
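The contrast between the two models can be illustrated with a generic shell sketch (plain shell, not Tivoli code): an asynchronous submit returns control immediately, and results arrive later, much like the MDist2 callback.

```shell
# Generic illustration only - not Tivoli code. Background jobs stand in
# for in-progress deliveries; "wait" stands in for the status callbacks.
deliver() { sleep 1; echo "target $1: delivered"; }

for t in ep1 ep2 ep3; do
  deliver "$t" &      # submit each delivery without blocking
done
echo "submit returned; application continues"
wait                  # final status of every target arrives afterwards
```

In a synchronous model the application would instead call deliver for each target in sequence and regain control only after the last one finished.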


Any targets that encounter errors or are unreachable affect the throughput of the whole distribution process. Asynchronous delivery allows many independent delivery operations to be in progress, and enables efficient software package distributions to large numbers (thousands or more) of targets without being slowed down by errors or unreachable targets. Also, the distribution log file, as shown in Example 2-13, has two parts:
 Submission
 Final results

The Telephone_Directory^1.2 package was sent to targets named ausres42, ausres43, and ausres45. The distribution was successful for ausres42 and ausres45, but unsuccessful for ausres43, because the package was already installed on that target.

Example 2-13 Distribution log file
Software Package: "Telephone_Directory^1.2"
Operation:        install
Mode:             not-transactional,not-undoable
Time:             2000-04-04 19:59:54
=================
ausres45:
ausres43:
ausres42:
Operation successfully submitted.
Distribution ID is 1757544609954
=================
Software Package: "Telephone_Directory^1.2"
Operation:        install
Mode:             not-transactional,not-undoable
Time:             2000-04-04 20:00:50
=================
ausres42:
Distribution ID: `1757544609954896390'
Operation successful.
*****************
ausres43:
Distribution ID: `1757544609954896390'
Operation unsuccessful.
Current software package status is 'IC---'.
The requested operation is not allowed for the software package Telephone_Directory^1.2
*****************
ausres45:
Distribution ID: `1757544609954896390'
Operation successful.

2.3.5 Priority queues
Queues in MDist2 have the following functionalities:
 Handle a large number of concurrently active distributions
 Distribution to disconnected endpoints
 Persistent with automatic retry
 Distributions can be prioritized (high, medium, and low)
 Distributions may have a deadline that specifies when they expire

Figure 2-7 shows the structure of a queue in a repeater.

Figure 2-7 MDist2 repeater queue

To handle a large number of distributions, MDist2 repeaters use a queue mechanism. MDist2 distributions can have three priority levels: high, medium, and low. Priority levels designate the order in which distributions are handled by repeaters, affecting the queue placement for each package. Distributions with higher priority levels are handled before ones with lower priority. Repeaters handle distributions with the same priority level in the order in which they are received. MDist2 allows the maximum number of concurrent connections to be specified for each priority level and for each repeater. A distribution with a given priority level can use the number of connections reserved for its priority level plus any connections allocated for lower priority levels.

Figure 2-8 Available connections for each priority (connection pools sized by max_sessions_high, max_sessions_medium, and max_sessions_low)

For example, suppose the queues are configured as follows:
 max_sessions_high = 5
 max_sessions_medium = 10
 max_sessions_low = 40

In this example, high priority distributions can use up to 55 (5+10+40) sessions, medium priority distributions up to 50 (10+40) sessions, and low priority distributions up to 40 sessions.

In the above example, we used the parameters max_sessions_high, max_sessions_medium, and max_sessions_low to represent the maximum numbers of concurrent connections for high, medium, and low distributions. Use the wmdist command with the -s argument to view or configure these MDist2 options. In the following example, we changed the maximum number of concurrent high priority distributions allowed on repeater chatham to 10.

Example 2-14 Changing priority
#wmdist -s chatham max_sessions_high=10
#
#wmdist -s chatham
repeater_id:            1978508757.1.604
rpt_dir:                /usr/local/Tivoli/db/chatham.db/tmp/
permanent_storage:      TRUE
max_sessions_high:      10
max_sessions_medium:    10
max_sessions_low:       40
disk_max:               512000
mem_max:                65536
send_timeout:           300
execute_timeout:        600
notify_interval:        30
conn_retry_interval:    900
retry_ep_cutoff:        7200
net_load:               500
packet_size:            16384
target_netload:         0
debug_level:            3
debug_delay:            0
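Because a distribution can borrow connections from lower priority levels, each level's effective ceiling is its own pool plus all lower pools. A quick bash check with the example values (5, 10, and 40):

```shell
# Effective session ceilings per priority, from the borrowing rule above.
max_sessions_high=5
max_sessions_medium=10
max_sessions_low=40
echo "high:   $(( max_sessions_high + max_sessions_medium + max_sessions_low )) sessions"  # -> 55
echo "medium: $(( max_sessions_medium + max_sessions_low )) sessions"                      # -> 50
echo "low:    $max_sessions_low sessions"                                                  # -> 40
```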

Priority assignment for distributions
With IBM Tivoli Configuration Manager, the priority level is assigned at distribution time for a series of change management operations: install, remove, commit, accept, undo, and verify. The default priority value is Medium. Figure 2-9 shows an example of the Install Software Package dialog. Note the Priority Level group box for setting the priority to Low, Medium, or High.


Figure 2-9 Software Distribution GUI - Install Software Package: setting priority

To do the same operation from the command line, the change management commands (winstsp for install, and so forth) provide the -l argument to specify MDist2-related options. Set the priority level using the priority option with a value of h (high priority), m (medium priority), or l (low priority). For example, -l priority=h sets the priority level to high.

Note: Priorities cannot be changed once a distribution has been submitted.

Retrying broken connections
 Between the gateway and TMAs:
– The gateway repeater can intercept endpoint logins (for example, after IBM Tivoli Configuration Manager forced a reboot of the endpoint).
– The gateway repeater will also retry the connection every conn_retry_interval seconds until retry_ep_cutoff seconds have elapsed.


 Between repeaters:
– The repeater will retry the interrupted repeater every conn_retry_interval seconds until the distribution expires. This could be a long time.
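With the default values shown in Example 2-14 (conn_retry_interval=900, retry_ep_cutoff=7200), a gateway retries an unreachable endpoint roughly every 15 minutes for 2 hours. A bash sketch of the arithmetic:

```shell
# Retry budget for an unreachable endpoint, defaults from Example 2-14.
conn_retry_interval=900   # seconds between retries
retry_ep_cutoff=7200      # seconds before the gateway gives up
echo "about $(( retry_ep_cutoff / conn_retry_interval )) retries over $(( retry_ep_cutoff / 60 )) minutes"
```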

2.3.6 Total resource limits per repeater
When a repeater fans out a distribution, it does not use one connection at a time, but several connections at the same time. These distribution processes consume system and network resources. It is very important to manage these resources to improve performance and throughput, not only for Tivoli Software Distribution, but also for the system itself when other applications are running on the repeater node. MDist2 repeaters provide parameters that specify the total amount of resources a repeater can use.

MDist1 repeaters also provide performance options. However, those settings apply per distribution. If multiple distributions occur, the resources allotted by the repeater settings are multiplied by the number of distributions initiated. This can lead to unexpected resource utilization, affecting system performance and throughput adversely and potentially causing a system hang. When using MDist1 repeaters, we do not recommend concurrent distributions in large-scale environments.

MDist2 settings apply per repeater, improving manageability of system and network resources such as memory, disk space, and network bandwidth. Better system performance and throughput allow for faster and more efficient distributions. The following three sections cover the MDist2 resource configuration options.

Distribution connections
MDist2 repeaters provide three parameters that specify the number of concurrent connections allowed within a priority level:
 max_sessions_high (5 connections by default)
 max_sessions_medium (10 connections by default)
 max_sessions_low (40 connections by default)


Note: Compare these parameters with the MDist1 repeater parameter max_conn. The max_conn parameter defines the total number of sessions (without priority) per distribution. With MDist2, however, the possible number of available connections is the sum of the high, medium, and low connections. Connections never exceed this number, since the limits are applied per repeater, not per distribution as in the case of max_conn. These connections are shared among all active distributions. Normally, these parameters affect the distribution process between the gateway repeater and the target endpoint. Figure 2-10 explains how these parameters work.

Figure 2-10 MDist2 maximum concurrent connections

1. When a distribution request is submitted, it is sent to the gateway repeater first.
2. The MDist2 repeater processes the request, getting the priority and number of targets, and puts them into the designated queue.


3. The repeater then fans out the data to the subscribers. The MDist2 repeater attempts to establish connections for each priority until the max_sessions value for that priority is reached. The order is determined by priority. If there are no connections left for a given priority, the repeater will try to borrow a connection from a lower priority. If the number of connections reaches the limit, the rest of the requests remain in the queue.
4. When the distribution to any target is completed (succeeded or failed), the connection is released.
5. MDist2 attempts to establish the connection for another request waiting in the queue according to the priority level. In this example, the connection for a low priority distribution is waiting to be dispatched, but there is a high priority request waiting, which will therefore take precedence.

The MDist2 repeater repeats this process until there are no remaining distribution requests.

Note: The MDist1 repeater also manages concurrent connections within a distribution in the same way, except without a priority queue. When the distribution to any target is finished (succeeded or failed), MDist1 also attempts to establish a connection to another target immediately. Because the MDist1 max_conn parameter is applied per distribution, the number of connections established between a repeater and its targets multiplies by the number of concurrent distributions. Figure 2-11 shows a distribution scenario with MDist1.


Figure 2-11 MDist1 max_conn configuration - multiple distribution scenario

1. A distribution request is sent to the MDist1 repeater.
2. The MDist1 gateway repeater fans out the data to the subscribers. In this case, the repeater attempts to open 10 TCP connections at the same time and distributes the data to each of the 10 machines in parallel.
3. Another distribution request is sent to the same MDist1 repeater.
4. The MDist1 repeater fans out the data to another 10 subscribers.

As a result, there are 20 TCP connections, not 10. Therefore, if multiple distributions occur in an MDist1 environment, the number of concurrent connections can quickly result in poor or even unmanageable performance. With MDist2, the connections are shared among all active distributions and never exceed the given number.

Distribution speed configuration
The MDist2 repeater provides two parameters that stipulate the network bandwidth used by a software distribution. They are specified in kilobytes per second.


These parameters are:
 net_load (500 KB/second by default)
 target_net_load (disabled (0) by default)

The net_load parameter specifies the maximum amount of network bandwidth that the repeater is allowed to use, while target_net_load specifies the maximum amount of network bandwidth that can be used on each connection to a target. These limits are enforced regardless of how many distributions are active at the same time. Figure 2-12 shows how these parameters work.

Figure 2-12 MDist2 net_load / target_netload concepts

To explain the difference in the net_load parameter between MDist1 and MDist2, we apply a multiple distribution scenario to both environments. The number of concurrent connections is strictly managed by the MDist2 priority queues. If multiple distributions occur, the number of concurrent sessions is not multiplied. Table 2-5 shows examples of the relationship between net_load and the number of connections (where the concurrent sessions are limited to 10, and target_net_load is disabled).

Table 2-5 Relationship between the MDist2 net_load and the connections

net_load  target_net_load  Targets  Speed / Connection  Total Speed / repeater
500       -                4        125 KB/second       500 KB/second
500       -                8        62.5 KB/second      500 KB/second
500       -                10       50 KB/second        500 KB/second
500       -                20       50 KB/second        500 KB/second
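The speed-per-connection values in Table 2-5 follow from dividing net_load by the number of concurrent sessions, capped at the 10-session limit. A quick bash and awk sketch of that arithmetic:

```shell
# Reproduce the Table 2-5 speed-per-connection column.
net_load=500   # KB/second per repeater
limit=10       # concurrent session limit used in the table
for targets in 4 8 10 20; do
  conns=$(( targets < limit ? targets : limit ))
  awk -v n="$net_load" -v c="$conns" -v t="$targets" \
    'BEGIN { printf "%d targets: %g KB/second per connection\n", t, n/c }'
done
```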

In this example, 10 or 20 targets have the same speed per connection. That is because the number of concurrent sessions is limited to 10. Table 2-6 shows examples of the relationship among net_load, target_net_load, and the number of connections (where the concurrent sessions are limited to 10).

Table 2-6 Relationship of MDist2 net_load, target_net_load, and the connections

net_load  target_net_load  Targets  Speed / Connection  Speed of repeater
500       50               4        50 KB/second        500 KB/second
500       50               8        50 KB/second        500 KB/second
500       50               10       50 KB/second        500 KB/second
500       50               20       50 KB/second        500 KB/second
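Adding target_net_load simply caps each connection's share of net_load; a bash sketch with the Table 2-6 values:

```shell
# Per-connection speed with a target_net_load cap (Table 2-6 values).
net_load=500        # KB/second, per repeater
target_net_load=50  # KB/second, cap per connection
limit=10            # concurrent session limit
for targets in 4 8 10 20; do
  conns=$(( targets < limit ? targets : limit ))
  share=$(( net_load / conns ))
  speed=$(( share < target_net_load ? share : target_net_load ))
  echo "$targets targets: $speed KB/second per connection"   # every row: 50
done
```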

In this example, 4 or 8 targets are limited to 50 KB/second, because target_net_load is limited to 50 KB/second. The net_load and target_net_load parameters enable strict management of the network bandwidth the distribution consumes, both per repeater and per connection.

Notes:
 The MDist2 net_load parameter corresponds to the MDist1 repeater parameter net_load, but its behavior is different. With MDist2 it is applied per repeater, not per distribution. This bandwidth is shared among all active distributions.
 An MDist1 repeater and an MDist2 repeater can coexist on the same node (for example, an endpoint gateway), but in that case both should be configured individually.
 The MDist1 net_load is set with the wrpt command, for example: wrpt -t net_load=500


MDist1 multiple distributions
The net_load parameter of an MDist1 repeater is applied independently per distribution. The MDist1 net_load parameter provides two different options: a positive net_load and a negative net_load. A positive net_load is specified with a positive number. It means that each distribution will use a set amount of bandwidth, and each connection will use the net_load amount divided by the number of connections. Figure 2-13 explains how an MDist1 repeater manages the network bandwidth using a positive net_load.

Figure 2-13 MDist1 positive net_load - multiple distribution scenario

In this example, the net_load is set to 500 KB/second, and there are two distributions, both of which distribute the data to four distribution targets concurrently (the MDist1 max_conn is set to more than 4). Each distribution connection can use up to 125 KB/second of bandwidth. But since there are two distributions to be executed, the total bandwidth can be up to 1,000 KB/second.

The negative net_load is specified with a negative number. The negative sign is simply a flag that tells the repeater that the net_load setting is applied per connection, not per distribution. Therefore, each connection uses the specified network bandwidth. Figure 2-14 explains how an MDist1 repeater manages network bandwidth using a negative net_load parameter.


Figure 2-14 MDist1 negative net_load - multiple distribution scenario

In this example, the net_load is set to -500 (KB/second), and there are two distributions, both of which distribute the data to four distribution targets concurrently (the max_conn is set to more than 4). Each distribution connection can use up to 500 KB/second of bandwidth. Because there are two distributions to be executed, the total amount of network bandwidth used can reach 4,000 KB/second.

Tip: The MDist2 target_net_load parameter corresponds to the MDist1 negative net_load. If a value is specified for target_net_load, the total amount of network bandwidth can never exceed the MDist2 net_load value. The MDist2 net_load parameter does not accept a negative number.

With MDist1 there is another option to control the network bandwidth. Setting the oserv environment variable SLOW_LINK to true causes the net_load parameter to be specified in bytes/second instead of kilobytes/second. Data transmission is performed in 1024-byte packets instead of the usual 16-kilobyte packets. SLOW_LINK does not exist in MDist2; it has been replaced by the MDist2 configuration parameter packet_size.

To summarize, MDist1 settings apply per distribution (with the exception of the negative net_load) and can result in unexpected bandwidth utilization. With MDist2, the network bandwidth is shared among all active distributions and never exceeds net_load.


Memory and disk for spooling data
The MDist2 repeater provides two parameters that specify the maximum amounts of system memory and disk space that can be allocated to the repeater to spool data during distributions:
 mem_max (65 MB by default)
 disk_max (500 MB by default)
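Both limits are per-repeater settings. Assuming the wmdist -s key=value form shown in Example 2-14 (the repeater name chatham and the new values here are illustrative only), they could be raised like this:

```shell
# Sketch only: raise the depot disk ceiling to about 1 GB and the spool
# memory ceiling to 128 MB on the repeater named "chatham".
# Values are in KB, matching the defaults shown in Example 2-14.
wmdist -s chatham disk_max=1024000
wmdist -s chatham mem_max=131072
```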

Figure 2-15 explains the interaction between mem_max and disk_max during a distribution.

Figure 2-15 MDist2 configuration - mem_max and disk_max

1. The data is distributed to the MDist2 repeater.
2. The data is initially stored on the disk (depot) of the repeater. The maximum amount of disk space that can be used for the data is specified by disk_max.
3. The memory is used as a buffer to distribute the data to the target. The maximum amount of memory that can be used for the data is specified by mem_max.


Notes: Both MDist1 and MDist2 provide mem_max and disk_max parameters, but there are two important differences:
 With the MDist2 repeater, these parameters are applied per repeater, not per distribution as with MDist1. The maximum allotted memory and disk space are shared among all active distributions.
 The MDist2 repeater adopts a "store-and-forward" mechanism using depots. Each repeater receives the entire distribution and stores it at least temporarily on disk before sending it on to its endpoints or the next level of repeaters.

When setting the disk_max value, be sure that the file system has enough space to accommodate the amount required for temporary storage by any software packages distributed concurrently, plus the total amount used by software packages loaded permanently on the depot.

Figure 2-16 shows how the MDist1 repeater manages mem_max and disk_max in the multiple distribution scenario.

Figure 2-16 MDist1 configuration - mem_max, disk_max in multiple distribution

1. The data of the first distribution is distributed to the MDist1 repeater.
2. The data is initially spooled to real memory on the repeater. The maximum amount of memory used for spooling data is specified by the value for mem_max.


3. After mem_max is reached, the data spools to the disk on the repeater. The directory or file system to which data spools is specified by the disk_dir parameter. The maximum amount of disk space that can be used for spooling data is specified by the value for disk_max.
4. The data of the second distribution is distributed to the MDist1 repeater.
5. The repeater allocates another area in real memory.
6. After mem_max is reached again, the data spools to another space on the disk (under the same specified disk_dir directory).

As long as multiple distributions occur in the MDist1 environment, memory allocation multiplies. Unplanned resource utilization may affect system performance or cause a system hang.

2.3.7 Data depots
MDist2 provides gateways with the ability to store transferred data in a local repository. This repository is called a depot, and a data entry in it is called a segment. A depot can store data (segments) temporarily or permanently.

Basically, the multiplexing functionality is the same in MDist1 and MDist2; that is, the repeater site receives a single copy of the data and distributes it to the next tier of clients. The difference is in whether the data can be stored in the distribution hierarchy. With MDist1, the data is cached only while distributions are executed. When the distributions are completed, whether they succeeded or failed, the data is deleted. This can be described as a data pipeline. With MDist2, on the other hand, each repeater receives the entire distribution before sending it on to the next level of repeaters or to its endpoints. The data can be stored on the depot either temporarily or permanently. This type of distribution uses a store-and-forward mechanism. See Figure 2-17.

60

Migration to IBM Tivoli Configuration Manager Version 4.2

Figure 2-17 Depot concepts

Figure 2-18 shows the structure of depots and their usage between repeaters.

Figure 2-18 Depots between repeaters

The depots provide the following distribution functionalities:
 Repeaters can store distribution data in a depot
 Checkpoint restart between repeaters, gateways, and endpoints
 Reduced network traffic for frequently distributed data sets
 Data can be stored temporarily or permanently
 Data is transferred via store and forward

Segments

A depot stores each distribution segment as a separate entity, which consists of two files with the extensions .toc and .dat, where both files share the same name (an 8-digit serial number). For example, the following are the .toc and .dat files for the depot on the node halibut.

Example 2-15 The .toc and .dat files
>set DBDIR
DBDIR=c:\Tivoli\db\halibut.db
>dir c:\Tivoli\db\halibut.db\tmp\depot
00000001.dat  00000001.toc
00000002.dat  00000002.toc
00000003.dat  00000003.toc

The .toc (table of contents) file maintains the information for each segment as shown below; the .dat file contains the actual data to be distributed. When the repeater loads segments into memory, the .toc file is scanned.

 Segment: Segment description
 Creation time: The time the segment first arrived at the depot
 Last access time: The last time the segment was referenced by a distribution
 Update count: The number of times the segment data was modified
 Access count: The total number of destinations to which the segment was distributed
 In use count: The number of connections using the segment
 Is stored: Indicates whether the segment's data is stored in the depot

IBM Tivoli Configuration Manager uses this function to store software packages in depots. In this case the depot is called a software depot. This provides the ability to store mission-critical software on servers that are closer to the ultimate destinations. The same software package can also be stored in more than one depot.
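The per-segment bookkeeping fields above can be pictured as a simple record. The following sketch is a hypothetical in-memory representation of that metadata for illustration; it is not the actual on-disk .toc format, and the field names simply follow the list above.

```python
from dataclasses import dataclass

# Hypothetical in-memory view of the per-segment metadata that a depot's
# .toc file maintains; field names follow the list in the text.
@dataclass
class Segment:
    segment: str            # segment description (e.g. "example^1")
    creation_time: float    # when the segment first arrived at the depot
    last_access_time: float # last time a distribution referenced it
    update_count: int = 0   # times the segment data was modified
    access_count: int = 0   # total destinations the segment was sent to
    in_use_count: int = 0   # connections currently using the segment
    is_stored: bool = True  # whether the data is stored in the depot

seg = Segment("example^1", creation_time=0.0, last_access_time=0.0)
seg.access_count += 1  # one more destination served from the depot
print(seg.access_count)
```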


The following sections introduce how the depot is used in the IBM Tivoli Configuration Manager context.

Enable the depot to store software packages permanently

A depot can store data (segments) temporarily or permanently, as defined by the MDist2 configuration option permanent_storage. Its value is either TRUE (segments may be stored in the depot permanently) or FALSE (segments are deleted after the distribution completes). The default value is TRUE.

In large-scale environments, you may want to keep software packages on local LAN sites to avoid transferring them over a WAN every time a distribution occurs. In this case, verify that this option is set to TRUE using the wmdist -s command. If it is not, enter the following command:

wmdist -s central permanent_storage=TRUE

Depot configuration

To see the current configuration of a depot, use the wdepot describe command. The following example shows the output of this command:

Example 2-16 Depot configuration
#wdepot central describe
depot Location     = d:/Tivoli/db/central.db/tmp/depot/
depot Size         = 512000 (KB)
Temporary Storage  = 4294967295 (KB)
Permanent Storage  = 0 (KB)
Total Storage      = 4294967295 (KB)
Free Space         = 512001 (KB)
#
#wmdist -s central
repeater_id:          1978508757.5.21
rpt_dir:              d:/Tivoli/db/central.db/tmp/
permanent_storage:    TRUE
max_sessions_high:    5
max_sessions_medium:  10
max_sessions_low:     40
disk_max:             512000
mem_max:              65536
send_timeout:         300
execute_timeout:      600
notify_interval:      30
conn_retry_interval:  900
retry_ep_cutoff:      7200
net_load:             500
packet_size:          16384
target_netload:       0
debug_level:          3
debug_delay:          0

Define the depot directory with the MDist2 configuration option rpt_dir. Upon setting this option with a directory path, a .../depot/ subdirectory is created. The default directory of the rpt_dir option is $DBDIR/tmp/. Note: In the previous example, the repeater central is in Windows NT. For a UNIX node the default directory would be: /usr/local/Tivoli/db/central.db/tmp/depot/

Recall that the depot size is handled by the MDist2 configuration option disk_max (512 MB by default). The command wmdist with the -s argument can display or modify its value.

Loading software packages on depots

Software packages can be loaded onto or unloaded from a depot prior to distribution. Tivoli Software Distribution 4.0 provides the wldsp command, which loads a software package on one or more depots. Similarly, the wuldsp command unloads a software package from one or more depots. The syntax of the commands is:

wldsp @[SoftwarePackage:]spobj_name [repeater ...]
wuldsp @[SoftwarePackage:]spobj_name [repeater ...]

Where:
spobj_name   Name (label) of the software package object
repeater     Name of one or more repeater depots

Consider the following example:

Example 2-17 wldsp command
#wldsp @example^01 central
Operation successfully submitted. Distribution ID is 1978508757953260009.
#
#wuldsp @example^01 central
Operation successfully submitted. Distribution ID is 1978508757953260753.


The wldsp command submits a distribution that sends a software package to the repeater as a target. Similarly, wuldsp submits a distribution that deletes a software package from the repeater. Using the Framework command wdepot, you can also control the segments (software packages) on a depot, such as adding, deleting, listing, or getting information about each segment. The syntax of the command is:

wdepot repeater_name list [id^version] [-l]
wdepot repeater_name delete id^version
wdepot repeater_name purge

Where:
list         Lists all segment entries in the depot
delete       Deletes specified segment entries in the depot
purge        Deletes all entries in the depot
id^version   ID and version of the segment
-l           Lists all information for each entry

The following gives an example of the command:

Example 2-18 wdepot command
#wldsp @example^1 halibut
Operation successfully submitted. Distribution ID is 1757544609954288623.
#wdepot halibut list
Name      Version   Status   Size(bytes)      Update time
Test      1         P        12181684(100%)   2000/03/01 07:40:17
First     1         P        320669(100%)     2000/03/08 10:25:26
example   1         P        12181684(100%)   2000/03/28
#wdepot halibut list example^1 -l
Entry #1:
  Id:                      example
  Version:                 1
  Bytes received:          12181684(100%)
  Location:                c:/Tivoli/db/halibut.db/tmp/depot/0000004.dat
  Creation time:           2000/03/28 18:19:42
  Last modification time:  2000/03/28 18:19:42
  Receive time:            2000/03/28 18:19:42
  Last access time:        2000/03/28 18:19:42
  Update time:             2000/03/28 18:19:42
  Access count:            0
  Modification count:      1
  Reference count:         0
  Storage status:          Permanent

As shown above, the software package example^1 is stored as a file under the depot directory (0000004.dat).

Note: To load a software package on a depot, it must first be built as a software package block (.spb).

Software package distribution

One of the distribution options you can set is From depot. When this option is selected, the software package loaded on the depot is distributed instead of the original package on the source host. Tivoli Software Distribution provides both GUI and command line interfaces for installing software packages as well as setting distribution options. Figure 2-19 shows the Install Software Package dialog with the From depot distribution option selected.

Figure 2-19 Software Distribution GUI, Install Software Package with depot


Set the same option from the command line with winstsp using the -l argument, which is used for all MDist2-related options. To tell the system to distribute data from the depot, type winstsp -l from_depot=y.

Note: The From depot option requires that software packages be loaded prior to the distribution with the wldsp command. Otherwise the distribution will fail.

Another distribution option, Disposable, causes data stored on the depot to be used only temporarily. When this option is used, the software package associated with the distribution is deleted from the depot once the distribution is finished (either all endpoints have completed or the distribution has expired). You can set this option both from the GUI (see the Disposable check box in Figure 2-19) and from the command line. To set it from the command line, use winstsp with the -l argument: winstsp -l disposable=y. Be sure not to use this option if you want to keep software packages on the depot.

Note: When using the disposable option, you do not need to load software packages prior to the distribution. Nor do you need to unload software packages explicitly with wuldsp, because they are automatically deleted from the depot after distribution.

If the MDist2 repeater is configured to use the depot temporarily (with permanent_storage=FALSE), the software package is deleted after distribution regardless, even if the disposable option is not set. Table 2-7 explains the relationship between the disposable and permanent_storage configuration options.

Table 2-7 Relation of disposable option and permanent_storage configuration

                permanent_storage=FALSE   permanent_storage=TRUE
disposable=Y    deleted (unloaded)        deleted (unloaded)
disposable=N    deleted (unloaded)        remains (loaded)
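Table 2-7 reduces to a one-line rule: a package remains loaded only when the repeater stores segments permanently and the distribution is not disposable. A minimal sketch of that decision (an illustration of the table, not Tivoli code):

```python
# Decision rule from Table 2-7: does the software package remain in the
# depot after the distribution finishes?
def remains_loaded(permanent_storage: bool, disposable: bool) -> bool:
    return permanent_storage and not disposable

# Only the permanent_storage=TRUE, disposable=N combination keeps the package.
for ps in (True, False):
    for d in (True, False):
        print(ps, d, "remains" if remains_loaded(ps, d) else "deleted")
```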

Figure 2-20 shows the default depot directory on a repeater.


Figure 2-20 Depot directory

Consider a typical three-tiered management structure environment over a WAN, as illustrated in Figure 2-21.

Figure 2-21 An example of a large-scale distribution environment

In this environment, one or more endpoint gateways are located at each site, serving as repeater sites for the distribution.

Note: Gateways are repeaters by default.


When distributing to a target on the remote side of a slow network connection, such as a WAN, the low bandwidth may affect the performance and throughput of the distribution and sometimes the entire network. Therefore, distributions need to be fast and efficient to reduce system and network load. To meet this requirement, MDist1 uses a fan-out mechanism for distributions to large numbers of targets and for distributions that must cross a WAN. By configuring a repeater as an entry point for each LAN, the source repeater, distributing across a WAN, only needs to distribute to the individual LAN entry point repeaters. The individual LAN repeaters then "fan out" the distribution to other repeaters within their local hierarchy if enabled, or directly to the end targets, depending on how they are configured. Therefore, by enabling repeaters in a proper hierarchy configuration throughout a WAN, a distribution to 1000 targets, for example, does not require 1000 data transfers across a slow network, as it would when using only one point of distribution. The fan-out mechanism minimizes how often data crosses a slow network or WAN by using LAN repeaters to distribute within their own local networks.

File package blocks, as introduced in Tivoli Software Distribution 3.6, also provide a possible solution in slow network or WAN scenarios. A file package block is a static file containing a snapshot of a file package. This snapshot includes the file package definition, file package attributes, source files and directories, and configuration programs for a specific file package. A file package block of a specific file package is created on each remote LAN repeater across a WAN. This enables each LAN entry point repeater to distribute the file package locally to its targets.

In addition, the IBM Tivoli Configuration Manager Software Distribution component provides a depot feature that performs the same function more simply and efficiently.
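The bandwidth saving of fan-out is easy to quantify. The sketch below compares WAN crossings for a flat distribution against one repeater per LAN site; the specific numbers (1000 targets, 4 sites) are illustrative, not from a measured environment.

```python
# Illustrative count of WAN crossings for a single distribution.
def wan_transfers(targets, sites, use_lan_repeaters):
    # Flat: every target is reached directly across the WAN.
    # Fan-out: one WAN transfer per LAN entry-point repeater, which then
    # fans out locally (and, with a depot, can re-send without the WAN).
    return sites if use_lan_repeaters else targets

print(wan_transfers(1000, 4, False))  # 1000 WAN crossings, one per target
print(wan_transfers(1000, 4, True))   # 4 WAN crossings, one per site
```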
With the MDist2 depot, you can store (load) software packages in depots on any endpoint gateway (MDist2 repeater) and distribute them from the depot. Figure 2-22 explains how the software package is created and distributed to each target.


Figure 2-22 Software Distribution scenario using a depot

1. Create the software package. The software package is created on the source host.
2. To load the software package onto a depot, execute the wldsp command with the appropriate options. The software package, with its definitions, is then copied to each MDist2 repeater (endpoint gateway) that you specified.
3. To distribute the software package to each target (endpoint), execute winstsp or another appropriate change management command with the From depot option. The endpoint gateway distributes the software package from the depot to each target.

The advantage of using a depot becomes more apparent with larger software packages and slower WAN links. Software packages can be stored on local repeaters for fast and efficient distribution to systems on the same LAN when the source repeater is on the other end of a WAN connection. If a distribution error occurs, unlike MDist1, MDist2 retries the distribution without requiring the file to be transmitted from the source host over the slow link (WAN); it instead uses the depot to retransmit the file. The depot also provides checkpoint restart functionality, as discussed in the next section.


2.3.8 Checkpoint restart

When a distribution is interrupted due to a network failure, machine reboot, or power failure, it is automatically resumed from the last checkpoint on the receiver. This means that if an interruption occurs, entire files and configurations do not need to be retransmitted; only the portions that did not arrive before the break in connection are re-sent. This maximizes the use of valuable bandwidth by preventing the entire file package from having to be re-sent to the endpoint.

Once the connection is re-established, the sending repeater (depot) contacts the receiving repeater and sends it the list of segments for the software package. The receiving repeater checks its depot for received segments against the list. If it has a partial segment, the receiver sends the incomplete segment back to the sending repeater, and tells the sending repeater not to send the segments that it received in full. This handshake occurs at the start of each connection (for both repeaters and endpoints). Endpoint checkpoint restart requires the participation of the application. Figure 2-23 shows how checkpoint restart works.

Figure 2-23 Implementation of checkpoint restart
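The handshake just described, where the receiver reports what it already holds and the sender resumes from there, can be sketched as follows. This is a simplified model of the checkpoint logic, not the MDist2 wire protocol; the segment IDs and sizes are made-up examples.

```python
# Simplified model of checkpoint restart: the receiver tells the sender
# how many bytes of each segment it already holds; complete segments are
# skipped entirely and partial segments resume mid-segment.
def bytes_to_resend(segment_sizes, received):
    """segment_sizes / received: dicts of segment id -> size / bytes held."""
    total = 0
    for seg_id, size in segment_sizes.items():
        have = received.get(seg_id, 0)
        total += size - have  # contributes 0 for fully received segments
    return total

sizes = {"00000001": 1000, "00000002": 1000}
held  = {"00000001": 1000, "00000002": 400}   # second segment was cut off
print(bytes_to_resend(sizes, held))  # 600: only the missing tail is resent
```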

For example, suppose a ThinkPad is connected to a campus network and a distribution to it is running as a background process when the user disconnects the system to use it remotely in the midst of the transmission. The MDist2 repeater regards the distribution to this system as interrupted and sets a checkpoint. Once the user reconnects and the endpoint logs back in to the gateway, the distribution resumes from the interrupted point to completion.

The MDist2 configuration parameter packet_size represents the checkpoint memory buffer size. The default value is 16 KB. You can modify the packet_size value with the wmdist -s command; however, 16 KB is typically optimal and changing this value is not advised.


As we will explain in the next section, the administrator can control the distribution process with the pause, resume, and cancel options. Pause and resume are treated like any other distribution interruption and use checkpoint restart. Canceled distributions, however, must restart from the beginning.

2.3.9 Assured delivery

To handle a large number of distributions, MDist2 repeaters use a queue mechanism. MDist2 repeaters maintain persistent information about distributions, storing it locally in a file (MDist2.bdb). This information is loaded into memory every time the repeater starts up. You can configure the MDist2 directory, including the depot location, using the rpt_dir configuration option. In this directory, a subdirectory named /states maintains the persistent information of the MDist2 repeater queue (MDist2.bdb) as well as the log file (MDist2.log).

If a connection to a receiver (a repeater or a target endpoint) cannot be established or is broken, the repeater keeps the initiated distribution in its queue and waits for the connection to be (re)established. The repeater automatically retries the connection at a pre-defined interval until the connection is re-established or until the distribution aborts after reaching the deadline specified by the calling administrator (through the application configuration, not MDist2). The interval between retries is defined by the MDist2 configuration option conn_retry_interval (900 seconds by default).

For endpoints, MDist2 provides an additional, non-application-specific deadline configuration option, retry_ep_cutoff (7200 seconds by default). When a distribution to an endpoint is in progress and the connection breaks, the gateway repeater tries to reconnect to the endpoint at the defined interval. Once retry_ep_cutoff is reached, the gateway repeater stops trying to connect to the endpoint and keeps the distribution in its queue until the application-specified distribution deadline. Figure 2-24 illustrates the retry process after an interruption between an endpoint gateway (MDist2 repeater) and an endpoint.


Figure 2-24 Retry option - gateway repeater and endpoint

1. The connection to the endpoint cannot be established or breaks.
2. The MDist2 repeater keeps this distribution in the repeater queue.
3. The MDist2 repeater attempts to establish or re-establish a connection to the designated target every conn_retry_interval seconds.
4. The distribution expires when the deadline defined by the application (for example, Software Distribution) or the MDist2 configuration option retry_ep_cutoff is reached.

Figure 2-25 illustrates an interruption between an endpoint gateway (MDist2 repeater) and another repeater.
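The endpoint retry behavior can be modeled as a schedule of attempts every conn_retry_interval seconds up to retry_ep_cutoff. The sketch below is an illustrative model using the default values, not an implementation of the MDist2 scheduler.

```python
# Illustrative retry schedule for an unreachable endpoint: one connection
# attempt every conn_retry_interval seconds until retry_ep_cutoff is
# reached, after which the distribution simply waits in the queue for the
# application deadline. Defaults mirror the MDist2 options (900 s, 7200 s).
def retry_times(conn_retry_interval=900, retry_ep_cutoff=7200):
    t, times = conn_retry_interval, []
    while t <= retry_ep_cutoff:
        times.append(t)
        t += conn_retry_interval
    return times

attempts = retry_times()
print(len(attempts), attempts[0], attempts[-1])  # 8 900 7200
```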


Figure 2-25 Retry option - endpoint gateway and another repeater

1. The connection to the receiving repeater cannot be established or breaks.
2. The MDist2 repeater keeps the distribution in the repeater queue.
3. The MDist2 repeater attempts to establish or re-establish a connection to this repeater at an interval defined by the MDist2 configuration option conn_retry_interval.
4. Because MDist2 does not have a deadline configuration option for repeaters, the repeater continues to retry until the application deadline is reached.

If an undelivered target is a repeater, the distribution aborts for all targets it serves beneath it. The sending repeater resumes the distribution to the receiving repeater upon reconnect, and in turn the receiving repeater distributes to the targets beneath it.

Note: MDist1, unlike MDist2, stores no persistent state information and therefore cannot resume a timed-out distribution. The administrator must re-distribute as a new operation.

As described in the previous scenario, the application can define the deadline (time-out) of a distribution. The Software Distribution component provides this setting as part of its change management options, such as install and remove. The administrator sets the deadline when setting up a software package distribution (see Figure 2-26). To reach the dialog shown in Figure 2-26, select Advanced Options -> Time-out Settings from the menu bar.

Figure 2-26 Software Distribution GUI - Time-out Settings

As shown, there are four options:  Deadline: The date on which a distribution expires; that is, when it fails for unavailable targets.  Notification Interval: The length of time before MDist2 sends distribution results notification back to the sending application. See “Reporting results” on page 80.  Send Timeout: The length of time a repeater will wait for a target to receive a block of data. This time-out is used to detect network or endpoint failures. The default length of time is 300 seconds (five minutes). This option overrides the MDist2 send_timeout option.  Execution Timeout: The length of time a repeater will wait for Tivoli Software Distribution to return the result of a distribution after all the data has been sent. This time-out is used to detect network, endpoint, or script failures such as a script is running in an infinite loop. The default length of time is 300 seconds (five minutes).This option overrides the MDist2 execute_timeout option.


Note: MDist2 also provides send_timeout and execute_timeout configuration options. Their roles are the same as in the previous example:
 send_timeout (300 seconds by default)
 execute_timeout (600 seconds by default)
Applications that provide the same time-out options as MDist2 override the MDist2 settings.

2.3.10 Disconnected endpoint support

Using the persistent queue data maintained by the MDist2 repeater, MDist2 supports disconnected endpoints. If an MDist1 repeater cannot establish a connection to a target endpoint, the distribution to the target aborts, and the administrator has to re-submit the distribution to this target. With MDist2, if the gateway repeater cannot establish a connection, the repeater keeps the distribution in its queue and waits for the endpoint to log in. It retries connecting to the endpoint at an interval defined by the MDist2 configuration option conn_retry_interval, up to the deadline defined by the MDist2 configuration option retry_ep_cutoff. This functionality makes it possible to automate the distribution when the target system starts up. Figure 2-27 shows the sequence of events for disconnected support.


Figure 2-27 Automated Software Distribution scenario at power-on

1. The initial distribution request is sent to the MDist2 repeater.
2. The MDist2 repeater (endpoint gateway) attempts to establish a connection to the designated target. In this case the target is unreachable because it is powered off. The distribution stays in the repeater queue, and the repeater retries the connection to the endpoint at the given interval until the distribution reaches the deadline.
3. When the system starts up, the endpoint tries to log in to its endpoint gateway.
4. The MDist2 gateway repeater intercepts the endpoint login and determines whether there are any distribution requests in its queue for the endpoint. If there are, the distribution starts.

2.3.11 Mobile computing support

The disconnected endpoint support function, realized by the distribution queue with persistent information, helps assure successful deployment to intermittently connected systems. If a mobile user is not connected at the time of the distribution, the information remains in the queue for this user until the workstation re-connects to the network, or until the distribution reaches the expiration time set by the administrator.


2.3.12 Distribution control and status

Once distributions are submitted to MDist2, the Distribution Manager manages them. The Distribution Manager keeps distribution status and performs control operations such as pause, resume, and cancel. It assigns each distribution an ID and uses it to identify and track the distribution. The Distribution Manager maintains the list of all completed and pending distributions, and their status for each destination, in a relational database accessed by RIM. The stored distribution information is updated dynamically as the status changes.

Note: MDist2 repeaters can still perform basic repeater functions without a RIM database; however, you must configure a RIM database to control distributions or receive their status.

MDist2 provides an option to automatically remove completed distributions from the database to preserve database space. To configure this option, use the wmdist -T command. The database_purge_interval value is the interval in seconds from the completion of a distribution until its entry is deleted. The default is -1, which means the option is disabled. You can also delete distributions using the GUI or the wmdist -d command.

MDist1 does not provide status information to the application until the distribution is finished. For example, when IBM Tivoli Configuration Manager initiates a distribution, the user cannot see which machines are encountering problems, or estimate when the distribution will finish. In large-scale environments with hundreds or thousands of targets, the need for real-time distribution status information increases greatly, and MDist2 should therefore be used.
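The database_purge_interval behavior can be captured in a small predicate: an entry is eligible for removal only when purging is enabled (-1 disables it) and the configured number of seconds has elapsed since the distribution completed. An illustrative sketch, not the Distribution Manager implementation:

```python
# Illustrative rule for automatic cleanup of completed distributions:
# database_purge_interval = -1 (the default) disables purging; otherwise
# an entry is removed once that many seconds have passed since completion.
def should_purge(completed_at, now, database_purge_interval=-1):
    if database_purge_interval < 0:
        return False
    return now - completed_at >= database_purge_interval

print(should_purge(0, 100_000))          # False: purging disabled by default
print(should_purge(0, 100_000, 86_400))  # True: completed more than a day ago
```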

Distribution status

A distribution can be in any of the following states for each destination:

 Status by Severity
  OK:      The distribution processed normally or was placed in a state by user intervention.
  Warning: A recoverable error occurred in the distribution. States include Interrupted and Unavailable.
  Error:   An error occurred in the distribution. States include Failed and Expired.

 Status by Progress
  Pending:   The distribution is still in progress. States include Waiting, Receiving, Sending, Paused, Interrupted, and Unavailable.
  Stored:    The distribution is stored on the mobile target (used for mobile computing).
  Completed: The distribution is completed. States include Successful and Unsuccessful. Unsuccessful includes Failed and Canceled.

Distribution and target states

Canceled:    An administrator canceled the distribution.
Expired:     The distribution exceeded the timeout deadline.
Failed:      The distribution failed. Check the application log for more information.
Interrupted: The distribution was interrupted, such as by a connection break. The repeater depot continually tries to resume the distribution at specified intervals until either the MDist2 or application distribution timeout is reached.
Paused:      An administrator halted the distribution for an indefinite period of time.
Receiving:   The target is in the midst of receiving the distribution.
Rejected:    The endpoint user rejected the distribution. (This state is not in the current release, but is reserved for future use.)
Sending:     The repeater is sending data; the distribution can still be paused or canceled.
Successful:  The distribution completed successfully.
Unavailable: The repeater cannot communicate with the target. The depot attempts the distribution again when the endpoint logs back in to the gateway. The state changes to Expired once the distribution timeout is reached.
Waiting:     The target is waiting to receive the distribution. There is not yet communication between the depot and target.

These states mostly refer to the target endpoint; however, they include repeater statuses, such as sending, used in the distribution topology view or node tables. Figure 2-28 shows various distribution statuses.


Figure 2-28 Distribution statuses
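The severity and progress groupings above can be summarized as lookup tables mapping each state to its category. A sketch following the groupings listed in this section (the table contents come straight from the text; the function names are just for illustration):

```python
# Classification of MDist2 distribution states, following the
# severity and progress groupings described in this section.
SEVERITY = {
    "Interrupted": "Warning", "Unavailable": "Warning",
    "Failed": "Error", "Expired": "Error",
}
PENDING = {"Waiting", "Receiving", "Sending", "Paused",
           "Interrupted", "Unavailable"}

def severity(state):
    # States not listed as Warning or Error fall into the OK group.
    return SEVERITY.get(state, "OK")

def is_pending(state):
    return state in PENDING

print(severity("Expired"), is_pending("Paused"))  # Error True
```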

Reporting results

We have already mentioned that MDist2 uses an asynchronous delivery method: results are sent back to the application that called the MDist2 service. The following sequence of events occurs when reporting status:

1. The MDist2 repeater buffers the results of completed distributions.
2. The repeater sends the results back to the application and the Distribution Manager, and the MDist2 database is updated with the new target information.

Results are sent when either the distributions for all targets complete, or the interval (in minutes) specified by the MDist2 configuration option notify_interval elapses. This option is configured using the wmdist -s repeater_name notify_interval= command. The default value is 30 minutes. In large-scale environments, MDist2 can group success or failure notifications into single messages to reduce network traffic.
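This buffering of per-target results into a single message can be sketched as follows. The sketch is an illustrative model of when a flush would be due, not the MDist2 implementation; the function and parameter names are invented for the example, apart from notify_interval.

```python
# Illustrative buffering of per-target results: the repeater accumulates
# completed-target statuses and flushes them as one grouped message when
# the whole distribution finishes or when notify_interval elapses.
def flush_due(buffered, all_done, minutes_since_flush, notify_interval=30):
    if not buffered:
        return False  # nothing to report yet
    return all_done or minutes_since_flush >= notify_interval

print(flush_due(["ep1: Successful"], all_done=False, minutes_since_flush=30))  # True
print(flush_due(["ep1: Successful"], all_done=False, minutes_since_flush=5))   # False
```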

Interfaces - GUI and command line

MDist2 provides both GUI and command line interfaces to manage distributions. The GUI displays distribution status both by distribution and by endpoint, and can present the information in different views: different statuses can be differentiated by color or organized in charts. This information updates automatically. The GUI also provides distribution management functionality such as pause, resume, and cancel; once you select a distribution or a node associated with the distribution, you can use one of the operation icons to manage the distribution.

To manage distributions from the command line interface, use wmdist. In addition to repeater configuration tasks, wmdist is used to retrieve distribution status in both summary and detail format, or to perform an operation directly from the command line or from shell scripts.

Pausing and resuming a distribution

Upon pausing a distribution, the connection for the distribution closes; it is re-established once the distribution is resumed. The Distribution Manager uses the distribution ID to track the distribution. Upon initiating the pause, MDist2 sends a pause signal to the targets. This signal is routed along the same path as the distribution. The endpoint state does not change until the pause command catches up to the distribution and the Distribution Manager receives confirmation status. As a result, it is possible that a distribution to an endpoint will complete before the pause command can reach it.

Canceling a distribution

Upon canceling a distribution, the Distribution Manager uses the distribution ID to identify the distribution being canceled. To perform the cancel operation, MDist2 sends a cancel signal to the targets. This signal is routed along the same path as the distribution being canceled. The state of an endpoint does not change until the cancel signal catches up to the distribution and the Distribution Manager receives confirmation status. As a result, it is possible that a distribution to an endpoint will complete before the cancel command can reach it.

Summary of distribution control and status

Below is a summary of control and status of distributions in IBM Tivoli Configuration Manager.

Distribution status
 Asynchronous notification.
 Final and some intermediate states (Paused) stored in the RIM RDBMS.
 Viewable with the GUI.
 Database entries persist after the distribution finishes.

Distribution control
 Abort and pause/resume available for:
  - Entire distributions
  - Individual endpoints of a distribution
  - Use with scheduling to set distribution windows
  - Pause closes connections; resume re-establishes the connection. The application must handle checkpoint restart.

Distribution Manager
 Separate distinguished object: the Distribution Manager.
 Stores final status in a RIM database:
  - Distribution States: Table containing the list of active and completed distributions. Columns: Dist ID, User, Label, Size, Source application, Source node, Start time, Finish time, Last update time, Expire time, Target count, Number of targets in each state, Min, Max, and Average time in each state.
  - Target States: Table containing an entry for every endpoint of every distribution. Columns: Dist ID, Node OID, Parent node, State, Start time, Finish time, Last update time, Time in each state.
 To reduce network traffic, only final states are stored in the database. Possible exceptions: Paused and Unavailable.
 Results are returned from repeaters to the source host through the same repeater path used to distribute the data. The source host passes the statuses to the Distribution Manager.
 Can be configured to periodically purge completed distributions.
 Provides an interface for pause/resume and cancel of entire distributions or of selected endpoints of a particular distribution.


Chapter 3. Planning and methodology for upgrading

This chapter describes the planning steps and methodology of upgrading to IBM Tivoli Configuration Manager V4.2. Along with an introduction, we discuss these topics:
• “Strategy and planning” on page 85
• “Compatibility matrix - operating systems, databases” on page 85
• “Plans for back-level systems” on page 86
• “Testing in a lab - Tivoli Database component” on page 88
• “Interoperability of components during the upgrade” on page 90
• “Inventory issues” on page 93
• “Software Distribution issues” on page 95
• “Integrated upgrade issues” on page 95
• “Tivoli Management Framework” on page 96
• “Upgrading Inventory” on page 96
• “Upgrading Software Distribution” on page 97
• “Migration scenarios” on page 101

© Copyright IBM Corp. 2002. All rights reserved.


3.1 Introduction

IBM Tivoli Configuration Manager 4.2 provides you with key benefits and business value, as described in Chapter 1. The focus of this book is to assist you in planning, to consider different aspects of migration, and to recommend a few proposed solutions based on some scenarios. We understand that the discussion in this book is not an exhaustive guide to migrating current environments to IBM Tivoli Configuration Manager 4.2, but the methodology discussed will help in the initial efforts for your deployment.

At an enterprise level, planning is the most important step in performing any kind of upgrade or migration of your current Tivoli Management Environment. Planning provides the foundation for a good test environment and a swift game plan to execute in your production environment. Planning also provides the most important benefit: a minimum amount of unscheduled downtime in your production environment.

IBM Tivoli Configuration Manager 4.2 is a combination of the following products:

• Tivoli Management Framework Version 4.1
• Tivoli Software Distribution Version 4.2
• Inventory Version 4.2
• Activity Planner Version 4.2
• Change Manager Version 4.2
• Resource Manager Version 4.2
• Web Interface Version 4.2
• IBM Tivoli Directory Query Version 4.2

In this book we cover two scenarios for moving your current environment to IBM Tivoli Configuration Manager 4.2. Scenario 1 is a migration from a complete 3.6.x version to IBM Tivoli Configuration Manager 4.2; Scenario 2 is an upgrade from Inventory 4.0 and Software Distribution 4.0 to IBM Tivoli Configuration Manager 4.2.

Here is a list of some key reference materials:
• IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702
• Tivoli Management Framework Release Notes V4.1, GI11-0890
• IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934
• Tivoli Management Framework Planning for Deployment Guide V4.1, GC32-0393
• Tivoli Inventory Version 4.0 - Migration Guide from Version 3.6.2, SG24-7020
• IBM Tivoli Configuration Manager Upgrade Scenarios Field Guide


• Compatibility Guide for Tivoli Applications Field Guide
• Tivoli Management Enterprise 3.7.1 Upgrade Field Guide

Note: You can access Tivoli Field Guides from the following Web site:
http://www-3.ibm.com/software/sysmgmt/products/support/Field_Guides.html

3.2 Strategy and planning

In the years since Tivoli products were first launched, there have been many additions and changes to the family of operating systems supported by the Tivoli products. In some cases, these changes were necessitated by the operating system vendors’ own support of their operating systems. In other cases, the supported platforms have changed to come into alignment with the strategic direction of our customers.

If your company requires assistance performing upgrades, IBM Global Services and IBM Tivoli Services are both organizations that you can use to help migrate any existing systems in your environment that need to be updated for interoperability and compatibility. In addition, IBM Sales can help you acquire new systems to replace any outdated hardware.

3.3 Compatibility matrix - operating systems, databases

There is a spreadsheet available that outlines the current versions of operating systems, platforms, and databases that are compatible with the various versions of Tivoli products. This spreadsheet is available at:
https://www.tivoli.com/secure/Tivoli_Electronic_Support/tsp.nsf/(Attachment)/Attachments/$File/TivoliPlatformSupportNonConfidential.xls

The operating systems that are removed from support include the following:
• IBM AIX, all versions prior to 4.3.3
• Sun Solaris, all versions prior to 7.0 (2.7)
• HP-UX, all versions prior to 11 or 11i
• Windows NT, all versions prior to 4.0 SP 6a
• Red Hat Linux, all versions prior to 7.2
• SuSE Linux, all versions prior to 7.2
• NetWare, all versions prior to 5.1
• OS/2, all versions prior to 4.0
• OS/400, all versions prior to 4.5
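When triaging a large estate against these minimum levels, a small script helps. The sketch below works under stated assumptions: it compares plain dotted version numbers only (service-pack suffixes such as “4.0 SP 6a” are not handled), and it covers just a subset of the platforms above as an illustration.

```shell
# Compare dotted version strings numerically: ver_ge A B is true when A >= B.
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)" = "$2" ]
}

# Minimum supported levels taken from the list above (illustrative subset).
os_supported() {  # os_supported <aix|solaris|hpux|redhat> <version>
  case "$1" in
    aix)     ver_ge "$2" 4.3.3 ;;
    solaris) ver_ge "$2" 2.7   ;;   # Solaris 7 is SunOS 2.7
    hpux)    ver_ge "$2" 11.0  ;;
    redhat)  ver_ge "$2" 7.2   ;;
    *)       return 2 ;;            # platform not covered by this sketch
  esac
}

os_supported aix 4.3.2 || echo "aix 4.3.2 is back-level"
```

Run against an inventory export of hostnames and OS levels, a check like this quickly separates the systems that can be upgraded in place from the back-level systems that need one of the approaches in 3.4.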


The databases that are removed from support are:
• IBM DB2, all versions prior to 7.1 (except z/OS, which is 7.0)
• Informix, all versions prior to 9.20
• Microsoft SQL Server, all versions prior to 7.0 SP 1
• Oracle, all versions prior to 8.1.7 or 9i
• Sybase, all versions prior to 12.0

3.4 Plans for back-level systems

If you currently have any of these back-level systems in your environment, as Tivoli managed platforms or operating systems, you will need to do some planning work to migrate to the latest versions of IBM Tivoli products. If your environment contains many of these outdated systems, consider retaining Tivoli Services or IBM Global Services to consult on migration strategies. Alternatively, you can determine your course of action yourself; if you need help with any of these options, you can contact Tivoli Services or IBM Global Services for assistance.

Potential approaches include:
• A “home style” Tivoli Region, where the back-level systems can be retained as long as necessary.
• A “one that stayed behind” Tivoli Region, where an existing Tivoli Region remains at the current code levels and back-level systems are migrated to this region. Systems that are not outdated are migrated to Tivoli Regions that will be upgraded.
• A “decommission first, upgrade later” strategy, where you “clean house” and resolve your conflicting back-level problems by removing or upgrading these systems before upgrading your Tivoli environment.

Each of these approaches is discussed in some detail in the following sections.

3.4.1 A “home style” approach

In this approach, you create a new Tivoli Region at the minimum software levels needed to support your systems, and connect it to your existing environment. Existing back-level platforms are “migrated” to this newly created Tivoli Region according to a plan. This environment remains essentially untouched until you decommission these systems from your main production environment and later upgrade them with the latest code releases.

This approach provides a means of removing the impediments that prevent you from gaining the maximum benefits from the new products, while still being able to use a greater number of the back-level machines in your environment. There is

86

Migration to IBM Tivoli Configuration Manager Version 4.2

some additional cost for the activity and the systems, but the systems can be reclaimed at the end of your project, when all back-level systems are updated or decommissioned. This project to upgrade Tivoli software can be a catalyst for removing outdated software, platforms, and databases from your IT infrastructure, and thereby lower the long-term cost of supporting multiple versions of platforms, operating systems, and databases.

This approach might be suitable where there are a substantial number of back-level systems across your environment, but the cost of moving the systems that can be migrated from any single existing Tivoli Region is prohibitive.

3.4.2 A “one that stayed behind” approach

In this approach, you select an existing Tivoli Region to be a legacy region. This approach is similar to the “home style” approach. The legacy region must be at the minimum software levels needed to support the back-level systems. Systems that can be upgraded with the latest IBM Tivoli code are migrated from the legacy Tivoli Region to Tivoli Regions that can be upgraded or that are already upgraded. Back-level systems in other Tivoli Regions are migrated to this legacy region.

This approach might be superior to the previous one if your environment has a significant number of back-level systems: you identify the machines in the Tivoli Regions that can be upgraded and migrate them, which makes this approach less costly.

3.4.3 A “decommission first, upgrade later” approach

In this approach, back-level systems are systematically decommissioned, or removed, from the supporting Tivoli tool. After decommissioning a machine, you install the correct software and add the machine back into your Tivoli environment.

This approach might be strategic in environments where a significant number of back-level systems makes other upgrade and migration approaches prohibitively expensive. It might also be strategic where there are just a few back-level systems that you want to remove from your environment. Depending on the number of machines involved, this approach can delay the delivery of new value from the new product set; this delay can be viewed as an opportunity cost.


3.5 Testing in a lab - Tivoli Database component

It is critical that a test environment be used to validate that these steps are suitable for the production environment. One of the best means of ensuring a valid, “real world” test is to take a production TME database from the Tivoli Management Region server and place it in the lab. By doing this, it is possible to truly test the database portion of the upgrade and to ensure that no specific customizations are impacted. It is also good practice to use similar hardware in your test environment.

Note: It is not possible to maintain the endpoint, managed node, and gateway list, because this information is not “transferable”. However, the lab environment should then add additional managed nodes and endpoints that properly reflect the production environment to ensure a sound test case.

The steps needed to import the production database are as follows:

1. On the production server, go to $DBDIR and pack the database:

   cd $DBDIR && tar cvf ../dbdir.tar .

2. The test Tivoli Management Region must be at the same version and patch level as the production Tivoli Management Region. Profiles or tasks that reference files (scripts or software packages) from the production machine should be copied over if you intend to test the functionality of the upgrade.

3. On the test Tivoli Management Region, bring down the oserv, then run:

   cd $DBDIR/..
   mv host.db host.db.save
   mkdir host.db
   cd host.db
   tar tvf {path to}/dbdir.tar

   (Do the tvf on the tarred database image from the production server first to make sure everything in the tar file starts with ./, and if it does, proceed.)

4. Run:

   tar xvf {path to}/dbdir.tar

5. Start the oserv with the following command:

   oserv -N ali -k $DBDIR -b $BINDIR/.. -B $LD_LIBRARY_PATH

   (or -B $LIBPATH or -B $SHLIB ....)

6. At this point, you will receive references to managed nodes not being reached. Be patient; it might take a long time.

7. Assuming the database has now come up, exercise it a little, as follows:

   wgetadmin


If this fails, complaining about root@host, you must run the odadmin add_hostname_alias command for the host on your Tivoli Management Region server. Continue exercising the database with the following commands:

   odadmin odlist
   wlsinst -ah
   wls /Library/PolicyRegion
   wlookup -ar ManagedNode
   wlsconn

8. Break any interconnections with the wdisconn -s -r command.

9. Remove all the managed nodes (this is optional, but it will speed things up in the end, especially if you are testing and need to start over again). The fastest way to remove them in this case is with the odadmin odlist rm_od {od} command. (If any of these commands gives you a “resource still in use” message, you must reexec the oserv and then try the rm_od command again.) You can get the dispatcher numbers from the odadmin odlist command and put them in the following loop:

   for i in 2 3 4 6 8 9 15 25
   do
      odadmin odlist rm_od $i
   done

Note: The for loop is created from odadmin odlist output, which might be different in your environment.

10. After removing all managed nodes, run the wchkdb -u(x) command twice.

11. Ideally, tar up this new slimmed-down $DBDIR in case you need to go back to it.

Tip: If you plan to apply patches or upgrades, you must also modify the locations attribute to reflect the new Tivoli Management Region server name, and the interpreter type and directory paths for the binaries, libraries, database, and so forth, if they have also changed. To do this, extract the locations attribute, edit it, and then replace it as follows:

1. idlcall -v `wlookup Installation` get_host_locations '{ 1 "hostname" }' > somefile
2. Carefully edit somefile
3. idlcall -v `wlookup Installation` replace_host_locations < somefile
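Before touching a real region, the tar round trip in steps 1 through 4 can be rehearsed in a scratch directory. The function below is a sketch using mock data: the database file name is a placeholder, and the point is only to verify the ./-relative archive check and the save/unpack sequence.

```shell
# Rehearse the database import (steps 1-4) against a mock $DBDIR.
# File names here are placeholders, not real Tivoli database content.
rehearse_db_import() {
  work=$(mktemp -d) || return 1

  # Step 1: on the "production" side, pack the database directory.
  mkdir -p "$work/prod/host.db"
  echo "mock object database" > "$work/prod/host.db/odb.bdb"
  ( cd "$work/prod/host.db" && tar cf ../dbdir.tar . )

  # Step 3's sanity check: every archive member should start with ./
  if tar tf "$work/prod/dbdir.tar" | grep -qv '^\./'; then
    echo "WARNING: archive contains members not rooted at ./"
  fi

  # Step 4: on the "test" side, save any old host.db and unpack the image.
  (
    mkdir -p "$work/test" && cd "$work/test" || exit 1
    if [ -d host.db ]; then mv host.db host.db.save; fi
    mkdir host.db && cd host.db && tar xf "$work/prod/dbdir.tar"
  )

  if [ -f "$work/test/host.db/odb.bdb" ]; then echo "restore OK"; fi
  rm -rf "$work"
}

rehearse_db_import
```

Once the sequence is comfortable in the scratch directory, substitute the real $DBDIR paths from the steps above.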


3.6 Considerations for upgrading

This section lists issues that you should consider before performing the upgrade.

3.6.1 Interoperability of components during the upgrade

Interoperability describes whether different Tivoli products, or different versions of the same Tivoli product, can communicate with each other. This section lists interoperability issues that you should be aware of while upgrading to Tivoli Configuration Manager Version 4.2.

Attention: The field guide Compatibility Guide for Tivoli Applications, A Guide for the Latest IBM Tivoli Software Compatibility and Interoperability provides the current compatibility matrix. Field guides are available at this Web address:
http://www-3.ibm.com/software/sysmgmt/products/support/Field_Guides.html

3.6.2 Inventory

Inventory V4.2 is not an upgrade of Tivoli Inventory V3.6.2. While Inventory V4.2 does not require the installation of previous versions of Inventory, it can coexist within a Tivoli Region with Tivoli Inventory V3.6.2. Inventory V4.2 has its own resources and features and does not share resources with Tivoli Inventory V3.6.2.

The primary options for migration to Inventory V4.2 are as follows:
• Run Inventory V4.2 concurrently with Tivoli Inventory V3.6.2.
• Replace Tivoli Inventory V3.6.2 with Inventory V4.2.

You might choose to use both the new version and the previous version of Tivoli Inventory in the same environment for the following reasons:
• To ease the migration from the previous version to the new version.
• To keep your current, customized environment operational while beginning to exploit the new features offered by Inventory V4.2.
• To support Tier 2 endpoints until Tier 2 support is added to Inventory V4.2.

If you are going to continue using a Tivoli Inventory V3.6.2 configuration repository after upgrading to Inventory V4.2,

leave the RIM object named inventory pointing to the configuration repository for Tivoli Inventory V3.6.2.


3.6.3 Software Distribution

Software Distribution V4.2 can be installed as a new product or upgraded from Software Distribution V4.1. You cannot upgrade directly to Software Distribution V4.2 from Tivoli Software Distribution V3.6.2 or V4.0.

Software Distribution V4.2 can coexist within a Tivoli Region with Tivoli Software Distribution V3.6.2; however, Software Distribution V4.2 does not require the installation of previous versions of Software Distribution. Software Distribution V4.2 cannot coexist within a Tivoli Region with Tivoli Software Distribution V4.0 or V4.1. You must install V4.2 in an environment that is separate from the environment that contains V4.0 or V4.1.

If you choose to work with Software Distribution V4.2 and V3.6.2 in the same environment, your final configuration should be Tivoli Management Framework V4.1 with:
• Tivoli Inventory V3.6.2 and Tivoli Software Distribution V3.6.2
• Tivoli Inventory V4.2 and Tivoli Software Distribution V4.2

Tips for working in a mixed environment

The following tips are provided to help you work in a mixed environment, where both versions of Software Distribution are running.

• Software Distribution V4.2 works with endpoints at a minimum level of Tivoli Management Framework V4.1. There is a limitation in running Software Distribution V4.2 in an environment in which the endpoints are at a back level. The scenario involves an environment where the Tivoli Region and gateways have Tivoli Management Framework V4.1 and Tivoli Software Distribution V4.2 installed, but the gateways manage back-level endpoints (endpoints that are still running Tivoli Management Framework V3.7 and Tivoli Software Distribution V4.0). You should not use Software Distribution V4.2 to distribute to the back-level endpoints; if you do, an error occurs. The same problem exists in the same scenario when the endpoint level is Tivoli Management Framework V3.7.1 without patch 3.7.1-TMF-0021 installed and Software Distribution V4.1 is at the GA level. However, there are no limitations if you are using endpoints in an environment in which patches 4.1-COU-0024 and 3.7.1-TMF-0021 are installed. If you are distributing to mobile systems, patch 3.7.1-TMF-0024 must also be installed. The same problem might exist if the endpoints are Tier 2 and running Tivoli Management Framework V3.7 and Tivoli Software Distribution V4.0.

• If you run Tivoli Software Distribution V3.6.x with Tivoli Inventory V3.6.x in the same environment and Tivoli Management Framework has been upgraded to V4.1, you can continue to work as usual and distribute V3.6.x file


packages and file package blocks. However, an error condition can occur while distributing an Autopack to an HP system.

• If you are upgrading Software Distribution from V4.1 to V4.2 and you plan to use the old Web Interface with the new infrastructure, we recommend that you apply Software Distribution fixpack 3 or the 4.1-COU-0159 and 4.1-COU-0160 patches; they enable interoperability between the V4.1 Web Interface and Software Distribution V4.2.

Tips for interoperability with Tivoli Inventory

The following tips apply to interoperability between older versions of Tivoli Software Distribution (V3.6.2, V4.0, and V4.1) and Tivoli Inventory V4.0. You might find this information useful during intermediate stages of your upgrade procedure.

• Tivoli Software Distribution V3.6.2 uses a view that is removed in Tivoli Inventory V4.0. If you plan to use Tivoli Software Distribution V3.6.x in an environment in which Tivoli Inventory is at V4.0, ensure that patch 3.6.2-COU-0027 is installed, and run the following script to restore the needed view:

   $BINDIR/TME/COURIER/SCRIPTS/swdist_queries_362new.sh

• If you plan to use Tivoli Software Distribution V4.0 in an environment in which Tivoli Inventory is at V4.0, problems might occur in using the new Inventory database schema because of a hard-coded RIM object name. To continue the operability of your environment in this intermediate phase of the upgrade, install the following patches: 4.0-COU-0082, 4.0-COU-0083, 4.0-COU-0084, and 4.0-COU-0085.

• If you plan to use Tivoli Software Distribution V4.1 in an environment in which Tivoli Inventory is at V4.0, install the following patches to prevent problems caused by the hard-coded RIM object name: 4.1-COU-0117, 4.0-COU-0118, 4.1-COU-0119, and 4.1-COU-0120. Another error occurs because Tivoli Software Distribution V4.1 references the sd_scan.nfo file for retrieving variable information, rather than the new Inventory file sdinst.nfo. To fix this problem, apply Software Distribution fixpack 3.

Tips for interoperability with Tivoli Enterprise Console

The following tips apply to interoperability between older versions of Tivoli Software Distribution (V3.6.2, V4.0, and V4.1) and Tivoli Enterprise Console (V3.7.x or V3.8). You might find this information useful during intermediate stages of your upgrade procedure.

• If you upgrade Tivoli Enterprise Console from V3.6.x to V3.7.x before upgrading to Tivoli Configuration Manager V4.2, you might need to install the following patches:


  – If you plan to use Tivoli Software Distribution V4.1, install patches 4.1-COU-0016 and 3.7-TEC-0008, or install Tivoli Enterprise Console V3.8.
  – If you are using Tivoli Software Distribution V4.0 with Tivoli Enterprise Console V3.6.x, install patches 4.0-COU-0037 and 3.7-TEC-0008, or Tivoli Enterprise Console V3.8.

• If you upgrade Tivoli Enterprise Console to V3.8 before upgrading to Tivoli Configuration Manager V4.2 and you are running Tivoli Software Distribution V4.1, install the patch for APAR IY34167.
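Whether the patches named in these tips are present can be checked against an installed-products listing such as the wlsinst -ah output used earlier in this chapter. The sketch below works on a captured copy of that listing so it can be run offline; the sample listing contents and the patch names used in the demonstration are illustrative.

```shell
# Check a captured installed-products listing for required patches.
patch_installed() {  # patch_installed <listing-file> <patch-name>
  grep -q "$2" "$1"
}

# A mock captured listing; on a real TMR this would be "wlsinst -ah" output.
listing=$(mktemp)
cat > "$listing" <<'EOF'
Tivoli Management Framework 4.1
Tivoli Software Distribution 4.1
4.1-COU-0016
EOF

for p in 4.1-COU-0016 3.7-TEC-0008; do
  if patch_installed "$listing" "$p"; then
    echo "$p: installed"
  else
    echo "$p: MISSING - see the interoperability tips above"
  fi
done
rm -f "$listing"
```

Feeding the loop the patch lists from the tips above turns the interoperability prerequisites into a repeatable pre-upgrade check.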

3.6.4 Schema changes

The Tivoli Configuration Manager V4.2 schema is an extension of the Tivoli Inventory V4.0 schema. The Tivoli Inventory V4.0 schema is a complete redesign of the Tivoli Inventory V3.6.2 schema. Some common table and view names still exist in the V3.6.2 and newer versions. To prevent problems caused by these common names, Tivoli Inventory V3.6.2 and newer versions of Inventory must use different schemas. The default user for the Tivoli Inventory V3.6.2 schema is tivoli. The default user for the Tivoli Configuration Manager V4.2 schema is invtiv.

To enable coexistence, Tivoli Configuration Manager V4.2 and Tivoli Inventory V3.6.2 use different RIM objects. The RIM objects used by Tivoli Configuration Manager V4.2 are named inv_query and invdh_1 and point to the Tivoli Configuration Manager V4.2 configuration repository. The RIM object used by Tivoli Inventory V3.6.2 is named inventory.

See IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for information about creating the Tivoli Configuration Manager V4.2 schema and RIM objects. See the IBM Tivoli Configuration Manager Database Schema Reference, SC23-4783, for information about the V4.2 schema.
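Before relying on this coexistence scheme, it is worth confirming that each expected RIM object exists and answers. wrimtest is the usual connectivity check; because the exact invocation should be confirmed against your Framework reference, the helper below defaults to printing the commands rather than executing them.

```shell
# Print (or, with DRYRUN=0, run) a wrimtest check for each expected RIM object.
# "wrimtest -l <name>" is the assumed invocation; verify it for your release.
DRYRUN=${DRYRUN:-1}

check_rim() {
  if [ "$DRYRUN" = "1" ]; then
    echo "wrimtest -l $1"
  else
    wrimtest -l "$1" </dev/null
  fi
}

# The V4.2 repository objects, plus the V3.6.2 object when both versions coexist.
for rim in inv_query invdh_1 inventory; do
  check_rim "$rim"
done
```

In a coexistence setup, all three checks should succeed, with inv_query and invdh_1 pointing at the V4.2 repository and inventory still pointing at the V3.6.2 repository.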

3.6.5 Query libraries

To query the new database schema, Tivoli Configuration Manager V4.2 provides a new set of query libraries. See IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for information about creating the new query libraries. See the IBM Tivoli Configuration Manager Database Schema Reference, SC23-4783, for detailed information about the new queries.

3.6.6 Inventory issues

Be aware of the following issues while using Inventory during the upgrade.


Profile changes

The InventoryProfile resource has been renamed InventoryConfig, and the BasicInventoryProfile policy object has been renamed BasicInventoryConfig. A single InventoryConfig profile can be configured for both PC and UNIX targets. The InventoryConfig profile also includes all of the new hardware and software scan options available in Inventory V4.2.

It is possible to migrate existing InventoryProfile objects to InventoryConfig profiles. However, migrated profiles will not include the new scan options available in the InventoryConfig profile. Therefore, we recommend that you create new profiles. See the section about migrating inventory profiles in the redbook Tivoli Inventory Version 4.0 - Migration Guide from Version 3.6.2, SG24-7020.

When Inventory V4.2 and Tivoli Inventory V3.6.2 coexist in the same Tivoli Region or in interconnected Tivoli Regions, observe the following guidelines for distributing profiles:
• You can distribute Inventory V4.2 profiles only to Tivoli Management Framework V4.1 endpoints.
• You can distribute Tivoli Inventory V3.6.2 profiles to the following endpoints:
  – Tivoli Management Framework V3.6.5, V3.7B, or V3.7.1 endpoints.
  – Endpoints that have been upgraded from Tivoli Management Framework V3.6.5, V3.7B, or V3.7.1 to Tivoli Management Framework V4.1.

You cannot distribute Tivoli Inventory V3.6.2 profiles to new Tivoli Management Framework V4.1 endpoints. Data from the Tivoli Management Framework V3.6.5, V3.7B, or V3.7.1 endpoints is stored in the Tivoli Inventory V3.6.2 configuration repository. Data from the Tivoli Management Framework V4.1 endpoints is stored in the Tivoli Configuration Manager V4.2 configuration repository. After the entire installation is migrated to Tivoli Management Framework V4.1, the old 3.6.5 endpoints can be removed.

Inventory V4.2 provides a Java-based graphical user interface (GUI) that enables you to edit inventory profiles.
You can run the Inventory GUI from the Tivoli Server, any managed node, or any machine with Tivoli Desktop and the appropriate Java Runtime Environment installed.

Command line changes

Inventory V4.2 has many new or changed command line interface (CLI) commands to support the new features of Inventory V4.2. Also, several Tivoli Inventory V3.6.2 commands no longer exist. For information about the new Inventory V4.2 commands, see the User’s Guide for Inventory, SC23-4713. For information on Tivoli Inventory V3.6.2 commands that have changed or are no


longer used in the newer versions of Inventory, see the section about changed CLI commands in the redbook Tivoli Inventory Version 4.0 - Migration Guide from 3.6.2, SG24-7020.

3.6.7 Software Distribution issues

If you plan to use Software Distribution in connected Tivoli Regions, it is recommended that you have only one configuration repository for all the connected regions.

• On AIX systems, you might see the following error while installing Activity Planner:

   FRWTT0003E An instance named "Activity Planner" of resource "Classes" was not found. CO_APM=.

  We recommend that you restart the installation without uninstalling.

• On AIX systems, you might see the following error while installing Change Manager:

   FRWTT0003E An instance named "Activity Planner" of resource "Classes" was not found. CO_CCM=.

  We recommend that you restart the installation without uninstalling.

• On systems running the Solaris Operating Environment (referred to in the rest of this document as Solaris), the upgrade of Activity Planner from V4.1 to V4.2 could produce the following error:

   "Activity Planner" inizialization failed. Check that the RIM object has been correctly created.

  If this error occurs, run the wapmrim command and provide the correct RIM object name.

• If you are using Tivoli Software Installation Service to install Software Distribution V4.2 and Inventory V4.2 is not installed, the installation fails with an error. You must install Inventory V4.2 before installing Software Distribution V4.2.

3.6.8 Integrated upgrade issues

Currently, there are no known issues that affect the integrated upgrade functionality. However, we recommend that you keep the following points in mind to prevent problems.

• Familiarity with the Activity Planner component is required before using the integrated upgrade.

• The integrated upgrade process requires and embeds, at minimum, Version 1.3.0 of the Java Virtual Machine (JVM). The Tivoli Server must be able to


support this level of Java or the integrated upgrade will not work. Keep in mind that the same requirement applies to other Tivoli Configuration Manager components that use Java on the server.

• To reduce complexity, select only a few Tivoli Configuration Manager components at a time from the proposed application selection menu. For the first phase of the upgrade, select only Tivoli Management Framework and Activity Planner.

• The integrated upgrade does not perform database migrations from previous release databases to the Tivoli Configuration Manager V4.2 databases. Perform the database migration for the applications that you are about to upgrade before beginning the integrated upgrade.

• Unless you already have a copy of all the Tivoli Configuration Manager V4.2 product CDs available on the system, do not clear the "copy all files" checkbox when you are requested to locate images for copying.

• The integrated upgrade tool presents you with a list of proposed actions before performing any action. Verify that this list of proposed actions matches your expectations before proceeding.
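The JVM prerequisite can be gated with a few lines of shell. This sketch compares only the major.minor part of a version string that you supply; parsing the output of java -version varies by vendor, so that part is deliberately left out.

```shell
# True when the supplied JVM version string (e.g. "1.3.1") is at least 1.3.
jvm_ok() {
  major=${1%%.*}            # text before the first dot
  rest=${1#*.}              # text after the first dot
  minor=${rest%%.*}         # second component
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 3 ]; }
}

jvm_ok 1.3.0 && echo "JVM 1.3.0: sufficient for the integrated upgrade"
jvm_ok 1.2.2 || echo "JVM 1.2.2: too old - the integrated upgrade will not work"
```

Running the same gate on every server that hosts Java-based Tivoli Configuration Manager components catches the shared prerequisite mentioned above before the upgrade starts.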

3.7 Tivoli Management Framework

Chapter 4, “Upgrade Tivoli Management Framework to 4.1” on page 111 discusses, in detail, the process of upgrading Tivoli Management Framework to V4.1. Please refer to that chapter for this upgrade process.

3.8 Upgrading Inventory

You can perform a fresh installation of Inventory V4.2 in the same environment as Tivoli Inventory V3.6.2 without disrupting the use of the V3.6.2 product. However, before you install Inventory V4.2 in the same Tivoli Region as Tivoli Inventory V3.6.2, you must upgrade the Tivoli Region to Tivoli Management Framework V4.1, including the prerequisite Scalable Collection Service and Java products.

The set of data collected by Inventory V4.2 has been significantly expanded from Tivoli Inventory V3.6.2. The Inventory V4.2 data model has been changed to accommodate new hardware and software collection features. This data model does not correspond directly to the Tivoli Inventory V3.6.2 data model, although there are some direct mappings between Tivoli Inventory V3.6.2 data and Inventory V4.2 data.

There is no mechanism to directly migrate data from the Tivoli Inventory V3.6.2 database to the Inventory V4.2 database. Instead, we recommend that you run


both versions side by side, running reports from both schemas, and then discontinue use of Tivoli Inventory V3.6.2 after the migration is completed. See the redbook Tivoli Inventory Version 4.0 - Migration Guide from 3.6.2, SG24-7020, for more information about schema mappings and possible migration of data.

If the same database installation is to be used for both Tivoli Inventory V3.6.2 and Tivoli Configuration Manager V4.2, you must create separate schemas for the products. We strongly recommend that you upgrade the database to a Tivoli Configuration Manager V4.2 supported database level before upgrading. However, Inventory V4.2 can run on the Tivoli Inventory V4.0 supported database levels, as listed in the “Supported RDBMS Servers” section of the Tivoli Management Framework Release Notes V4.1, GI11-0890. See your database documentation for information about upgrading the database product. If you are upgrading hardware as part of the migration to Tivoli Configuration Manager V4.2, you might choose to install a new version of the database on the new servers, while keeping Tivoli Inventory V3.6.2 data in the old database until all endpoints are migrated to Tivoli Configuration Manager V4.2.

Any customizations that you have made to Tivoli Inventory V3.6.2 need to be reproduced in Inventory V4.2, for example, scheduled tasks or custom scripts, MIF files, tables, signatures, or queries. Inventory V4.2 has many of the same custom data gathering features as Tivoli Inventory V3.6.2, including custom MIF files, custom software signatures, and DMI information. See the User’s Guide for Inventory, SC23-4713, for information about collecting custom data in Inventory V4.2.

Note: Please refer to Tivoli Inventory Version 4.0 - Migration Guide from Version 3.6.2, SG24-7020, for complete details on the Inventory migration solutions.

3.9 Upgrading Software Distribution

This section provides pre-upgrade recommendations for Software Distribution from several starting points.

3.9.1 From Version 3.6.2 to Version 4.2

This scenario runs as a fresh installation, where V4.2 could replace or run in parallel with V3.6.2.


Tasks

- Review the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, to ensure proper operating system levels and patches.
- Check the Tivoli Support Web Site for updated patches, because this document might not reflect the current patch releases.
- We strongly recommend that you make backups before beginning the upgrade and at each milestone during the upgrade.
- Verify that the configuration repository has the V4.2 schema.
- Ensure that all prerequisites (Java RIM, Java Client Framework, Java Runtime Environment, and Inventory) are installed at the correct level. See the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, for more information.
- Ensure that all Software Distribution gateways have Tivoli Management Framework V4.1 installed.
- Change the endpoint allow_install policy to disable ALL new logins. With the statement 'exit 1', endpoints are refused logins; they return after the endpoint login_interval, usually 30 minutes, and attempt to log in again.
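The allow_install policy referenced in the last task is a shell script attached to the policy region. The following is only a sketch of disabling all logins; the wgeteppol/wputeppol fetch-and-replace step is shown as comments, and the exact policy-handling commands for your release should be confirmed in the Framework documentation.

```shell
# Create a replacement allow_install policy body that refuses every new
# endpoint login by exiting non-zero (illustrative sketch):
cat > allow_install.sh <<'EOF'
#!/bin/sh
# Refuse ALL new endpoint logins for the duration of the upgrade window.
exit 1
EOF

# In a real Tivoli environment you would fetch, edit, and replace the
# policy, for example (assumed commands, run on the TMR server):
#   wgeteppol allow_install > allow_install.sh
#   wputeppol allow_install < allow_install.sh

# Demonstrate the effect locally: the policy exits 1, so the login is refused.
sh allow_install.sh
echo "allow_install exit code: $?"
```

Because the endpoints retry after login_interval, reverting the policy after the upgrade lets the refused endpoints log in on their next attempt without any manual intervention.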

3.9.2 From Version 4.0 to Version 4.2

This scenario is a completely new installation. In this case, you do not have to address the current settings of your environment or the software packages already stored in your object repository. Instead, you can rebuild the environment from the beginning. You can decide to install the new environment on new hardware where no preceding installations have run. Keep this environment separate from the production environment and connect it to existing regions only when the installation has been completed and thoroughly tested.

Tasks

- Review the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, to ensure proper operating system levels and patches.
- Check the Tivoli Support Web Site for updated patches, because this document might not reflect the current patch releases.
- We strongly recommend that you make backups before beginning the upgrade and at each milestone during the upgrade.
- Make sure there are no distributions in progress or in pause state.
- We recommend that software packages be in a final state (no operations in transaction or waiting for a reboot).
- Verify that the configuration repository has the V4.2 schema.


- Make sure there are no login policies that automatically distribute software packages.
- Ensure that all prerequisites are installed or upgraded to the V4.1 level (the correct versions of Java RIM, Java Client Framework, Java Runtime Environment, and Inventory have been installed). See the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, for more information.
- Ensure that all the Software Distribution gateways have Tivoli Management Framework V4.1 installed.
- Change the endpoint allow_install policy to disable ALL new logins. With the statement 'exit 1', endpoints are refused logins; they return after the endpoint login_interval, usually 30 minutes, and attempt to log in again.

3.9.3 From Version 4.1 to Version 4.2

Upgrade directly to V4.2.

Recommendations before applying the direct upgrade

This section supplements the information about the Upgrade Plan Generator program in the Tivoli Configuration Manager documentation. Before applying the direct upgrade scenario or using the Upgrade Plan Generator, review this information as well as the Upgrade Plan Generator information in the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934.

Before starting the Upgrade Plan Generator

- Ensure that there is enough disk space available on the drive set by the variable $(TEMP); at least 130 MB is required.
- If you want to create a local depot with installation images, ensure that there is enough disk space to store the images (1.5 GB). The original images must always be available during the installations; therefore, we recommend that you create a depot or keep the entire image path available during this phase, or the Upgrade Plan Generator will fail.
- Ensure that the entire region to which you are applying the Upgrade Plan Generator is at the required level (Tivoli Management Framework V3.7.1, Software Distribution V4.1, and Inventory V4.0). If the product registered on one or more managed nodes is not at the required level, the upgrade process will not be applied to those nodes.
- Make a backup copy of your object database for the whole environment and before each step.


- Remember that the information related to the upgrade is stored under $(DBDIR)\Upgrade or the path that you specified. Do not make any changes within that path, or the upgrade procedure will stop.
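The disk space prerequisite in the list above can be verified with a short script before launching the Upgrade Plan Generator. This is only a sketch; the fallback to /tmp when $(TEMP) is unset is an assumption for illustration.

```shell
# Hedged pre-check: verify at least 130 MB free on the path referenced by
# TEMP (falling back to /tmp when TEMP is unset, which is an assumption).
TMPDIR_CHECK=${TEMP:-/tmp}
free_kb=$(df -kP "$TMPDIR_CHECK" | awk 'NR==2 {print $4}')
need_kb=$((130 * 1024))
if [ "$free_kb" -ge "$need_kb" ]; then
  echo "OK: $((free_kb / 1024)) MB free in $TMPDIR_CHECK"
else
  echo "WARNING: only $((free_kb / 1024)) MB free in $TMPDIR_CHECK (need 130 MB)"
fi
```

The same check, with a larger threshold, applies to the 1.5 GB depot requirement.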

When the Upgrade Plan Generator is running

The upgrade process produces two summaries:

- One summary of the preliminary software to install on the Tivoli Server, which is needed to start the second phase of the procedure.
- Another summary of all the software to be installed in the Tivoli Region (by running the activity plans).

After the tasks described in the first summary complete, you must manually migrate the planner database schema before running the generated plans in the second summary. If you do not run this step before starting the second phase, the generated plans cannot be saved in the database (only in XML format).

After you complete the tasks described in the second summary, verify that all the data in the summary corresponds to your network configuration and that you can locate all machines with the correct roles. If something is missing, manually check the level of software running on the problem machines using the following command:

wlsinst -ah
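As an illustration only, the check above can be scripted per suspect node; the hostname below is a placeholder, and the fallback message lets the sketch run even where the Tivoli CLI is absent.

```shell
# "mnode01" is an illustrative hostname, not from the original text.
node=mnode01
wlsinst -ah 2>/dev/null | grep -i "$node" \
  || echo "wlsinst not available here; run on the TMR server to check $node"
```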

When the Upgrade Plan Generator produces the list of plans

- Make a backup copy of your object database for the whole environment.
- Make sure the images created during the first phase are available.
- If you create multiple plans and schedule them, only the first one is automatically submitted; the remaining ones must be manually submitted in the same order produced by the Activity Plan generator.
- Before submitting your plan, make sure there are no running activities that concern the applications you are upgrading.
- During some plans, you might be requested to run a reexec of the object dispatcher; take care to customize this option in your plan if you want to run this step manually when it is convenient.
- Monitor the plan execution using the Activity Plan Monitor and the log file produced by the plan and its related task activities.
- The Upgrade Plan Generator program does not upgrade the database schemas, including the Tivoli Configuration Manager configuration repository and the MDist 2, Activity Planner, and Change Manager databases. Depending on the application to upgrade, you might be requested to manually install the database schema. For more information about installing these schemas, see the Tivoli Configuration Manager documentation.


Tasks

See IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for details related to the Upgrade Plan Generator scenario.

3.10 Migration scenarios

IBM Tivoli Configuration Manager 4.2 is not a single product but a bundle of products, so moving to it from your existing environment is not a single-product migration, and customers can have different levels of Tivoli applications installed. In this book we do not cover every available scenario or every combination of software migration to IBM Tivoli Configuration Manager 4.2; instead, we concentrate on two scenarios that should cover most customers' needs.

Even though the title of this book is "Migration to IBM Tivoli Configuration Manager 4.2", not every component upgrade qualifies as a migration. Tivoli Management Framework is an example: within Tivoli, a migration implies significant architectural change, which almost always requires redeployment of the product. On the other hand, products like Tivoli Software Distribution 4.2 and Tivoli Inventory 4.2 are complete architectural changes from their 3.6.x counterparts, so those upgrades do qualify as migrations.

3.10.1 Scenario 1 - Shadow migration

This type of installation is sometimes referred to as a shadow installation. You install Inventory and Software Distribution V4.2 as a fresh installation, but you install it in parallel with the back-level version of both applications.

In this scenario, you acquire new hardware for your Tivoli Server and gateways and then install Tivoli Management Framework V4.1, Scalable Collection Service V4.1, and Inventory V4.2 on the new machines. You run this new Tivoli Region in parallel with your existing Tivoli environment. Next, you migrate endpoints over from the back-level environment to the V4.1 environment. As endpoints are migrated, you add them as subscribers to Inventory V4.2 profiles. Scan data for the migrated endpoints will be stored in the new database. Reports can be run against both the Tivoli Inventory V3.6.2 data in the back-level database and the Inventory V4.2 database. After all endpoints are migrated to the new level, only the new database is needed.

Tivoli Software Distribution V3.6.2 can coexist with V4.2. You can install V3.6.2 in the same environment as V4.2 or in a separate environment.


Note: Tivoli Software Distribution V4.0 and V4.1 cannot coexist in the same environment as V4.2. You must install V4.2 in an environment that is separate from the environment that contains V4.0 or V4.1. We recommend that you install V4.2 in a new Tivoli region, test it, and then connect the new region with the existing one.

Dependencies on other components for this scenario

1. You must first upgrade Tivoli Management Framework to 4.1. Follow Chapter 4, "Upgrade Tivoli Management Framework to 4.1" on page 111, to achieve this step.
2. Scalable Collection Service V4.1 must be installed as a patch.
3. Java Runtime Environment 1.3 needs to be installed.
4. Java Client Framework 4.1 needs to be installed.
5. Java RIM 4.1 needs to be installed.
6. The Data Moving component requires the MDist 2 GUI to be installed.
7. Change Manager depends on Inventory and Activity Planner.
8. The Web Interface depends on the Software Distribution server and Change Manager.

Note: Tivoli Management Framework V4.1 and Inventory V4.2 must be installed before you upgrade to Software Distribution V4.2.

Advantages

1. Your back-level environment can run without interruption during the migration.
2. Easier setup: this scenario is a fresh installation of Tivoli Configuration Manager components.
3. Tivoli Configuration Manager V4.2 components are installed on currently supported hardware.
4. You can start with a fresh installation of your database as well.
5. You do not need to consider interoperation of Tivoli Configuration Manager V4.2 components in a mixed Tivoli Management Framework V3.6.5 and Tivoli Management Framework V4.1 environment.
6. You do not have to migrate your software packages immediately.
7. You can start using the Tivoli Management Framework V4.1 features.
8. You can experiment with the new versions of Tivoli Management Framework and Software Distribution in a test environment.


Considerations

- You must acquire new hardware for the new Tivoli Region.
- The new environment will not include all the definitions and objects that are in place in the production environment. Therefore, you must redefine the objects to manage, the policies to apply, and the configuration settings.

Topology considerations

- Scalable Collection Service V4.1 is a prerequisite and is used by Inventory V4.2 by default. If scan data cannot be returned by Scalable Collection Service, the data is returned to the inventory data handler by MDist 2.
- A new topology can be configured to suit the fresh installation, while leaving the back-level installation topology as it is.
- See the IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for detailed information about Tivoli Configuration Manager V4.2 topologies.
- You should carefully plan the role of this new environment. For example, will you connect this Tivoli Region to your production region? Will this Tivoli Region become the new hub in a hub and spoke arrangement?

Operating systems

Upgrades of operating systems might be required before the installation. See the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, for supported operating systems.

Database/external software

The existing database can continue to be used, or a fresh database can be installed on new hardware as a part of upgrading hardware for the Tivoli Configuration Manager V4.2 installation. Software Distribution requires the Tivoli Configuration Manager configuration repository. For details about the supported RDBMS versions, see the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934.

Steps for direct migration to IBM Tivoli Configuration Manager

1. Review the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, to ensure proper operating system levels and patches.


2. Check the Tivoli Support Web Site for updated patches, because this document might not reflect the current patch releases.
3. Because this is a completely fresh installation of IBM Tivoli Configuration Manager 4.2 on new hardware, review the IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, and All about IBM Tivoli Configuration Manager Version 4.2, SG24-6612.

3.10.2 Scenario 2 - Migration from all 3.6.x applications to 4.2

This scenario deals with a customer environment that has all of the applications at V3.6.x. The customer has two-way interconnected TMRs and is running:

- Tivoli Management Framework 3.6.5 with current patches
- Tivoli Inventory 3.6.2 and components with current patches
- Tivoli Software Distribution 3.6.2 and components with current patches

This type of installation is sometimes called an in-line installation. Figure 3-1 shows this type of migration.

Figure 3-1 Scenario 2


You install Inventory and Software Distribution V4.2 as a fresh installation, but you install it in parallel with the back-level version of both applications. In this scenario, you first upgrade Tivoli Management Framework to V4.1, then install Scalable Collection Service V4.1 and Inventory V4.2, followed by Software Distribution V4.2 and its components. This means that you continue to use the current Software Distribution release (V3.6.2, V4.0, or V4.1) and install V4.2 in parallel. See the IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for more information about installing Software Distribution V4.2.

Next, you migrate endpoints over from the back-level environment to the V4.1 environment. As endpoints are migrated, you add them as subscribers to Inventory V4.2 profiles. Scan data for the migrated endpoints will be stored in the new database. Reports can be run against both the Tivoli Inventory V3.6.2 data in the back-level database and the Inventory V4.2 database. After all endpoints are migrated to the new level, only the new database is needed.

Dependencies on other components for this scenario

1. You must first upgrade Tivoli Management Framework to 4.1. Follow Chapter 4, "Upgrade Tivoli Management Framework to 4.1" on page 111, to achieve this step.
2. Scalable Collection Service V4.1 must be installed as a patch.
3. Java Runtime Environment 1.3 needs to be installed.
4. Java Client Framework 4.1 needs to be installed.
5. Java RIM 4.1 needs to be installed.
6. The Data Moving component requires the MDist 2 GUI to be installed.
7. Change Manager depends on Inventory and Activity Planner.
8. The Web Interface depends on the Software Distribution server and Change Manager.

Note: Tivoli Management Framework Version 4.1 and Inventory Version 4.2 must be installed before you upgrade to Software Distribution Version 4.2.

Advantages

1. You do not have to upgrade the existing hardware.
2. You can start with a fresh installation of your database as well.
3. You do not have to migrate your software packages immediately.
4. This method is fast and immediate.
5. You can start using the Tivoli Management Framework V4.1 features.


Considerations

- Tivoli Inventory V3.6.2 scans will be interrupted during the migration to Tivoli Management Framework V4.1. (The installation of Inventory V4.2 does not interrupt Tivoli Inventory V3.6.2, but Tivoli Management Framework V4.1 must be installed first.)
- Tivoli Inventory V3.6.2 endpoints can be scanned by Tivoli Inventory V3.6.2 profiles only. Inventory V4.2 endpoints can be scanned by Inventory V4.2 profiles or Tivoli Inventory V3.6.2 profiles. Tivoli Inventory V3.6.2 scan data is stored in the Tivoli Inventory V3.6.2 configuration repository, and Inventory V4.2 data is stored in the Tivoli Configuration Manager V4.2 configuration repository.
- You will need to recreate the current settings of your environment and the software packages already stored in the object repository.

Topology considerations

- Any existing flaws or weaknesses in the topology will be repeated in the Inventory V4.2 topology.
- The Tivoli Inventory V3.6.2 topology might not take advantage of Scalable Collection Service in the repeater hierarchy.
- An inventory data handler has been introduced, allowing the management of inventory data to be off-loaded from the Tivoli Server. The current topology might not take advantage of this feature.
- Because this is a completely new installation, you must have a clear idea about how Software Distribution will work in your network. Choose the managed node that will serve as the Software Distribution source host and the gateways that will be the Software Distribution gateways. Remember that your source host is the machine where the source code for building packages is stored (so adequate disk space is required) and that it is the machine where the build processes run.

Operating systems

Upgrades of operating systems might be required before the installation. See the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, for supported operating systems.

Database/external software

The database might need to be upgraded before the installation. If the same database installation is to be used for both Tivoli Inventory V3.6.2 and Inventory V4.2, it is important that they have separate schemas. See IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702, for details on setting up the Tivoli Configuration Manager V4.2 schema. We strongly recommend that you upgrade the database to the Tivoli Configuration Manager


V4.2 supported database level before upgrading. However, Inventory V4.2 can run on the Tivoli Inventory V4.0 supported database levels, as listed in the "Supported RDBMS Servers" section of the Tivoli Management Framework Release Notes V4.1, GI11-0890. See your database documentation for information about upgrading to the current version of the database product.

Steps for migrating to IBM Tivoli Configuration Manager 4.2

1. Review the IBM Tivoli Configuration Manager Release Notes Version 4.2, GI11-0934, to ensure proper operating system levels and patches.
2. Check the Tivoli Support Web Site for updated patches, because this document might not reflect the current patch releases.
3. We strongly recommend that you make backups before beginning this process and, for the utmost in recoverability, after each major upgrade step. We also recommend that you do a file-level backup.
4. Gather output from the current Tivoli environment for future reference:

   wlsinst -avh
   odadmin odlist
   wep ls
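As a sketch only, step 4 can be scripted as follows. The commands are the ones named above; the output locations are arbitrary, and the fallbacks let the sketch run even where the Tivoli CLI is absent.

```shell
# Snapshot the current TMR state for future reference. The wlsinst/odadmin/
# wep commands exist only in a Tivoli environment; outside one, each file
# records a note instead of the real output.
snap="/tmp/tmr-snapshot-$(date +%Y%m%d)"
mkdir -p "$snap"

capture() {  # capture <outfile> <command...>
  out="$snap/$1"; shift
  "$@" > "$out" 2>/dev/null || echo "not available here: $*" > "$out"
}

capture installed-products.txt wlsinst -avh    # products and patches per node
capture odlist.txt odadmin odlist              # object dispatchers in the region
capture endpoints.txt wep ls                   # endpoints by gateway
echo "snapshot saved in $snap"
```

Keeping a dated snapshot directory per region makes it easy to compare the post-upgrade state with the starting point.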

5. Before upgrading any applications, Tivoli Management Framework needs to be upgraded to Version 4.1. The single most important node is the TMR server, and all considerations and plans should ensure that the Tivoli Management Region server is always part of the first set of nodes to upgrade. While most upgrade checkpoints will validate that the Tivoli Management Region server has the appropriate patches installed, it is still critical to ensure that the procedures include the Tivoli Management Region server in the initial install. The main reasons for this are related to the database portions of each upgrade or install, and to the mechanisms around dependencies for endpoint code.

   For the interconnected TMRs scenario that we have, it is best to upgrade a Hub Tivoli Management Region first, because several object repository enhancements in 4.1 provide more reliable database integrity, so you quickly gain stability and reliability for your infrastructure. This is especially true for environments with many spoke Tivoli management regions with large numbers of endpoints and where the wupdate command is used to update resources to the Hub.


Note: If all your TMR servers and managed nodes are on the local network, an alternative is to disconnect these TMRs and then run:

wchkdb [-o outfile1] -u -x > somefile1
wchkdb [-o outfile2] -u -x [-f outfile1] > somefile2

6. Sequentially upgrade each of the Tivoli management regions.

Note: The upgrade of Tivoli Management Framework is covered in Chapter 4, "Upgrade Tivoli Management Framework to 4.1" on page 111. After upgrading the Framework, the next step is Inventory. Section 3.8, "Upgrading Inventory" on page 96 explains the methodology involved in the Inventory migration. Complete details are provided in the Tivoli Inventory Version 4.0 - Migration Guide from 3.6.2, SG24-7020.

7. Our recommendation is to install Tivoli Inventory V4.2 along with its components and run it in parallel with Tivoli Inventory 3.6.2. The following products should be installed:
   a. Tivoli Inventory V4.2
   b. Tivoli Inventory Gateway V4.2
   c. Scalable Collection Service V4.1
8. Install the new query libraries.
9. Configure the Tivoli Configuration Manager configuration repository.
10. Configure Inventory V4.2 profiles. The new InventoryConfig profiles have many new options for hardware and software scanning. The options you choose will impact the amount of data stored in the database. You might want to set up a limited number of test machines to help choose which configuration is most suitable for your needs.
11. Upgrade endpoints to the Tivoli Management Framework V4.1 level. After an endpoint is migrated, it can be made a subscriber to the newly created Inventory V4.2 profiles. If scans are scheduled on a regular basis, endpoints can be migrated between scheduled distributions.

Note: The customizations that you have made to Tivoli Inventory V3.6.2 need to be reproduced in Inventory V4.2, for example custom tables, signatures, scheduled tasks, custom queries, and so on.

12. Rescan the environment over time.


13. After installing Tivoli Inventory V4.2, the next step is Tivoli Software Distribution 4.2. Because Software Distribution 4.2 is also a completely new product, we recommend replacing the 3.6.2 product by first running V4.2 in parallel with Software Distribution 3.6.2. Install the Software Distribution server, Package Editor, and gateway on the Tivoli Server and on all the machines where you plan to use Software Distribution.
14. Install the Activity Planner and Change Manager on the Tivoli Server.
15. Install the Web Interface on the Tivoli Server.
16. Create databases to work with Activity Planner and Change Manager.
17. Run the script to properly configure the database for Activity Planner.
18. Create a RIM object for Activity Planner.
19. Run the script to properly configure the database for Change Manager.
20. Create a RIM object for Change Manager.
21. Use the wapmplugin command with the -s option to register the Software Distribution plug-in with Activity Planner.
22. Use the wccmplugin command with the -s option to register the Software Distribution plug-in with Change Manager.
23. Configure Software Distribution to work with Inventory and Tivoli Enterprise Console using the wswdmgr command with the -s option and the wsetinvswd command with the y option.
24. Make sure the name of the RIM object that works with Inventory is correctly set using the wswdcfg command with the -h and -s options.
25. Run some tests to verify the installation:
    a. Create and distribute a software package.
    b. If you want to migrate your Software Distribution 3.6.x file packages to software packages, migrate these file packages using the instructions in Chapter 5, "Package migration" on page 123, and test these packages.
    c. Check the existence of a Data Moving request.
    d. Test the database connections using all the defined RIM objects.
26. Log in to the Activity Plan Editor and Change Manager.

If you are running a fresh install using the new installation program, follow these steps:

a. Create databases and configure database users as necessary. (The installation will complete all necessary database configuration procedures.)
b. Start the setup from the CD under the Fresh Install path.


c. Fill in the data as requested.
d. After the installation is completed, continue the Tivoli Region configuration and installation: from the Tivoli Desktop, continue the installation of Software Distribution on managed nodes as required by your topology, and install the Software Distribution gateways as required by your topology.
e. Configure Software Distribution to work with Inventory and Tivoli Enterprise Console using the wswdmgr command with the -s option and the wsetinvswd command with the y option.
f. Run some tests to verify the installation:
   i. Create and distribute a software package.
   ii. Check the existence of a Data Moving request.
   iii. Test the database connections using all the defined RIM objects.
   iv. Log in to the Activity Plan Editor and Change Manager.
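The plug-in registration and configuration steps (21 through 24 above) use only the commands and options named in the text; any additional arguments are deliberately omitted and should be taken from the product manuals. A consolidated sketch, with a fallback so it runs even outside a real Tivoli environment, might look like:

```shell
# run() reports commands that are unavailable outside a Tivoli environment
# instead of aborting; in a real TMR, remove the fallback.
run() { "$@" 2>/dev/null || echo "skipped (not available here): $*"; }

run wapmplugin -s    # register the SWD plug-in with Activity Planner
run wccmplugin -s    # register the SWD plug-in with Change Manager
run wswdmgr -s       # configure SWD to work with Inventory and TEC
run wsetinvswd y     # enable the Inventory integration for SWD
run wswdcfg -h       # display settings; use -s to set the Inventory RIM name
echo "configuration sketch complete"
```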


Chapter 4. Upgrade Tivoli Management Framework to 4.1

In this chapter we provide a baseline for project managers and technical leads to best plan and implement an upgrade of the Tivoli Management Enterprise (TME) infrastructure to Tivoli Management Framework Version 4.1. We discuss these topics for a Framework upgrade:

- "Is this an upgrade or a migration?" on page 112
- "Overview of the TME Infrastructure upgrade" on page 112
- "Considerations for upgrading in phases" on page 113
- "Recommendations before the upgrade" on page 115
- "Upgrading to Framework 4.1" on page 117
- "Post upgrade tasks" on page 122

© Copyright IBM Corp. 2002. All rights reserved.


4.1 Is this an upgrade or a migration?

The in-line or shadow migration procedures performed during the Tivoli Management Framework 3.1 or 3.2 to 3.6 transition were considered a migration, because they moved from one architecture to another (PC managed node or managed node to endpoint) and introduced not only new revisions of software, but entirely new processes and designs that required considerable planning. The current upgrade from 3.6.x to 4.1 still requires proper planning and fallback measures, but it introduces new revisions and functionality while maintaining the current architecture and design.

4.2 Overview of the TME Infrastructure upgrade

Before upgrading your TME infrastructure, planning the upgrade is one of the most important steps. We emphasize this process in Chapter 3, "Planning and methodology for upgrading" on page 83, where we discuss the planning and methodology to upgrade your Tivoli Management Environment. There we also discuss setting up a lab environment to check the database portion and ensure that there are no specific customizations that are impacted.

In general, we recommend that the 4.1 upgrade occur in the following order:

1. Upgrade in a test or development environment first. (This is obviously critical.)
2. Upgrade a "non-enterprise" Tivoli Management Region environment (a smaller environment without interconnected Tivoli management regions, and so on, if available).
3. Because of several object repository enhancements that provide more reliable database integrity in 4.1, it is best to upgrade a Hub Tivoli Management Region first to quickly gain some stability and reliability for your infrastructure. This is especially true for environments with many spoke Tivoli management regions with large numbers of endpoints and where the wupdate command is used to update resources to the Hub.
4. Sequentially upgrade each of the Tivoli management regions.
5. After upgrading all TMR servers and managed nodes in all of the regions, begin the upgrade of the endpoints.


Figure 4-1 Tivoli Management Framework upgrade paths (the diagram shows the paths to Version 4.1 from 3.6.x, 3.6.5, 3.7 with patches 3.7-TMF-0003 and 3.7-TMF-0004, 3.7B, and 3.7.1)

4.3 Considerations for upgrading in phases

While it is ideal to upgrade all nodes at one time, business and environmental reasons might dictate that the upgrade take place over the course of several sessions. The single most important node is the Tivoli Management Region server, and all considerations and plans should ensure that the Tivoli Management Region server is always part of the first set of nodes to upgrade. While most upgrade checkpoints will validate that the Tivoli Management Region server has the appropriate patches installed, it is still critical to ensure that the procedures include the Tivoli Management Region server in the initial install. The main reasons for this are related to the database portions of each upgrade or install, and to the mechanisms around dependencies for endpoint code.

When determining a plan for installation or upgrading, first identify the nodes that might require additional time or consideration. These include the following:

- Nodes across a slow link
- Product servers that have no change control authorization
- Nodes that are considered unstable (the oserv process is not stable, or the node is incapable of correcting name resolution issues)


To address these potential situations of nodes that require extra time or consideration, take the following steps:

- If upgrading from Tivoli Management Framework Version 3.7B, do not enable SSL or SinglePortBDT functionality until all nodes are at Version 4.1.
- Do not upgrade any endpoints to version 41004 or above until after all gateways are upgraded.
- Do not install any application that requires back-reference versions of the Framework until all nodes have received the Tivoli Management Framework upgrades.
- If managing a node over a very slow link, one consideration for the update involves moving an installation image of the upgrade to each of the target nodes through a means other than the Tivoli installation, such as ftp, Software Distribution, or a physical CD-ROM. The following provides a brief explanation of one method of managing this "install image". This is an unsupported method, but one that the current customer base has used before. Check the TME 10 archives for details: ftp://www.redbooks.ibm.com/redbooks/tme10_archive/

Note: "Unsupported" is really a misnomer, because it does not mean that IBM/Tivoli does not support this methodology. It usually means that level 3 will not make a special note of the "unsupported" code changes made to your environment, and therefore patches that modify this area of code will probably overwrite your changes. This also applies to new code releases from development. In this context, it means that level 2 and Services will assist you in implementing the solution successfully, where appropriate, but if code changes are needed to make the solution work in your environment, this will not be done, because it is unsupported.

a. Using the ls -Rl or dir /s command, list the Tivoli file structure on a test managed node with the exact file structure and save the listing to a file. An alternative approach is to use a program like Wise Packaging or AutoPack.
b. Upgrade the node to Version 4.1.
c. Using the ls -Rl or dir /s command, list the Tivoli file structure again and save it to a separate file (or use the packaging software).
d. Identify all new files, including the .installed files, using the two files that you just saved.
e. With the exception of the $DBDIR directory (also exclude the .installed files for the database portion), save the updated files.


f. Move the new files to the target node (ftp, CD-ROM, and so on).
g. Shut down the oserv process on the managed node.
h. Copy over the new files. (Ideally, save the files that are to be replaced in case of the need to roll back.)
i. From the Tivoli Management Region server, initiate the upgrade, using the same sequence as before. The only portion of the upgrade that requires any delivery is the database portion. If this is not the case, the image was not properly delivered to the target. By updating only the database portion, the managed node or gateway update requires very little bandwidth and provides the proper updates in the object repository.
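The file-list comparison in steps a through e can be sketched as a small POSIX shell script. The function names and the /opt/Tivoli and /tmp paths are illustrative, not part of the product:

```shell
#!/bin/sh
# Sketch of steps a-e above: snapshot the Tivoli file tree before the upgrade,
# snapshot it again afterwards, and list the files the upgrade added.
# Point the tree path at your actual Tivoli install directory.

# Record a sorted listing of every file under a directory tree.
snapshot_tree() {   # usage: snapshot_tree <tree_root> <listing_file>
    ( cd "$1" && find . -type f | sort ) > "$2"
}

# Print entries present in the "after" listing but not in the "before" one
# (comm -13 suppresses lines unique to file 1 and lines common to both).
new_files() {       # usage: new_files <before_listing> <after_listing>
    comm -13 "$1" "$2"
}

# Typical use around the upgrade. Step e excludes $DBDIR content; the
# grep -v pattern below assumes $DBDIR lives under ./db -- adjust for
# your installation layout before archiving:
#   snapshot_tree /opt/Tivoli /tmp/before.lst
#   ...upgrade the node to Version 4.1...
#   snapshot_tree /opt/Tivoli /tmp/after.lst
#   new_files /tmp/before.lst /tmp/after.lst | grep -v '^\./db/' > /tmp/new.lst
#   tar -cf /tmp/tmf41-new-files.tar -C /opt/Tivoli -T /tmp/new.lst
```

The sorted listings make the comparison a single `comm` call instead of an ad hoc diff of `ls -Rl` output.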

4.4 Recommendations before the upgrade

- Review the Tivoli Management Framework Release Notes V4.1, GI11-0890, to ensure proper operating system levels and patches.
- Check the Tivoli Support Web site for updated patches, because this document might not reflect the current patch releases.
- These steps upgrade the entire Tivoli management region, including all of its managed nodes, in a single session. Review 4.3, “Considerations for upgrading in phases” on page 113, if environment or business needs dictate a staged upgrade.
- We strongly recommend that you make backups before beginning this process and, for the utmost in recoverability, that backups be made after each major upgrade step.

Tip: Apart from doing a Tivoli backup using wbkupdb, you should do a file-level backup during the upgrade processes.

- Run wchkdb to detect any database inconsistencies. It is advisable to correct any database inconsistency before starting an upgrade.
- Run otherpages to detect any database corruption. It is advisable to correct any database corruption before starting an upgrade.
  – Command syntax:
    otherpages $DBDIR/odb.bdb (UNIX)
    otherpages %DBDIR%\odb.bdb (w32-ix86, that is, NT)


  – Expected output for a clean database (using bash):
    bash$ otherpages $DBDIR/odb.bdb
    Parsing database tree
    Checking for referenced, but freed pages...
    Checking for valid page references

  – This command should be run when the oserv is shut down. This ensures that the results are accurate and consistent. If errors are returned (other than the messages seen above), please contact IBM/Tivoli Support.
- Because of possible MDist timeouts during the upgrade, set the repeater time-out to infinite with the wrpt -T 0 command.
- Review your repeater settings, especially net_load. If net_load is set low, there is a high probability of a time-out occurring. If you are managing slow links, consider setting stat_inv to a high number (1800 seconds, for example).
- Turn off auto upgrade of endpoints (endpoint login_policy).
- Any migration or isolation code used in the endpoint policies should also be disabled. This prevents any activity during the upgrade that requires the endpoint manager, Tivoli Name Registry, and gateway processes.
- If you are using the address_notif_interval command (endpoint variable for DHCP clients), consider upgrading the gateway targets first and then proceed with the remaining managed nodes. After the gateways are updated, the endpoint manager can be shut down. This prevents a possible situation where rpc_max_threads is exceeded and, as a result, the oserv might no longer respond.
- Consider whether to upgrade or install the Java components (Java, Swing, JavaHelp, Client Framework, JRIM, MDist2 Graphical User Interface) on managed nodes or gateways. The combined size of these applications and components is approximately 245 MB. If no administrators log into these managed nodes and gateways, you do not need to update them, which will significantly decrease the install time, especially if you are faced with targets over slow links.
- Ensure that the SLOW_LINK variable is not set in your oserv environment (odadmin environ get). If it is set, it will significantly slow down the install time.
- Ensure that all managed node labels are resolvable through DNS and that the labels of the managed nodes match the output of the hostname command.
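The file-level backup recommended in the tip above can be scripted with standard tools. This is a minimal sketch, assuming tar is available and the oserv is already stopped; the backup_tivoli_files name and the directory paths are illustrative, not part of the product:

```shell
#!/bin/sh
# Tar each named Tivoli directory into a timestamped archive while the
# oserv is down. Back up whatever holds your Tivoli binaries, libraries,
# and $DBDIR; a destination directory and one or more source trees are
# passed as arguments.

backup_tivoli_files() {   # usage: backup_tivoli_files <dest_dir> <dir>...
    dest="$1"; shift
    stamp=$(date +%Y%m%d%H%M%S)
    for dir in "$@"; do
        name=$(basename "$dir")
        # -C makes the archive relative, so a restore lands back in place
        tar -cf "$dest/${name}-${stamp}.tar" -C "$(dirname "$dir")" "$name"
    done
}

# Example (with the oserv stopped):
#   backup_tivoli_files /backups /opt/Tivoli "$DBDIR"
```

Repeating this after each major upgrade step gives you a rollback point at every stage, as the recommendation suggests.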


4.5 Upgrading to Framework 4.1

In this section we cover different scenarios and a step-by-step approach to upgrading Tivoli Management Framework. The previous sections covered the recommendations, preparation, and planning for this upgrade.

4.5.1 Upgrade from 3.6.X of Tivoli Management Framework

If your environment is at 3.6, 3.6.1, 3.6.2, 3.6.3, or 3.6.4, upgrading to 4.1 is a two-step process. You need to first upgrade to 3.6.5, and then to 4.1 using the new 3.6.5 to 4.1 upgrade path of Tivoli Management Framework. Figure 4-1 on page 113 shows the upgrade path from the various releases.
1. Change the endpoint ‘allow_install’ policy to disallow ALL new logins (use the statement ‘exit 1’).
2. Ensure that no scheduled routines will manipulate the TME environment. (These routines include such things as cron jobs using the wupdate command, activities on spoke Tivoli management regions, TME scheduled jobs, and so forth.)
3. Run the odadmin db_sync command. This will force all memory resident transactions to be written to disk.
4. Shut down the oserv on the Tivoli Management Region and do a file-level backup of the Tivoli file system:
   odadmin shutdown clients
   odadmin shutdown 1

5. Restart the oserv:
   odadmin start 1
   odadmin start clients

6. If the machine is a gateway, then stop the gateway process:
   wgateway stop

7. Put the Tivoli Management Region into maintenance mode.


Note: The next steps can be done for only the Tivoli Management Region server, or you can also upgrade all or most of your managed nodes at the same time.

In maintenance mode, all Tivoli Management Framework-based activity should be suspended, including the distribution, subscription, and editing of any type of profile (for example, user, security, Software Distribution, Inventory, Tivoli Distributed Monitoring, and Tivoli Monitoring profiles). The purpose of maintenance mode is to restrict any operations that might require access to the object database, to prevent the introduction of inconsistency during updates of the object database. Some products, such as Tivoli Remote Control, will not function when the Tivoli Region is in maintenance mode, because users cannot manipulate objects on the Tivoli Desktop. For example, you cannot open policy regions or profiles. However, other products, such as the Tivoli Enterprise Console, can function because of their special states in the Tivoli Region. We recommend that you alert users that all Tivoli Management Framework-based products are expected to be non-functional while the Tivoli Region is in maintenance mode. If you need to use Tivoli Remote Control while the Tivoli Region is in maintenance mode, you should establish the session before the Tivoli Region is put into maintenance mode. After it is established, this session will not be impacted by maintenance mode.

8. Upgrade to 3.6.5.
9. Restart all managed nodes:
   odadmin reexec clients

10. After the nodes have restarted, run:
    odadmin reexec 1

11. If the machine is a gateway, then stop the gateway process:
    wgateway stop

12. Place the Tivoli Management Region in maintenance mode.
13. Use the Upgrade from 3.6.5 to 4.1 CD and upgrade to 4.1.
14. Shut down the oserv:
    odadmin shutdown all

15. Perform a file-level backup of the Tivoli file system.
16. If target nodes are NT or Windows 2000, reboot your system to ensure the proper reload of TivoliAP.dll.


17. Restart the oserv:
    odadmin start all

18. Install the 4.1 Java components from the Tivoli Management Framework 4.1 CD (Java, JavaHelp, Client Framework, JRIM, MDist 2 GUI).
19. If you will be using any applications that were designed to run on the 3.6 or 3.7 level Framework, then apply patches 4.1-TMF-0004, 4.1-TMF-0005, and 4.1-TMF-0006. If you will be using Tier 2 endpoints, then you will also need patch 4.1-TMF-0008 (which is at the 3.7 TMF patch 3.7-TMF-0044 level). The patches contain:
    – 3.6.5 lcf_bundle
    – 3.6.5 MN Libraries
    – 3.7.1 fixpack 3 MN Libraries
    – 3.7 patch 44 - supplies Tier 2 endpoint bundles for mips-irix5, osf-axp, reliant-unix, sequent, solaris2-ix86, and uw2-ix86 platforms

20. Run the odadmin shutdown all command.
21. Perform a file-level backup of the Tivoli file system.
22. Run the odadmin start all command.
23. Enable the allow_install policy, adjust the repeater timeout, and ensure that the Tivoli Management Region is no longer in maintenance mode.

4.5.2 Upgrade from 3.6.5 of Tivoli Management Framework

Using the Upgrade from 3.6.5 to 4.1 CD, you can now upgrade to Tivoli Management Framework 4.1 directly, without upgrading to 3.7B and 3.7.1 of Tivoli Management Framework. In this section we cover step-by-step instructions about how to upgrade from 3.6.5 to 4.1. Recommendations, planning, and preparations were covered in previous sections. Figure 4-1 on page 113 shows the upgrade path from the various releases.

Begin by changing the endpoint ‘allow_install’ policy to disallow ALL new logins (use the statement ‘exit 1’). Then:
1. Ensure that no scheduled routines will manipulate the TME environment. (These routines include such things as cron jobs using the wupdate command, activities on spoke Tivoli management regions, TME scheduled jobs, and so forth.)
2. Run the odadmin db_sync command. This will force all memory resident transactions to be written to disk.


3. Shut down the oserv on the Tivoli Management Region and do a file-level backup of the Tivoli file system:
   odadmin shutdown clients
   odadmin shutdown 1

4. Restart the oserv:
   odadmin start 1
   odadmin start clients

5. If the machine is a gateway, then stop the gateway process:
   wgateway stop

6. Put the Tivoli Management Region into maintenance mode.
7. Use the Upgrade from 3.6.5 to 4.1 CD and upgrade to 4.1.
8. Shut down the oserv:
   odadmin shutdown all

9. Perform a file-level backup of the Tivoli file system.
10. If target nodes are NT or Windows 2000, reboot your system to ensure the proper reload of TivoliAP.dll.
11. Restart the oserv:
    odadmin start all

12. Install the 4.1 Java components from the Tivoli Management Framework 4.1 CD (Java, JavaHelp, Client Framework, JRIM, MDist 2 GUI).
13. If you will be using any applications that were designed to run on the 3.6 or 3.7 level Framework, then apply patches 4.1-TMF-0004, 4.1-TMF-0005, and 4.1-TMF-0006. If you will be using Tier 2 endpoints, then you will also need patch 4.1-TMF-0008 (which is at the 3.7 TMF patch 3.7-TMF-0044 level). The patches contain:
    – 3.6.5 lcf_bundle
    – 3.6.5 MN Libraries
    – 3.7.1 fixpack 3 MN Libraries
    – 3.7 patch 44 - supplies Tier 2 endpoint bundles for mips-irix5, osf-axp, reliant-unix, sequent, solaris2-ix86, and uw2-ix86 platforms

14. Run the odadmin shutdown all command.
15. Perform a file-level backup of the Tivoli file system.
16. Run the odadmin start all command.
17. Enable the allow_install policy, adjust the repeater timeout, and ensure that the Tivoli Management Region is no longer in maintenance mode.


4.5.3 Upgrade from 3.7B and 3.7.1 of Tivoli Management Framework

In this section we cover step-by-step instructions about how to upgrade from 3.7B or 3.7.1 to 4.1. Recommendations, planning, and preparations were covered in previous sections. Figure 4-1 on page 113 shows the upgrade path from the various releases.
1. Change the endpoint ‘allow_install’ policy to disallow ALL new logins (use the statement ‘exit 1’).
2. Ensure that no scheduled routines will manipulate the TME environment. (These routines include such things as cron jobs using the wupdate command, activities on spoke Tivoli management regions, TME scheduled jobs, and so forth.)
3. Run the odadmin db_sync command. This will force all memory resident transactions to be written to disk.
4. Shut down the oserv on the Tivoli Management Region and do a file-level backup of the Tivoli file system:
   odadmin shutdown clients
   odadmin shutdown 1

5. Restart the oserv:
   odadmin start 1
   odadmin start clients

6. If the machine is a gateway, then stop the gateway process:
   wgateway stop

7. Put the Tivoli Management Region into maintenance mode.
8. Use the 4.1 Upgrade CD and upgrade to 4.1.
9. Shut down the oserv:
   odadmin shutdown all

10. Perform a file-level backup of the Tivoli file system.
11. If target nodes are NT or Windows 2000, reboot your system to ensure the proper reload of TivoliAP.dll.
12. Restart the oserv:
    odadmin start all

13. Upgrade the 4.1 Java components from the Tivoli Management Framework 4.1 CD (Java, JavaHelp, Client Framework, JRIM, MDist 2 GUI).
14. If you will be using any applications that were designed to run on the 3.6 or 3.7 level Framework, then apply patches 4.1-TMF-0004, 4.1-TMF-0005, and 4.1-TMF-0006. If you will be using Tier 2 endpoints, then you will also need


patch 4.1-TMF-0008 (which is at the 3.7 TMF patch 3.7-TMF-0044 level). The patches contain:
    – 3.6.5 lcf_bundle
    – 3.6.5 MN Libraries
    – 3.7.1 fixpack 3 MN Libraries
    – 3.7 patch 44 - supplies Tier 2 endpoint bundles for mips-irix5, osf-axp, reliant-unix, sequent, solaris2-ix86, and uw2-ix86 platforms

Note: You should not install patches 4.1-TMF-0004, 4.1-TMF-0005, and 4.1-TMF-0006 if you have patches 3.7.1-TMF-0088, 3.7.1-TMF-0089, and 3.7.1-TMF-0091 (or later), or a later 3.7.1 fixpack, installed. 3.7.1-TMF-0088, 3.7.1-TMF-0089, and 3.7.1-TMF-0091 (and later) supersede 4.1-TMF-0004, 4.1-TMF-0005, and 4.1-TMF-0006.

15. Run the odadmin shutdown all command.
16. Perform a file-level backup of the Tivoli file system.
17. Run the odadmin start all command.
18. Enable the allow_install policy, adjust the repeater timeout, and ensure that the Tivoli Management Region is no longer in maintenance mode.

4.6 Post upgrade tasks

We recommend that you run a set of tasks and validations after you finish the upgrade procedures. These tasks include not only standard Tivoli commands, but also the various customizations and tasks that are in use in your environment. These include the following:
- Run the wmannode command on all managed nodes.
- Run the wep status command.
- Launch the Tivoli Desktop.
- Check tasks and scheduled jobs.
- Run the wbkupdb command.
- Check any scripts in cron.
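The validation commands above can be driven from a small harness that runs each check and reports the result. The harness itself is ordinary POSIX shell; the Tivoli commands you feed it are whatever applies in your region (wmannode takes per-node arguments, for example), so treat the example list as illustrative:

```shell
#!/bin/sh
# Read one check command per line from stdin, run each, and report
# PASS/FAIL. Returns the number of failed checks.

run_checks() {
    fails=0
    while IFS= read -r cmd; do
        [ -n "$cmd" ] || continue
        if sh -c "$cmd" >/dev/null 2>&1; then
            echo "PASS: $cmd"
        else
            echo "FAIL: $cmd"
            fails=$((fails + 1))
        fi
    done
    return "$fails"
}

# Example post-upgrade run (commands from the list above):
#   printf '%s\n' 'wep status' 'wbkupdb' | run_checks
```

Capturing the PASS/FAIL lines in a file gives you a record of the post-upgrade validation for each region.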


Chapter 5. Package migration

This chapter covers the migration of Software Distribution Version 3.6.x file packages to the Version 4.x software packages. Due to differences in packaging technology, file packages created in Software Distribution 3.6.x cannot be used in the 4.x environment. This means that unless you intend to run both versions 3.6.X and 4.X in parallel, you will need to convert all 3.6.X file packages to Version 4.X software packages.

We discuss these topics:
- “Deciding on the migration of packages” on page 124
- “Planning for the package migration” on page 124
- “Mapping package differences” on page 127
- “Migration” on page 129

© Copyright IBM Corp. 2002. All rights reserved.


5.1 Deciding on the migration of packages

Migrating packages can be a complex task, depending on the complexity of a package. In some instances, changes are required in order to get the software package to work. We also highly recommend that you re-test all migrated packages to ensure that they are working as expected. For these reasons, it might be best to recreate your packages rather than migrating them. At a minimum, after migrating the packages, we recommend that you open the packages with the Software Package Editor and make the necessary changes to exploit the new features available with the Version 4.X levels of the product.

Note: By Version 4.x, we refer to the following products:
- Software Distribution V4.0
- Software Distribution V4.1
- IBM Tivoli Configuration Manager V4.2

Migrated packages can be used in any of these products, but since our main focus is IBM Tivoli Configuration Manager V4.2, we assume that you are working in a Framework V4.1 and IBM Tivoli Configuration Manager V4.2 environment. You can use the packages created in Software Distribution 4.x as is in IBM Tivoli Configuration Manager V4.2. But if you want to use some new features available with IBM Tivoli Configuration Manager V4.2 for the Software Distribution 4.x packages, you can always open these packages in the Software Package Editor (unless you migrate to software package block format) and make the necessary changes in these packages to leverage these new features.

5.2 Planning for the package migration

Once you decide to migrate your packages, the following preparation tasks need to be performed.

5.2.1 Setup the migration environment

The following products must be installed in the environment in order to perform package migration:
- Framework V4.1
- IBM Tivoli Configuration Manager V4.2

Package migration is also supported for Framework 3.7.x and Software Distribution 4.x environments, but since our main focus is IBM Tivoli


Configuration Manager V4.2, we assume that you are working in a Framework V4.1 and IBM Tivoli Configuration Manager V4.2 environment.

Note: Autopack migration is only supported on a Windows NT managed node. File Package Block migration is supported on the following platforms:
- Windows NT/2000
- HP-UX
- Solaris
- AIX

You first need to create a profile manager where the migrated packages will be created. The profile manager must be created in a policy region that supports SoftwarePackage as a managed resource. Do the following:
1. Create a policy region, such as ws.prod.swd.er1.pr.
2. Right-click on the policy region and select Managed Resources, as in Figure 5-1.

Figure 5-1 Managed Resources

3. In the Set Managed Resources menu, ProfileManager and SoftwarePackage must be displayed in the current resources column, as shown in Figure 5-2.


Figure 5-2 Managed Resources menu

4. To create a profile manager that will contain the migrated package, do the following:
a. Double-click on the policy region that will hold migrated packages.
b. Select Create -> ProfileManager from the Policy Region dialog menu bar, as shown in Figure 5-3.

Figure 5-3 Select Create->ProfileManager


5. Enter the name of the profile manager that will be used as a container for migrated packages. This is shown in Figure 5-4.

Figure 5-4 Create Profile Manager

6. Copy the source files, Autopack objects, or .PAK files to the IBM Tivoli Configuration Manager V4.2 server.

5.3 Mapping package differences

We discuss a number of things that you need to be aware of regarding the mapping of package differences.

5.3.1 Software package attribute mapping

When migrating packages, old software package attributes are mapped to the new attributes that are supported in IBM Tivoli Configuration Manager V4.2. Refer to Table 51 in Chapter 6 of the Reference Manual for Software Distribution, SC23-4712, for a full list of attribute mappings. Understanding attribute mapping will help when troubleshooting package migration.

Note: When distributing an Autopack that was migrated using the wfptosp command, a migr_autopack directory is created on the target system during distribution.


5.3.2 Commands translation

Similar to attributes, commands are also mapped to IBM Tivoli Configuration Manager V4.2 commands during the migration. Refer to Table 52 in Chapter 9 of the Reference Manual for Software Distribution, SC23-4712, for a full description of the Version 3.6 to 4.X (including IBM Tivoli Configuration Manager) command mapping.

5.3.3 Package name translation

It is important to consider name changes when migrating packages. You must decide on a naming convention for the new software packages that will be created. In Version 3.6.x file packages, an environment variable, $fpname, is used to identify the name of the file package. The $fpname variable is migrated to the Version 4.x package using the environment variable fpname, which is defined in the default stanza. This variable is automatically modified to SP_$fpname^1.0. For example, if the file package name is "itso", the value of fpname will be:
SP_itso^1.0

The software package definition will have the following structure:
package
  name = "itso"
  title = "Migrated file package"
  version = "1.0"
  ...
  default_variables
    fpname = "SP_itso^1.0"

Software Distribution 4.x supports versioning, which was not supported in 3.6.x. Because 3.6.x packages do not have a version number, 1.0 is automatically assigned to all migrated packages.
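The fpname translation described above can be sketched as a one-line shell function; to_sp_name is an illustrative name for this sketch, not a product command:

```shell
#!/bin/sh
# Map a 3.6.x file package name to the fpname value the migration assigns:
# SP_<name>^<version>, with 1.0 used when the package has no version.

to_sp_name() {   # usage: to_sp_name <file_package_name> [version]
    printf 'SP_%s^%s\n' "$1" "${2:-1.0}"
}

# Example:
#   to_sp_name itso        # -> SP_itso^1.0
```

This makes the default explicit: only the version defaults, while the name is always taken from the original file package.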

5.3.4 Destination name translation

The xxx_platform_prefix is converted to $(target_dir)$(os_name). The os_name variable will be converted to the appropriate value during a Version 4.x software distribution. default_dest becomes the destination directory for all file objects where the full name is not specified. This variable stays the same regardless of the platform type that is being distributed to.

5.3.5 Software Distribution actions

Table 5-1 contains the Software Distribution action mapping:


Table 5-1 Software Distribution mapping

Software Distribution 3.6 action    Software Distribution 4.X action
Before, After                       Install
Removal, After Removal              Remove
Commit                              Commit
On_error                            Cleanup (The program always runs, not only in case of error)

5.4 Migration

To migrate a file package to a software package object, you must use the wfptosp command. The wfptosp command requires the file package definition file or the file package name as input. You must have Software Distribution 3.6.x installed to use the file package name option. In this scenario we use the file package definition file option. The procedure is as follows:

Note: Using the export function in Software Distribution 3.6.x, you can create a file package definition file.

1. Copy the file package definition file (.fpd) to a directory on the software distribution server, as shown below:
   copy WinNTSP6.fpd c:\temp\migrate

Note: The file package definition file contains only the package definition and attributes, not the source files. Source files specified in the file package definition file must be available on the source host in order for the software package to work. The source file path names must match those in the file package definition file.

2. Use the wfptosp command to migrate the file package definition to a software package object, as shown in Example 5-1. Please refer to the Reference Manual for Software Distribution, SC23-4712, for a detailed discussion of all options of the wfptosp command.

Example 5-1 wfptosp
cd %BINDIR%\TME\SWDIS\MIGRATION
bash


./wfptosp -c migrated_packages.pm -h TIVER1SRC1DV1 -n WinNTSP^6.0 c:/temp/migrate/WinNTSP6.fpd

Where:
- -c migrated_packages.pm is the profile manager that was created in the preparation stage.
- -h TIVER1SRC1DV1 is the source host on which the migrated software package will be created.
- c:/temp/migrate/WinNTSP6.fpd is the full path of the file package definition file.

Tips:
- The wfptosp command must be executed in bash on a Windows NT platform. You must also change the current directory to %BINDIR%\TME\SWDIS\MIGRATION before running the command.
- The -n option must be used when migrating a file package definition file. Otherwise, the software package name will be SP_^1.0. If the version number is not specified in the File Package or Autopack, the default 1.0 is used. You may override this by using the -n option with a version number.

3. It is very important to verify that the file package was migrated correctly. To verify that the software package was migrated correctly, do the following:
a. Double-click the profile manager containing the software package.
b. Right-click on the software package object and select Properties, as shown in Figure 5-5.


Figure 5-5 The migrated software package properties

c. From the Properties Software Package dialog box, click the Launch Software Package Editor button. This launches the Software Package Editor, as shown in Figure 5-6. Use the Software Package Editor to verify the configuration of the software package and, optionally, add new options to leverage the functions of IBM Tivoli Configuration Manager V4.2.

Note: You must have the Software Package Editor installed in order to use the Launch Software Package Editor button.


Figure 5-6 Launch Software Package Editor

The migration of file package blocks and Autopacks is the same as for a file package definition file, with the exception that no source files are required, since these formats contain the source files. A file package block to software package block migration must be done on the same platform on which the file package block was created. For example, if a file package block was created on a Windows NT platform, it must be migrated on a Windows NT TMR.

Tips:
- To migrate a file package, file package block, or Autopack directly to a software package block (built format), use the -p option followed by the full path to the software package block file name. You must use the -a option, followed by the Autopack name or the full path to the .PAK file, when migrating an Autopack.
- You may need to make minor changes to the package after the migration is complete. This is not possible if you migrate directly to software package block format. You can migrate to a software package definition file or software package object, make changes to the package, and then convert the package to a software package block.
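If you have a directory full of exported .fpd files, the wfptosp invocations can be generated in a loop. This is a dry-run sketch that only prints the commands it would run (remove the echo to execute them from bash in %BINDIR%\TME\SWDIS\MIGRATION, as noted above); the profile manager and source host names are illustrative:

```shell
#!/bin/sh
# Print a wfptosp command line for every .fpd file in a directory, naming
# each migrated package after its definition file with version 1.0.

batch_migrate() {   # usage: batch_migrate <fpd_dir> <profile_mgr> <src_host>
    for fpd in "$1"/*.fpd; do
        [ -e "$fpd" ] || continue
        name=$(basename "$fpd" .fpd)
        # -n supplies the package name^version (the default would be SP_^1.0)
        echo ./wfptosp -c "$2" -h "$3" -n "${name}^1.0" "$fpd"
    done
}

# Example:
#   batch_migrate /tmp/migrate migrated_packages.pm TIVER1SRC1DV1
```

Reviewing the printed commands before running them is a cheap way to catch naming mistakes across a large batch of packages.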


Abbreviations and acronyms

ACL      Access Control List
AP       Activity Planner
APE      Activity Plan Editor
APM      Activity Plan Monitor
CM       Change Manager
CORBA    Common Object Request Broker Architecture
DHCP     Dynamic Host Configuration Protocol
DMI      Desktop Management Interface
EP       Endpoint
GA       General Announce
IBM      International Business Machines Corporation
ISMP     Install Shield MultiPlatform
ITCM     IBM Tivoli Configuration Manager
ITSO     International Technical Support Organization
JSP      Java Server Pages
LDAP     Lightweight Directory Access Protocol
MDist    Multiplex Distribution
MIF      Management Information Format
MSI      Microsoft Software Installer
RDBMS    Relational Database Management System
RPM      Red Hat Package format
SMBIOS   System Management BIOS
SP       Software Package
SPB      Software Package Block
SPD      Software Package Definition
SSL      Secure Socket Layer
TMR      Tivoli Management Region
TRM      Tivoli Resource Manager
TWG      Tivoli Web Gateway
V        Version, such as V4.2
XML      Extensible Markup Language

Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information on ordering these publications, see “How to get IBM Redbooks” on page 136.
- Tivoli Inventory Version 4.0 - Migration Guide from Version 3.6.2, SG24-7020
- All about IBM Tivoli Configuration Manager Version 4.2, SG24-6612

Other resources

These publications are also relevant as further information sources:
- IBM Tivoli Configuration Manager Planning and Installation, V4.2, GC23-4702
- IBM Tivoli Configuration Manager Database Schema Reference, SC23-4783
- Tivoli Management Framework Release Notes V4.1, GI11-0890
- IBM Tivoli Configuration Manager Read Me First V4.2, GI11-0934
- IBM Tivoli Configuration Manager Introduction, GC23-4703
- IBM Tivoli Configuration Manager Reference Manual for Software Distribution, SC23-4712
- IBM Tivoli Configuration Manager User’s Guide for Inventory, SC23-4713
- Tivoli Framework Planning for Deployment Guide, GC32-0393

Referenced Web sites

These Web sites are also relevant as further information sources:
- Field Guides Web site
  http://www-3.ibm.com/software/sysmgmt/products/support/Field_Guides.html
- Tivoli support Web site
  https://www.tivoli.com/secure/Tivoli_Electronic_Support/


- Tivoli archives ftp site
  ftp://www.redbooks.ibm.com/redbooks/tme10_archive/

How to get IBM Redbooks

You can order hardcopy Redbooks, as well as view, download, or search for Redbooks at the following Web site:
ibm.com/redbooks

You can also download additional materials (code samples or diskette/CD-ROM images) from that site.

IBM Redbooks collections

Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web site for information about all the CD-ROMs offered, as well as updates and formats.


Index Symbols /states 72

A Aborted 43 Access count 62 active distributions 39, 51 Activity Planner 14 AIX 16 allow_install policy 99 allow_policy 119 Always option 28 amount of receivers 22 Architectural differences 22 architecture 112 ASCII/EBCDIC conversion 14 Asynchronous delivery 9, 43 Asynchronous interface 43 attribute mapping 127 auto upgrade 116 Automatic undo 8 AutoPack 7, 92

B bandwidth 38 BARC 36 BasicInventoryConfig 94 BasicInventoryProfile 94 Before, After, Removal and Commit See BARC 36 broken connection 74 built-in actions 21 bundled installation 21 business value 1 Byte-level differencing 13

C cache 9, 43 callback 44 cancel 72 Canceling a distribution 81 Change Management 14

© Copyright IBM Corp. 2002. All rights reserved.

checkpoint restart 9, 71 checksum options 12 CM See Change Management 14 code page translation 14 Commands translation 128 concurrent connections 46, 50 configuration repository 11 configuration, installation, and distribution See CID 7 conn_retry_cutoff 76 conn_retry_interval 49, 72, 74, 76 connections 51, 53 CORBA 26 cron 122

D Data flow 43 Data Moving 14 decommission first, upgrade later approach 87 default priority 48 Delivery operation 44 delta package 13 Demilitized Zone 16 Dependency 13 Depot 41, 60, 63 Depot commands delete 65 list 65 purge 65 Depot configuration 63 Desktop Management Interface See DMI 12 disconnected support 76 disk_dir 34 disk_hiwat 34 disk_max 34, 58, 60 disk_time 34 Disposable 67 Distribution control 78 Distribution status 78 Canceled 79 Expired 79

137

Failed 79 Interrupted 79 Paused 78 Receiving 78 Rejected 79 Sending 79 Successful 79 Unavailable 79 Waiting 78 DMI 12 DMI scan data 11 DMZ 16

E endpoint 13, 24, 36, 82, 112 endpoint gateway 70, 77 endpoint login_policy 116 Endpoint-initiated scan 13 establish connection 52 execution_timeout 75–76 exit status 44

F fan-out 53, 69 file package block 69 file server 15 file system 34, 59 File version support 9 file-level backup 118 firewall 16 Framework 15 fresh installation 101 From Depot 66, 70 Full CRC 12

G
gatelog 26
gateway 68
Gateway repeater 42, 53
Gateway Session timeout 36
Global Repeater Manager timeout 35

H
Header scan 12
High level TCP timeout 36
home style approach 86
hostname command 116
HP system 92

I
IBM Tivoli Configuration Manager 15
idlcall 89
Improved Hardware Scanning 11
Improved Software Scanning 11
In use count 62
In-line installation
  Advantages 105
  Considerations 106
  Dependencies 105
  Topology considerations 106
in-line installation 104
Install from CD 15
InstallShield 8
Integration with Enterprise Directories 20
Intel Wired for Management specification 13
interconnected TMR 33
intermediate client 41
interoperability 92
interruption 71
inv_query 93
Inventory Version 3.6.2 97
Inventory Version 3.6.2 configuration repository 90
Inventory Version 4.0 11, 97
Inventory Version 4.2 90, 97
InventoryConfig 94
InventoryProfile 94

J
Java component 22
Java interface 42

L
LAN 69–70
LAN repeaters 69
large software packages 70
LDAP directory query 4
LDAP server 4
Linux 16
load a software package 66
load depot 64
Local Area Network See LAN
local hierarchy 69

Migration to IBM Tivoli Configuration Manager Version 4.2

M
maintenance mode 118
managed node 24, 112
Managed node repeater 42
master database 20
max_conn 35
max_conn parameter
  MDist 1 52
  MDist 2 51
max_sessions_high 47
max_sessions_low 47
max_sessions_medium 47
MDist 1 59
MDist 2 46, 55
MDist 1 comparison 43
MDist 2 components
  Distribution manager 42
  GUI 42
  Repeater depot 41
  Repeater manager 41
  Repeater queue 41
  Repeater site 41
MDist 2.bdb 72
MDist 2.log 72
MDist2 9, 22
MDist2 distributions 9
MDist2 service 9
mem_max 34, 58
memory resident transactions 119
Microsoft Installer support 14
Microsoft Setup 8
Microsoft Software Installer See MSI 8
migration 23, 84, 90, 96–97, 129
minimize costs 2
Mobile endpoint support 13
Mobile support 14
MSI 8
multicast 22
Multicast distribution 22
multicasting 16
multiple connections 50
multiple distributions 54
multiplexed distribution 9

N
Native packaging support 21
negative net_load 56
negative number 56
Net Load 37
net_load 35, 54
net_load parameter
  MDist 1 55
  MDist 2 55
net_spacing 35
network bandwidth 54, 56–57
network resources 50
network traffic 22
new features 15
New Web GUI 16
next tier repeater 74
Noalways 28
Noalways option 28
Nodefault 28
Nokia Communicator 15
notify_interval 80
Nowan option 28

O
object repository enhancements 112
odadmin 38, 89
odadmin db_sync 117
OID 31
one that stayed behind approach 87
one-to-one TCP connection 22
OS/2 12
OS/400 endpoint support 15
otherpages 115

P
package differences 127
paging file 34
PalmOS 15
Patchadd 21
pause 72
Pausing a distribution 81
PC hardware scan 11
PC managed node 24, 112
PcManagedNode 36
PDF importer tool 8
Per distribution 43
Per repeater 43
permanent_storage 63, 67
persistent information 72
Pkgadd 21
PocketPC 15
policy object 94
positive net_load 56
prerequisite 14
Priority level 46, 48
Pristine Operating System install 14
progs_timeout 36

Q
Query libraries 93
queuing mechanism 41
Quick CRC 12

R
Redbooks Web site 136
  Contact us xvii
Redesigned schema 12
reduce network traffic 13
reexec 89
remote Tivoli Desktop 22
repeater 53, 64, 68
Repeater depot 41
Repeater manager 26
Repeater queue 74
Repeater sites 68
repeater timeout 119
resource limits 50
Resource Manager 20
re-submit 76
resume 72
Resuming a distribution 81
retry function 41
retry_ep_cutof 72
RIM database 4, 78
rm_od 89
RPM 21
rpt_dir 64

S
same code base 42
satellite 22
Scanning checksum values 12
Scanning Windows registry 12
schema 12, 93, 97
schema mappings 97
security limitations 16
segment 60
send_timeout 75–76
serial number 62
session_timeout 35
shadow installation 101
Shadow migration 101
  Advantages 102
  Considerations 103
  Required steps 103
  Topology considerations 103
shared connections 53
shared object support 9
signed applet 16
SinglePortBDT 114
slow link 37, 114
slow network link 22
SLOW_LINK parameter 57
snapshot 8
Software Depot 62
Software Distribution 3.6 8
Software Distribution 3.6.x and ITCM differences
  MDist2 22
  software distribution targets 24
  software packages 23
  source host 23
Software Distribution 4.0 8
Software Distribution 4.1 13, 15
Software Distribution gateways 99
software package 21
software package block repository 15
Software Package Editor 8
software signature 11
Solaris 16
Source host 43
Source host component 23
Source repair functionality 9
spobj_name 64
spoke Tivoli management regions 117
spooling 58
SQL script 11
SSL 114
standard SQL commands 20
Stat_intv timeout 36
stat_inv 35
storage mechanism 41
store permanently 60
store temporarily 67
store-and-forward 43, 60
Submission 45
subscribers 52
supported database level 97
Swing 116
synchronous transfer 40
system hang 60
system performance 50

T
table of contents 62
target_net_load 54–55
TCP connections 53
TCP/IP connection 15
testing the Tivoli database 88
third-party packaging formats 8
throughput 45, 69
Tier 2 support 90
tighter integration with Inventory 22
Time-out 35, 74
Time-out Settings 75
  Deadline 75
  Execution Timeout 75
  Notification Interval 75
  Send Timeout 75
Tivoli Desktop 22, 122
Tivoli Enterprise Console 110
Tivoli Enterprise Data Warehouse 5
Tivoli file system 119
Tivoli Framework tasks 14
Tivoli Framework V3.7 25
Tivoli Inventory 12
Tivoli Inventory database 11
Tivoli Inventory graphical user interface 11
Tivoli Inventory Version 3.6.2 97
Tivoli Inventory Version 4.0 97
Tivoli Management Framework 96
Tivoli Management Framework V 3.7 40
Tivoli Management Framework, Version 3.7B 114
Tivoli Management Region See TMR
Tivoli Region 100
Tivoli Remote Control 118
TivoliAP.dll 118
TME Agent 24
TMR 28, 42
total number of sessions 51
tuning parameters 33

U
UDP broadcast packets 22
unicast 3
UNIX hardware scan 11
UNIX software signatures 12
unload depot 64
Update count 62
upgrade 97–99
upgrade or migration 112
Upgrade Plan Generator 99

V
Variables 7
Verification of installed software package 9
verify operation 9
Version check 13

W
Wake-on-LAN technology 13
wake-on-LAN-enabled 13
WAN 69
Wan option 28
wapmplugin 109
wapmrim 95
wccmplugin 109
wchkdb 89, 108
wdepot 65
wdepot describe 63
Web Gateway 19
WEB UI 15
Web users 15
wep 31
wgetadmin 88
Wide Area Network See WAN
winstsp 70
Wizard 21
wldsp 64, 70
wlsconn 30, 89
wlsinst 89, 100
wmannode 122
wmdist 63, 71, 81
wrpt 26, 55
wsetiprf 36
wswdcfg 109
wswdmgr 109
wuldsp 64
wupdate 32, 107, 112, 117

X
XML format 100


Back cover


Migration to IBM Tivoli Configuration Manager Version 4.2

Learn the business reasons for migration to ITCM

Software Distribution, Inventory and Framework migration

Covers best practices for migration

Tivoli Inventory and Tivoli Software Distribution have evolved to become smarter, faster, and more efficient since the earlier 3.6.x versions. IBM Tivoli Configuration Manager Version 4.2 uses all the best features of these post-3.6 versions and also adds new features and enhancements to create a powerful deployment, change, and asset management suite.

This IBM Redbook explains both the business reasons and the technical implementation details for migrating from Software Distribution and Inventory 3.6.x to IBM Tivoli Configuration Manager Version 4.2. The topics include:

- Business reasons for migration
- Functional and architectural differences between IBM Tivoli Configuration Manager and 3.6.x versions of Software Distribution and Inventory
- Planning and methodology of migration
- Framework migration
- Migration scenarios
- Package migration

This redbook will help you in all aspects of migration from Software Distribution and Inventory 3.6.x to IBM Tivoli Configuration Manager Version 4.2.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

SG24-6616-00

ISBN 0738428302