IBM Tivoli Storage Area Network Manager: A Practical Introduction
SG24-6848
Charlotte Brooks, Michel Baus, Michael Benanti, Ivo Gomilsek, Urs Moser
ibm.com/redbooks
International Technical Support Organization
IBM Tivoli Storage Area Network Manager: A Practical Introduction
September 2003
SG24-6848-01
Note: Before using this information and the product it supports, read the information in Notices on page xxi.
Second Edition (September 2003)
This edition applies to IBM Tivoli Storage Area Network Manager (product number 5698-SRS) and IBM Tivoli Bonus Pack for SAN Management (product number 5698-SRE).
Copyright International Business Machines Corporation 2002, 2003. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures  xi
Tables  xix
Notices  xxi
Trademarks  xxii
Preface  xxiii
The team that wrote this redbook  xxiii
Become a published author  xxv
Comments welcome  xxv
Summary of changes  xxvii
September 2003, Second Edition  xxvii

Part 1. Introduction  1

Chapter 1. Introduction to Storage Area Network management  3
1.1 Why do we need SAN management?  4
1.1.1 Storage management issues today  4
1.1.2 Current generation of SAN management: spreadsheets and paper  7
1.2 New tools for SAN management are needed  10
1.2.1 Storage management components  11
1.2.2 Standards and SAN management tools  12
1.2.3 Discovery  15
1.2.4 Outband management  16
1.2.5 Inband management  16
1.2.6 Why you might use both inband and outband discovery  17
1.2.7 Formal standards for outband management  18
1.2.8 Formal standards for inband management  19
1.2.9 The future of SAN management standards  20
1.2.10 Summary  25

Chapter 2. Introduction to IBM Tivoli Storage Area Network Manager  27
2.1 Highlights: What's new in Version 1.2  28
2.1.1 Discovery of iSCSI  28
2.1.2 Event Detection and Fault Isolation (ED/FI - SAN Error Predictor)  28
2.1.3 IBM Tivoli Enterprise Data Warehouse (TEDW)  28
2.1.4 IBM Tivoli SAN Manager on AIX  28
2.1.5 Embedded WebSphere  28
2.1.6 Operating system support  29
2.1.7 Other changes  29
2.2 IBM Tivoli SAN Manager overview  29
2.2.1 Business purpose of IBM Tivoli SAN Manager  29
2.2.2 Components of IBM Tivoli SAN Manager  29
2.2.3 Supported devices for Tivoli SAN Manager  31
2.3 Major functions of IBM Tivoli SAN Manager  31
2.3.1 Discover SAN components and devices  32
2.3.2 Deciding how many Agents will be needed  34
2.3.3 How is SAN topology information displayed?  35
2.3.4 How is iSCSI topology information displayed?  36
2.4 SAN management functions  36
2.4.1 Discover and display SAN components and devices  37
2.4.2 Log events  45
2.4.3 Highlight faults  46
2.4.4 Provide various reports  47
2.4.5 Launch vendor management applications  49
2.4.6 Displays ED/FI events  50
2.4.7 Tivoli Enterprise Data Warehouse (TEDW)  51
2.5 Summary  51
Part 2. Design considerations  53

Chapter 3. Deployment architecture  55
3.1 Overview  56
3.2 Fibre Channel standards  56
3.2.1 Interoperability  56
3.2.2 Standards  57
3.3 Hardware overview  57
3.3.1 Host Bus Adapter  58
3.3.2 Cabling  58
3.4 Topologies  63
3.4.1 Point-to-point  63
3.4.2 Arbitrated loop  64
3.4.3 Switched fabrics  65
3.5 IBM Tivoli SAN Manager components  66
3.5.1 DB2  66
3.5.2 IBM Tivoli SAN Manager Console (NetView)  66
3.5.3 Tivoli SAN Manager Agents  66
3.5.4 Tivoli SAN Manager Server  66
3.5.5 SAN physical view  67
3.6 Management  68
3.6.1 Inband management  68
3.6.2 Outband management  69
3.7 Deployment considerations  70
3.7.1 Tivoli SAN Manager Server  70
3.7.2 iSCSI management  71
3.7.3 Other considerations  72
3.7.4 Tivoli SAN Manager Agent (Managed Host)  72
3.8 Deployment scenarios  76
3.8.1 Example 1: Outband only  76
3.8.2 Example 2: Inband only  81
3.8.3 Example 3: Inband and outband  84
3.8.4 Additional considerations  87
3.9 High Availability for Tivoli SAN Manager  89
3.9.1 Standalone server failover  89
3.9.2 Summary  92
Part 3. Installation and basic operations  93

Chapter 4. Installation and setup  95
4.1 Supported operating system platforms  96
4.2 IBM Tivoli SAN Manager Windows Server installation  96
4.2.1 Lab environment  96
4.2.2 Preinstallation tasks  97
4.2.3 DB2 installation  98
4.2.4 Upgrading DB2 with Fix Pack 8  99
4.2.5 Install the SNMP service  100
4.2.6 Checking for the SNMP community name  101
4.2.7 IBM Tivoli SAN Manager Server install  102
4.2.8 Verifying the installation  110
4.3 IBM Tivoli SAN Manager Server AIX installation  111
4.3.1 Lab environment  111
4.3.2 Installation summary  111
4.3.3 Starting and stopping the AIX manager  111
4.3.4 Checking the log files  112
4.4 IBM Tivoli SAN Manager Agent installation  112
4.4.1 Lab environment  112
4.4.2 Preinstallation tasks  112
4.4.3 IBM Tivoli SAN Manager Agent install  112
4.4.4 Configure the Agent service to start automatically  117
4.5 IBM Tivoli SAN Manager Remote Console installation  119
4.5.1 Lab environment  119
4.5.2 Preinstallation tasks  119
4.5.3 Installing the Console  119
4.5.4 Check if the service started automatically  125
4.6 IBM Tivoli SAN Manager configuration  126
4.6.1 Configuring SNMP trap forwarding on devices  126
4.6.2 Configuring the outband agents  130
4.6.3 Checking inband agents  132
4.6.4 Performing initial poll and setting up the poll interval  132
4.7 Tivoli SAN Manager upgrade to Version 1.2  133
4.7.1 Upgrading the Windows manager  134
4.7.2 Upgrading the remote console  134
4.7.3 Upgrading the agents  135
4.8 Tivoli SAN Manager uninstall  135
4.8.1 Tivoli SAN Manager Server Windows uninstall  135
4.8.2 Tivoli SAN Manager Server AIX uninstall  136
4.8.3 Tivoli SAN Manager Agent uninstall  136
4.8.4 Tivoli SAN Manager Remote Console uninstall  137
4.8.5 Uninstalling the Tivoli GUID package  138
4.9 Silent install of IBM Tivoli Storage Area Network Manager  139
4.9.1 Silent installation high level steps  139
4.9.2 Installing the manager  140
4.9.3 Installing the agent  142
4.9.4 How to install the remote console  144
4.9.5 Silently uninstalling IBM Tivoli Storage Area Network Manager  145
4.10 Changing passwords  146

Chapter 5. Topology management  149
5.1 NetView navigation overview  150
5.1.1 NetView interface  150
5.1.2 Maps and submaps  150
5.1.3 NetView window structure  150
5.1.4 NetView Explorer  151
5.1.5 NetView Navigation Tree  153
5.1.6 Object selection and NetView properties  153
5.1.7 Object symbols  155
5.1.8 Object status  155
5.1.9 Status propagation  157
5.1.10 NetView and IBM Tivoli SAN Manager integration  157
5.2 Lab 1 environment description  158
5.3 Topology views  160
5.3.1 SAN view  162
5.3.2 Device Centric View  166
5.3.3 Host Centric View  167
5.3.4 iSCSI discovery  168
5.3.5 MDS 9000 discovery  169
5.4 SAN menu options  170
5.4.1 SAN Properties  170
5.5 Application launch  174
5.5.1 Native support  174
5.5.2 NetView support for Web interfaces  175
5.5.3 Non-Web applications  177
5.5.4 Launching IBM Tivoli Storage Resource Manager  179
5.5.5 Other menu options  179
5.6 Status cycles  180
5.7 Practical cases  182
5.7.1 Cisco MDS 9000 discovery  182
5.7.2 Removing a connection on a device running an inband agent  184
5.7.3 Removing a connection on a device not running an agent  187
5.7.4 Powering off a switch  190
5.7.5 Running discovery on a RNID-compatible device  193
5.7.6 Outband agents only  195
5.7.7 Inband agents only  197
5.7.8 Disk devices discovery  200
5.7.9 Well placed agent strategy  202
5.8 Summary  204
Part 4. Advanced operations  205

Chapter 6. NetView Data Collection, reporting, and SmartSets  207
6.1 Overview  208
6.1.1 SNMP and MIBs  208
6.2 NetView setup and configuration  210
6.2.1 Advanced Menu  211
6.2.2 Copy Brocade MIBs  211
6.2.3 Loading MIBs  212
6.3 Historical reporting  215
6.3.1 Creating a Data Collection  216
6.3.2 Database maintenance  224
6.3.3 Troubleshooting the Data Collection daemon  224
6.3.4 NetView Graph Utility  225
6.4 Real-time reporting  227
6.4.1 MIB Tool Builder  228
6.4.2 Displaying real-time data  231
6.4.3 SmartSets  235
6.4.4 SmartSets and Data Collections  243
6.4.5 Seed file  246
7.1 What is iSCSI?  254
7.2 How does iSCSI work?  254
7.3 IBM Tivoli SAN Manager and iSCSI  255
7.3.1 Functional description  256
7.3.2 iSCSI discovery  256
7.4 Summary  257

Chapter 8. SNMP Event notification  259
8.1 Overview  260
8.2 Introduction to Tivoli NetView  260
8.2.1 Setting up the MIB file in Tivoli NetView  260
8.3 Introduction to IBM Director  263
8.3.1 Event forwarding from IBM Tivoli SAN Manager to IBM Director  263

Chapter 9. ED/FI - SAN Error Predictor  267
9.1 Overview  268
9.2 Error processing  269
9.3 Configuration for ED/FI - SAN Error Predictor  271
9.4 Using ED/FI  274
9.4.1 Searching for the faulted device on the topology map  276
9.4.2 Removing notifications  278
Part 5. Maintenance  281

Chapter 10. Protecting the IBM Tivoli SAN Manager environment  283
10.1 IBM Tivoli SAN Manager environment  284
10.1.1 IBM Tivoli NetView  284
10.1.2 Embedded IBM WebSphere Application Server  284
10.1.3 IBM Tivoli SAN Manager Server  284
10.1.4 IBM Tivoli SAN Manager Agents  285
10.2 IBM Tivoli Storage Manager integration  285
10.2.1 IBM Tivoli Storage Manager  285
10.2.2 Setup for backing up IBM Tivoli SAN Manager Server  286
10.2.3 Tivoli Storage Manager server configuration  286
10.2.4 Tivoli Storage Manager client configuration  288
10.2.5 Additional considerations  291
10.3 Backup procedures  291
10.3.1 Agent files  291
10.3.2 Server files  293
10.3.3 ITSANMDB Database  296
10.4 Restore procedures  301
10.4.1 Restore Agent files  302
10.4.2 IBM Tivoli SAN Manager Server files  305
10.4.3 ITSANMDB database  307
10.5 Disaster recovery procedures  309
10.5.1 Windows 2000 restore  310
10.5.2 ITSANMDB database restore  312
10.6 Database maintenance  314

Chapter 11. Logging and tracing  317
11.1 Overview  318
11.2 Logging  318
11.2.1 Server logs  318
11.2.2 Manager service commands  320
11.2.3 Service Manager
11.2.4 Agent logs
11.2.5 Remote Console logging
11.2.6 Additional logging for NetView
11.2.7 ED/FI - SAN Error Predictor
11.3 Tracing
11.4 SAN Manager Service Tool
11.4.1 Exporting (snapshot)
11.4.2 Importing (restore)
Part 6. Tivoli Systems Management Integration  331

Chapter 12. Tivoli SAN Manager and TEC  333
12.1 Introduction to Tivoli Enterprise Console  334
12.2 Lab environment  335
12.3 Configuring the Rule Base  336
12.4 Configuring TEC Event Console  340
12.5 Event format  347
12.6 Configuring Tivoli SAN Manager event forwarding  348
12.6.1 Set the event destination  348
12.6.2 Configure NetView-TEC adapter  349
12.7 Example  352
12.8 Sample TEC rule  354

Chapter 13. IBM Tivoli SAN Manager and Configuration Manager  357
13.1 Introduction to IBM Tivoli Configuration Manager  358
13.2 Inventory to determine who has which version  358
13.2.1 Create an inventory profile in Tivoli Framework  359
13.3 Software distribution  370
13.3.1 Build software package with Software Package Editor  370
13.3.2 Create software distribution profile in Tivoli Framework  379
Chapter 14. Integration with Tivoli Enterprise Data Warehouse . . . 387
14.1 Introduction to IBM Tivoli Enterprise Data Warehouse . . . 388
14.2 IBM Tivoli SAN Manager Data Warehouse Pack . . . 389

Chapter 15. Tivoli SAN Manager and Tivoli Monitoring . . . 391
15.1 Introduction to IBM Tivoli Monitoring . . . 392
15.2 IBM Tivoli Monitoring for IBM Tivoli SAN Manager . . . 392
15.3 Daemons to monitor and restart actions . . . 393
Appendix A. Advanced Topology and Sensor Event Scanners . . . 401
Advanced Topology Scanner . . . 402
Sensor Event Scanner . . . 404

Appendix B. IBM Tivoli SAN Manager backup scripts . . . 407
Tivoli Storage Manager configuration . . . 408
DB2 configuration . . . 408
Stopping the applications . . . 408
Stopping WebSphere Tivoli SAN Manager application . . . 409
Stopping Tivoli SAN Manager environment . . . 409
Starting the applications . . . 409
These scripts start up the Tivoli SAN Manager environment in an orderly way . . . 409
Starting WebSphere Tivoli SAN Manager application . . . 409
Start of IBM Tivoli SAN Manager environment . . . 409
DB2 ITSANMDB backups . . . 410
Offline backup script . . . 410
Online backup script . . . 411

Appendix C. Additional material . . . 413
Locating the Web material . . . 413
Using the Web material . . . 413
System requirements for downloading the Web material . . . 413
How to use the Web material . . . 414
Abbreviations and acronyms . . . 415

Related publications . . . 417
IBM Redbooks . . . 417
Other resources . . . 417
Referenced Web sites . . . 417
How to get IBM Redbooks . . . 418
Help from IBM . . . 418
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
Figures
The team - Urs, Mike, Michel, Ivo, Charlotte . . . xxiv
1-1 Storage management issues today . . . 4
1-2 Infrastructure growth issues . . . 5
1-3 Manual storage management issues . . . 6
1-4 Current methods of compiling information about storage networks . . . 7
1-5 Large SAN environment to be managed . . . 9
1-6 Storage management architecture for a suite of solutions . . . 11
1-7 Storage networking standards organizations and their standards . . . 13
1-8 Standards for Interoperability . . . 14
1-9 SAN Manager Outband management path over the IP network . . . 16
1-10 SAN Manager Inband management path . . . 17
1-11 Inband management services . . . 19
1-12 The future of standards in SAN management . . . 20
1-13 SMIS Architecture . . . 21
1-14 SMIS Architecture in relation to SNIA storage model . . . 22
1-15 CIM/WBEM management model . . . 23
1-16 CIM Agent & CIM Object Manager . . . 24
1-17 SAN management summary . . . 25
2-1 IBM Tivoli SAN Manager V1.2 New functions and features . . . 28
2-2 IBM Tivoli SAN Manager operating environment . . . 30
2-3 IBM Tivoli SAN Manager functions . . . 31
2-4 Functions of IBM Tivoli SAN Manager and Agents . . . 32
2-5 IBM Tivoli SAN Manager inband and outband discovery paths . . . 33
2-6 Levels of monitoring . . . 34
2-7 Tivoli SAN Manager Root menu . . . 35
2-8 Tivoli SAN Manager explorer display . . . 36
2-9 iSCSI SmartSet . . . 36
2-10 Tivoli SAN Manager SAN submap . . . 37
2-11 NetView physical topology display . . . 38
2-12 Map showing host connection lost . . . 39
2-13 Zone view submap . . . 40
2-14 Zone members . . . 41
2-15 Device Centric View . . . 42
2-16 Device Centric View explorer . . . 43
2-17 Host Centric View . . . 43
2-18 Host Centric View logical volumes and LUN . . . 44
2-19 Navigation tree for Tivoli SAN Manager . . . 45
2-20 Switch events . . . 46
2-21 Map Showing Effects of Switch Losing Power . . . 47
2-22 Graph of # Frames Transmitted over 8 ports in a 2 minute interval . . . 48
2-23 Number of Frames Transmitted Over Time . . . 49
2-24 Vendor application launch . . . 50
2-25 Adornment shown on fibre channel switch . . . 51
3-1 Deployment overview . . . 56
3-2 Hardware overview . . . 57
3-3 Typical HBAs . . . 58
3-4 Structure of a fiber optic cable . . . 59
3-5 Single mode and multi mode cables . . . 60
3-6 SC fibre optic cable . . . 61
3-7 LC connector . . . 62
3-8 GBIC . . . 62
3-9 Fibre Channel topologies . . . 63
3-10 Fibre Channel point-to-point . . . 63
3-11 Fibre Channel Arbitrated Loop (FC-AL) . . . 64
3-12 Fibre Channel switched fabric . . . 65
3-13 Component placement . . . 67
3-14 Inband scanning . . . 69
3-15 Outband scanning . . . 70
3-16 Components of a manager install . . . 71
3-17 Levels of Fabric Management . . . 72
3-18 RNID discovered host . . . 74
3-19 Sample outband requirements . . . 76
3-20 Display and configure outband agents . . . 77
3-21 Outband management only . . . 80
3-22 Sample inband requirements . . . 81
3-23 Configure Agents Inband only . . . 83
3-24 Inband management only . . . 84
3-25 Sample inband/outband requirements . . . 84
3-26 Inband & outband in Configure Agents . . . 86
3-27 Inband and outband management . . . 87
3-28 HOSTS file placement . . . 88
3-29 Standby server . . . 90
3-30 Failover process . . . 91
4-1 IBM Tivoli SAN Manager supported operating system platforms . . . 96
4-2 Installation of IBM Tivoli SAN Manager . . . 96
4-3 Verifying system host name . . . 97
4-4 Computer name change . . . 97
4-5 DB2 services . . . 99
4-6 Windows Components Wizard . . . 100
4-7 SNMP install . . . 101
4-8 SNMP Service Properties panel . . . 102
4-9 Selecting the product to install . . . 103
4-10 Welcome window . . . 103
4-11 Installation path . . . 104
4-12 Port range . . . 104
4-13 DB2 admin user . . . 105
4-14 SAN Manager database . . . 105
4-15 WebSphere Administrator password . . . 106
4-16 Host authentication password . . . 107
4-17 NetView install drive . . . 107
4-18 NetView password . . . 108
4-19 Installation path and size . . . 108
4-20 Installation progress . . . 109
4-21 Finished installation . . . 109
4-22 Tivoli SAN Manager Windows Service . . . 110
4-23 Agent installation . . . 111
4-24 Agent installation . . . 112
4-25 Welcome window . . . 113
4-26 Installation directory . . . 114
4-27 Server name and port . . . 114
4-28 Agent port . . . 115
4-29 Agent access password . . . 116
4-30 Installation size . . . 116
4-31 Installation finished . . . 117
4-32 Agent Windows service . . . 118
4-33 Console installation . . . 119
4-34 Start the installation . . . 120
4-35 Welcome window . . . 121
4-36 Installation directory . . . 121
4-37 Server information . . . 122
4-38 Console ports . . . 122
4-39 Console access password . . . 123
4-40 Tivoli NetView installation drive . . . 123
4-41 Tivoli NetView service password . . . 124
4-42 Installation summary . . . 124
4-43 Installation finished . . . 125
4-44 Console service . . . 125
4-45 Configuration steps . . . 126
4-46 SNMP traps to local NetView console . . . 127
4-47 SNMP trap reception . . . 127
4-48 Trapfwd daemon . . . 129
4-49 SNMP traps for two destinations . . . 130
4-50 Agent configuration . . . 131
4-51 Outband Agent definition . . . 131
4-52 Login ID definition . . . 132
4-53 Not responding inband agent . . . 132
4-54 SAN configuration . . . 133
4-55 Uninstalling the SAN Manager Server . . . 135
4-56 Agent uninstall . . . 137
4-57 Uninstalling remote console . . . 138
4-58 Uninstalling Tivoli GUID . . . 139
5-1 NetView window . . . 150
5-2 NetView Explorer option . . . 151
5-3 NetView explorer window . . . 152
5-4 NetView explorer window with Tivoli Storage Area Network Manager view . . . 152
5-5 NetView toolbar . . . 153
5-6 NetView tree map . . . 153
5-7 NetView objects properties menu . . . 154
5-8 NetView objects properties . . . 154
5-9 IBM Tivoli SAN Manager icons . . . 155
5-10 SAN Properties menu . . . 158
5-11 ITSO lab1 setup . . . 159
5-12 ITSO lab1 topology with zones . . . 160
5-13 IBM Tivoli NetView root map . . . 161
5-14 Storage Area Network submap . . . 161
5-15 Topology views . . . 162
5-16 Storage Area Network view . . . 162
5-17 Topology view . . . 163
5-18 Switch submap . . . 163
5-19 Interconnect submap . . . 164
5-20 Physical connections view . . . 164
5-21 NetView properties panel . . . 165
5-22 Zone view submap . . . 165
5-23 FASTT zone . . . 166
5-24 Device Centric View . . . 167
5-25 Host Centric View for Lab 1 . . . 168
5-26 iSCSI discovery . . . 169
5-27 iSCSI SmartSet . . . 169
5-28 SAN Properties menu . . . 170
5-29 IBM Tivoli SAN Manager Properties Filesystem . . . 171
5-30 IBM Tivoli SAN Manager Properties Host . . . 172
5-31 IBM Tivoli SAN Manager Properties Switch . . . 172
5-32 Changing icon and name of a device . . . 173
5-33 Connection information . . . 173
5-34 Sensors/Events information . . . 174
5-35 Brocade switch management application . . . 175
5-36 NetView objects properties Other tab . . . 176
5-37 Launch of the management page . . . 176
5-38 PATH environment variable . . . 177
5-39 NetView Tools menu . . . 178
5-40 SAN Data Gateway specialist . . . 178
5-41 Launch Tivoli Storage Resource Manager . . . 179
5-42 IBM Tivoli SAN Manager normal status cycle . . . 180
5-43 Status cycle using Unmanage function . . . 181
5-44 Status cycle using Acknowledge function . . . 181
5-45 Lab environment 3 . . . 182
5-46 Discovery of MDS 9509 . . . 183
5-47 MDS 9509 properties . . . 184
5-48 MDS 9509 connections . . . 184
5-49 Trap received by NetView . . . 185
5-50 Connection lost . . . 185
5-51 Connection restored . . . 186
5-52 Marginal connection . . . 186
5-53 Dual physical connections with different status . . . 187
5-54 Agent configuration . . . 188
5-55 Unsafe removal of Device . . . 188
5-56 Connection lost on an unmanaged host . . . 189
5-57 Unmanaged host . . . 189
5-58 Clear History . . . 190
5-59 NetView unmanaged host not discovered . . . 190
5-60 SAN lab - environment 2 . . . 191
5-61 Switch down Lab 2 . . . 192
5-62 Switch up Lab 2 . . . 193
5-63 RNID discovered host . . . 194
5-64 RNID discovered host properties . . . 194
5-65 RNID host with changed label . . . 195
5-66 Only outband agents . . . 196
5-67 Explorer view with only outband agents . . . 197
5-68 Switch information retrieved using outband agents . . . 197
5-69 Inband agents only without SAN connections . . . 198
5-70 Inband agents only with SAN connections . . . 199
5-71 Switches sensor information . . . 199
5-72 Discovered SAN with no LUNS defined on the storage server . . . 200
5-73 MSS zoning display . . . 201
5-74 MSS zone with CRETE and recognized storage server . . . 202
5-75 Well-placed agent configuration . . . 203
5-76 Discovery process with one well-placed agent . . . 204
6-1 Overview . . . 208
6-2 SNMP architecture overview . . . 209
6-3 MIB tree structure . . . 210
6-4 Enabling the advanced menu . . . 211
6-5 MIB loader interface . . . 213
6-6 Select and load TRP.MIB . . . 214
6-7 Loading MIB . . . 214
6-8 NetView MIB Browser . . . 215
6-9 FE-MIB Error Group . . . 216
6-10 SW MIB Port Table Group . . . 216
6-11 Private MIB tree for bcsi . . . 217
6-12 MIB Data Collector GUI . . . 217
6-13 Starting the SNMP collect daemon . . . 218
6-14 internet branch of MIB tree . . . 218
6-15 Private arm of MIB tree . . . 219
6-16 Enterprise branch of MIB tree . . . 219
6-17 bcsi branch of MIB tree . . . 220
6-18 swFCPortTxFrames MIB object identifier . . . 221
6-19 Adding the nodes . . . 221
6-20 Add Nodes to the Collection Dialog . . . 222
6-21 Newly added Data Collection for swFCTxFrames . . . 223
6-22 Restart the collection daemon . . . 223
6-23 Purge Data Collection files . . . 224
6-24 Select ITSOSW2 . . . 226
6-25 Building graph . . . 226
6-26 Graphing of swFCTxFrames . . . 227
6-27 Graph properties . . . 227
6-28 Real-time reporting Tool Builder overview . . . 228
6-29 Enabling all functions in NetView . . . 228
6-30 MIB Tool Builder interface . . . 229
6-31 Tool Wizard Step 1 . . . 229
6-32 Tool Wizard Step 2 . . . 230
6-33 SW-MIB Port Table . . . 230
6-34 Final step of Tool Wizard . . . 231
6-35 New MIB application FXPortTXFrames . . . 231
6-36 Monitor pull-down menu . . . 232
6-37 NetView Graph starting . . . 232
6-38 Graph of FCPortTXFrames . . . 233
6-39 Graph Properties . . . 233
6-40 Polling Interval . . . 234
6-41 Tool Builder with all MIB objects defined . . . 234
6-42 All MIB objects in NetView . . . 235
6-43 SmartSet Overview . . . 235
6-44 Selected Fibre Channel switches . . . 236
6-45 Defining a SmartSet . . . 237
6-46 Advanced window . . . 238
6-47 Advanced window with 2109s added . . . 239
6-48 New SmartSet . . . 239
6-49 New SmartSet IBM 2109 . . . 240
6-50 SmartSet topology map . . . 241
6-51 ITSOSW1, ITSOSW2 and ITSOSW3 in IBM2109 SmartSet . . . 242
6-52 Additional SmartSets . . . 243
6-53 IBM2109 SmartSet defined to Data Collection . . . 244
Figures
6-54 NetView Graph starting . . . . . . 244
6-55 IBM2109 SmartSet data collected . . . . . . 245
6-56 Selected MIB instances . . . . . . 245
6-57 Graph showing selected instances . . . . . . 246
6-58 Server Setup . . . . . . 248
6-59 Server Setup options window . . . . . . 249
6-60 Clear Database . . . . . . 249
6-61 Clear databases warning . . . . . . 249
6-62 NetView stopping clearing databases . . . . . . 250
6-63 With seed file . . . . . . 250
6-64 Without seed file . . . . . . 251
7-1 iSCSI components . . . . . . 254
7-2 Fibre Channel versus iSCSI . . . . . . 255
8-1 Event notification overview . . . . . . 260
8-2 SAN Manager generated SNMP traps . . . . . . 261
8-3 Event Destination . . . . . . 262
8-4 IBM Director Console . . . . . . 264
8-5 SNMP event from SAN Manager . . . . . . 265
9-1 ED/FI - SAN Error Predictor overview . . . . . . 268
9-2 Failure indication . . . . . . 269
9-3 Adornment example . . . . . . 269
9-4 Error processing cycle . . . . . . 270
9-5 Fault Isolation indication flow . . . . . . 271
9-6 ED/FI Menu Selection . . . . . . 272
9-7 ED/FI Configuration . . . . . . 273
9-8 Rule description . . . . . . 274
9-9 Adornments on the topology map . . . . . . 275
9-10 Devices currently in Notification State . . . . . . 275
9-11 Indicated device . . . . . . 276
9-12 NetView Search dialog . . . . . . 277
9-13 Found objects . . . . . . 277
9-14 Found device on topology map . . . . . . 278
9-15 Clear the notification . . . . . . 279
9-16 After clearing the notification . . . . . . 279
9-17 Topology change after notification clearance . . . . . . 280
10-1 IBM Tivoli SAN Manager components . . . . . . 284
10-2 Tivoli Storage Manager integration with Tivoli SAN Manager . . . . . . 285
10-3 Sample environment: Backing up Tivoli SAN Manager to Tivoli Storage Manager . . . . . . 286
10-4 Procedures used to backup IBM Tivoli SAN Manager . . . . . . 291
10-5 IBM Tivoli SAN Manager restore procedures . . . . . . 302
10-6 Agent is contacted after restore . . . . . . 304
10-7 Netview restart failure . . . . . . 306
10-8 Tivoli Storage Manager restore interface . . . . . . 306
10-9 IBM Tivoli SAN Manager agents . . . . . . 308
10-10 IBM Tivoli SAN Manager Disaster Recovery procedures . . . . . . 309
10-11 Full system restore result . . . . . . 310
10-12 System Objects restore . . . . . . 311
10-13 System Objects restore results . . . . . . 311
10-14 IBM Tivoli SAN Manager interface . . . . . . 313
10-15 DB2 Database maintenance . . . . . . 314
11-1 IBM Tivoli SAN Manager Logging and tracing overview . . . . . . 318
11-2 Service Manager . . . . . . 322
11-3 NetView trap reception . . . . . . 324
11-4 NetView daemons . . . . . . 324
11-5 Enable trapd logging . . . . . . 325
11-6 Stop and start daemons . . . . . . 325
11-7 Recycling daemons . . . . . . 325
12-1 TEC architecture . . . . . . 335
12-2 Tivoli Lab environment . . . . . . 336
12-3 Active Rule Base . . . . . . 337
12-4 Import Rule Base . . . . . . 337
12-5 Import Class Definitions . . . . . . 338
12-6 Compile Rule Base . . . . . . 339
12-7 Load Rule Base . . . . . . 339
12-8 Restart TEC Server . . . . . . 340
12-9 TEC Console Configuration . . . . . . 340
12-10 Create Event Group . . . . . . 341
12-11 Create Filter in Event Group . . . . . . 341
12-12 Event Group Filter . . . . . . 342
12-13 Add Constraint . . . . . . 342
12-14 Event Group Filter . . . . . . 343
12-15 Assign Event Group . . . . . . 343
12-16 Assigned Event Groups . . . . . . 344
12-17 Configured Console . . . . . . 344
12-18 TEC Console main window . . . . . . 345
12-19 TEC console . . . . . . 345
12-20 General tab of event . . . . . . 346
12-21 Event attribute list . . . . . . 347
12-22 Set Event Destination . . . . . . 348
12-23 Enable TEC events . . . . . . 349
12-24 Configuration GUI . . . . . . 349
12-25 Choose type of adapter . . . . . . 350
12-26 Enter TEC server name . . . . . . 350
12-27 TEC server platform . . . . . . 351
12-28 TEC server port . . . . . . 351
12-29 Configure forwardable events . . . . . . 351
12-30 Choose SmartSets . . . . . . 352
12-31 Configure adapter . . . . . . 352
12-32 Start the adapter . . . . . . 352
12-33 Defective cable from bonnie to itsosw1 . . . . . . 353
12-34 Events for cable fault . . . . . . 353
12-35 Condition cleared . . . . . . 354
13-1 Tivoli Desktop . . . . . . 359
13-2 Policy Region tonga-region . . . . . . 360
13-3 Managed Resources for Inventory . . . . . . 360
13-4 Policy Region Inventory . . . . . . 361
13-5 Profile Manager Inventory . . . . . . 361
13-6 Inventory Profile Global Properties . . . . . . 362
13-7 Inventory Profile PC Software . . . . . . 363
13-8 Inventory Profile UNIX Software . . . . . . 364
13-9 Distribute Inventory Profile . . . . . . 365
13-10 Distribute Inventory Profile dialog . . . . . . 366
13-11 Distribution Status Console . . . . . . 367
13-12 Create Query Library . . . . . . 367
13-13 Edit Inventory Query . . . . . . 368
13-14 Output for IBM Tivoli SAN Manager Query . . . . . . 369
13-15 Output for IBM Query . . . . . . 369
13-16 Software Package Editor with new package ITSANM-Agent . . . . . . 370
13-17 Properties dialog . . . . . . 371
13-18 Add an execute program action to the package . . . . . . 372
13-19 Install dialog . . . . . . 373
13-20 Advanced tab . . . . . . 374
13-21 Add directory . . . . . . 375
13-22 Remove dialog . . . . . . 376
13-23 Advanced properties . . . . . . 377
13-24 Condition . . . . . . 378
13-25 Ready-to-build software package . . . . . . 379
13-26 Policy Region with Profile Managers . . . . . . 379
13-27 Create Software Package Profile . . . . . . 380
13-28 Profile Manager with Profiles and Subscribers . . . . . . 381
13-29 Import Software Package . . . . . . 382
13-30 Import and build a software package . . . . . . 383
13-31 Install a software package . . . . . . 384
13-32 Install Software Package . . . . . . 385
13-33 Remove a Software Package . . . . . . 386
14-1 Tivoli Data Warehouse data flow . . . . . . 388
15-1 IBM Tivoli Monitoring Architecture . . . . . . 392
15-2 Policy Region tonga-region . . . . . . 393
15-3 Profile Manager PM_DM_ITSANM . . . . . . 394
15-4 Create Monitoring Profile . . . . . . 394
15-5 Add Parametric Services Model to Profile . . . . . . 395
15-6 Edit Resource Models . . . . . . 396
15-7 Parameters of Resource Model . . . . . . 397
15-8 Indications and actions of resource models . . . . . . 398
15-9 TEC forwarding of events from Monitoring . . . . . . 398
15-10 Profilemanager for Monitoring . . . . . . 399
15-11 TEC events from Monitoring . . . . . . 400
A-1 Sensor Event data . . . . . . 405
Tables
1-1 Differences in discovery capability . . . . . . 17
3-1 SAN Manager using vendor HBAs and switches . . . . . . 75
4-1 Procedure to change passwords . . . . . . 146
5-1 IBM Tivoli SAN Manager symbols color meaning . . . . . . 155
5-2 IBM Tivoli NetView additional colors . . . . . . 155
5-3 Problem determination . . . . . . 156
5-4 Status propagation rules . . . . . . 157
A-1 MIB II OIDs . . . . . . 402
A-2 FE MIB . . . . . . 402
A-3 FC-MGMT MIB OIDs used by Advanced Topology Scanner . . . . . . 403
A-4 FC-MGMT MIB Sensor Event Scanner . . . . . . 404
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms.
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX, Domino, DB2, eServer, Enterprise Storage Server, ESCON, IBM, ibm.com, Lotus, MQSeries, NetView, OS/2, OS/390, Predictive Failure Analysis, pSeries, Redbooks, Redbooks (logo), RS/6000, Tivoli Enterprise, Tivoli Enterprise Console, Tivoli, TotalStorage, TME, WebSphere, xSeries
The following terms are trademarks of other companies:

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.
Preface
Now that you have installed your SAN, how are you going to manage it? This IBM Redbook describes the new product, IBM Tivoli Storage Area Network Manager, an active, intelligent, business-centric management solution for storage resources across the enterprise.

IBM Tivoli Storage Area Network Manager provides effective discovery and presentation of SAN physical and logical topologies, with multiple views of the SAN, including zones. Through its interface, it can be configured to show historical and real-time monitoring of SAN fabric devices. With IBM Tivoli Storage Area Network Manager, you will know what's on your SAN, how the devices are connected, and how storage is assigned to the hosts. If something goes wrong, or new devices are added, the topology display automatically updates to show the changed topology. SAN-generated events can be displayed on the manager system, or forwarded to another SNMP manager or the Tivoli Enterprise Console.

This book is written for those who want to learn more about IBM Tivoli SAN Manager, as well as those who are about to implement it. This second edition of the book is current to IBM Tivoli SAN Manager V1.2.
Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central and Eastern European Region in architecting, deploying and supporting SAN/storage/DR solutions. His areas of expertise include SAN, storage, HA systems, IBM eServer xSeries servers, network operating systems (Linux, MS Windows, OS/2), and Lotus Domino servers. He holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo was a member of the team that wrote the redbook Designing and Optimizing an IBM Storage Area Network, and contributed to various other Redbooks about SAN, Linux/390, xSeries, and Linux. Ivo has been with IBM for five years and was an author of the first edition of this redbook.

Urs Moser is an Advisory IT Specialist with IBM Global Services in Switzerland. He has more than 25 years of IT experience, including more than 13 years' experience with Tivoli Storage Manager and other storage management products. His areas of expertise include Tivoli Storage Manager implementation projects and education at customer sites, including mainframe environments (OS/390, VSE, and VM) and databases. Urs was a member of the team that wrote the redbook Using Tivoli Storage Manager to Back Up Lotus Notes.

Thanks to the following people for their contributions to this project:

The authors of the first edition of this redbook: Michael Benanti, Hamedo Bouchmal, John Duffy, Trevor Foley, and Ivo Gomilsek.

Deanna Polm, Emma Jacobs, Gabrielle Velez
International Technical Support Organization, San Jose Center
Doug Dunham, Nancy Hobbs, Jason Perkins, Todd Singleton, Arvind Surve
IBM Tivoli SAN Manager Development, San Jose

Johanna Hislop, Dave Merbach
IBM Tivoli SAN Manager Development, Rochester

Rob Basham, Steve McNeal, Brent Yardley
IBM SAN Development, Beaverton

Bill Medlyn, Daniel Wolfe
IBM Tivoli SAN Manager Development, Tucson

Steve Luko
IBM Tivoli SAN Manager Marketing, Tucson

Kaladhar Voruganti
Almaden Research Center

Murthy Sama
Cisco Systems
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099
Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-6848-01 for IBM Tivoli Storage Area Network Manager: A Practical Introduction as created or updated on September 9, 2003.
New information
- Version 1 Release 2 of IBM Tivoli Storage Area Network Manager
- AIX Manager and Linux Agent support
- iSCSI support to integrate iSCSI into SAN management
- Performance enhancements by removing previous software requirements
- Error Detection and Fault Isolation (ED/FI - SAN Error Predictor)
Part 1

Introduction
In Part 1 we talk about why customers need management for their Storage Area Networks, focusing on the costs and challenges of managing them manually today. We then introduce IBM Tivoli Storage Area Network Manager, a new solution for displaying and monitoring physical and logical SAN topologies, receiving events, and reporting on SAN performance statistics and counters.
Chapter 1.
- Growth is overwhelming people, tools, and processes:
  - Business transactions
  - Storing new and different data types (medical records, voice, images, presentations)
  - New data types are larger than the old data types
- Unmanaged storage costs too much
- Manual storage management costs too much
- Multivendor management is hard to master
Figure 1-1 Storage management issues today
Growth
Growth is being driven by three general trends:

- Business transaction volumes are growing.
- Businesses are using computers to store information that used to be stored only on film or paper.
- There are new data types (such as music, video clips, images, and graphical files) that require significantly more storage per file than older data types like flat files.

The data and storage infrastructure that supports this growth is growing dramatically. That growth rate is estimated to range from 50-125% annually, depending on the industry and consultant report of your choice. Consequently, the storage infrastructure must also grow to support the growth in business transactions. See Figure 1-3.
Server: Large companies have thousands of servers, with a mixture of Windows and different UNIX operating systems.

Staffing: Each corporate server may grow to 3 TB of data by 2004, while a typical open systems administrator can look after 1 TB.

Storage: Average storage growth is 50 to 125% per year; the largest companies may see much higher rates. SAN storage will soon be over 50% of total storage, and SANs are being increasingly deployed.
Server growth
Major companies have hundreds of large UNIX servers, and sometimes thousands of Windows servers. They are deploying more servers every quarter, and most large companies have a large variety of different hardware and software platforms, rather than standardizing on particular configurations.
Staffing growth
While we know that storage and data are growing rapidly, support staff numbers are not, which only exacerbates the problem. An average corporate server may be supporting on the order of 3 TB of data in the coming years, yet it is estimated that the typical open systems administrator can manage only 1 TB. Since in today's economic climate businesses are looking to cut costs, most are cutting rather than increasing their IT departments. Clearly, more intelligent and powerful applications will be required to support this environment.
One problem is that SAN management crosses traditional organizational boundaries. Networks are traditionally managed by network management groups. Storage has traditionally been managed by the individual operating system platform groups or by a specialized storage group. SAN managers have to understand both networking and storage. Which group, then, should have the responsibility for managing SANs? As will be seen later in this book, IBM Tivoli Storage Area Network Manager targets exactly this intersection of the two skill areas using network management techniques to manage the SAN topology, while providing storage management-oriented logical views.
Coordination is problematic
- Each group develops its own policies
- Policies are not coordinated with each other, or with the mainframe group
- Small corporations: the "one person who does it all" is spread too thin
- Quick notes in a personal notebook
- Only resource who knows the infrastructure
In today's environments, IT organizations typically manage storage across some or all of these areas:
- OS platform administration: disks associated with individual servers
- Backup and recovery: tape
- Business continuance and disaster recovery: disk and tape at disaster recovery sites
- Networking group: access to NAS and SAN devices, often the overall design
- Storage group: any of the previous functions, cross-platform
In large companies, these disciplines often each have separate teams, and coordinating the different teams is a major issue. In small corporations, these functions are usually handled by a single person, who is typically highly skilled and overworked. All the groups have their own spreadsheets, home-grown reports, personal databases, Visio diagrams, and so on, to manage their particular environments. And typically each area monitors and manages in isolation, not coordinating with the other functions. IT organizations have historically been organized by operating system platform: a UNIX platform administrator managed the server, communications, disk and tape, and SANs, in short, everything to do with UNIX. The same applied to the Windows administrator.
Centralizing storage management makes it possible to apply the same tools and processes to all business units within the company. For this reorganization to work effectively, new tools and new procedures are needed to support the new organizational structure. IBM Tivoli SAN Manager is one of the key underlying new tools that support this movement towards a more consistent, more efficient use of resources, that is, people, storage, and money.

For example, a company with 500 NT servers and 300 UNIX servers across different business units might have 2100 LUNs to be managed (1.5 x 500 + 4.5 x 300 = 2100). Managing that many filesystems manually is difficult. A growing percentage of companies have consolidated storage into Fibre Channel (FC) SANs, but they still have to manage the same number of LUNs. The LUNs are still associated with individual application servers, and storage on the FC storage frame is still logically segregated.

Some companies have a mix of FC storage pools, network-attached (NAS, iSCSI) storage pools, and direct-attached storage environments. Each FC storage pool is managed by its own storage manager. Each NAS pool has its own manager. Each small group of (typically) 25-30 direct-attached storage servers has its own platform administrator. These administration costs can be incurred at the user department level, at the division IT level, or at the corporate IT level. The costs are hard to aggregate, but they are large.
When a user calls and says "my application stopped working!", administrators (storage administrators, network administrators, application administrators, or platform administrators) have to research, narrow down the possible causes, and make an educated guess as to the root cause of the problem. If the problem is confirmed as related to storage, they may have to access several individual components in the storage infrastructure (for example, HBA, disk controller, disk system, microcode), one component at a time, sometimes several times for each component, as they try to identify the root cause. The current approach to managing storage networks typically involves manual processes and point (that is, vendor-specific, non-interoperable) solutions.
Information concerning inventory, topology, and components is typically maintained manually. Today's tools are point solutions, usually managing one single component, or components from a single vendor. If you need to look at 4 or 5 switches to track down a problem, you might need to log on to 4 or 5 switches, each with its own management software. Here are some frequently encountered scenarios:
- The topology of the SAN is maintained on a Visio diagram somewhere, which was last updated some months ago: "before we added those last 2 departments, and deployed several new switches, and I just didn't have the time to update the diagram!"
- The server inventory (a spreadsheet or a PC database) was last updated in a consultant study 12 months ago. Each platform group has its own inventory, which is kept separately from the other groups'. Rarely does a company have an enterprise view of its infrastructure.
- The revision levels for all the operating systems, the patches, the HBA drivers, and so on, are in a spreadsheet, which is somewhat up-to-date (except for the last 3 rounds of server upgrades!).
- The logical layout of the storage frames is kept either by the storage vendor or on a spreadsheet which must be manually updated.
If a problem does arise, then the following tools and methods are typically used to identify and resolve it:
- To manage a switch, the administrator has to consult his spreadsheet to find the address, user ID, and password of the switch, log on to the switch, run the switch management package (different for each brand of switch), scan the menus to understand the SAN architecture, and write down what he needs to know on a piece of paper.
- To manage the storage frame, the administrator has to log on to the frame and run its point-solution software (again, different for each manufacturer) to understand the storage frame.
Then the administrator has to mentally or manually build a map of the SAN infrastructure.
Then the administrator maps the specific problem to the infrastructure, forms a hypothesis, and tries to solve the problem. This is an iterative process.

With a small and stable SAN (for example, 2 switches, 12 servers, and 1 storage frame with 4 storage ports), managing the components via spreadsheets, PC databases, and point-solution tools can be fairly effective for simple problems. In this environment, there are only 2 primary storage tools to learn (the switch tool and the storage frame tool), with only 2 switches and 1 frame to manage. The administrator generally has the architecture in his head, knows all the components, and can usually identify and fix problems within a reasonable amount of time. Note, however, that there is probably only one person in the organization who is familiar enough with the layout of the network to be able to do this. What happens if that person takes vacation, is ill, or leaves the organization?

With a complex SAN, the number of components to manage exceeds the ability of current tools and administrators to manage in a timely fashion. Just the discovery process alone can be very time-consuming.
In this large storage network, there are many components, and many points of management: Infrastructure components: Each switch has its own management software: There are 2 different switch vendors. There are at least 8 switches, each with 16 or 32 ports.
Each storage frame has its own management tool: There are 4 different frames, each with 4-16 storage ports and 50 disks. There are 2 frame vendors.
Servers, file systems, and HBAs each have their own management tools: There are 300 servers (many not shown), each with 2 or more mount points or shares, each with 2 HBAs. There are 5 different platform operating systems (Windows 2000, NT, HP-UX, Solaris, AIX). There are different vendor HBAs (Emulex, JNI, IBM).
Component management: Storage administrators manage the storage in the storage frame. The storage vendor sometimes manages the logical-to-physical conversion (file system to LUN) for the storage. Platform administrators manage servers, file systems, and HBAs. Backup and recovery are managed by yet a different group. The client-facing IP network is managed by the network group, who also try to manage the SAN as a whole.
To manage the physical infrastructure, the IT organization would have to individually manage each component of the SAN infrastructure, that is:
- 4 x 32 + 8 x 16 = 256 switch ports
- 2 different switch management packages
- 40 storage frame ports and approximately 200 disks
- 600 shares or mount points
- 600 HBAs
- 300 instances of 4 different operating systems
TOTAL NUMBER OF OBJECTS TO MANAGE = 1996
When a problem arises in this complex environment, administrators turn to the manual documents and point-solution tools to help them narrow the focus of their investigation. Considering the state of the documents and the information with which they are working, their task is challenging, and the business exposure is high. Mission-critical servers cannot afford hours of downtime just to find a root cause, much less additional time to fix the problem. Mission-critical storage, servers, and applications, by definition, need to be available 24x7. Trying to manage these 2000-odd components manually cannot be done consistently over time.
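As a quick check of the arithmetic above, the tally can be expressed in a few lines of Python (the function name and breakdown are illustrative, mirroring the example SAN described in the text):

```python
# Illustrative tally of the individually managed objects in the example SAN.
def count_managed_objects():
    switch_ports = 4 * 32 + 8 * 16          # 4 32-port and 8 16-port switches = 256
    storage_frame_ports = 40                # 4 frames, 4-16 storage ports each
    disks = 200                             # approximately 50 disks per frame
    shares = 300 * 2                        # 300 servers, 2+ mount points or shares each
    hbas = 300 * 2                          # 2 HBAs per server
    os_instances = 300                      # 300 servers
    return (switch_ports + storage_frame_ports + disks
            + shares + hbas + os_instances)

print(count_managed_objects())  # → 1996
```

The two switch management packages are points of administration rather than managed objects, so they are excluded from the total.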
Summary
Storage and data are growing rapidly. SANs are growing, and are too big to manage manually. Manual storage management costs a lot. Companies cannot continue to manage storage and data the old way (managing individual components), and be successful. Companies MUST adopt new tools to manage storage and data.
The storage infrastructure spans policy-based automation, file systems, volume managers, media managers, element managers, and subsystem reporting, with virtualization and replication across DAS devices, iSCSI, Fibre Channel SAN, NAS, and tape.
SAN management tools were developed to help address the issues described in the previous section: to consolidate into one place all the information needed to manage the components of a SAN, so that storage administrators can keep the physical and logical storage environment operating all the time. With the right SAN management tools, from one console, storage administrators should be able to see all that happens in their storage infrastructure:
- By hosts in the SAN
- By devices in the SAN
- By topology of the SAN
These are some of the benefits of using SAN management tools. Technical benefits:
- Effective discovery and presentation of SAN physical and logical topologies for small or large SANs
- Continuous real-time monitoring, with rapid Error Detection and Fault Isolation
- Support for open SAN standards
- Minimize storage and SAN downtime
- Provide a framework to extend the SAN to the enterprise
Business benefits:
- Increase revenue by improving availability for applications hosted on the SAN
- Reduce costs (both administration and storage)
These are the main attributes of a good SAN management tool:
- Standards based
- Strong architecture: centralized repository, based on an enterprise database
- Discovers all components of a SAN
- Integrated with an enterprise console
- Identifies errors and isolates faults
- Thresholds for reporting and actions
- Easy to navigate and understand
- Flexible and extensible: provides physical topology views, both host-centric and switch-centric, viewing a single SAN or all SANs in an organization
- Ability to launch vendor-provided management applications from a single console
- Reporting, both standard and customizable
Storage Networking Industry Association (SNIA): SAN umbrella organization. IBM participation: founding member; Board, Technical Council, project chair.
Fibre Channel Industry Association (FCIA): sponsors customer events. IBM participation: Board.
American National Standards Institute (ANSI): X3T11 for FC/FICON standards, X3T10 for SCSI standards. IBM participation.
International Organization for Standardization (ISO): international standardization. IBM software development is ISO certified.
Industry organizations, such as the Storage Networking Industry Association (SNIA) and the Fibre Alliance, have taken a leading role in facilitating discussions among vendors and users. Members chair working groups, looking at a wide range of subjects relating to storage and SANs such as discovery and management, backup and disaster recovery. Developments by these organizations are considered de-facto standards. Recommendations from these organizations are submitted to the officially recognized standards bodies (IETF, ISO and ANSI) for consideration as a formal standard. A key standard is contained in the FC-MI (Fibre Channel Methodologies for Interconnects) technical report published by the ANSI T11 standards committee. Taken as a whole, the FC-MI report addresses multi-vendor interoperability for Storage Area Networks. The next generation of the standard FC-MI-2 is already in development. This report describes a required set of common standards for device and management interoperability in both loop and switched fabric environments. Compliance to the standards defined by the FC-MI allows for operational interoperability between hosts, storage devices, and fabric components over a wide variety of Fibre Channel topologies. It also provides for a common approach to SAN device discovery and management.
ANSI has defined all the principal standards relating to physical interfaces, protocols, and management interfaces that are exploited by the hardware vendors:
- FC-PH specifies the physical and signaling interface.
- FC-PH-2 and FC-PH-3 specify enhanced functions added to FC-PH.
- FC-FG, FC-SW, FC-GS-2, FC-GS-3, FC-SW-2, FC-FS, and the draft standards FC-GS-4 (target announcement date, August 2003) and FC-FS-2 all relate to switched fabric requirements.
- FC-AL specifies the arbitrated loop topology.
FC-MI builds on these standards and groups device interoperability into four areas, shown in Figure 1-8.
Figure 1-8 FC-MI device interoperability areas (including Fabric Behaviors and FC Port Behaviors)
A single device may have to comply with all four areas. Taken together, these standards define a set of common specifications that a device must adhere to in order to interoperate with other FC-MI compliant devices at both the operational and management levels.
The following is a partial list of the current standards that SAN components (end points such as hosts, storage subsystems, and gateways; Host Bus Adapter (HBA) drivers; and fabric components) must support to be compliant with FC-MI for SAN management:
- Name Server, as defined in ANSI FC-GS-3
- Management Service, as defined in FC-GS-3: Configuration Server, Unzoned Name Server, Fabric Zone Server
- Fabric event reporting: Extended Link Services (ELS) commands defined in FC-FS (Framing and Signaling Interface) for notification of fabric events, namely RSCN (Registered State Change Notification) and RLIR (Registered Link Incident Record)
- HBA drivers must support an API (such as the SNIA SAN management HBA API) capable of issuing Name Server, Fabric Management Server, and end point queries, and of notifying the driver (or other recipient) of fabric events
- SNMP monitoring, using the IETF FC Management MIB (previously known as the Fibre Alliance MIB) and traps
- Response to end point queries: RNID (Request Node Identification Data) and RLS (Read Link Error Status Block)
Taken together, these different discovery and reporting mechanisms allow a complete SAN topology to be determined and monitored, along with advanced capabilities such as performance analysis and error detection and fault isolation.
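A management application could use such a list as a compliance checklist. The sketch below checks which required services a fabric component fails to report; the service names come from the list above, but the function and data shapes are assumptions, not a real API:

```python
# Illustrative FC-MI compliance check for a fabric component.
FCMI_REQUIRED_FABRIC_SERVICES = {
    "name_server",            # ANSI FC-GS-3 Name Server
    "configuration_server",   # FC-GS-3 Management Service
    "unzoned_name_server",
    "fabric_zone_server",
    "rscn",                   # Registered State Change Notification
    "rlir",                   # Registered Link Incident Record
}

def missing_for_fcmi(supported_services):
    """Return the FC-MI required services a fabric component does not report."""
    return sorted(FCMI_REQUIRED_FABRIC_SERVICES - set(supported_services))

# Example: a switch that does not expose the unzoned name server
print(missing_for_fcmi({"name_server", "configuration_server",
                        "fabric_zone_server", "rscn", "rlir"}))
# → ['unzoned_name_server']
```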
1.2.3 Discovery
Discovery uses two approaches for discovering SAN device information:
- Outband queries over an IP network via standardized MIBs, which typically are loaded only onto the managed switches. IBM Tivoli SAN Manager gathers SNMP-collected information from outband agents.
- Inband queries using Fibre Channel protocols. In the case of IBM Tivoli SAN Manager, an Agent loaded onto the target server queries a standard HBA API on the managed host, which then queries reachable devices in the SAN. The information obtained is returned to the Manager.
Tivoli SAN Manager stores the results of inband and outband discoveries in its database, correlates them to eliminate duplication, and uses the information to draw or redraw the topology map.
Figure 1-9 SAN Manager Outband management path over the IP network
Outband management uses the MIB(s) available for the target switches. The purpose of loading a MIB is to define the MIB objects that the SAN management application will track. These are the items that we want to collect data about, such as the number of transmitted or received frames, and error conditions; the objects are defined in the relevant MIBs. Outband management is used during polling, which is the process of scanning devices to collect the SAN topology. The SNMP Agent solicits the appropriate information from the devices and returns it to the SAN Manager through the inbuilt SNMP Manager provided in NetView. The switches in this case are configured to send their traps to the SNMP Manager. SAN management events are also communicated using outband methods. From time to time, events will be triggered from the Agent on the switch to the SAN Manager. The SAN Manager will log these events and respond accordingly. For example, an event could be sent indicating that a switch port is no longer functioning; the SAN Manager would update its topology map to reflect this.
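The last example, a port-down trap updating the topology map, can be sketched as follows. This is a deliberately simplified model: real traps are BER-encoded SNMP PDUs whose objects are defined in the FC Management MIB, and the dict shape here is invented:

```python
# Minimal sketch of the outband event path: a trap reporting a failed
# switch port arrives, and the manager updates its topology model.
topology = {
    "switch1": {"port1": "up", "port2": "up"},
}

def on_trap(trap):
    """Apply a (simplified) port-status trap to the topology model."""
    switch, port, status = trap["switch"], trap["port"], trap["status"]
    topology.setdefault(switch, {})[port] = status
    return topology

on_trap({"switch": "switch1", "port": "port2", "status": "down"})
print(topology["switch1"]["port2"])  # → down
```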
Figure 1-10 SAN Manager inband management path (the SAN Manager sends a scan request over the IP network to the Inband Agent on a managed host; the Agent uses the HBA API to issue FC queries, SCSI queries, and HBA RNID queries to switches, fabric elements, gateways, FC storage, and other end points, and returns the collected data)
In the case of IBM Tivoli SAN Manager, an Agent is installed on the hosts to be managed and is configured to communicate with a Manager system. The polling process for topology discovery sends queries inband through the SAN. Specifically, the HBA API on the managed Agent issues its own query to the FC switch. Topology information is retrieved from the switch, including information about other switches and their attached end-point devices; this is possible because in a cascaded switch configuration, topology information is shared and replicated among all switches. End-point devices (which do not have an Agent installed), such as storage systems, gateways, and other hosts, respond to RNID and SCSI queries for device and adapter information. Fabric components, such as switches, respond to queries of the Management Server and the Name Server via the HBA API; switches are not end-point devices. The Agent returns all collected information to the SAN Manager over the IP network. This information is correlated and consolidated (since other Agents may also return duplicate information) and stored on the Manager, which uses it (combined with information returned by outband Agents, if deployed) to build the topology map and submaps.
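The correlation step, merging overlapping reports from several Agents into a single entry per World Wide Name (WWN), might look like the following sketch. The data shapes are illustrative, not the product's internal format:

```python
# Sketch of discovery correlation: several agents return overlapping device
# lists, and the manager merges them by WWN so each device appears once.
def correlate(agent_reports):
    devices = {}
    for report in agent_reports:
        for dev in report:
            entry = devices.setdefault(dev["wwn"],
                                       {"wwn": dev["wwn"], "seen_by": set()})
            entry["seen_by"].add(dev["agent"])        # track which agents saw it
            entry.setdefault("type", dev.get("type"))  # keep first reported type
    return devices

reports = [
    [{"wwn": "50:05:07:68", "type": "storage", "agent": "hostA"}],
    [{"wwn": "50:05:07:68", "type": "storage", "agent": "hostB"},
     {"wwn": "10:00:00:c9", "type": "switch", "agent": "hostB"}],
]
merged = correlate(reports)
print(len(merged))  # → 2 unique devices from 3 raw records
```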
1.2.6 Why you might use both inband and outband discovery
Both of these methods have a valid role in SAN management. Both are being actively developed, and offer different technology benefits. One practical benefit of using both methods is that, with 2 discovery methods, should one network or the other become unavailable for some reason, the manager can always fall back on the alternate monitoring method. Multi-protocol management using both inband and outband methods is expected to be the most common implementation of SAN management capabilities. Table 1-1 shows the different capabilities for inband and outband management methods.
Table 1-1 Differences in discovery capability (comparing inband, over the fibre network, and outband, over the IP network, support for: topology monitoring, SAN identification, Element Manager launch, unit-level events, zone discovery, end point identification, LUN identification, device status, node and link level events, end point port statistics, and logical device-centric and host-centric views)
One advantage of inband discovery is that inband compliant devices can discover and report errors for adjoining devices. This capability has other associated benefits: Agents can use this method to discover and manage the physical and logical connections from the switch to the fibre-attached disk, and to discover and manage fibre-attached hosts through contact with their HBAs. One advantage of outband discovery is that, in the event that a FC path is down, the management server can still receive errors over the IP path. Another advantage of outband discovery via SNMP is that it is not affected by zoning. Currently, zoning limits inband requests from management agents to discovering only those end-points within the zone. (ANSI FC-GS-4 compliance should remove this limitation for inband management.)
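The practical benefit noted in 1.2.6, falling back to the alternate method when one path is down, reduces to a simple rule. In this sketch the probe functions are hypothetical stand-ins for the inband and outband discovery calls:

```python
# Illustrative fallback: try the inband (Fibre Channel) path first, then
# the outband (IP/SNMP) path if the first is unavailable.
def discover(inband_probe, outband_probe):
    for probe in (inband_probe, outband_probe):
        try:
            return probe()
        except ConnectionError:
            continue  # that path is down; try the alternate method
    raise RuntimeError("both inband and outband paths are unavailable")

def broken_fc_path():
    raise ConnectionError("FC path down")

print(discover(broken_fc_path, lambda: "topology-from-snmp"))
# → topology-from-snmp
```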
Inband management services comprise the Name Server and the Management Services, which include the Fabric Configuration Server, the Fabric Zone Server, and the Unzoned Name Server, all reached through an inband query interface.
Figure 1-11 Inband management services
In conjunction with the Name Server, Management Services allow management applications to determine the configuration of the entire SAN fabric.
Name server
This provides registry and name services for hosts and devices on the fabric network. This is the basis for soft, or World Wide Name (WWN) zoning. The list of devices is segregated by zone. When a host logs into the SAN, the Name Server tells it which devices it can see and access over the network. Management agents using only the Name Server are limited to device discovery and queries within the same zone as the management agent.
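Soft (WWN) zoning as enforced by the Name Server can be sketched as follows. The zone table and query function are illustrative, not a real fabric interface:

```python
# Sketch of soft (WWN) zoning: the Name Server returns only the devices
# that share a zone with the requesting port.
zones = {
    "zone_red":  {"10:00:00:01", "50:00:00:aa"},                  # host1 + storage A
    "zone_blue": {"10:00:00:02", "50:00:00:aa", "50:00:00:bb"},   # host2 + both arrays
}

def name_server_query(requester_wwn):
    """Return the WWNs visible to the requester (everything in its zones)."""
    visible = set()
    for members in zones.values():
        if requester_wwn in members:
            visible |= members
    visible.discard(requester_wwn)  # a port does not list itself
    return sorted(visible)

print(name_server_query("10:00:00:01"))  # → ['50:00:00:aa']
```

A management agent relying only on this query sees nothing outside its own zones, which is exactly the discovery limitation described above.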
Storage Management Initiative Specification (SMIS):
- Enhancements to inband management
- Enhancements to outband management
- SAN management applications
Figure 1-12 The future of standards in SAN management
Today, several different standards exist for discovering management information and for managing devices. Each standard made sense at the time it was adopted. But the industry has learned a lot, and is now attempting to develop a single management model, the Common Information Model (CIM), for managing hosts, storage subsystems, and storage networking devices. CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as Fibre Channel switches and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces. SNIA regards CIM-based management as the future for multi-protocol SAN management. In 1999, SNIA demonstrated a prototype common Enterprise Storage Resource Manager (ESRM) using WBEM and CIM technology from a number of different vendors (including IBM, Sun, Microsoft, and HDS). This prototype demonstrated management of different storage subsystems (EMC, IBM, StorageTek, Compaq, HDS, and Sun) from a single common management platform. In 2002, IBM, along with other vendors, presented a new piece of technology, code-named Bluefin, to SNIA, which accepted it in August 2002. Bluefin employs CIM and WBEM technology to discover and manage resources in multi-vendor SANs using common interfaces. When implemented in management products, Bluefin will improve the usefulness of SAN and storage management applications and provide for greater management interoperability.
SNIA has since carried the Bluefin work forward as standard management interface technology in the form of an SMI Specification (SMIS). Figure 1-13 illustrates the SMIS architectural vision.
For today's management applications to achieve really comprehensive management of SANs and network storage, the application needs to communicate with the different interfaces of multiple device vendors. Standards compliance varies by individual vendor. In such an environment it is hard to achieve good management with one application, especially with limited development resources. The use of so many different management protocols also slows down the integration of new devices into the management scheme, as each new device has to be individually tested and ratified for support. These factors cause users to prefer individual specialized management tools rather than one centralized solution. The idea behind SMIS is to standardize the management interfaces so that management applications can use them and provide cross-device management. This means that a newly introduced device can be managed immediately, as it will conform to the standards. SMIS is based on the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) standards, and provides new features that extend CIM/WBEM technology. In Figure 1-14 you can see how the SMIS system architecture is related to the SNIA storage model.
Figure 1-13 The SMIS architectural vision (management applications using a standard interface to manage databases, hosts, devices, and storage devices)
SMIS extensions to WBEM are:
- A single management transport: within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.
- A complete, unified, and rigidly specified object model: SMIS defines profiles and recipes within CIM that enable a management client to reliably use a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: as a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMIS provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMIS compliant products, when introduced into a SAN environment, automatically announce their presence and capabilities to other constituents.
- Resource locking: SMIS compliant management applications from multiple vendors can coexist in the same SAN and cooperatively share resources via a lock manager.
The models and protocols in the SMIS implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. The SNIA will also provide interoperability tests that will help vendors verify that their applications and devices conform to the standard.
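As a small illustration of the durable-names idea, the sketch below keeps one long-lived volume identifier stable while its transport-level address changes across reconfigurations. The registry class and address strings are invented for this sketch:

```python
# Sketch of "durable names": a volume keeps a single long-lived identifier
# even as its fabric address changes when the SAN is reconfigured.
class VolumeRegistry:
    def __init__(self):
        self._by_durable_name = {}

    def register(self, durable_name, current_address):
        self._by_durable_name[durable_name] = current_address

    def resolve(self, durable_name):
        return self._by_durable_name[durable_name]

reg = VolumeRegistry()
reg.register("vol-600507680000001", "fabricA/switch1/port4/lun0")
# The SAN is recabled; the durable name still resolves, now to the new address.
reg.register("vol-600507680000001", "fabricB/switch2/port9/lun0")
print(reg.resolve("vol-600507680000001"))  # → fabricB/switch2/port9/lun0
```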
Figure 1-14 SMIS and the SNIA storage model (the application and file/record layers sit above the storage domain, with Bluefin/SMIS management services spanning the model)
CIM/WBEM technology uses a powerful human and machine readable language called the managed object format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications. SMIS object models are extensible, enabling easy addition of new devices and functionality to the model, and allowing vendor-unique extensions for added-value functionality. Figure 1-15 shows the components of the SMIS/CIM/WBEM model.
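A rough Python analogue of what such a compiler might generate is sketched below: a common base class for a device type, plus a vendor-unique subclass that adds value-added properties without breaking clients of the standard model. The class names and the vendor extension are hypothetical:

```python
# Rough analogue of MOF-generated data types: a standard model plus a
# vendor-unique extension that generic clients can safely ignore.
from dataclasses import dataclass

@dataclass
class FCSwitch:                 # common (standard) model
    wwn: str
    port_count: int

@dataclass
class VendorXSwitch(FCSwitch):  # hypothetical vendor extension
    turbo_mode: bool = False    # vendor-unique property

def describe(switch: FCSwitch) -> str:
    # A generic management client relies only on the standard model.
    return f"{switch.wwn}: {switch.port_count} ports"

print(describe(VendorXSwitch(wwn="10:00:00:c9", port_count=16, turbo_mode=True)))
# → 10:00:00:c9: 16 ports
```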
Figure 1-15 Components of the SMIS/CIM/WBEM model (an integration infrastructure based on CIM/WBEM technology: platform independent, distributed, object oriented, with automated discovery, security, and locking; the SMIS interface maps the object model, including vendor-unique features, onto device types such as tape libraries, switches, and arrays, each described in MOF)
As these standards are still evolving, we cannot expect all devices to support the native CIM interface; because of this, SMIS introduces CIM agents and CIM object managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMIS. An agent serves one device, and an object manager serves a set of devices. This type of operation is called the proxy model and is shown in Figure 1-16.
Figure 1-16 Embedded and proxy models (a CIM agent may be embedded in a device or subsystem, or act as a proxy translating a proprietary interface on behalf of one or more devices)
The CIM Agent or Object Manager translates a proprietary management interface to the CIM interface. An example of a CIM Agent is the IBM CIM agent for the IBM TotalStorage Enterprise Storage Server. When widely adopted, SMIS will streamline the way the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products, and component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage networking technology faster and build larger, more powerful networks. For more information on SMIS/CIM/WBEM, see the SNIA Web site:
http://www.snia.org
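The proxy translation described above can be sketched with two hypothetical classes; this is not the real Enterprise Storage Server CIM agent interface, just the shape of the pattern:

```python
# Sketch of the proxy model: a CIM Agent wraps a proprietary device
# interface and exposes a common (CIM-like) one. Both classes are invented.
class ProprietaryArray:
    def vendor_specific_lun_list(self):
        return ["LUN_00", "LUN_01"]

class CIMAgentProxy:
    """Translates a proprietary interface into a common (CIM-like) one."""
    def __init__(self, device):
        self._device = device

    def enumerate_instances(self, class_name):
        if class_name == "CIM_StorageVolume":
            return self._device.vendor_specific_lun_list()
        raise KeyError(class_name)

agent = CIMAgentProxy(ProprietaryArray())
print(agent.enumerate_instances("CIM_StorageVolume"))  # → ['LUN_00', 'LUN_01']
```

A management application talks only to `enumerate_instances`, so swapping in a different vendor's array changes the wrapped class, not the client.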
A further development expected with FC-GS-4 is the ability to query the fabric's management services to build a fabric configuration. This reduces the need for management agents on all hosts, and allows for managing end-points. The role of the host-based agent is still important: Agents are still required to provide logical device-centric and host-centric views of host-to-device connectivity. Another future expected with the FC-GS-4 standard is a common zone control mechanism that allows setting and managing zones across multiple switch vendors. This will improve security and administrator productivity. The proposed FC-GS-4 standard has a provision for querying end-points, attributes, and statistics via Extended Link Service (ELS) commands, including the ability to retrieve performance and error counters. This information can be used to identify ports with high numbers of transmit or receive errors, and to initiate fault identification processes. Access to performance counters allows analysis of traffic patterns, indications of bottlenecks, and capacity planning for SAN networks. Today with IBM Tivoli SAN Manager, this functionality can already be provided using NetView reporting capabilities. See Chapter 6, "NetView Data Collection, reporting, and SmartSets" on page 207 for more information.
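A first-cut analysis of such per-port counters might flag ports whose error rate is unusually high, as a starting point for fault isolation. The counter names and the threshold below are assumptions for this sketch:

```python
# Illustrative analysis of per-port counters: flag ports whose error rate
# (errors per frame) exceeds a threshold.
def suspect_ports(port_stats, max_error_rate=0.001):
    suspects = []
    for port, stats in port_stats.items():
        frames = stats["tx_frames"] + stats["rx_frames"]
        if frames and stats["errors"] / frames > max_error_rate:
            suspects.append(port)
    return sorted(suspects)

stats = {
    "switch1/port3": {"tx_frames": 50_000, "rx_frames": 48_000, "errors": 500},
    "switch1/port4": {"tx_frames": 60_000, "rx_frames": 59_000, "errors": 2},
}
print(suspect_ports(stats))  # → ['switch1/port3']
```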
1.2.10 Summary
Here we summarize the main considerations in SAN management (see Figure 1-17).
Business transactions are growing. More companies are implementing SANs, and SANs are getting bigger. Traditional manual methods of managing storage no longer work; new tools are needed to manage storage. The new tools have to be based on standards, and standards are continually evolving. The new tools will reduce the costs of discovering and presenting topology, and of continuous real-time monitoring and fault identification. They will help reduce the costs of managing storage and keep storage available "all the time" for revenue-generating activities.
Figure 1-17 SAN management summary
In the next chapter we introduce Tivoli's SAN management application, IBM Tivoli SAN Manager. The chapter presents an overview of its architecture, components, and usage. In working on this redbook, the team built a lab environment to test certain configurations. We will present this architecture, identify the configurations and functions we tested, and summarize our findings. Subsequent chapters go into detailed explanations of deployment considerations, availability issues, installation, setup, operations, and so on.
Chapter 2.
2.1.2 Error Detection and Fault Isolation (ED/FI - SAN Error Predictor)
Error Detection/Fault Isolation (ED/FI - SAN Error Predictor) is a new feature that performs problem determination on Fibre Channel optical links. ED/FI performs predictive failure analysis and fault isolation, allowing users to identify components that may be failing and take appropriate action.
There are two additional components (which are provided by the customer):
- IBM Tivoli Enterprise Console (TEC) is used to receive Tivoli SAN Manager generated events. Once forwarded to TEC, these can be consolidated with events from other applications and acted on according to enterprise policy.
- IBM Tivoli Enterprise Data Warehouse (TEDW) is used to collect and analyze data gathered by IBM Tivoli SAN Manager.
These components are shown in Figure 2-2.
The Tivoli SAN Manager Web site, which includes the most up-to-date list of supported manager and agent operating systems, fabric components, and HBAs (Host Bus Adapters), is at:
http://www-3.ibm.com/software/tivoli/products/storage-san-mgr/
Remote Console
One or more Remote Consoles can be installed to provide a GUI for Tivoli SAN Manager (the Server system automatically includes a console display). Remote Consoles must be Windows 2000 or Windows XP systems with the following components:
- NetView: presents the information graphically
- Remote Console code: allows an administrator to monitor IBM Tivoli SAN Manager from one or more remote locations
Always check here first during planning to see if there are any special considerations for your environment.
- Discover SAN components and devices
- Display a topology map of the SAN in physical and logical views
- Perform error detection and fault isolation (ED/FI - SAN Error Predictor)
- Provide real-time and historical reports (through NetView)
- Discover iSCSI devices
- Launch vendor-provided applications to manage components
These functions are distributed across the Manager and the Agent as shown in Figure 2-4.
Figure 2-5 IBM Tivoli SAN Manager inband and outband discovery paths
1. The Agent sends commands through its Host Bus Adapters (HBAs) and the Fibre Channel network to gather information about the switches.
2. The switch returns the information through the Fibre Channel network and the HBA to the Agent.
3. The Agent queries the endpoint devices using RNID and SCSI protocols.
4. The Agent returns the information to the Manager over the IP network.
5. The Manager then responds to the new information by updating the database and redrawing the topology map if necessary.
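The discovery round-trip just described can be sketched as a simple message flow. All class and method names below are invented for illustration; the real agent uses the HBA API and RNID/SCSI commands, not Python objects.

```python
class Switch:
    """Stands in for a Fibre Channel switch with a name server database."""
    def __init__(self, attached):
        self.attached = attached              # devices registered in the name server
    def name_server_query(self):
        return list(self.attached)

class Agent:
    """Stands in for the inband Agent on a managed host."""
    def __init__(self, switch):
        self.switch = switch
    def scan(self):
        devices = self.switch.name_server_query()   # inband query via the HBA
        return {"devices": devices}                 # result sent to the Manager over IP

class Manager:
    """Stands in for the Manager: stores results and decides whether to redraw."""
    def __init__(self):
        self.topology = {}
    def receive(self, scan_result):
        changed = scan_result["devices"] != self.topology.get("devices")
        self.topology = scan_result                 # update the repository
        return changed                              # redraw the map only if needed

manager = Manager()
agent = Agent(Switch(["hostA", "hostB", "disk1"]))
print(manager.receive(agent.scan()))  # True: first scan always updates the map
print(manager.receive(agent.scan()))  # False: nothing changed, no redraw
```

The point of the sketch is the last step: the Manager only redraws the topology map when the scan result actually differs from what is stored.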
iSCSI Discovery
Internet SCSI (iSCSI) is an Internet Protocol (IP)-based storage networking standard for linking data storage, developed by the Internet Engineering Task Force (IETF). iSCSI can be used to transmit data over LANs and WANs.
How many Agents should the administrator load? The answer depends on what you want to accomplish. Four different levels of monitoring are possible, as summarized in Figure 2-6; they are discussed in detail in 3.8, Deployment scenarios on page 76. We use the terms inband and outband monitoring as defined in 2.3.1, Discover SAN components and devices on page 32.
Outband only
Inband monitoring
In this scenario, at least some hosts have IBM Tivoli SAN Manager Agents installed. How many you load depends on platform support, the functionality required, and performance implications. More information is given in 5.7.9, Well placed agent strategy on page 202. For a complete topology display, you should load an Agent on at least one host per zone (or two hosts for redundancy). Use this approach (the well-placed Agent) if you want to manage your switches and to know the name and identity of your RNID-capable hosts. Storage-related information (including logical views) will be displayed only for the hosts with Agents installed. If you need the logical or storage-centric views for all eligible hosts (with platform support), then Agents should be installed on all of these. Use this approach where platform support for Agents is provided AND you want to discover and display storage-related information for as many hosts as possible.
- Prevent faults in the SAN infrastructure through reporting and proactive maintenance.
- Identify and resolve problems in the storage infrastructure quickly when a problem occurs.
- Provide fault isolation of SAN links.
IBM Tivoli SAN Manager achieves these purposes by providing the following functions, as outlined in Figure 2-3 on page 31 and Figure 2-4 on page 32:
- Discover SAN components and devices.
- Display a topology map of the various fabrics and SANs, giving both physical and logical views.
- Highlight faults.
- Provide report and monitoring capability for SNMP-capable devices.
- Launch vendor-provided applications to manage individual components.
- Display ED/FI adornments on the topology map for fault isolation and problem resolution.
- Provide reporting into Tivoli Enterprise Data Warehouse.
We will give a brief overview of these IBM Tivoli SAN Manager functions, to illustrate how they achieve the business purpose of the tool. Chapter 5, Topology management on page 149, gives a more detailed exploration of the product capabilities.
The SAN icon on the right (highlighted) takes you to the physical topology displays, while the other two icons (Device Centric View and Host Centric View) provide access to the logical topology displays.
We reached this display by drilling down from the map shown in Figure 2-10. In this case, one SAN switch is shown with the hosts and devices connected to it. All icons are colored green, indicating they are active and available. Similarly, the connections are black. NetView uses different colors for devices and connections to indicate their status, as explained in 5.1.8, Object status on page 155.
If something happens on the SAN (for example, a port on the switch fails), then the topology map will be automatically updated to reflect that event, as shown in Figure 2-12. In this case an event is triggered indicating that the port has failed; however, the Server can still communicate with the host Agent attached to that port over the IP network. Therefore the connection line between the switch and the host turns red, while the host system remains green.
Zone view
Tivoli SAN Manager can also display switch zones, where supported by the switch API. Figure 2-13 shows two zones configured, FASTT and TSM.
If you click an individual zone, the members of that zone will be displayed. This is shown in Figure 2-14. More information on the Zone View for Tivoli SAN Manager is given in Zone view on page 165.
You can drill down to individual devices using the icon display, or display all the information in the Explorer view. This is usually a more convenient way to display this information, as it is more complete. If you select the Explorer icon, you will see the map shown in Figure 2-16. The LUNs for both storage systems are displayed in the left-hand panel. For each LUN, you can drill down to find out which host and operating system has been assigned that LUN. In this example, disk system IT14859668 has five LUNs, and each LUN is associated with one or two named hosts. For example, the first LUN is associated with the hosts SENEGAL and DIOMEDE, running Windows 2000. You can drill down one step further from the operating system to display the filesystem installed on the LUN. There is one LUN discovered in the other disk system, which is used for the filesystem /mssfs on the AIX system CRETE.
You can drill down on the Host filesystem entries to also show the logical volume (and the LUN, if fibre-attached) associated with the filesystem, as shown in Figure 2-18.
Summary display
You can also see a summary or navigation display, which holds a history of all the maps you have navigated in IBM Tivoli SAN Manager. In Figure 2-19, we have opened all three views of IBM Tivoli SAN Manager, and therefore can see a very comprehensive display: we drilled down a Device Centric View, drilled down a Host Centric View, navigated the physical topology, and then opened the Navigation Tree.
Figure 2-19 shows:
1. SAN View (third row, left): Topology View and Zone View
2. Device Centric View (third row, middle)
3. Host Centric View (third row, right)
Object Properties
If you click any device to select it (from any map), then right-click and select Object Properties from the pop-up menu, the specific properties of the object are displayed. In Figure 2-20, we selected a switch and displayed the Properties window, which has seven different tabs. The Events tab is shown, listing events which have been received for this switch.
When the switch is fixed, it will send an event back to IBM Tivoli SAN Manager as it powers up, which will then re-query the switch (running both the topology scanner and the attribute scanner). The topology will be refreshed to reflect the switch being back online.
With NetView you have a very flexible capability to build your own reports according to your specific needs. The reports we are interested in for IBM Tivoli SAN Manager are reports against the objects in the MIB provided by the switch vendor. In our lab we used an IBM 2109 16-port switch, so we loaded the Brocade MIB for the Brocade Silkworm 2800. The data elements in the MIB can report on status (device working, not working) and performance (X frames were transmitted over this switch port in Y seconds).
Historical reporting
With NetView you can display historical reports based on data collected. Figure 2-22 shows a report of data collected over 8 ports at a two-minute interval. You can set up the data collection to look for thresholds on various MIB values, and send a trap when defined values are reached.
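The threshold idea behind this data collection can be sketched in a few lines: poll a counter at a fixed interval, keep the history, and flag the samples that crossed a configured threshold (where NetView would send an SNMP trap). The counter values and threshold below are made up for illustration.

```python
def check_thresholds(samples, threshold):
    """Return (interval_index, value) pairs for samples at or above the threshold."""
    return [(i, v) for i, v in enumerate(samples) if v >= threshold]

# frames transmitted per two-minute collection interval on one switch port
history = [1200, 1350, 9800, 1100]
alerts = check_thresholds(history, threshold=5000)
print(alerts)  # [(2, 9800)]: the third interval would trigger a trap
```

A real collector would also rearm the threshold (trigger once per crossing, not per sample) to avoid flooding the event console.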
Combining the MIB objects with the canned and customized reporting from NetView provides the storage administrator with the tools needed to help keep the SAN running all the time.
Real-time reporting
NetView can also track MIB values in real-time. Figure 2-23 shows real-time monitoring of traffic on switch ports. The graph shows the number of frames transmitted from a specific port on a particular switch over a specified time interval. You can set the polling interval to show how often the graph will update.
You can also create graphs from multiple devices using NetView SmartSets.
IBM Tivoli SAN Manager provides three methods for locating such applications:
- Native support: For some devices, IBM Tivoli SAN Manager will automatically discover and launch the device-related tool. SAN Manager has an internal set of rules (in XML format) by which it identifies the devices whose tools it can launch.
- Web interface support: Some devices are not discovered automatically, but have a Web interface. IBM Tivoli SAN Manager can be configured with the URL, so that it can subsequently launch the Web interface.
- Non-Web interface support: Other applications have no Web interface. IBM Tivoli SAN Manager offers the ability to configure the toolbar menu to launch any locally-installed application from the IBM Tivoli SAN Manager console. Note that these applications must be locally installed on the Tivoli SAN Manager Server.
These options are presented in 5.5, Application launch on page 174.
Figure 2-25 shows an example of an ED/FI adornment on the switch ELM17A110. More information on ED/FI is given in Chapter 9, ED/FI - SAN Error Predictor on page 267.
2.5 Summary
In this chapter, we introduced IBM Tivoli SAN Manager, whose primary business purpose is to keep the storage infrastructure running to support revenue-generating activities. IBM Tivoli SAN Manager discovers the SAN infrastructure and monitors the status of all the discovered components. Furthermore, it also discovers iSCSI devices and provides the functionality to detect and report on SAN interconnect failures using ED/FI. Through Tivoli NetView, the administrator can produce reports on faults on components (either individually, or in groups (SmartSets) of components).
Part 2
Part
Design considerations
In Part 2 we discuss the deployment architectures (including Server, Agents, Remote Console, and inband/outband discovery) for IBM Tivoli SAN Manager.
Chapter 3.
Deployment architecture
In this chapter we provide an overview of Fibre Channel standards, Fibre Channel topologies, and IBM Tivoli Storage Area Network Manager (IBM Tivoli SAN Manager), including a component description, as well as Managed Host placement. We cover these topics:
- Fibre Channel standards
- Hardware
- SAN topologies: point-to-point, arbitrated loop, switched
- Management: inband, outband
- Component description and placement
- Deployment considerations: Manager, Agents
- Deployment scenarios
- High availability
3.1 Overview
In this chapter, we start out by describing the standards and interoperability on which IBM Tivoli SAN Manager is built (Figure 3-1).
We discuss the challenges of managing heterogeneous SANs and how IBM Tivoli SAN Manager addresses them. We also discuss the different Fibre Channel topologies and SAN fabric components, followed by deployment scenarios, SAN management as it relates to IBM Tivoli SAN Manager, and some scanner details.
3.2.1 Interoperability
Interoperability means the ways in which the various SAN components interact with each other. Many vendors and other organizations have their own labs to perform interoperability testing to ensure adherence to standards. Before going ahead with any purchase decision for a SAN design, we recommend that you check with the vendor of your SAN components about any testing and certification they have in place. This should be an important input to the decision-making process. Where multiple vendors are involved, this becomes very important: for example, a storage vendor may certify a particular level of HBA firmware, while a server vendor certifies and supports another level. You need to resolve any incompatibilities to avoid ending up with an unsupported configuration.
3.2.2 Standards
The SAN component vendors, especially switch makers, are trying to comply with the standards that will allow them to operate together in the SAN environment. The current standard which makes it possible to have different components in the same fabric is the FC-SW-2 standard from Technical Committee T11. See this Web site for details:
http://www.t11.org
This standard defines FSPF (Fabric Shortest Path First), zoning exchange, and ISL (Inter-Switch Link) communication. Not all vendors may support the entire standard. Future standards (for example, FC-SW-3, currently under development) will also bring functions that allow management information to be exchanged from component to component, giving the option to manage different vendors' components with tools from one vendor. IBM Tivoli SAN Manager also supports iSCSI management. The iSCSI protocol is a proposed industry standard that allows SCSI block I/O protocols to be sent over TCP/IP. See this Web site for additional details:
http://www.ietf.org
We have already mentioned in Chapter 1, Introduction to Storage Area Network management on page 3, that SAN vendors are trying to establish support for the standards which will give them the opportunity to work together in the same SAN fabric. But this is just one view of heterogeneous support; the other view is from the platforms which will participate in the SAN as users of its resources. So, when deploying Tivoli SAN Manager, it is important to check that the SAN components you are using are certified and tested with it. This also means that you need to verify which levels of operating systems, firmware, drivers, and vendor models are supported by Tivoli SAN Manager. We discuss this later in 3.7, Deployment considerations on page 70.
Figure 3-2 Hardware overview (SAN components: HBAs, GBICs, cables, connectors)
We will start off by covering items of hardware that are typically found in a SAN. The purpose of a SAN is to interconnect hosts/servers and storage. This interconnection is made possible by the components (and their subcomponents) that make up the SAN itself.
Typical HBAs
3.3.2 Cabling
There are a number of different types of cable that can be used when designing a SAN. The type of cable and the route it will take both need consideration. The following section details various types of cable and issues related to the cable route.
Distance
The Fibre Channel cabling environment has many similarities to telecommunications or typical LAN/WAN environments. Both allow extended distances through the use of extenders or technologies such as DWDM (Dense Wavelength Division Multiplexing). Like the LAN/WAN environment, Fibre Channel offers increased flexibility and adaptability in the placement of the electronic network components, which is a significant improvement over previous data center storage solutions, such as SCSI.
Shortwave or longwave
Every data communications fiber falls into one of two categories:
- Single-mode
- Multi-mode
In most cases, it is impossible to visually distinguish between single-mode and multi-mode fiber (unless the manufacturer follows the color coding scheme specified by the Fibre Channel physical layer working subcommittee: orange for multi-mode and yellow for single-mode), since there may not be a difference in outward appearance, only in core size. Both fiber types act as a transmission medium for light, but they operate in different ways, have different characteristics, and serve different applications.

Single-mode (SM) fiber allows only one pathway, or mode, of light to travel within the fiber. The core size is typically 8.3 µm. Single-mode fibers are used in applications where low signal loss and high data rates are required, such as on long spans (longwave) between two system or network devices, where repeater/amplifier spacing needs to be maximized.

Multi-mode (MM) fiber allows more than one mode of light. Common MM core sizes are 50 µm and 62.5 µm. Multi-mode fiber is better suited for shorter distance applications. Where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable. In such a case, MM fiber is more economical because it can be used with inexpensive connectors and laser devices, thereby reducing the total system cost. This makes multi-mode fiber the ideal choice for short distances (shortwave) under 500 m from transmitter to receiver (or the reverse).
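The guidance above reduces to a simple rule of thumb, sketched here as a tiny helper: multi-mode for short (shortwave) links, single-mode for long spans. The 500 m cutoff follows the text; the real limit depends on link speed and optics, so treat this as an illustration only.

```python
def recommend_fiber(distance_m):
    """Rough fiber recommendation by link distance in meters (illustrative only)."""
    if distance_m < 500:
        return "multi-mode (shortwave)"   # cheaper optics, fine for short runs
    return "single-mode (longwave)"       # low loss, needed for long spans

print(recommend_fiber(150))     # multi-mode (shortwave)
print(recommend_fiber(10_000))  # single-mode (longwave)
```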
Core
The core is the central region of an optical fiber through which light is transmitted. In general, the telecommunications industry uses sizes from 8.3 micrometers (µm) to 62.5 µm. The standard telecommunications core sizes in use today are 8.3 µm (single-mode), 50 µm (multi-mode), and 62.5 µm (multi-mode).
Cladding
The diameter of the cladding surrounding each of these cores is 125 µm. Core sizes of 85 µm and 100 µm were used in early applications, but are not typically used today. The core and cladding are manufactured together as a single piece of silica glass with slightly different compositions, and cannot be separated from one another.
Coating
The third section of an optical fiber is the outer protective coating. This coating is typically an ultraviolet (UV) light-cured acrylate applied during the manufacturing process to provide physical and environmental protection for the fiber. During the installation process, this coating is stripped away from the cladding to allow proper termination to an optical transmission system. The coating size can vary, but the standard sizes are 250 µm and 900 µm. Most enterprises today use 62.5 micron core fiber due to its high proliferation in Local Area Networks (LANs). However, the Fibre Channel SAN standard is based on 50 micron core fiber, which is required to achieve the distances specified in the ANSI Fibre Channel standards. Customers should therefore not use 62.5 micron fiber in SAN applications; it is wise to check with any SAN component vendor to see if 62.5 micron is supported. Figure 3-16 shows the various cables.
Multi-mode fiber: cladding (125 µm), core (50 µm or 62.5 µm)
Copper
The Fibre Channel standards also allow for cables made of copper. There are different standards available:
- 75 Ω video coax
- 75 Ω mini coax
- 150 Ω shielded twisted pair
The maximum supported speeds and distances using copper are lower than when using fiber optics.
Plenum rating
A term that is sometimes used when describing cabling is whether a particular cable is plenum rated or not. A plenum is an air filled duct, usually forming part of an air conditioning or venting system. If a cable is to be laid in a plenum, there are certain specifications which need to be met. In the event of a fire, some burning cables emit poisonous gases. If the cable is in a room, then there could be a danger to people in that room. If on the other hand, the cable is in a duct which carries air to an entire building, clearly, there is a much higher risk of endangering life.
For this reason, cable manufacturers will specify that their products are either plenum rated or not plenum rated.
Connectors
The particular connectors used to connect a fiber to a component depend upon the receptacle into which they are being plugged, but some generalizations can be made. It is also useful to mention some guidelines for best practices when dealing with connectors or cables. Most, if not all, 2 Gbps devices use Small Form Factor (SFF) or Small Form Factor Pluggable (SFP) technology, and therefore use Lucent Connector (LC) connectors. Most Gigabit Interface Converters (GBICs) (see GBICs and SFPs on page 62) and Gigabit Link Modules (GLMs) use industry standard Subscriber Connector (SC) connectors.
SC connectors
The duplex SC connector is a low-loss, push/pull fitting connector. It is easy to configure and replace. The two fibers each have their own part of the connector. The connector is keyed to ensure correct polarization when connected, that is, transmit to receive and vice versa. See the diagram of an SC connector in Figure 3-6.
LC connectors
The type of connectors which plug into SFF or SFP devices are called LC connectors. Again a duplex version is used so that the transmit and receive are connected in one step. The main advantage that these LC connectors have over the SC connectors is that they are of a smaller form factor and so manufacturers of Fibre Channel components are able to provide more connections in the same amount of space. Figure 3-7 shows an LC connector.
LC connector
A typical GBIC
3.4 Topologies
Fibre Channel provides three distinct interconnection topologies, allowing an enterprise to choose the topology best suited to its requirements (see Figure 3-9):
- Point-to-point
- Arbitrated loop
- Switched fabric
3.4.1 Point-to-point
Point-to-point is the simplest Fibre Channel configuration to build, and the easiest to administer. Figure 3-10 shows a simple point-to-point configuration. If you only want to attach a single Fibre Channel storage device to a server, you could use a point-to-point connection, which would be a Fibre Channel cable running from the Host Bus Adapter (HBA) to the port on the device. Point-to-point connections are most frequently used between servers and storage devices, but may also be used for server-to-server communications.
Server
Figure 3-10 Fibre Channel point-to-point
Disk
It is possible to connect switches together in cascades and meshes using Inter-Switch Links (ISLs). It should be noted that devices from different manufacturers may not inter-operate fully (or even partially), as standards are still being developed and ratified. As well as implementing the switched fabric, the switch also provides a variety of fabric services and features, such as:
- Name services
- Fabric control
- Time services
- Automatic discovery and registration of host and storage devices
- Rerouting of frames, if possible, in the event of a port problem
Features commonly implemented in Fibre Channel switches include:
- Telnet and/or RS-232 interface for management
- HTTP server for Web-based management
- MIB for SNMP monitoring
- Hot-swappable, redundant power supplies and cooling devices
- Online replaceable GBICs/interfaces
- Zoning
- Trunking
- Other protocols in addition to Fibre Channel
3.5.1 DB2
IBM Tivoli SAN Manager uses DB2 as its data repository. DB2 should be installed on the server system before installing IBM Tivoli SAN Manager. The installation process automatically creates the required database and tables in the instance.
Note: The Server system requires an IP connection, but not a Fibre Channel connection (this is optional). Similarly for a Remote Console, and a TEC or SNMP system, since all communication to these systems is sent over TCP/IP. Hosts with Agent code installed require a Fibre Channel attachment (for discovery and monitoring) in addition to the LAN connectivity to the Manager. There will also most likely be additional hosts which are FC attached but do not have the Agent installed. We discuss various deployment options for this in 3.8, Deployment scenarios on page 76.
Component placement
3.6 Management
The elements that make up the SAN infrastructure include intelligent disk subsystems, tape systems, Fibre Channel switches, and hubs. The vendors of these components usually provide proprietary software tools to manage their own individual elements. For instance, a management tool for a hub will provide information regarding its own configuration, status, and ports, but will not support other fabric components such as other hubs, switches, HBAs, and so on. Vendors that sell more than one element often provide a software package that consolidates the management and configuration of all of their elements. Modern enterprises, however, usually purchase storage hardware from a number of different vendors, resulting in a highly heterogeneous SAN. Fabric monitoring and management is an area where a great deal of standards work is being focused. Two management methods are used in Tivoli SAN Manager: inband and outband management.
Topology Scanner
The topology scanner receives a scan request from the Manager. It issues FC Management Server commands (FC-GS-3 standard) to the SAN interconnection devices to get the topology information. The specific FC Management Server commands are:
- Get platform information
- Get interconnect information
The topology scanner queries every device within each zone that it belongs to. When a scan request is issued from the Server to the Agent, the agent queries the nameserver in the Fibre Channel switch. The nameserver then returns identification information on every device in its database; the symbol label on the topology map is derived from the nameserver. With this information the scanner constructs a complete physical topology map which shows all connections, devices, and zone information. The topology scanner does not use a database to store results: the discovered data is translated to XML format and sent back to the IBM Tivoli SAN Manager Server, where it is stored in the DB2 repository.
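The last step above, translating discovered data to XML before sending it to the Manager, can be sketched with the standard library. The element and attribute names here are invented for illustration; the product's actual XML schema is internal.

```python
import xml.etree.ElementTree as ET

def topology_to_xml(connections):
    """Serialize (device, port, switch) tuples as a hypothetical topology document."""
    root = ET.Element("topology")
    for device, port, switch in connections:
        conn = ET.SubElement(root, "connection")
        conn.set("device", device)
        conn.set("port", str(port))
        conn.set("switch", switch)
    return ET.tostring(root, encoding="unicode")

xml_doc = topology_to_xml([("hostA", 3, "switch1"), ("disk1", 7, "switch1")])
print(xml_doc)
```

On the receiving side, the Manager would parse such a document and reconcile it with the repository before redrawing the map.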
Attribute Scanner
The attribute scanner gets the request from the IBM Tivoli SAN Manager Server to poll the SAN. It uses inband discovery (specifically the SNI HBA API) to discover endpoint devices, issuing Fibre Channel (FC) commands to the endpoint devices to gather attribute information. Typically, the commands used are:
- SCSI Inquiry
- SCSI Read Capacity
When the attribute scanner runs on a system, it first queries the nameserver on the Fibre Channel switch to get a list of storage devices in the SAN. The scanner then verifies whether the LUNs are visible to the host by issuing SCSI commands. In most cases the host can see all the LUNs in the SAN, even if they are not assigned to a system (if they are not LUN masked). Since SCSI commands are issued from the Agent, the Agent must have LUNs assigned on the SAN-attached storage device to gather the attribute information.
Note: The attribute and topology scanner executables for Windows can be found in \tivoli\itsanm\agent\bin\w32-ix86\, and for AIX in /tivoli/itsanm/agent/bin/aix.
Figure 3-14 shows the inband scanner process.
Inband scanning: the inband scanner requests flow from the Solaris and AIX Agents through Fabric A and Fabric B to the switches and the storage system.
Outband scanning: the Manager issues the outband scanner requests directly to the switches over the IP network.
IBM Tivoli SAN Manager runs on Windows 2000 Server or Advanced Server with Service Pack 3, on a system with at least a Pentium III 600 MHz class processor, 1 GB of RAM, and 1 GB of free disk space. It is also supported on a pSeries or RS/6000 running AIX 5.1, with a minimum 375 MHz processor, 200 MB of free disk space, and 1 GB of RAM.
It is also recommended that the system be dedicated to running Tivoli SAN Manager, rather than running other key enterprise applications. The Server system also requires a TCP/IP network connection and addressability to the hosts and devices on the SAN. It does not need to be attached to the SAN via Fibre Channel.
Note: Based on actual customer deployments, optimal server sizing for the Tivoli SAN Manager is a 2-way (dual) Pentium III class processor with a speed of 800 MHz (or equivalent pSeries), 2 GB of RAM, and 1 GB of free disk space.
At this time, Tivoli SAN Manager requires a single-machine installation, where all the components (DB2, WebSphere Express, NetView, and the manager code itself) are installed and running on the same system. Figure 3-16 shows the components of the Tivoli SAN Manager Server.
Tivoli SAN Manager requires that the Manager use a fully qualified static TCP/IP host name, so you will need to make DNS services accessible to the Tivoli SAN Manager. Agents, however, can now utilize dynamic IP addresses (DHCP) instead of static IP addresses. Other pre-installation checks are given in 4.2, IBM Tivoli SAN Manager Windows Server installation on page 96. Tivoli SAN Manager does not at this time provide built-in cluster support for high availability. If you require a high availability solution for Tivoli SAN Manager without clustering software, we recommend configuring a standby server with an identical network configuration and replicating the Tivoli SAN Manager database to that standby server on a regular basis.
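The standby approach described above can be sketched as a scheduled job that copies the most recent database backup image to the standby server's staging area. The paths and the `.bak` naming convention are placeholders; a real setup would produce the image with the DB2 BACKUP DATABASE command and move it with a secure transfer mechanism.

```python
import shutil
from pathlib import Path

def replicate_latest_backup(backup_dir, standby_dir):
    """Copy the newest backup image to the standby staging area.

    Returns the path of the copied file, or None if no backups exist yet.
    """
    backups = sorted(Path(backup_dir).glob("*.bak"))  # name-sorted, newest last
    if not backups:
        return None
    latest = backups[-1]
    target = Path(standby_dir) / latest.name
    shutil.copy2(latest, target)  # preserve timestamps for later verification
    return target
```

Run from a scheduler (for example, every few hours), this keeps the standby close enough to current that it can take over after a database restore.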
The Tivoli SAN Manager Managed Agent is also known as a Managed Host. Consideration needs to be given to what functionality is desired from Tivoli SAN Manager. The Agent can be deployed to assist in network management, and is also used to collect logical information about filesystems and perform error detection on SAN interconnect links. Refer to the IBM Tivoli SAN Manager Installation Guide, SC23-4697, for prerequisite checks that should be performed on the target managed host prior to any code installation. Please also refer to 4.4, IBM Tivoli SAN Manager Agent installation on page 112.
Agents Everywhere
Monitor the SAN as described in Basic Fabric Management, plus deploy the inband Agents to automatically identify as many endpoints as possible. This is useful in a dynamic SAN, where endpoints change often. This will provide the greatest level of ED/FI.
Important: Consideration must be given to the number of Agents and how the initial discovery is performed. Depending on the number of endpoints to discover, IBM Tivoli SAN Manager may take a long time to complete the discovery process. The full discovery options are shown below:
- Never run the full discovery (use topology discovery only).
- Only run the full discovery when the user selects Poll Now.
- Only run the full discovery during a periodic or scheduled discovery.
- Run the full discovery when the user selects Poll Now or during a periodic or scheduled discovery.

See IBM Tivoli SAN Manager Planning and Installation Guide, SC23-4697.
\Tivoli\itsanm\manager\log\msgITSANM.log. Therefore an HBA should be installed in a host before installing the Agent.
Example 3-2 Host system with no HBAs
2003.06.04 14:32:16.922 BTAHM2528I Agent wisla.almaden.ibm.com:9570 has been marked active.
2003.06.04 14:32:52.906 BTAQE1144E An error occurred attempting to run the Topology scanner on the IBM Tivoli Storage Area Network Manager managed host wisla.almaden.ibm.com:9570. com.tivoli.sanmgmt.tsanm.queryengine.InbandScanHandler run
Table 3-1 gives information on the capabilities of IBM Tivoli SAN Manager depending on the RNID capability of the Agents and other hosts.
Table 3-1 SAN Manager using vendor HBAs and switches
Level of information collected
Good
Better
Tivoli SAN Manager can do both outband and inband management. Other inband agents will not be able to obtain RNID information from this HBA. In addition to the Good level of information, you will see: Managed hosts with agents installed are not shown as Unknown entities in the topology view. Some storage devices will no longer be shown as Unknown entities in the topology view.
Note: You should plan what information is required to be displayed when deciding where to deploy the Server and inband and outband Agents.
Outband requirements
- Topology map of the SAN
- State changes of any of the fabric connections or devices
- Network management only
- Dedicated console for operations staff
Figure 3-19 Sample outband requirements
Advantages
The major advantage of deploying outband-only agents is quick configuration and non-intrusive deployment, since no Tivoli SAN Manager Agent code is required on any SAN host. After installing Tivoli SAN Manager, the discovery is completed in a short amount of time by adding the IP addresses or hostnames of the Fibre Channel interconnect devices, typically switches or directors, to Tivoli SAN Manager. There are limited Event Detection and Fault Isolation (ED/FI) capabilities: ED/FI will adorn the Fibre Channel switch with the events detected.
Disadvantages
There are no endpoint identifications, LUN associations, or endpoint properties, and only limited attribute information is shown on the topology map. Once the discovery is complete, the default symbols for SAN attached devices (other than the switches) are displayed as Unknown symbols, along with their World-Wide Name (WWN). This is caused by the limited attribute information retrieved by the Advanced Topology Scanner. See Figure 5-67 on page 197 for an example of using outband agents only. Once the discovery is complete, we can change the symbol properties of the Unknown hosts to their actual symbol type and name. Figure 5-32 on page 173 shows how to change the symbol type and name; for detailed information, refer to 5.4.1, SAN Properties on page 170. Figure 3-20 shows the outband agents defined in the Configure Agents panel of IBM Tivoli SAN Manager. Furthermore, no switch ports or host systems will be adorned by ED/FI. No Device or Host Centric views will be available, since these depend on information gathered by the inband Agents.
Setup procedure
With the above requirements, and noting the limitations, we can set up this scenario.
1. We first recommend verifying that the SAN is fully operational, and checking all the SAN attached devices for compatibility. The following URL provides compatibility requirements.
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html
2. Once the SAN attached components have been analyzed, review the Tivoli SAN Manager prerequisite checklist. Please refer to 4.2.2, Preinstallation tasks on page 97.
Important: In this configuration, Tivoli SAN Manager relies on SNMP traps and polling intervals to determine when a status change of a Fibre Channel switch or director has occurred. We recommend that the SNMP trap destination tables of these devices be configured to point to the Tivoli SAN Manager's IP address to allow for event-driven management.
3. Enable the trap destination table on Fibre Channel switches or directors to forward SNMP traps to Tivoli SAN Manager. We demonstrate below the process for enabling the trap forwarding definitions on an IBM 2109 Fibre Channel switch.
a. Log into the switch as administrator and issue the agtcfgshow command. This command displays the SNMP community names and trap destination configuration of the FC switch. See Example 3-3 below.
Example 3-3 agtcfgshow output
itsosw3:admin> agtcfgshow
Current SNMP Agent Configuration

Customizable MIB-II system variables:
        sysDescr = agtcfgset
        sysLocation = E3-250
        sysContact = Charlotte Brooks
        swEventTrapLevel = 0
        authTrapsEnabled = true

SNMPv1 community and trap recipient configuration:
        Community 1: Secret C0de (rw)
                No trap recipient configured yet
        Community 2: OrigEquipMfr (rw)
                No trap recipient configured yet
        Community 3: private (rw)
                Trap recipient: 9.1.38.187
        Community 4: public (ro)
                Trap recipient: 9.1.38.187
        Community 5: common (ro)
                No trap recipient configured yet
        Community 6: FibreChannel (ro)
                No trap recipient configured yet

SNMP access list configuration:
        Entry 0: No access host configured
        Entry 1: No access host configured
        Entry 2: No access host configured
        Entry 3: No access host configured
        Entry 4: No access host configured
        Entry 5: No access host configured
itsosw3:admin>
You can see above that Community 3 and Community 4 have already been assigned the IP address of an SNMP manager; these are the entries that we will modify to point to our Tivoli SAN Manager.
b. Issue the agtcfgset command from the switch prompt to change the Community 3 and Community 4 fields to another IP address. The agtcfgset command is interactive; to leave an entry unchanged, press Enter.
c. We pressed Enter several times until the third and fourth community name fields were reached, then entered the new IP address and pressed Enter. Keep pressing Enter until the message Committing Configuration...done is displayed and the command prompt is returned. See Example 3-4 for the output.
Example 3-4 agtcfgset output
itsosw3:admin> agtcfgset
Customizing MIB-II system variables ...
At each prompt, do one of the followings:
    o <Return> to accept current value,
    o enter the appropriate new value,
    o <Control-D> to skip the rest of configuration, or
    o <Control-C> to cancel any change.
To correct any input mistake:
    <Backspace> erases the previous character,
    <Control-U> erases the whole line,
sysDescr: [ agtcfgset]
sysLocation: [E3-250]
sysContact: [Charlotte Brooks]
swEventTrapLevel: (0..5) [0]
authTrapsEnabled (true, t, false, f): [true]
SNMP community and trap recipient configuration:
Community (rw): [Secret C0de]
Trap Recipient's IP address in dot notation: [0.0.0.0]
Community (rw): [OrigEquipMfr]
Trap Recipient's IP address in dot notation: [0.0.0.0]
Community (rw): [private]
Trap Recipient's IP address in dot notation: [9.1.38.187] 9.1.38.188
Community (ro): [public]
Trap Recipient's IP address in dot notation: [9.1.38.187] 9.1.38.188
Community (ro): [common]
Trap Recipient's IP address in dot notation: [0.0.0.0]
Community (ro): [FibreChannel]
Trap Recipient's IP address in dot notation: [0.0.0.0]
SNMP access list configuration:
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation:
Read/Write? (true, t, false, f): [true]
4. Install Tivoli SAN Manager. Refer to 4.2, IBM Tivoli SAN Manager Windows Server installation on page 96 for more details. When completed, launch NetView from the desktop.
5. Add the outband agents into Tivoli SAN Manager by specifying either the IP address or the hostname of the Fibre Channel switch in the Configure Agents GUI. See Figure 3-20 on page 77.
6. After being added and committed to the database, the SNMP agents are automatically queried by the Advanced Topology Scanner, and the returned data is processed by the Manager to draw the initial SAN topology map. Figure 3-21 shows the outband management topology. Outband agents will continue to be polled at the user-defined polling interval. See 4.6.4, Performing initial poll and setting up the poll interval on page 132.
Attention: The initial discovery of any large SAN using outband discovery may take some time. Once complete, full discovery should not need to be run very often after that. Consideration should be given as to when the initial discovery is performed. We recommend scheduling initial discoveries during slower processing times for the business.
7. Remote consoles, if required, can be installed anytime after the Server has been installed. The remote console contains the same functionality as the Server console. Once the console is installed it performs database queries for its topology updates from the Manager.
Figure 3-21 Outband management: Tivoli SAN Manager sends SNMP queries to the SNMP agents and receives their replies
Inband requirements
- More accurate topology map
- Logical views of storage and host systems
Figure 3-22 Sample inband requirements
Advantages
Additional attribute information is returned when inband agents are used. If there are RNID-enabled HBAs installed on the SAN attached hosts, this allows a more complete discovery of the SAN. Refer to 3.7.4, Tivoli SAN Manager Agent (Managed Host) on page 72 for more details on RNID. With RNID-enabled HBAs running on our host systems, the correct host symbol is used. Compare this to the previous scenario with outband agents, where the hosts were discovered as unknown. We had other SAN attached hosts with RNID-enabled HBA drivers, although without Tivoli SAN Manager Agents; these hosts were also discovered correctly. Inband agents provide logical views of SAN resources: the Host and Device Centric views for hosts with agents installed. The Device Centric View enables you to see all the storage devices and their logical relation to all the managed hosts; this view does not show the switches or other connection devices. The Host Centric View enables you to see all the managed host systems and their logical relation to local and SAN-attached storage devices. ED/FI will provide greater fault isolation capabilities when agents are deployed in the SAN. If errors are detected by ED/FI, adornments will be displayed on the host system running the agents and on the corresponding switch and switch port to which it is connected. See IBM Tivoli SAN Manager Planning and Installation Guide, SC23-4697. Refer to 5.3.2, Device Centric View on page 166 and 5.3.3, Host Centric View on page 167 for details on Host and Device Centric Views.
Disadvantages
Inband discovery is not available for unsupported Agent operating systems; Tivoli SAN Manager supports a limited number of Agent platforms at this time. If your platform is not supported, an outband strategy may be more appropriate. The more Agents that are installed, the more processes will run and the more data will be collected and correlated. This requires processing resources and time. The inband agent runs two scanners to collect attribute and topology information (see Figure 3-14 on page 69). The amount of data returned depends on the size of the SAN fabric.
Chapter 3. Deployment architecture
The Agent must be installed on the hosts, which consumes some CPU, memory, and disk resources. Running many inband agents will require a corresponding amount of time and processing power to complete the initial discovery.
Setup procedure
With the above requirements, and noting the limitations, we can set up this scenario.
1. Verify that the SAN is fully operational. We proceeded by checking all the SAN attached devices for compatibility.
2. Since we are installing Agent code on the SAN attached hosts, check the HBA make and model for compatibility, operating system levels and maintenance, plus the device driver release level and API compatibility. The following URL provides all compatibility requirements.
Important: Tivoli SAN Manager compatibility can be checked at the following URL:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html
3. Once the SAN attached components have been analyzed, review the Tivoli SAN Manager prerequisite checklist for the Manager and SAN attached hosts.
4. Install the Tivoli SAN Manager Server.
5. Install the Tivoli SAN Manager Agent on the selected hosts. The Agents will automatically populate the Configure Agents interface after installation. Figure 3-23 shows the Configure Agents interface after an inband agent has been deployed and contacted the Manager. Refer to 3.5.3, Tivoli SAN Manager Agents on page 66 for more details regarding the Agent installation process.
6. Launch NetView from the desktop.
7. Navigate to SAN -> Configure Agents and note that the Agents appear in the top half of this panel.
8. The Agents will automatically perform inband discovery to create the topology map. Figure 3-24 shows the inband management process.
Figure 3-24 Inband management: Agents communicate with the Tivoli SAN Manager over Ethernet and scan the fabric through their FC connections to the Fibre Channel switch
The remote console deployment strategy is the same as described in the outband example.
This is the recommended approach: install at least one Agent per zone (preferably two for redundancy), and configure all capable switches as outband Agents.
Advantages
By default, Tivoli SAN Manager works with both inband and outband agents. With this combination we are assured of getting the most complete topology picture, with attribute, topology, and advanced scanner data correlated at the Manager to create a full SAN topology. We continue to leverage RNID-enabled drivers on SAN attached hosts for a more complete topology.
- The Host Centric and Device Centric logical views are available in addition to the topology display.
- Zone information can be displayed where it is supported by the switch API.
- It reduces the risk of a single point of failure, as both Fibre Channel and IP links are used.
- Redundant and more complete information will be gathered and used to draw the topology map.
Disadvantage
The inband Agent install remains intrusive to the SAN attached host and there are potential performance implications for discovery if a large number of Agents are deployed.
Setup procedure
With the above requirements, and noting the limitations, we set up this example based on the steps below.
1. Verify that the SAN is fully operational. We proceeded by checking all the SAN attached devices for compatibility.
2. Since we are installing Agent code on the SAN attached hosts, check the HBA make, model, and driver release levels for compatibility, plus operating system levels and maintenance.
3. Once the SAN attached components' compatibility has been confirmed, review the Tivoli SAN Manager prerequisite checklist for the Manager and Agents.
4. Install the Tivoli SAN Manager Server.
5. Install inband agents.
6. Launch NetView from the desktop.
7. Navigate to SAN -> Configure Agents. The top half of the window displays the inband agents that are currently installed, which have been automatically added. Click Add to add outband agents. Figure 3-26 shows the Configure Agents interface with both inband and outband agents deployed. For more details, see 5.7.6, Outband agents only on page 195 and 5.7.7, Inband agents only on page 197.
The Manager will perform another discovery of the SAN. Figure 3-27 shows the Agent deployment and management process.
Figure 3-27 Agent deployment and management (Agents on FC-attached hosts such as AIX report to the Manager over Ethernet; switch data is retrieved via SNMP)
The remote console deployment strategy is the same as described in 3.8.1, Example 1: Outband only on page 76.
Before installing the Manager, we updated the \system32\drivers\etc\HOSTS file to include entries for the Manager and all Agents. We then updated the HOSTS file on each Agent to include entries for the Manager and all other Agents. We then installed the Manager.
HBA, you must run the EUSDSetup program. Contact your HBA manufacturer if you do not have this program. For Windows 2000, Windows NT, Solaris, or Linux, if using QLogic HBAs, specific versions of the QLogic API and Device Driver are required for RNID support. Both API and Driver are packaged as one file. See the QLogic Web site (http://www.qlogic.com) for updates. The required API and Device Driver levels are listed for the different QLogic HBAs at
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html
The agent operating system must be at a level that supports JRE 1.3.1 or later for AIX and Solaris. For AIX 5.1 and 5.2 there are required patches that can be downloaded. See the readme.txt file for Tivoli SAN Manager for details of these.
Make sure all FC MIBs are enabled. On the IBM 2109, all MIBs are disabled by default. You can use the snmpmibcapset command to enable the MIBs while logged on as administrator to an IBM 2109 switch (refer to 6.2.3, Loading MIBs on page 212).
General
- Verify all FC switch SNMP trap destinations point to the Tivoli SAN Manager IP address.
- The Tivoli SAN Manager and all agents must have static IP addresses.
- Your network should use DNS, and the remote DNS server must know the static IP address of each machine.
- Verify forward and reverse lookup is working via DNS. Issue nslookup to confirm the fully qualified host names of the Manager and Managed Host systems.
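The forward and reverse lookup checks above can be scripted. This is a minimal sketch, assuming a Linux or UNIX system with the getent utility, which consults the same resolver order (DNS and the local hosts file) as the operating system; on Windows you would run nslookup instead. The check_lookup function name is ours, not part of the product.

```shell
#!/bin/sh
# Sketch: confirm that forward and reverse name resolution agree for a
# host. The forward lookup maps the name to an IPv4 address; the reverse
# lookup of that address must list the original name again.
check_lookup() {
    host="$1"
    # forward lookup: name -> first IPv4 address
    ip=$(getent ahostsv4 "$host" | awk '{print $1; exit}')
    if [ -z "$ip" ]; then
        echo "no forward lookup for $host"
        return 1
    fi
    # reverse lookup: address -> names; the original name should appear
    if getent hosts "$ip" | grep -qw "$host"; then
        echo "$host <-> $ip OK"
    else
        echo "reverse lookup for $ip does not return $host"
        return 1
    fi
}

check_lookup localhost
```

Running the same check against the Manager and every Managed Host name before installation catches the resolution problems described in this checklist early.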
Services stopped on the standby server: IBM HTTP Administration Server, IBM HTTP Server, IBM WS AdminServer 4.0
Here are the steps we followed:
1. We started with a fully deployed Tivoli SAN Manager Server.
2. We then installed Tivoli SAN Manager on the standby server, using the same system settings as the primary server.
3. We then stopped the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS Admin 4.0 services on the standby server and changed their startup to manual.
4. Backing up the Tivoli SAN Manager database on the primary server is optional. If you do not have customized data (topology symbol types and symbol names) saved, you can omit this step. Otherwise, use the DB2 Control Center to select and back up the ITSANMDB database. See 10.2.2, Setup for backing up IBM Tivoli SAN Manager Server on page 286 for details.
5. We then simulated a failure on the primary server by stopping the Tivoli SAN Manager application on the WebSphere Application Server, and then stopping the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS Admin 4.0 services.
6. We then updated the DNS entry for the primary server, so that the primary server's hostname resolves to the IP address of the standby server. If DNS is not used, the HOSTS files can be updated with these changes instead.
Note: In our testing we used a HOSTS file on the Manager and all the Agents. On each Agent and on the Manager, we modified the HOSTS file so that the primary server's hostname points to the IP address of the standby server, and commented out the standby server's own entry. In Example 3-5, IP address 9.1.38.186 is the address of the standby server and polonium.almaden.ibm.com is the hostname of the primary server. We commented out the entry 9.1.38.186 lead.almaden.ibm.com, since this is the original entry that pointed to the standby server before failover.
Example 3-5 Agent HOSTS file
9.1.38.189 tungsten.almaden.ibm.com tungsten
9.1.38.186 polonium.almaden.ibm.com polonium
9.1.38.192 palau.almaden.ibm.com palau
9.1.38.191 crete.almaden.ibm.com crete
9.1.38.166 senegal.itsrmdom.almaden.ibm.com senegal
9.1.38.165 diomede.itsrmdom.almaden.ibm.com diomede
#9.1.38.186 lead.almaden.ibm.com lead
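The HOSTS edits described in the note above can be scripted with sed. This is a sketch only: the file names hosts.sample and hosts.failover are ours, and 9.1.38.190 is an assumed placeholder for polonium's pre-failover address, which this example does not state. On a real Agent you would point the sed commands at the system HOSTS file rather than a sample.

```shell
#!/bin/sh
# Sketch: repoint the primary server hostname (polonium) at the standby
# server's address (9.1.38.186), then comment out the standby's own
# entry (lead), mirroring the manual edit described in the note above.
STANDBY_IP=9.1.38.186

# sample pre-failover HOSTS content (9.1.38.190 is an assumed address)
cat > hosts.sample <<'EOF'
9.1.38.190 polonium.almaden.ibm.com polonium
9.1.38.186 lead.almaden.ibm.com lead
EOF

sed -e "s/^[0-9.]* *\(polonium\.\)/$STANDBY_IP \1/" \
    -e '/ lead\.almaden\.ibm\.com /s/^/#/' \
    hosts.sample > hosts.failover

# Resulting file:
#   9.1.38.186 polonium.almaden.ibm.com polonium
#   #9.1.38.186 lead.almaden.ibm.com lead
cat hosts.failover
```

The first expression rewrites whatever address precedes the polonium entry; the second comments out the lead entry, matching the post-failover file shown in Example 3-5.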
7. We then used the DB2 Control Center to restore our backed up database. See 10.5.2, ITSANMDB database restore on page 312.
8. We then started the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS Admin 4.0 services on the standby server, and verified that Tivoli SAN Manager was running on the WebSphere Application Server using the WebSphere Administration Console. See 4.2.8, Verifying the installation on page 110.
9. Finally, we restarted the Agents. The failover process is summarized in Figure 3-30.
Figure 3-30 Failover process: (1) primary server fails, (2) stop agents, (3) update DNS, (4) start the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS AdminServer 4.0 services on the standby server, (5) start agents
3.9.2 Summary
In this chapter, we discussed Fibre Channel standards and SAN topologies as they apply to IBM Tivoli SAN Manager. We also introduced inband and outband management as it relates to IBM Tivoli SAN Manager. Finally, we presented various deployment scenarios using IBM Tivoli SAN Manager.
Part 3
Chapter 4. Installation and setup
Figure 4-1 IBM Tivoli SAN Manager supported operating system platforms
Installation
- Static IP required; seven contiguous free ports required
- Fully qualified hostname required
- Install DB2 7.2 and FP8
- Upgrade DB2 JDBC drivers to version 2
- Install the SNMP service (if not installed)
- Install the Server code:
  - Embedded install of the IBM WebSphere Application Server V5.0
  - Tivoli NetView
  - Tivoli SAN Manager Server
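The "seven contiguous free ports" item in the checklist above can be verified before starting the installer. This is a minimal bash sketch; the check_ports function name is ours, and because it uses bash's built-in /dev/tcp redirection it only detects listeners reachable on the loopback interface, so treat it as a quick sanity check rather than a guarantee.

```shell
#!/bin/bash
# Sketch: verify that seven consecutive TCP ports starting at the chosen
# base port (9550 by default) appear free. A connect attempt through
# /dev/tcp fails when nothing is listening, which means the port is free.
check_ports() {
    local base=${1:-9550} p free=0
    for ((p = base; p < base + 7; p++)); do
        if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
            echo "port $p is already in use"
        else
            free=$((free + 1))
        fi
    done
    if [ "$free" -eq 7 ]; then
        echo "ports $base-$((base + 6)) are free"
    else
        echo "choose a different base port"
    fi
}

check_ports 9550
```

If the default base port conflicts with another application, re-run the check with a different starting port and supply that value during installation instead.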
If you do not have a full computer name, including domain name, change it by clicking Properties and supplying the fully qualified domain name (FQDN) as shown in Figure 4-4.
After this change, you need to reboot the system for it to become effective.
rather than the fully qualified host name. This can be changed in the hosts tables on the DNS server and on the local computer. For a Windows 2000 system, edit the HOSTS file in %SystemRoot%\system32\drivers\etc. The %SystemRoot% is the installation directory for Windows 2000, usually WINNT. The long name should appear before the short name as in Example 4-1.
Example 4-1 POLONIUM HOSTS file
9.1.38.167 lochness.almaden.ibm.com lochness
9.1.38.166 senegal.almaden.ibm.com senegal
9.1.38.150 bonnie.almaden.ibm.com bonnie
127.0.0.1 localhost
Attention: Host names are case-sensitive. The case used for the computer name in Network Identification (Figure 4-3) must be the same as that used in the HOSTS file.
Tip: The database can be also used for other data, but we recommend a dedicated database for IBM Tivoli SAN Manager to avoid any potential performance impact.
If you are installing on a system which already has DB2 Enterprise Edition Version 7.2 installed, you need to install fix pack 8 to meet the requirements. Before installing DB2, you should create a userid with administrative rights, and install DB2 with this userid. In our example we created the userid db2admin. If this user does not already exist, it will be created during DB2 installation.
Important: Installation can only be performed with a userid with local administrative rights.
When installing DB2, you only need to select the DB2 Enterprise Edition component. You can then accept all defaults; the only thing you need to change is to select Do not install the OLAP Starter Kit. After installation, reboot the system. When the system restarts, check that the DB2 service was started as shown in Figure 4-5.
To apply the Fix Pack, do the following:
1. Log on to the system with the userid used for DB2 installation, in our example db2admin.
2. Stop all applications accessing DB2 databases, and stop all DB2 services (including DB2 Warehouse if running).
3. Unzip the fix pack file you downloaded.
4. Run SETUP.EXE. This will install the upgrade over your existing DB2 installation.
5. Reboot the system.
b. Stop DB2 by issuing the command db2stop. If DB2 does not stop with this command, you can use db2stop force.
c. Run the batch file usejdbc2.bat. Make sure that there are no error messages. If there are, correct the errors and try again.
d. Restart DB2 by issuing the command db2start.
Tip: If you have problems running usejdbc2.bat, check whether any Java applications are running. Stop them and run usejdbc2.bat again.
Select Management and Monitoring Tools and click Details (Figure 4-7).
Select Simple Network Management Protocol, and click OK. The installation program will prompt you for the installation CD or location where you have the installation files available. After completing these steps we are ready to install the IBM Tivoli SAN Manager Server code.
Note: The installation process automatically installs the embedded version of IBM WebSphere Application Server - Express; you do not have to install it separately. There are some differences between this embedded version and the full WebSphere Application Server: for example, you will no longer see the WebSphere Administrative Console, it uses less memory, and it is easier to install and maintain.
Note: MQSeries is no longer included with (or used by) Tivoli SAN Manager.
1. Run LAUNCH.EXE from the installation directory. Figure 4-9 shows the startup window.
2. Select Manager and click Next to continue.
3. Select the language (for example, English) and click OK. The Welcome window, shown in Figure 4-10, now displays.
4. Click Next to display the license agreement window. Read and accept the license and click Next to continue. You will be prompted for the directory to install Tivoli SAN Manager, shown in Figure 4-11.
Chapter 4. Installation and setup
5. It is recommended that you accept the default directory. Click Next to continue, and the base port selection window will display, as in Figure 4-12.
6. The installation program requires seven consecutive free ports. You only need to define the starting port. In our example we used the default port 9550. Click Next to continue, and you will see the window shown in Figure 4-13.
On this window you need to specify the DB2 administrative userid and password. In our example we used db2admin. Click Next to continue and the window in Figure 4-14 displays.
Note: The database administration userid must exist before installing the IBM Tivoli SAN Manager Server.
7. Here you specify the name which will be used for the IBM Tivoli SAN Manager Server database, and a userid associated with this database.
Tip: We recommend using a meaningful name for the database as this can simplify other operations related to the database such as administration and backups. We accepted the default name, ITSANMDB.
This database stores the IBM Tivoli SAN Manager Server information which comes from outband and inband Agents. The DB2 administrator userid specified in the previous step will be used to create the userid entered on this window (db2user1 in our case), which will then be used to access the Server database.
Attention: The userid which is specified here must be different from the database administration userid.
After completing the fields, click Next to continue. The window in Figure 4-15 displays.
8. Here you need to specify the userid for WebSphere Administration. This should be an existing system userid. In our example we entered wasadmin. Click Next to continue and you will see a window similar to Figure 4-16.
Tip: The WebSphere userid specified here must already exist on your system. In our sample we defined an ID, WASADMIN. The password used here should never expire on your system.
9. Managed systems (Tivoli SAN Manager Agents) have to authenticate to the Server when they send data to it. For this reason, you need to supply an authentication password during installation. The same password will also be used during installation of Agents (see Step 6 on page 116) and Remote Consoles (Step 7 on page 123). After supplying the password, click Next to continue and you will see a window similar to Figure 4-17.
10.Specify a drive letter for installing IBM Tivoli NetView. Click Next to continue and you will see the window in Figure 4-18.
Note: This panel, and the next, will not display if Tivoli NetView Version 7.1.3 is already installed. This is the only version supported to work with IBM Tivoli SAN Manager Server.
11.Here you specify the userid and password for running the NetView service. The installation program will create this userid if it does not exist. Click Next to continue, and the Tivoli SAN Manager Installation summary window, shown in Figure 4-19, will display.
12. On this window you can see the installation path, which defaults to \tivoli\itsanm\manager, and the size of the installed code. Click Next to continue and the installation will start, as shown in Figure 4-20.
14.Click Next to continue, and you will be prompted to reboot the system (required).
You should also check the HOSTS file which is modified by the Tivoli NetView installation. The entry shown in Example 4-2 was created in our environment.
Example 4-2 Tivoli NetView HOSTS file entry
# # The following entry was created by NetView based on Registry information. # 9.1.38.167 lochness lochness.almaden.ibm.com
Tivoli NetView checks the HOSTS file every time it starts, and if this exact line is missing it will recreate the entry. This entry could be inserted before the entry we made for long host name resolution (shown in Example 4-1 on page 98), meaning it would take precedence. To avoid this, check that the lines shown in Example 4-2 are at the end of the HOSTS file (moving them if necessary), so that it looks similar to Example 4-3.
Example 4-3 Correct HOSTS file order
9.1.38.167 lochness.almaden.ibm.com lochness
9.1.38.166 senegal.almaden.ibm.com senegal
9.1.38.150 bonnie.almaden.ibm.com bonnie
127.0.0.1 localhost
#
# The following entry was created by NetView based on Registry information.
#
9.1.38.167 lochness lochness.almaden.ibm.com
As you can see, the long host name entry precedes the Tivoli NetView entry.
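The ordering rule can also be checked mechanically. The sketch below runs against a sample file modeled on Example 4-3 (the file name HOSTS.sample, the sample entries, and the grep patterns are ours); in practice you would point it at %SystemRoot%\system32\drivers\etc\HOSTS.

```shell
#!/bin/sh
# Sketch: verify that the fully qualified host name entry precedes the
# NetView-generated short-name entry in the HOSTS file, as required above.
cat > HOSTS.sample <<'EOF'
9.1.38.167 lochness.almaden.ibm.com lochness
127.0.0.1 localhost
# The following entry was created by NetView based on Registry information.
9.1.38.167 lochness lochness.almaden.ibm.com
EOF

# line number of the long-name entry (FQDN directly after the address)
fqdn_line=$(grep -n '^9\.1\.38\.167 lochness\.almaden\.ibm\.com' HOSTS.sample | cut -d: -f1)
# line number of the NetView entry (short name directly after the address)
netview_line=$(grep -n '^9\.1\.38\.167 lochness ' HOSTS.sample | cut -d: -f1)

if [ "$fqdn_line" -lt "$netview_line" ]; then
    echo "HOSTS order OK: long name entry comes first"
else
    echo "HOSTS order wrong: move the NetView entry to the end"
fi
```

A check like this is worth repeating after any NetView restart, since NetView may recreate its entry.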
You should also check the log file after installation, which is found in the directory c:\tivoli\itsanm\manager\log\install\*.log. See Chapter 11, Logging and tracing on page 317 for more information on logging.
Installation
- Static IP required; seven contiguous free ports required
- Fully qualified hostname required
- Install DB2 7.2 and FP8
- Upgrade DB2 JDBC drivers to version 2
- Install the SNMP service (if not installed)
- Install the Tivoli SAN Manager Server code:
  - Embedded install of the IBM WebSphere Application Server V5.0
  - Tivoli SAN Manager Server
Figure 4-23 Agent installation
Important: The AIX installation is almost identical to the Windows installation as described in 4.2, IBM Tivoli SAN Manager Windows Server installation on page 96. The major difference is that since Tivoli NetView for AIX is not supported or installed, a separate Windows system with NetView and the remote console installed (as in 4.5, IBM Tivoli SAN Manager Remote Console installation on page 119), is required to view the console. Therefore the NetView screens do not appear in the AIX installation. All other installation steps are exactly the same.
Since the installation uses a GUI, an XWindows server session (either native or emulated) is required.
To start the manager on AIX, run this command (using the default directory):
/tivoli/itsanm/manager/bin/aix/startSANM.sh
To stop the manager on AIX, run this command (using the default directory):
/tivoli/itsanm/manager/bin/aix/stopSANM.sh
Installation
Four contiguous free ports required
Fully qualified hostname required
Install the Agent code
Set up service to start automatically
Figure 4-24 Agent installation
Tips:
You need 150 MB of free temporary disk space for installation.
If the installation fails on a Windows system, restart the system so that the failed partial installation is cleaned up before trying to reinstall the agent. Delete all files below the base installation directory c:\tivoli\itsanm\agent (Windows) or /tivoli/itsanm/agent (UNIX) before reinstalling.
Before installing the agent on Linux, check the /etc/hosts file and enter the correct IP address in front of the hostname. Linux often automatically creates an entry with the loopback (127.0.0.1) address, which causes the agent to register itself at the IBM Tivoli SAN Manager server under this address, so it cannot be contacted.
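The Linux loopback problem in the tip above can be detected with a few lines of sh. The helper and the host name myhost are illustrative, not product code.

```shell
#!/bin/sh
# Sketch (illustrative, not product code): detect the host name sitting on
# the 127.0.0.1 line of a hosts file, which would make the agent register
# with the manager under the loopback address.
loopback_bound() {
    # $1 = hosts file, $2 = host name; exit 0 if the name is on 127.0.0.1.
    awk -v name="$2" '
        $1 == "127.0.0.1" { for (i = 2; i <= NF; i++) if ($i == name) found = 1 }
        END { exit !found }' "$1"
}

# Demo with a scratch file showing the problematic layout.
tmp=$(mktemp)
printf '127.0.0.1 localhost myhost\n' > "$tmp"
if loopback_bound "$tmp" myhost; then
    echo "myhost resolves to 127.0.0.1 - give it its real IP address in /etc/hosts"
fi
rm -f "$tmp"
```

Running the same check against /etc/hosts with the machine's real host name tells you whether the file needs fixing before the agent install.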
1. Run the appropriate file from the agent subdirectory on the CD:
AIX: ./setup.aix
Solaris: ./setup.sol
Linux: ./setup.lin
Windows: SETUP.EXE
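The per-platform installer names above can be wrapped in a small dispatch function. This sketch is our own illustration, mapping the platform name (as reported by uname -s on UNIX) to the installer file.

```shell
#!/bin/sh
# Sketch (illustrative only): map a platform name to the agent installer
# listed in step 1 above.
installer_for() {
    case "$1" in
        AIX)   echo "./setup.aix" ;;
        SunOS) echo "./setup.sol" ;;
        Linux) echo "./setup.lin" ;;
        *)     echo "SETUP.EXE" ;;   # Windows has no uname; run from Explorer
    esac
}

echo "AIX runs:   $(installer_for AIX)"
echo "Linux runs: $(installer_for Linux)"
```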
As the installation program is Java based, it will look the same on all platforms. Note that you need an XWindows session on all UNIX platforms to perform the installation. You will first be prompted to select the language for installation. We chose English. Click Next to continue. You will then see the Welcome window shown in Figure 4-25.
2. Click Next to display the license agreement. Read and accept the agreement, click Next and you will see a window similar to Figure 4-26.
3. Here you can specify the installation directory or just accept the suggested one. Click Next to continue, you will see a window similar to Figure 4-27.
4. Enter the IBM Tivoli SAN Manager Server fully qualified host name and the first port number you defined during Server installation, (Step 5 on page 104). We specified 9550.
Important: The port number specified here must match the port number specified during Server install.
Click Next to continue, and you will see the window shown in Figure 4-28.
5. Here you need to specify the starting port for four consecutive ports to be used by the Agent. These ports should not be used by any other application. Click Next to continue, and you will see the window shown in Figure 4-29.
6. On this window you define the Agent access password, which has to be the same as the one you defined during Server installation (Step 8 on page 106). Click Next and you will see the installation check window, as in Figure 4-30.
7. This shows the installation directory and size. Click Next to start the installation. When complete, you will see the window in Figure 4-31. Click Finish to complete the installation.
8. Check the log file c:\tivoli\itsanm\agent\log.txt (Windows) or /tivoli/itsanm/agent/log.txt (UNIX) for any errors.
AIX
The Agent service is started by running the command tcstart.sh from the directory <install>/bin/aix. Stop the Agent service with tcstop.sh. To start the service automatically, IBM Tivoli SAN Manager uses the BSD style rc.d directories on AIX. Since the default run-level is 2, it creates the needed start/stop scripts in /etc/rc/rc2.d. There are two scripts:
S90itsrm_agent - starts the agent when the run-level is entered (Example 4-4).
K90itsrm_agent - stops the agent when the run-level is left.
Example 4-4 rc2.d start script: S90itsrm_agent used on AIX
#!/bin/sh
TSNM_DIR=/opt/tivoli/itsanm/agent/bin/aix
if [ -f "$TSNM_DIR/tcstart.sh" ] && [ -r "$TSNM_DIR/tcstart.sh" ] && [ -x "$TSNM_DIR/tcstart.sh" ]
then
  $TSNM_DIR/tcstart.sh > $TSNM_DIR/../../log/S90_tcstart.log 2>&1 &
# $TSNM_DIR/tcstart.sh > /dev/null &
fi
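The book lists K90itsrm_agent but shows only the S90 start script. The following is a hypothetical sketch of what the matching stop script could look like, mirroring the S90 structure; the log file name and the fallback message are our invention, not the shipped file.

```shell
#!/bin/sh
# Hypothetical sketch of the K90itsrm_agent stop script, mirroring the
# S90 start script above but calling tcstop.sh. Illustrative only.
TSNM_DIR=/opt/tivoli/itsanm/agent/bin/aix
if [ -x "$TSNM_DIR/tcstop.sh" ]
then
    "$TSNM_DIR/tcstop.sh" > "$TSNM_DIR/../../log/K90_tcstop.log" 2>&1 &
else
    echo "tcstop.sh not found under $TSNM_DIR - nothing to stop"
fi
```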
Solaris
The Agent service is started by running the command tcstart.sh from the directory <install>/bin/solaris2. Stop the Agent service with tcstop.sh. The installation program will create a startup script S90itsrm_agent in the directory /etc/rc/rc2.d. This will cause the Agent to start at boot time.
Linux
The Agent service is started by running the command tcstart from the directory /tivoli/itsanm/agent/bin/linux. Stop the Agent service with tcstop. The installation program will create a startup script S90itsrm_agent in the directories /etc/rc/rc2.d and /etc/rc/rc3.d. This will cause the Agent to start at boot time.
Windows
The service can be started or stopped with the Service applet in Administrative Tools. When you open the applet you will see the window in Figure 4-32.
The startup type should be set to Automatic for the service to start automatically. You can also use command line commands:
To start: net start ITSANM-Agent
To stop: net stop ITSANM-Agent
Installation
Six contiguous free ports required
Fully qualified hostname required
Install the SNMP service (if not installed)
Install the Console code
Check if service started automatically
Correct the HOSTS file
Figure 4-33 Console installation
Tips:
You need 150 MB of free temporary disk space.
If the installation fails, restart the system so that the failed partial installation is cleaned up before trying to reinstall. Delete all files below the base installation directory c:\tivoli\itsanm\console before reinstalling.
If Tivoli NetView Version 7.1.3 is already installed, ensure these applications are stopped: Web Console, Web Console Security, MIB Loader, MIB Browser, Netmon Seed Editor, Tivoli Event Console Adaptor Configurator.
2. Select Remote Console and click Next. The following window prompts you to select the language. We selected English. The Welcome window will display, shown in Figure 4-35.
3. Click Next to continue, and the License window displays. Read and accept the license. Click Next to continue, and you will see a window similar to Figure 4-36.
4. Specify the installation directory, click Next, and the window shown in Figure 4-37 displays.
5. Specify the fully qualified host name of the Server and the Server port which you defined during Server installation (Step 5 on page 104). Click Next to continue with the installation, you will see a window similar to Figure 4-38.
6. Specify the starting port of a six-port range. These ports should not be in use by any other application. Click Next to continue, and you will see a window similar to Figure 4-29.
7. On this window you define the Console access password, which has to be the same as the one you defined during Server installation (Step 9 on page 107). Click Next to continue, and you will see a window similar to Figure 4-40.
8. As Tivoli NetView is part of the IBM Tivoli SAN Manager Console install, you need to specify the drive letter where it will be installed. Click Next and you will see a window like Figure 4-41.
Note: This panel and the next will not display if NetView Version 7.1.3 is already installed.
9. Specify the userid and password to be used for the NetView service. The installation program will create this userid if it does not exist. Click Next to display the summary window (Figure 4-42)
10. The summary window shows the selected directory and the size of the installation. Click Next to continue, and the installation will proceed. When it is complete, the window shown in Figure 4-42 displays.
11. Click Next to continue and Finish to complete the installation. You need to restart the system after installation. Check the log files c:\tivoli\itsanm\console\log.txt and c:\tivoli\itsanm\console\nv\log\* for any errors.
If the service was started successfully the status should be Started. You also need to check the HOSTS file as the Tivoli NetView installation inserts lines similar to Example 4-5.
Chapter 4. Installation and setup
The long name must be resolved before the short name; therefore, check that there is a suitable long name entry before the lines made by Tivoli NetView, as shown in Example 4-6. Add or edit the line if necessary.
Tip: Do not delete the Tivoli NetView entry, as it will be added every time you start IBM Tivoli SAN Manager Console.
Example 4-6 Corrected HOSTS file entry
9.1.38.169   wisla.almaden.ibm.com   wisla
#
# The following entry was created by NetView based on Registry information.
#
9.1.38.169   wisla.almaden.ibm.com   wisla
After installing the Server, Agent and the Console you need to set up the environment.
(Figure: SAN environment with disk arrays and a switch sending SNMP traps to the system running IBM Tivoli Storage Area Network Manager)
NetView listens for SNMP traps on port 162 and the default community is public. When the trap arrives to the Tivoli NetView console it will be logged in the NetView Event browser and then forwarded to Tivoli SAN Manager as shown in Figure 4-47. Tivoli NetView is configured during installation of the Tivoli SAN Manager Server for trap forwarding to the IBM Tivoli SAN Manager Server.
NetView forwards SNMP traps to the defined TCP/IP port, which is the sixth port derived from the base port defined during installation, shown in 4.2.7, IBM Tivoli SAN Manager Server install on page 102. We used the base port 9550, so the trap forwarding port is 9556. With this setup, the SNMP trap information will appear in the NetView Event browser and SAN Manager will use it for changing the topology map.
Note: If the traps are not forwarded to SAN Manager, the topology map will be updated based on the information coming from Agents at regular polling intervals. The default IBM Tivoli SAN Manager Server installation (including NetView install) will set up the trap forwarding correctly.
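The port arithmetic above is simple but easy to get wrong when the base port changes. This sh sketch just restates it; the trap_port helper is our own illustration, not product code.

```shell
#!/bin/sh
# Sketch: the NetView trap-forwarding port is the base port chosen at
# Server install time plus six, per the text above. Illustrative helper.
trap_port() {
    echo $(( $1 + 6 ))
}

echo "Base port 9550 -> trap forwarding port $(trap_port 9550)"
# prints: Base port 9550 -> trap forwarding port 9556
```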
2. The trapfrwd daemon must be running before traps are forwarded. Tivoli NetView does not start this daemon by default. To configure Tivoli NetView to start the trapfrwd daemon, enter these commands at a DOS prompt:
ovaddobj \usr\ov\lrf\trapfrwd.lrf
ovstart trapfrwd
To verify trapfrwd is running, run Server Setup from the NetView Options menu, (Figure 4-48).
After trap forwarding is enabled, configure the SAN components such as switches to send their SNMP traps to the NetView console.
Note: This type of setup will give you the best results, especially for devices where you cannot change the number of SNMP recipients and the destination ports.
Note: Some devices do not allow changing the SNMP port; they will only send traps to port 162. In such cases this scenario is not useful.
(Figure: SAN environment where the switch sends SNMP traps both to the system running IBM Tivoli Storage Area Network Manager and to an SNMP console on port 162)
The receiving port number for the Tivoli SAN Manager Server is the primary port number plus six, as described in Method 1: Forward traps to local Tivoli NetView console on page 126. The receiving port number for the SNMP console is 162. In this case traps are used to reflect the topology changes and they will also show in the SNMP console events. The SNMP console in this case could be another Tivoli NetView installation or any other SNMP management application. For such a setup, the devices have to support setting multiple trap receivers and also changing the trap destination port. As this functionality is not supported in all devices, this scenario is not recommended.
The configuration panel has two parts for inband and outband agents. The outband Agents are defined in the bottom half of the panel. Here, you define all switches in the SAN you want to monitor. To define such an Agent, click Add and you will see a window as in Figure 4-51.
Enter the host name or IP address of the switch and click OK to continue. The Agent will appear in the agent list as shown in Figure 4-50. The state of the Agent must be Contacted if you want IBM Tivoli SAN Manager to get data from it. To remove an already defined Agent, select it and click Remove.
Select the defined Agent and click Advanced (from the Configure Agents window shown in Figure 4-50). You will see a window like Figure 4-52.
Enter the user name and password for the switch login and click OK to save. You will then be able to see zone information for your switches as described in Zone view on page 165.
Tip: It is only necessary to enter ID and password information for one switch in each SAN to retrieve the zoning information. We recommend entering this information for at least two switches, however, for redundancy. Enabling more switches than necessary for API zone discovery may slow performance.
Note: Polling takes time and is dependent on the size of the SAN.
If you did not configure trap forwarding for the SAN devices (as described in 4.6.1, Configuring SNMP trap forwarding on devices on page 126), you will need to define the polling interval. In this case, the topology change will not be event driven from the devices, but will be updated regularly at the polling interval. You can set up the poll interval in the SAN Configuration (Figure 4-54). After specifying the poll interval, click OK to save the changes. The polling interval can be specified in:
Minutes
Hours
Days (you can specify the time of the day for polling)
Weeks (you can specify the day of the week and time of the day for polling)
Tip: You do not need to configure the polling interval if all your devices are set to send SNMP traps to either the local NetView console or the Tivoli SAN Manager Server.
4. Ensure Windows 2000 Terminal Services are not running.
5. Insert the Tivoli SAN Manager (Manager and Remote Console) CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, double-click launch.exe from the CD drive in Windows Explorer. The Launch panel will be displayed.
6. The installation process is the same as described in 4.2.7, IBM Tivoli SAN Manager Server install on page 102. Follow the steps in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697.
Note: The DB2 default database name for Tivoli SAN Manager Version 1.1 was TIVOLSAN. The new name in Version 1.2 is ITSANMDB. If the database name, user ID, and password are not the same as in the previous installation, data will not be migrated. Therefore, to retain your data, override the default name with the previous database name (for example, TIVOLSAN).
When the installation has completed, the Successfully Installed panel is displayed. If the correct version of Tivoli NetView was installed before you installed the manager, you will see the Finish button. (Tivoli NetView will then not be installed with the manager.) If Tivoli NetView was not previously installed and is therefore installed with this installation of the manager, you will see a prompt to restart the system. After rebooting, check the Tivoli SAN Manager service was started (Figure 4-22 on page 110).
2. Insert the Tivoli Storage Area Network Manager and Remote Console CD into the CD-ROM drive and double-click launch.exe.
3. Follow the steps in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697. The installation process will automatically update your NetView Version 7.1.1 to 7.1.3. After rebooting, check to see if the Tivoli SAN Manager console service was started (Figure 4-44 on page 125).
Follow the directions on the installation panels as described in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697. The agent service is automatically started after installation.
3. To complete the uninstallation process, follow the instructions on the window. Restart the system after uninstallation completes.
4. Delete the directory c:\tivoli\itsanm.
5. If needed, uninstall DB2.
2. Follow the steps for Windows uninstallation (4.8.1, Tivoli SAN Manager Server Windows uninstall on page 135).
3. A reboot is not required unless you want to reuse the manager ports (9550-9556).
Note: This GUID package is not uninstalled when you uninstall IBM Tivoli Storage Area Network Manager. If you plan to reinstall IBM Tivoli Storage Area Network Manager, do not delete the Tivoli GUID specific files and directories, as deleting them can cause IBM Tivoli Storage Area Network Manager to function improperly.
AIX or Solaris
1. Stop the Agent service with the command:
AIX: /tivoli/itsanm/agent/bin/tcstop.sh
Solaris: /tivoli/itsanm/agent/bin/solaris2/tcstop.sh
If you do not see the entry in Example 4-8, the agent service has stopped:
Example 4-8 Output of ps -aef | grep "java.*tsnm.baseDir"
root 96498 158924 0 Aug 17 pts/3 24:53 /tivoli/itsanm/agent/jre/bin/java
-Dtsnm.baseDir=/tivoli/itsanm/agent -Dtsnm.localPort=9570 -Dtsnm.protocol=http://
-Djlog.noLogCmd=true -classpath /tivoli/itsanm/agent/lib/classes:/tivoli/itsanm
/agent/servlet/common/lib/servlet.jar:/tivoli/itsanm/agent/lib/com.ibm.mq.jar:
/tivoli/itsanm/agent/lib/com.ibm.mqjms.jar:/tivoli/itsanm/agent/lib/jms.jar:
/tivoli/itsanm/agent/lib/ServiceManager.jar::/tivoli/itsanm/agent/servlet/bin/bootstrap.jar
-Djavax.net.ssl.keyStore=/tivoli/itsanm/agent/conf/server.keystore
-Djavax.net.ssl.keyStorePassword=YourServerKeystorePassword
-Dcatalina.base=/tivoli/itsanm/agent/servlet -Dcatalina.home=/tivoli/itsanm/agent/servlet
org.apache.catalina.startup.Bootstrap start
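The check behind Example 4-8 can be factored into a small helper, so the same filter works against live ps output or captured text. The helper name is ours, not product code.

```shell
#!/bin/sh
# Sketch (illustrative): the filter behind "ps -aef | grep java.*tsnm.baseDir",
# wrapped so it can be applied to live ps output or to captured text.
has_agent_process() {
    grep "java.*tsnm\.baseDir" > /dev/null
}

# A live check would be:  ps -aef | has_agent_process
# Here we apply it to a captured line like the one in Example 4-8:
sample='/tivoli/itsanm/agent/jre/bin/java -Dtsnm.baseDir=/tivoli/itsanm/agent'
if printf '%s\n' "$sample" | has_agent_process; then
    echo "agent still running - stop it before uninstalling"
fi
```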
Linux
1. Stop the Agent service.
2. Start the uninstallation with the command:
/tivoli/itsanm/agent/_uninst/uninstall
Windows
To uninstall the Windows Agent, select Control Panel -> Add/Remove Programs, select IBM Tivoli Storage Area Network Manager - Agent, and click Change/Remove (Figure 4-56).
To complete the uninstallation, follow the instructions on the window, and restart the system after uninstallation completes.
To complete the uninstallation process, follow the instructions on the window. Restart the system after uninstallation completes.
Tip: Do not uninstall the Tivoli GUID package if you are running other Tivoli applications on the system.
You should only uninstall the Tivoli GUID if this is the last Tivoli application using it and you want a clean computer. To uninstall the Tivoli GUID on the various platforms, follow these steps.
AIX
Uninstall Tivoli GUID using SMIT or with the command:
installp -u tivoli.guid
Solaris
Uninstall Tivoli GUID with the command:
pkgrm TIVguid
Windows
Choose Control Panel -> Add/Remove Programs, select TivGuid, and click Change/Remove as shown in Figure 4-58.
To complete the uninstallation process, follow the instructions on the window. Restart the system after uninstallation completes.
Where <option file> is manager.opt for the manager, agent.opt for the agent, and console.opt for the remote console.
#########################################################################
# InstallShield Options File Template for Manager silent install
#
# This file can be used to create an options file (i.e., response file) for the
# wizard "Setup". Options files are used with "-options" on the command line to
# modify wizard settings.
#
# The settings that can be specified for the wizard are listed below. To use
# this template, follow these steps:
#
# 1. Specify a value for a setting by replacing the characters <value>.
#    Read each setting's documentation for information on how to specify its
#    value.
#
# 2. Save the changes to the file.
#
# 3. To use the options file with the wizard, specify -options <filename>
#    as a command line argument to the wizard, where <filename> is the name
#    of this options file.
#    example: setup.exe -silent -options manager.opt
###############################################################################
#------------------------------------------------------------------------------
# Select default language
# Example:
#   -P defaultLocale="English"
#------------------------------------------------------------------------------
#-P defaultLocale="English"
#------------------------------------------------------------------------------
# Installation destination directory. Specify a valid directory into which the
# product should be installed. If the directory contains spaces, enclose it in
# double-quotes. For example, to install the product to C:\Program Files\My
# Product in Windows, use
#   -P installLocation="C:\Program Files\My Product"
#   -P installLocation="C:/tivoli/itsanm/manager"
# For Unix
#   -P installLocation="/tivoli/itsanm/manager"
#------------------------------------------------------------------------------
-P installLocation="C:/tivoli/itsanm/manager"
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
#   -W portNoBean.portNumber=9550
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9550
#------------------------------------------------------------------------------
# DB2 administrator user ID
# Example:
#   -W DBPassword.userID="db2admin"
#------------------------------------------------------------------------------
-W DBPassword.userID="db2admin"
#------------------------------------------------------------------------------
# DB2 administrator password
#
# Example:
#   -W DBPassword.password="password"
#------------------------------------------------------------------------------
-W DBPassword.password="password"
#------------------------------------------------------------------------------
# Name of database to be created and used by SANM (SANM database)
#
# Example:
#   -W SANPassword1.dbName="itsanmdb"
#------------------------------------------------------------------------------
-W SANPassword1.dbName="itsanmdb"
#------------------------------------------------------------------------------
# SANM database user ID, must be different than DB2 administrator user ID
#
# Example:
#   -W SANPassword1.userID="db2user1"
#------------------------------------------------------------------------------
-W SANPassword1.userID="db2user1"
#------------------------------------------------------------------------------
# SANM database password
# Example:
#   -W SANPassword1.password="password"
#------------------------------------------------------------------------------
-W SANPassword1.password="db2user1"
#------------------------------------------------------------------------------
# Websphere user ID
# Example:
#   -W WASPassword.userID="wasuser1"
#------------------------------------------------------------------------------
-W WASPassword.userID="wasadmin"
#------------------------------------------------------------------------------
# Websphere password for the user above
# Example:
#   -W WASPassword.password="password"
#------------------------------------------------------------------------------
-W WASPassword.password="wasadmin"
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
#   -W comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Drive Letter where Netview to be installed.
# Example:
#   -W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
-W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
# Netview password.
# Example:
#   -W beanNetViewPasswordPanel.password="password"
#------------------------------------------------------------------------------
-W beanNetViewPasswordPanel.password="netview"
#------------------------------------------------------------------------------
# Property use by installation program. Do not remove or modify.
#------------------------------------------------------------------------------
-W setWinDestinationBean.value="$P(installLocation)"
UNIX:
-P installLocation="/opt/tivoli/itsanm/manager"
Note: This procedure accepts forward or backward slashes for directory paths on a Windows platform.
Specify the DB2 administrator user ID
-W DBPassword.userID="db2admin"
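Before running setup -silent -options manager.opt, it can be useful to confirm what a response file will feed the wizard. This sh sketch is our own helper, not part of the product: it extracts a quoted -W setting from an options file.

```shell
#!/bin/sh
# Sketch (illustrative helper, not part of the product): read one quoted
# "-W name=value" setting out of a silent-install options file.
opt_value() {
    # $1 = options file, $2 = setting name (e.g. DBPassword.userID)
    sed -n "s/^-W $2=\"\(.*\)\"\$/\1/p" "$1" | tail -n 1
}

# Demo against a scratch file with two settings from the template above.
tmp=$(mktemp)
printf '%s\n' '-W DBPassword.userID="db2admin"' \
              '-W DBPassword.password="password"' > "$tmp"
echo "DB2 administrator user ID: $(opt_value "$tmp" DBPassword.userID)"
rm -f "$tmp"
```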
#    value.
#
# 2. Save the changes to the file.
#
# 3. To use the options file with the wizard, specify -options <filename>
#    as a command line argument to the wizard, where <filename> is the name
#    of this options file.
#    example: setup.exe -silent -options agent.opt
#
###############################################################################
#------------------------------------------------------------------------------
# Select default language
# Example:
#   -P defaultLocale="English"
#------------------------------------------------------------------------------
#-P defaultLocale="English"
#------------------------------------------------------------------------------
# Installation destination directory:
#
# The install location of the product. Specify a valid directory into which the
# product should be installed. If the directory contains spaces, enclose it in
# double-quotes. For example, to install the product to C:\Program Files\My
# Product in Windows, use
#   -P installLocation="C:\Program Files\My Product"
#   -P installLocation="C:/tivoli/itsanm/agent"
# For Unix
#   -P installLocation="/tivoli/itsanm/agent"
#------------------------------------------------------------------------------
-P installLocation="c:/tivoli/itsanm/agent"
#------------------------------------------------------------------------------
# Specify full qualified name of remote manager machine:
# Example:
#   -W managerNamePort.managerName="manager.sanjose.ibm.com"
#------------------------------------------------------------------------------
-W managerNamePort.managerName="manager.sanjose.ibm.com"
#------------------------------------------------------------------------------
# Specify base port number of remote manager:
# Example:
#   -W managerNamePort.managerPort=9550
#------------------------------------------------------------------------------
-W managerNamePort.managerPort=9550
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
#   -W portNoBean.portNumber=9570
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9570
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
#   -W comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Property use by installation program. Do not remove or modify.
#------------------------------------------------------------------------------
-W setWinDestinationBean.value="$P(installLocation)"
#   -W beanManagerLocation.PortNo=9550
#------------------------------------------------------------------------------
-W beanManagerLocation.PortNo=9550
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
#   -W portNoBean.portNumber=9560
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9560
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
#   -W comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Drive Letter where Netview to be installed.
# Example:
#   -W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
-W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
# Netview password.
# Example:
#   -W beanNetViewPasswordPanel.password="password"
#------------------------------------------------------------------------------
-W beanNetViewPasswordPanel.password="netview"
To uninstall the UNIX agent, from the installation directory, run this command:
/tivoli/itsanm/agent/_uninst/uninstall -silent
User ID/Password: db2admin
Change of ID allowed? N/A
Change of password allowed? Yes
How to change the password: Change the password from the Computer Management Administrative tool.

User ID/Password: db2user
Change of ID allowed? Yes
Change of password allowed? No
How to change the password: 1. Change the password from the Computer Management Administrative tool. 2. Use the following procedure to change the password stored inside the ITSANM properties file: srmcp ConfigService setPW

User ID/Password: Was Admin
Change of ID allowed? Yes
Change of password allowed? Yes
How to change the password: 1. Change the User ID/Password in the following file: <Install_Location>/apps/was/properties/soap.client.props. Modify the entries com.ibm.SOAP.loginUserid=<User_ID> and com.ibm.SOAP.loginPassword=<PASSWORD>, replacing <User_ID> and <PASSWORD>. 2. Scripts are available from IBM Support for AIX and Windows; contact your local support structure to get them.

User ID/Password: NetView password
Change of ID allowed? Yes
Change of password allowed? N/A
How to change the password: 1. Change the password from the Computer Management Administrative tool. 2. Change the Logon Password for the Tivoli NetView Service from Control Panel/Services.
User ID/Password: N/A
How to change the password: 1. Change the password from the Computer Management Administrative tool. 2. Use the following procedure to change the password stored inside the ITSANM properties file: srmcp ConfigService setAuthenticationPw
Chapter 5.
Topology management
In this chapter we provide an introduction to the features of IBM Tivoli SAN Manager. We discuss the following topics:
IBM Tivoli NetView navigation overview
Lab environment description
Physical and logical topology views:
SAN view
Host centric view
Device centric view
iSCSI view
MDS 9000
The NetView window is divided into three parts:
The submap window displays the elements included in the current view. Each element can be another submap or a device.
The submap stack is located on the left side of the submap window. This area displays a stack of icons representing the parent submaps that you have already displayed. It shows the hierarchy of submaps you have opened for a particular map. This navigation bar can be used to go back to a higher level with one click.
The child submap area is located at the bottom of the submap window. This submap area shows the submaps that you have previously opened from the current submap. You can open a submap from this area, or bring it into view if it is already opened in another window.
Figure 5-3 shows the new display using the NetView Explorer.
From here, you can change the information displayed in the right pane by changing the top pull-down field to the Tivoli Storage Area Network Manager view. The previously displayed view was the System Configuration view. The new display is shown in Figure 5-4.
Figure 5-4 NetView explorer window with Tivoli Storage Area Network Manager view
Now, the right pane shows Label, Name, Type and Status for the device. You may scroll right to see additional fields.
NetView will display, in a tree format, all the objects contained in the maps you have already explored. Figure 5-6 shows the tree view.
You can see that our SAN, circled in red, does not show its dependent objects, since we have not yet opened this map through the standard NetView navigation window. You can click any object and it will open its submap in the standard NetView view.
The Object Properties for that device will display (Figure 5-8). This will allow you to change NetView properties such as the label and icon type of the selected object.
Important: As IBM Tivoli SAN Manager runs its own polling and discovery processes and only uses NetView to display the discovered objects, each change to the NetView object properties will be lost as soon as IBM Tivoli SAN Manager regenerates a new map.
Connection color Black, status Normal: The device was detected in at least one of the scans.
Connection color Black, status New: The device was detected in at least one of the scans and a new discovery has not yet been performed since the device was detected.
Symbol and connection color Yellow, status Suspect: Device detected; the status is impaired but still functional.
Symbol and connection color Red, status Missing: None of the scans that previously detected the device are now reporting it.
IBM Tivoli NetView uses additional colors to show the specific status of the devices, however these are not used in the same way by IBM Tivoli SAN Manager.
Table 5-2 IBM Tivoli NetView additional colors

Symbol color   Status         Status meaning
Blue           Unknown        Status not determined
Wheat (tan)    Unmanaged      The device is no longer monitored for topology and status changes
Dark green     Acknowledged   The device was Missing, Suspect or Unknown. The problem has been recognized and is being resolved
If you suspect problems in your SAN, look in the topology displays for icons indicating a status of other than normal/green. To assist in problem determination, Table 5-3 provides an overview of symbol status with possible explanations of the problem.
Chapter 5. Topology management
Device Normal (green), link Marginal (yellow):
- Non-ISL explanation: One or more, but not all, links to the device in this topology are missing.
- ISL explanation: One or more, but not all, links between the two switches are missing.

Device Normal (green), link Critical (red):
- Non-ISL explanation: All links to the device in this topology are missing, while other links to this device in other topologies are normal.
- ISL explanation: All links between the two switches are missing, but the out-of-band communication to the switch is normal.

Device Critical (red), link Critical (red):
- Non-ISL explanation: All links to the device in this topology are missing, while all other links to devices in other topologies are missing (if any).
- ISL explanation: All links between the two switches are missing, and the out-of-band communication to the switch is missing or indicates that the switch is in critical condition.

Device Critical (red), link Normal (black):
- Non-ISL explanation: All in-band agents monitoring the device can no longer detect the device. For example, a server reboot, power-off, shutdown of the agent service, Ethernet problems, and so on.
- ISL explanation: This condition should not happen. If you see this on an ISL where switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.

Device Critical (red), link Marginal (yellow):
- Non-ISL explanation: At least one link to the device in this topology is normal and one or more links are missing. In addition, all in-band agents monitoring the device can no longer detect the device.
- ISL explanation: This condition should not happen. If you see this on an ISL where switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.
- SAN Properties: to display and change object properties, such as object label and icon
- Launch Application: to run a management application
- ED/FI Properties: to view ED/FI events
- ED/FI Configuration: to start, stop, and configure ED/FI
- Configure Agents: to add and remove agents
- Configure Manager: to configure the polling and discovery scheduling
- Set Event Destination: to configure SNMP and TEC event recipients
- Storage Resource Manager: to launch IBM Tivoli Storage Resource Manager
- Help
- One IBM xSeries 330 (GALLIUM) with:
  - Two QLogic QLA2300 cards with firmware 8.1.5.12
- One IBM Ultrium Scalable Tape Library (3583)
- One IBM TotalStorage FAStT700 storage server

Figure 5-11 shows the SAN topology of our lab environment.
Figure 5-11 Lab topology, with switches ITSOSW1 to ITSOSW4; hosts LEAD, SOL-E, SICILY, GALLIUM, BRAZIL, TUNGSTEN, CRETE, BONNIE, CLYDE, DIOMEDE, and SENEGAL; the LTO 3583 tape library; the FAStT700 and MSS storage servers; and the SAN Data Gateway (SDG)
We also set up various zones within the switches; Figure 5-12 shows these. Note that this is an initial configuration which changed throughout our various testing scenarios, so examples shown in this book may not represent this exact configuration.
Figure 5-12 Lab zoning. Zones TSM and FAStT span ITSOSW1 and ITSOSW2; zones ITSOSW3ALLPORTS and MSS are defined on ITSOSW3; a FAStT zone is defined on ITSOSW4
The Storage Area Network submap (shown in Figure 5-14) displays an icon for each available topology view. There will be a SAN view icon for each discovered SAN fabric (three in our case), a Device Centric View icon, and a Host Centric View icon.
You can see in this figure that we had three fabrics. They are named Fabric1, Fabric3, and Fabric4, since we have changed their label using SAN -> SAN Properties as explained in Properties on page 171. Figure 5-15 shows the complete list of views available. In the following sections we will describe the content of each view.
Figure 5-15 Topology views. From the Tivoli NetView root map, the Storage Area Network submap leads to a SAN view per fabric (each containing a Topology view with Hosts, Switches, and Interconnect elements submaps, and a Zone view with the Zones and their elements), plus the Device Centric View (platform and its elements) and the Host Centric View (host, platform, filesystems, and volumes)
Topology view
The topology view is used to display all elements of the fabric, including switches, hosts, devices, and interconnects. As shown in Figure 5-17, this particular fabric has two switches.
Now, you can click a switch icon to display all the hosts and devices connected to the selected switch.
On the Topology View (shown in Figure 5-17) you can also click Interconnect Elements to display information about all the switches in that SAN.
The switch submap, (Figure 5-18), shows that six devices are connected to switch ITSOSW1. Each connection line represents a logical connection. Click a connection bar twice to display the exact number of physical connections (Figure 5-20). We now see that, for this example, SOL-E is connected to two ports on the switch ITSOSW1.
When the connection represents only one physical connection (or, if we click one of the two connections shown in Figure 5-20), NetView displays its properties panel (Figure 5-21).
Zone view
The Zone view submap displays all zones defined in the SAN fabric. Our configuration contains two zones called FASTT and TSM.
Click twice on the FASTT icon to see all the elements included in the FASTT zone.
In lab1, the FASTT zone contains five hosts and one storage server. We have installed Tivoli SAN Manager Agents on the four hosts that are labelled with their correct hostname (BRAZIL, GALLIUM, SICILY and SOL-E). For the fifth host, LEAD, we have not installed the agent. However, it is discovered since it is connected to the switch. IBM Tivoli SAN Manager displays it as a host device, and not as an unknown device, because the QLogic HBA drivers installed on LEAD support RNID. This RNID support gives the ability for the switch to get additional information, including the device type (shown by the icon displayed), and the WWN. The disk subsystem is shown with a question mark because the FAStT700 was not yet fully supported (with the level of code available at the time of writing) and IBM Tivoli SAN Manager was not able to determine all the properties from the information returned by the inband and outband agents.
In the preceding figure, we can see the twelve defined LUNs and the host to which they have been allocated. The dependency tree is not retrieved from the FAStT server but is consolidated from the information retrieved from the managed hosts. Therefore, the filesystems are not displayed as they can be spread on several LUNs and this information is transparent to the host. Note that the information is also available for the MSS storage server, the other disk storage device in our SAN.
We see our four hosts and all their local filesystems whether they are locally or SAN-attached. NFS-mounted filesystems and shared directories are not displayed. Since no agent is running on LEAD, it is not shown in this view.
Starting discovery
You can discover and manage devices that use the iSCSI storage networking protocol through IBM Tivoli SAN Manager using IBM Tivoli NetView. Before discovery, SNMP and the iSCSI MIBs must be enabled on the iSCSI device, and Tivoli NetView IP discovery must be enabled. See 6.4, Real-time reporting on page 227 for enabling IP discovery. The IBM Tivoli NetView nvsniffer daemon will discover the iSCSI devices. Depending on the iSCSI operation chosen, a corresponding iSCSI SmartSet will be created under the IBM Tivoli NetView SmartSets icon. By default, the nvsniffer utility runs every 60 minutes. Once nvsniffer discovers an iSCSI device, it creates an iSCSI SmartSet located on the NetView topology map at the root level. The user can select which type of iSCSI device is discovered. From the menu bar, click Tools -> iSCSI Operations and select Discover All iSCSI Devices, Discover All iSCSI Initiators, or Discover All iSCSI Targets, as shown in Figure 5-26. For more details about iSCSI, refer to Chapter 7, Tivoli SAN Manager and iSCSI on page 253.
Double-click the iSCSI SmartSet icon to display all iSCSI devices. Once all iSCSI devices are discovered by NetView, the iSCSI SmartSet can be managed from a high level. Status for iSCSI devices is propagated to the higher level, as described in 5.1.9, Status propagation on page 157. If you detect a problem, drill to the SmartSet icon and continue drilling through the iSCSI icon to determine what iSCSI device is having the problem. Figure 5-27 shows an iSCSI SmartSet.
This will display a SAN Properties window that is divided into two panes. The left pane always contains Properties, and may also contain Connection and Sensors/Events, depending on the type of object being displayed. The right pane contains the details of the object. These are some of the device types that give information in the SAN Properties menu:
- Disk drive
- Hdisk
- Host file system
- LUN
- Log volume
- OS
- Physical volume
- Port
- SAN
Properties
The first grouping item is named Properties and contains generic information about the selected device. The information that is displayed depends on the object type. This section shows at least the following information:
- Label: The label of the object as it is displayed by IBM Tivoli SAN Manager. If you update this field, the change will be kept over all discoveries.
- Icon: The symbol representing the device type. If the object is of an unknown type, this field will be in read-write mode and you will be able to select the correct symbol.
Figure 5-30 shows the Properties section for a host. You can see that it displays the hostname, the IP address, the hardware type, and information about the HBA. Since the host does not give back sensor related events, only the Properties and Connections sections are available.
Figure 5-31 shows the Properties section for a switch. You can see that it displays fields including the name, the IP address, and the WWN. The switch is a connection device and sends back information about the events and the sensors. Therefore, all three item groups are available (Properties, Connections, and Sensors/Events).
Figure 5-32 shows the properties for an unknown device. Here you can change the icon to a predefined one by using the pull-down field Icon. You can also change the label of a device even if the device is of a known type.
Connection
The second grouping item, Connections, shows all ports in use for the device. This section appears only when it is appropriate to the device displayed (switch or host). In Figure 5-33, we see the Connection tab for one switch where six ports are used. Port 0 is used for the Inter-Switch Link (ISL) to switch ITSOSW2. This is a very useful display, as it shows which device is connected on each switch port.
Sensors/Events
The third grouping item, Sensors/Events, is shown in Figure 5-34. It shows the sensors status and the device events for a switch. It may include information about fans, batteries, power supplies, transmitter, enclosure, board, and others.
After this, you can launch the Web application by right-clicking the object and then selecting Management Page, as shown in Figure 5-37.
Important: This definition will be lost if your device is removed from the SAN and subsequently rediscovered, since it will be a new object for NetView.
2. Stop NetView.
3. To ensure that the application can be launched automatically, update the PATH variable on your server and add the path to the program directory.
My Computer -> Properties, select the Advanced tab -> Environment Variables. Under System Variables, select PATH. Include the full pathname of the application in the PATH variable (Figure 5-38).
4. Re-start NetView. After this, you will be able to launch the SAN Data Gateway application by selecting it from the Tools menu as shown in Figure 5-39.
This will launch the SAN Data Gateway Specialist application (Figure 5-40).
Note: The application must be locally installed on the server where the NetView console runs (either IBM Tivoli SAN Manager Server or Remote Console).
The user properties file contains an SRMURL setting that defaults to the fully qualified host name of Tivoli Storage Area Network Manager. This default assumes that both Tivoli Storage Resource Manager and Tivoli Storage Area Network Manager are installed on the same machine. If IBM Tivoli Storage Resource Manager is installed on a separate machine, you can modify the SRMURL value to specify the host name of the IBM Tivoli Storage Resource Manager machine. For instructions on how to do this, please refer to the manual IBM Tivoli Storage Area Network Manager User's Guide, SC23-4698.

If the following conditions are true, you can start the Tivoli Storage Resource Manager graphical interface from the Tivoli NetView console:
- IBM Tivoli Storage Resource Manager or the Tivoli Storage Resource Manager graphical interface is installed on the same machine as Tivoli Storage Area Network Manager, or the SRMURL value specifies the hostname of IBM Tivoli Storage Resource Manager.
- Tivoli Storage Area Network Manager is currently running.

For more information on Tivoli Storage Resource Manager, please see the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.
Figure 5-42 IBM Tivoli SAN Manager normal status cycle. A newly discovered device (NEW/GREEN) moves to NORMAL/GREEN after a Clear History; a device going down moves to MISSING/RED, and returns to NORMAL/GREEN when it comes back up
If you do not manually use NetView capabilities to change status, the status of a Tivoli SAN Manager object goes from green to red and from red to green. Note that the only difference between an object in the NORMAL/GREEN and NEW/GREEN status is in the Status field under SAN Properties (see Figure 5-30 on page 172 for an example). A new object will have New in the field and a normal object will show Normal. The icon displayed in the topology map will look identical in both cases.

You can encounter situations where your device is down for a known reason, such as an upgrade or hardware replacement, and you don't want it displayed with a missing/red status. You can use the NetView Unmanage function to set its color to tan, to avoid having the yellow or red status reported and propagated in the topology display. See Figure 5-43.
Figure 5-43 Status cycle with the Unmanage function. A NORMAL/GREEN device going down becomes MISSING/RED; Manage/Unmanage toggles between MISSING/RED and MISSING/TAN, and a Clear History removes the missing device from the database
However, when a device is unmanaged and you do a SAN -> Configure Manager -> Clear History to remove historical data, the missing device will be removed from the IBM Tivoli SAN Manager database and will no longer be reported until it comes back up with a new/green status. If you have changed the label of the device, and it is re-discovered after a Clear History, it will reappear with the default generated name, as this information is not saved. See Figure 5-44.
Figure 5-44 Status cycle with the Acknowledge function. Ack/Unack toggles a MISSING/RED device to acknowledged (dark green) and back; when the device comes up, it returns to the normal cycle
You can use the NetView Acknowledge function to specify that you have been notified about the problem and that you are currently searching for more information or for a solution. This will set the device's color to dark green, to avoid having the yellow or red status reported and propagated in the topology display. Subsequently, you can use the Unacknowledge function to return to the normal status and color cycle. When the device becomes available, it will automatically return to the normal reporting cycle.
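The status cycles described above can be summarized as a small state machine. This is our own sketch of the transitions as we read them from the text (the state names and events are ours), not Tivoli SAN Manager code:

```python
# Sketch of the normal, Unmanage, and Acknowledge status cycles. Unlisted
# (status, event) pairs leave the status unchanged.
TRANSITIONS = {
    ("new/green",         "clear_history"): "normal/green",
    ("new/green",         "device_down"):   "missing/red",
    ("normal/green",      "device_down"):   "missing/red",
    ("missing/red",       "device_up"):     "normal/green",
    ("missing/red",       "unmanage"):      "missing/tan",        # Unmanage function
    ("missing/tan",       "manage"):        "missing/red",
    ("missing/red",       "acknowledge"):   "missing/darkgreen",  # Acknowledge function
    ("missing/darkgreen", "unacknowledge"): "missing/red",
    ("missing/darkgreen", "device_up"):     "normal/green",       # back to normal cycle
}

def next_status(status, event):
    """Return the status after an event; unknown events change nothing."""
    return TRANSITIONS.get((status, event), status)
```

For example, next_status("normal/green", "device_down") returns "missing/red". A Clear History on a missing, unmanaged device removes it from the database entirely, which is not modeled here.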
Figure 5-45 Cisco lab environment. The SAN Manager (LOCHNESS) connects over the intranet to the Cisco 9509 environment with hosts SANXC1, SANXC2, and SANAN3
We first deployed an IBM Tivoli SAN Manager Agent to SANAN. Once the agent was installed, it registered with the IBM Tivoli SAN Manager - LOCHNESS and discovered the CISCO1 (MDS 9509). The topology in Figure 5-46 was displayed after deploying the agent.
Note: In order to discover the MDS 9000, at least one IBM Tivoli SAN Manager Agent must be installed on a host attached to the MDS 9000. Outband management is not supported for the MDS 9000.
To display the properties of CISCO1, right-click the CISCO1 icon to select it and select SAN -> SAN Properties. See Figure 5-47.
The Connection option (Figure 5-48) displays information about the slots and ports where the hosts SANXC1, SANXC2 and SANXC3 are connected, as well as the status of each port.
We see that ITSOSW1 sent a trap to signal that FCPortIndex4 (port number 3) has a status of 2 (which means Offline). The correlation between the inband information and the trap received is then made correctly and only the connection is shown as missing. You can see in Figure 5-50 that the connection line has turned red, using the colors referenced in Table 5-1 on page 155.
We then restored the connection, and following the status cycle explained in Figure 5-42 on page 180, the connections returned to normal.
Next, we removed one of the two connections from the host TUNGSTEN to ITSOSW3. One link is lost, so the connection is now shown as suspect (yellow) in Figure 5-52.

NetView follows its status propagation rules in Table 5-4 on page 157. This connection links to a submap with the two physical connections. The bottom physical connection is missing (red), and the other (top) one is normal (black), resulting in a propagated status of marginal (yellow) on the parent map (left-hand side). See Figure 5-53.
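A sketch of this propagation rule as we understand it from the behavior shown here: all children normal propagates normal, all children missing propagates critical, and a mix propagates marginal (yellow). This is illustrative only, not NetView's actual implementation:

```python
# Compound status of a parent symbol from its children's statuses.
# An empty child list is treated as normal (all() is True for it).
def propagate(child_statuses):
    """Compute the compound status shown on the parent map."""
    if all(s == "normal" for s in child_statuses):
        return "normal"
    if all(s == "critical" for s in child_statuses):
        return "critical"
    return "marginal"   # mixed normal and critical children
```

With the two physical connections above, propagate(["critical", "normal"]) yields "marginal", matching the yellow parent connection.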
After removing the link on LEAD, we received a standard Windows missing device popup (Figure 5-55) indicating it could no longer see its FC-attached disk device.

IBM Tivoli SAN Manager shows the device as Missing (the icon changes to red; see the color status listing in Table 5-1 on page 155), as it is no longer able to determine the status of the device. See Figure 5-56.
In Figure 5-57, the host is in Unmanaged (tan) status, since we decided to unmanage it.

Finally, we selected SAN -> Configure Manager -> Clear History. See Figure 5-58.
After the next discovery, as explained in Figure 5-43 on page 181, the host is no longer displayed (Figure 5-59), since it has been removed from the IBM Tivoli SAN Manager database.
Lab 2 environment
For demonstration purposes in the following sections, this lab is referenced as Lab 2. The configuration consists of:
- Two IBM 2109-S08 switches (ITSOSW1 and ITSOSW2) with firmware V2.6.0g
- One IBM 2109-S16 switch (ITSOSW3) with firmware V2.6.0g
- One IBM 2109-F16 switch (ITSOSW4) with firmware V3.0.2
- One IBM 2107-G07 SAN Data Gateway
- Two pSeries 620 (BANDA, KODIAK) running AIX 5.1.1 with:
  - Two IBM 6228 cards
- One IBM pSeries F50 (BRAZIL) running AIX 5.1.1 ML4 with:
  - One IBM 6227 card with firmware 02903291
  - One IBM 6228 card with firmware 02C03891
- One HP server running HP-UX 11.0 with:
  - One FC HBA
- Four Intel servers (TONGA, PALAU, WISLA, LOCHNESS)
- Two Intel servers (DIOMEDE, SENEGAL) with:
  - Two QLogic QLA2200 cards with firmware 8.1.5.12
- One IBM xSeries 5500 (BONNIE) with:
  - Two QLogic QLA2300 cards with firmware 8.1.5.12
- One IBM Ultrium Scalable Tape Library (3583)
- One IBM TotalStorage FAStT700 storage server

Figure 5-11 shows the SAN topology of our lab environment.
We have powered off the switch ITSOSW4, with managed host SENEGAL enabled. The topology map reflects this as shown in Figure 5-61. The switch and all connections change to red.
The agent running on the managed host (SENEGAL) has scanners listening to the HBAs located in the host. Those HBAs detect that the attached device, ITSOSW4, is not active, since there is no signal from ITSOSW4. The information is retrieved by the scanners and reported back to the manager through the standard TCP/IP connection. Since the switch is not active, the hosts can no longer access the storage servers.

The active agent (SENEGAL) sends the information to the manager, which triggers a new discovery. Since the switch no longer responds to outband management, IBM Tivoli SAN Manager will correlate all the information and, as a result, the connections between the managed hosts and the switch, and the switch itself, are shown as red/missing. The storage server is shown as green/normal because of a second Fibre Channel connection to ITSOSW2. ITSOSW2 is also green/normal because of the outband management being performed on this switch. The active agent host is still reported as normal/green, as it sends its information to the Manager through the TCP/IP network. Therefore the Manager can determine that only the agent's switch connections, not the host itself, are down.

Now, we powered the switch on again. At startup, the switch sends a trap to the manager. This trap will cause the manager to ask for a new discovery. The result is shown in Figure 5-62.
Now, following the status propagation detailed in 5.6, Status cycles on page 180, all the devices are green/normal.
You can see under the SAN Properties window, Figure 5-64, that the RNID support only provides the device type (Host) and the WWN. Compare with the SAN Properties window for a managed host, shown in Figure 5-30 on page 172.
To have a more explicit map, we put CLYDE in the Label field (using the method shown in Figure 5-32 on page 173) and the host is now displayed with its new label.
When configuring the agents, we also used the Advanced button to enter the administrator userid and password for the switches. This information is needed by the scanners to obtain administrative information, such as zoning for Brocade switches. IBM Tivoli SAN Manager discovers the topology by scanning the three registered switches. This is shown in Figure 5-67. The information about the attached devices is limited to the WWN of the device, since this information is retrieved from the switch and there is no other inband management. Note the - signs next to the Device Centric and Host Centric Views; this information is retrieved only by the inband agent, so it is not available to us here.
Figure 5-68 shows the information retrieved from the switches (SAN Properties).
about the node and the local filesystems, shown in Figure 5-69. Note the - sign in front of /data01 for host SICILY. The filesystem is defined but not mounted, as the Fibre Channel connections are not active.

We reconnected the Fibre Channel connections from all agents to the switch and forced a new polling. We now see that all agents reported information about their filesystems. Since the agents are connected to a switch, the inband agents will retrieve information from it using inband management. That explains why we see all the devices, including those without agents installed. Figure 5-70 shows that:
- Our four inband agents (BRAZIL, GALLIUM, SICILY, SOL-E) are recognized.
- The two switches ITSOSW1 and ITSOSW2 are found, since agents are connected to them.
- Device 1000006045161FF5 is displayed, since it is connected to the switch ITSOSW1. The device type is Unknown, as there is neither an inband nor an outband agent on this device.
We now have no zoning information available, since this is retrieved from the outband Agent for the 2109 switch. This is indicated by the - sign next to Zone View in Figure 5-70.
Figure 5-72 Discovered SAN with no LUNs defined on the storage server
Figure 5-73 shows that the host CRETE is not included in the MSS zone (we have enabled the outband agent for the switch in order to display zone information). This zone includes TUNGSTEN, which has no LUNs defined on the MSS.
We changed the MSS zone to include the CRETE server. We ran cfgmgr on CRETE so that it scanned its configuration and found the disk located on the MSS, as shown in Example 5-2.
Example 5-2 cfgmgr to discover new disks
# lspv
hdisk0          00030cbf4a3eae8a    rootvg
hdisk1          00030cbf49153cab    None
hdisk2          00030cbf170d8baa    datavg
hdisk3          00030cbf170d9439    datavg
# cfgmgr
# lspv
hdisk0          00030cbf4a3eae8a    rootvg
hdisk1          00030cbf49153cab    None
hdisk2          00030cbf170d8baa    datavg
hdisk3          00030cbf170d9439    datavg
hdisk4          none                None
Now, the agent on CRETE is able to run SCSI commands on the MSS and discovers that it is a storage server. IBM Tivoli SAN Manager maps it correctly in Figure 5-74.
Figure 5-74 MSS zone with CRETE and recognized storage server
The agent will use inband management to:
- Query the directly attached devices.
- Query the name server of the switches to get the list of other attached devices.
- Launch inband management to other devices to get their WWN and device type (for RNID compatible supported drivers).
- Launch SCSI requests to get LUN information from storage servers.

You can see in Figure 5-76 that the agent on GALLIUM has returned information on:
- Directly attached switches (ITSOSW1 and ITSOSW4)
- Devices attached to those switches (if they are in the same zones)
- LUNs defined on the FAStT for this server
- Its own filesystems

Because, of the other hosts, only CLYDE runs with RNID compatible drivers, all other devices excluding switches and the FAStT storage server are displayed with an unknown device icon. However, we have shown how we can get a complete map of our SAN by deploying just one inband agent.
5.8 Summary
This chapter provided an overview of Tivoli NetView navigation. We discussed the physical and logical topologies and practical cases when using IBM Tivoli SAN Manager. Topology views of iSCSI and MDS 9000 devices were also presented.
Part 4
Advanced operations
In Part 4 we present more operational concepts. This includes functions to provide:
- Historical and real-time SAN device reporting
- Error prediction
- Integration of IBM Tivoli SAN Manager with other SNMP management applications
Chapter 6.
6.1 Overview
The NetView MIB Tool Builder enables you to create applications that collect, display, and save real-time MIB data. The MIB Data Collector provides a way to collect and analyze historical MIB data over long periods of time to give you a more complete picture of your network's performance. We will explain the SNMP concepts and standards, demonstrate the creation of Data Collections, and show the use of the MIB Tool Builder as it applies to SAN network management. Figure 6-1 lists the topics we cover in this overview section.
Currently, IBM Tivoli NetView does not support querying SNMP V2 MIBs with the MIB Tool Builder and the Data Collection utilities. In our configuration, the SNMP manager is NetView and the SNMP agents are IBM 2109 Fibre Channel switches.
These objects are arranged in what is known as the Management Information Base (MIB). SNMP allows managers and agents to communicate for the purpose of accessing these objects. Figure 6-2 provides an overview of the SNMP architecture.
A typical SNMP manager performs the following tasks:
- Queries agents
- Gets responses from agents
- Sets variables in agents
- Acknowledges asynchronous events from agents

A typical SNMP agent performs the following tasks:
- Stores and retrieves management data as defined by the MIB
- Signals an event to the manager
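The manager/agent split above can be sketched with plain data structures. This is illustrative only: a real SNMP agent serves its MIB over UDP port 161 and notifies the manager with traps on port 162, whereas here the "protocol" is just method calls:

```python
# Toy model of the SNMP manager/agent roles.
class Agent:
    def __init__(self, mib):
        self.mib = mib          # OID string -> value: the agent's MIB view

    def get(self, oid):         # manager "queries agents"
        return self.mib[oid]

    def set(self, oid, value):  # manager "sets variables in agents"
        self.mib[oid] = value

# sysName.0 (1.3.6.1.2.1.1.5.0) is a standard MIB-II object.
agent = Agent({"1.3.6.1.2.1.1.5.0": "itsosw2"})
agent.set("1.3.6.1.2.1.1.5.0", "itsosw2.almaden.ibm.com")
```

The agent stores and retrieves management data as defined by its MIB; the manager's get and set requests address individual objects by OID.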
Note: We are using the Brocade 2.6 enterprise-specific MIBs for SAN network performance reporting and the IBM TotalStorage IP Storage 200i iSCSI MIB
Figure 6-3 shows the MIB object tree. From the TOP of the tree, the branches are CCITT (0), ISO (1), and JOINT-ISO-CCITT (2). Under ISO (1) are STD (0), REG AUTHORITY (1), MEMBER BODY (2), and ORG (3); ORG contains DOD (6), which contains INTERNET (1). INTERNET branches into DIRECTORY (1), MGMT (2) (which holds the standard MIB (1)), EXPERIMENTAL (3), and PRIVATE (4). PRIVATE contains ENTERPRISE (1), where vendor branches such as RESERVED (0), IBM (2), and bcsi (1588) are registered.
We downloaded the MIBs below and copied them to the \usr\ov\snmp_mibs directory:
- v2_6trp.mib (Enterprise Specific trap)
- v2_6sw.mib (Fibre Channel Switch)
- v2_6fe.mib (Fabric Element)
- v2_6fa.mib (Fibre Alliance)
Note: If you have unloaded all the MIBs in the MIB description file (\usr\ov\snmp_mibs), you must load MIB-I or MIB-II before you can load any enterprise-specific MIBs. These are loaded by default in NetView.
In Example 6-1 we show the \usr\ov\snmp_mibs directory listing with our newly added MIBs.
C:\usr\ov\snmp_mibs>
IBM 2109
The IBM 2109 comes configured to use the MIB-II private MIB (TRP-MIB), FC Switch MIB (SW-MIB), Fibre Alliance MIB (FA-MIB), and Fabric Element MIB (FE-MIB). By default, the MIBs are not enabled. Here is a description of each MIB and its respective groupings.
- swFabric
- swActCfg
- swFCport
- swNs
- swEvent
- swFwSystem
- swEndDevice

To enable the MIBs for the IBM/Brocade switch, log into the switch via a telnet session, using an ID with administrator privilege (for example, the default admin ID). We enabled all four of the above MIBs using the snmpmibcapset command. The command can either disable or enable a specific MIB within the switch. Example 6-2 shows output from the snmpmibcapset command.
Example 6-2 snmpmibcapset command on IBM 2109
itsosw2:admin> snmpmibcapset
The SNMP Mib/Trap Capability has been set to support
FE-MIB SW-MIB FA-MIB SW-TRAP FA-TRAP SW-EXTTRAP
FA-MIB (yes, y, no, n): [yes]
SW-TRAP (yes, y, no, n): [yes]
FA-TRAP (yes, y, no, n): [yes]
SW-EXTTRAP (yes, y, no, n): [yes]
no change
itsosw2:admin>
NetView
The purpose of loading a MIB is to define the MIB objects so that NetView applications can use those MIB definitions. The MIB you are interested in must be loaded on the system where you want to use the MIB Data Collector or MIB Tool Builder. Some vendor-specific MIBs are already loaded into NetView. Since we want to collect performance MIB object types for the Brocade 2109 switch, we will load its MIB. On the NetView interface, select Tools -> MIB -> Loader SNMP V1. This will launch the MIB Loader interface, as shown in Figure 6-5.
Each MIB that you load adds a subtree to the MIB tree structure. You must load MIBs in order of their interdependencies. We loaded the v2_6trp.mib first by clicking Load, then selecting it from the \usr\ov\snmp_mibs directory (see Figure 6-6).

Click Open and the MIB will be loaded into NetView. Figure 6-7 shows the MIB loading indicator.

We then loaded the v2_6sw.mib, v2_6fe.mib, and v2_6fa.mib files in turn, using the same process. You must load the MIBs in order of their interdependencies: a MIB is dependent on another MIB if its highest node is defined in the other MIB. After the MIBs are loaded, we verify that we are able to traverse the MIB tree and select objects from the enterprise-specific MIB. We used the NetView MIB Browser to traverse the branches of the above MIBs. Click Tools -> MIB -> Browser SNMP V1 to launch the MIB browser and use the Down Tree button to navigate down through a MIB (see Figure 6-8).
Figure 6-11 shows the path down the Brocade branch to the transmit-frames counter: bcsi (1588) -> commDev (2) -> fibrechannel (1) -> fcSwitch (1) -> sw (1) -> swFCPort (6) -> swFCPortTable (2) -> swFCPortEntry (1) -> swFCPortTxFrames (13).
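As a quick cross-check of the path in Figure 6-11, the symbolic names can be translated into the numeric OID that an SNMP query actually carries. The name-to-number map below is taken from the figures (with the fibrechannel (1) node from the Brocade registration); this is an illustration, not a MIB parser:

```python
# Per-branch numbers along the path from the tree root to swFCPortTxFrames.
TREE = {
    "iso": 1, "org": 3, "dod": 6, "internet": 1,
    "private": 4, "enterprise": 1, "bcsi": 1588,
    "commDev": 2, "fibrechannel": 1, "fcSwitch": 1, "sw": 1,
    "swFCPort": 6, "swFCPortTable": 2, "swFCPortEntry": 1,
    "swFCPortTxFrames": 13,
}

def to_oid(path):
    """Translate a dotted symbolic path into dotted-decimal OID notation."""
    return ".".join(str(TREE[name]) for name in path.split("."))

print(to_oid("iso.org.dod.internet.private.enterprise.bcsi"))
# -> 1.3.6.1.4.1.1588
```

The prefix 1.3.6.1.4.1 is the standard enterprise registration point, and 1588 is Brocade's enterprise number.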
1. To create the NetView Data Collection, select Tools -> MIB -> Collect Data from the NetView main menu. The MIB Data Collector interface displays (Figure 6-12). Select New to create a collection.
2. If you are creating the first Data Collection, you will also see the pop-up in Figure 6-13 prompting you to start the Data Collection daemon. Click Yes to start the SNMPCollect daemon.
3. The Data Collection Wizard GUI then displays (Figure 6-14). This is the first step in creating a new Data Collection. By default NetView has navigated down to the Internet branch of the tree (.iso.org.dod.internet). See Figure 6-3 on page 210 for the overall tree structure. Highlight private and click Down Tree to navigate to the private MIB.
We have now reached the private branch of the MIB tree (.iso.org.dod.internet.private). See Figure 6-15.
4. Continue to navigate down the enterprise branch of the tree by clicking Down Tree. Figure 6-16 shows the enterprise branch of the tree (.iso.org.dod.internet.private.enterprise).
5. We reach the bcsi branch of the tree by clicking Down Tree. Figure 6-17 shows the bcsi (Brocade) branch of the tree (.iso.org.dod.internet.private.enterprise.bcsi).
6. We continue to navigate down the tree, using the path shown in Figure 6-11 on page 217, and, as shown in Figure 6-18, eventually reach: .iso.org.dod.internet.private.enterprise.bcsi.commDev.fibrechannel.fcSwitch.sw.swFCport.swFCPortTable.swFCPortEntry.swFCPortTxFrames.
7. We selected swFCPortTxFrames and clicked OK. We received the pop-up shown in Figure 6-19 from the collection wizard. This pop-up occurs because this will be the first node added to this collection. NetView then adds the swFCPortTxFrames MIB Data Collection definition as a valid data collector entry.
This launches the Add Nodes to the Collection Dialog, which is the second step in creating a new Data Collection. See Figure 6-20.
8. We proceeded to customize the Collect MIB Data From section, using the following steps: a. We entered the switch node name for which we wanted to collect performance data (in this case, ITSOSW2.ALMADEN.IBM.COM) and clicked Add Node. You can add a node either by selecting it on the topology map or by typing its IP address or hostname in the field. You can also select multiple devices on the topology map and click Add Selected Nodes from Map, which adds all the selected nodes to the Collect MIB Data From field. We added several nodes to the collection by entering one device at a time in the Node field and clicking Add Node. To remove a node, click the node name in the list and click Remove. b. We then customized the Set the Polling Properties for these Nodes section, using the following steps: i. We changed the Poll Nodes Every field to 5 minutes. This specifies the frequency at which the nodes are polled.
Important: Before setting the polling interval, you should have a clear understanding of available and used bandwidth in your network. Shorter polling intervals generate more SNMP data on the network.
ii. We checked Store MIB Data. This stores the collected MIB data in C:\usr\ov\databases. iii. We checked the Check Threshold if box. This defines the arm threshold: we want to collect data and signal an event each time more than 200 frames are sent on a particular port. Because we checked this box, we are required to define the trap value and rearm number fields. iv. We then configured the then send Trap Number option. We used the default setting, which is the MIB-II enterprise-specific trap. v. We then configured the and rearm When field. We specified a rearm value of greater than or equal to 75% of the arm threshold value. This means that a trap will be
generated and sent when the number of TX frames reaches 150. Note that these traps are NetView-specific traps (separate from Tivoli SAN Manager traps) and will therefore be sent to the NetView console. 9. Click OK to create the new Data Collection, shown in Figure 6-21. Select the swFCPortTxFrames Data Collection and click Collect.
Note: It can take up to 2 minutes before NetView begins collecting the newly defined Data Collection. To verify that data is being captured, navigate to c:\usr\ov\databases\snmpcollect. If files are present, the Data Collection is functioning properly.
10. Click Close; the Stop and restart Collection dialog is displayed, as in Figure 6-22. Click Yes to recycle the snmpcollect daemon. At this point the Data Collection status (Figure 6-21 above) should change from Suspended to To be Collected.
We are now collecting swFCPortTxFrames data on ITSOSW2. Depending on the level of granularity required for your reporting needs, you may want to collect data over shorter or longer periods. In our lab we collected every 5 minutes, but you might instead collect once every hour over a week or a month. We will now use the NetView Graph tool to display the collected data, as described in 6.3.4, NetView Graph Utility on page 225.
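The arm/re-arm behavior configured above can be pictured as a small state machine: a trap fires when a polled sample crosses the arm threshold, and no further arm trap fires until the value falls back to the re-arm level. This is a simplified sketch using the chapter's values (arm above 200 frames, re-arm at 75%, that is 150) and a fall-below re-arm convention; it is an illustration of the concept, not NetView's exact semantics.

```python
# Sketch of arm/re-arm threshold behavior for a polled counter.
def threshold_events(samples, arm=200, rearm_pct=0.75):
    rearm = arm * rearm_pct      # 150 when the arm threshold is 200
    armed, events = True, []
    for value in samples:
        if armed and value > arm:            # arm: more than 200 frames
            events.append(("arm", value))
            armed = False                    # suppress repeat arm traps
        elif not armed and value <= rearm:   # re-arm: back at/below 150
            events.append(("rearm", value))
            armed = True
    return events

events = threshold_events([100, 250, 180, 140, 260])
```

The re-arm step is what prevents a counter hovering just above the threshold from flooding the console with one trap per poll.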
Note: We followed the same procedure to add the remaining metrics for Data Collection: swFCRxFrames, swFCTxErrors, and swFCRxErrors. For demonstration purposes we used a value of 50 for the arm threshold and a value of 75% for re-arm. Your arm/re-arm values may differ from those we used.
Important: There are documented procedures for performing important maintenance of Tivoli NetView. Refer to the IBM Redbook Tivoli NetView and Friends, SG24-6019.
If the snmpcollect daemon is not running, you will see a state value of NOT RUNNING from the ovstatus snmpcollect command as shown in Example 6-4.
Example 6-4 snmpcollect daemon stopped
C:\>ovstatus snmpcollect
 object manager name: snmpcollect
 behavior:            OVs_WELL_BEHAVED
 state:               NOT RUNNING
 PID:                 1536
 last message:        Exited due to user request.
 exit status:         Done
C:\>
The snmpcollect daemon can be started manually. At a command prompt, we typed in ovstart snmpcollect. You will see the output shown in Example 6-5. We then issued an ovstatus snmpcollect for verification, as shown in Example 6-3.
Example 6-5 snmpcollect started
C:\>ovstart snmpcollect
Done
C:\>
Note: If no Data Collections are currently defined to the MIB Data Collector tool, the snmpcollect daemon will not run.
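The check-and-restart sequence in Examples 6-4 and 6-5 can be scripted by parsing the state field of the ovstatus output. This is a minimal sketch of the parsing step only, written against the output format shown in Example 6-4; running ovstart itself is left out.

```python
# Sketch: decide whether snmpcollect needs an ovstart, by parsing the
# "state:" field of ovstatus output (format as shown in Example 6-4).
def needs_start(ovstatus_output):
    for line in ovstatus_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "state":
            return value.strip() == "NOT RUNNING"
    return True   # daemon unknown to ovstatus: assume it needs starting

sample = """object manager name: snmpcollect
behavior: OVs_WELL_BEHAVED
state: NOT RUNNING
PID: 1536
last message: Exited due to user request.
exit status: Done"""

restart_needed = needs_start(sample)
```

A wrapper script could call this on the captured output of `ovstatus snmpcollect` and issue `ovstart snmpcollect` only when it returns True.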
Select Tools --> MIB --> Graph Data to launch the graph utility. This reports on the historical data that has been collected on ITSOSW2. After you select this, NetView takes some time to process the data and present it in the graphical display; the graph build time depends on the amount of data collected. Figure 6-25 shows the progress indicator.
After the graph is built, it displays the swFCTxFrames data that was collected (Figure 6-26). Note that there are multiple instances of the object ID mapped, that is, swFCPortTxFrames.1, swFCPortTxFrames.2, and so on. In this case they represent the data collected for each port in the switch.
For viewing purposes, we adjusted the x-axis for Time by clicking Edit --> Graph Properties in the open graph window. This allowed us to zoom into shorter time periods. See Figure 6-27.
Any MIB object identifier that has been collected using the NetView MIB Data Collector can be graphed using the NetView Graph facility using the above process.
Important: Depending on the configuration, some advanced functionality may be initially disabled in NetView under Tivoli SAN Manager. This section requires this functionality to be enabled. To enable all functionality required, in NetView, click Options --> Polling and check the Poll All Nodes field. This is shown in Figure 6-29.
We will demonstrate how to build a MIB application that queries the swFCPortTxFrames MIB object identifier in the SW-MIB. This process can be used to query any SNMP-enabled device using NetView. With the switch ITSOSW2 selected, we start building the MIB application by launching the Tool Builder: select Tools --> MIB --> Tool Builder --> New. The MIB Tool Builder interface is launched, as in Figure 6-30. Click New to create a new Tool Builder entry for collecting data on ITSOSW2.
The Tool Builder Wizard Step 1 window is displayed (Figure 6-31). We entered FCPortTxFrames in the Title field and clicked in the Tool ID field to auto-populate the remaining fields. We clicked Next to continue with the wizard.
The Tool Wizard Step 2 interface displays. You can see our title of FCPortTxFrames has carried over. We are now ready to select the display type. We can choose between Forms, Tables, or Graphs. We will choose Graph and click New as shown in Figure 6-32.
The NetView MIB Browser is now displayed. We will use the MIB Browser to navigate down to the FCPortTxFrames object identifier. Use the Down Tree button to navigate through the MIB tree. Figure 6-33 shows the path through the SW-MIB port table. Click OK to add the object identifier.
SW-MIB Port Table group: private > enterprise > bcsi > commDev > fibrechannel > fcSwitch > sw > swFcPort > swFcPortTable > swFCPortTxFrames
Figure 6-33 SW-MIB Port Table
The newly created MIB application is displayed in the Tool Builder Step 2 of 2 window. See Figure 6-34 for the completed MIB Application. Click OK to complete the definition.
Now, the final window for the Tool Builder is displayed. It shows the newly created MIB application in the window, Figure 6-35. Click Close to close the window. The new MIB Application has been successfully created.
Clicking the FCPortTXFrames option launches a graph utility, shown in Figure 6-37.
The collection of MIB data starts immediately after you select the swFCPortTXFrames MIB application from the Monitor --> Other menu. Figure 6-38 shows the data being collected and displayed for each MIB instance on ITSOSW2.
The polling interval of the application can be controlled using the Poll Nodes Every field located under Edit --> Graph Properties. See Figure 6-39.
This launches a dialog to specify how often NetView Graph receives real-time data for graphing, shown in Figure 6-40. This determines how often the nodes are asked for data.
We continued to use the Tool Builder process defined in 6.4.1, MIB Tool Builder on page 228 to build additional MIB applications for real-time performance monitoring. We used the following MIB objects:
swFcPortTXWords
swFcPortRXC2Frames
swFCPortRXC3Frames
fcFXPortLinkFailures
fcFXPortSyncLosses
fcFXPortSigLosses
Figure 6-41 shows the newly defined MIB applications as they appear in the Tool Builder.
Figure 6-42 shows all the above MIB objects as they appear in the NetView Monitor pull-down menu. Note we have abbreviated the names of the MIB applications listed in the Monitor --> Other menu for ease of use.
6.4.3 SmartSets
With Tivoli SAN Manager providing management of the SAN, we can further extend SAN management from a LAN and iSCSI perspective. NetView SmartSets give us this ability. This section describes the concept of the NetView SmartSet (see Figure 6-43 below for an overview) and provides details on how to group and manage your SAN-attached resources from a TCP/IP (SNMP) perspective. By default, the iSCSI SmartSet is created by IBM Tivoli SAN Manager when nvsniffer is enabled. SmartSets for iSCSI initiators and targets can be created using the process described here.
What is a SmartSet?
Why SmartSets?
Defining a SmartSet
SmartSets and Data Collections
Figure 6-43 SmartSet Overview
In NetView a SmartSet is used to monitor a set of objects (devices). NetView allows for user-defined SmartSets. We use this to define and manage our SAN devices as one item. SmartSets can be used to group together systems that support a specific operating system, device type or business function. The symbol status displayed for nodes appearing in
Chapter 6. NetView Data Collection, reporting, and SmartSets
user-defined SmartSets is based solely on the IP status, not the Fibre Channel status. You can customize the attributes available for creating a SmartSet. Refer to the manual Tivoli NetView for Windows User's Guide, SC31-8888 for more information. Because Tivoli SAN Manager uses the TCP/IP and Fibre Channel protocols to manage the SAN, we will demonstrate how to complement this by using SNMP to manage the same components of the SAN using SmartSets.
Important: Depending on the configuration, some advanced functionality required for SmartSets may be disabled in NetView in Tivoli SAN Manager. This section requires this functionality to be enabled. To enable all functionality required, in NetView, click Options -> Polling and check the Poll All Nodes field. This is shown in Figure 6-29 on page 228.
We will demonstrate how to group all the IBM 2109 Fibre Channel switches in our configuration (ITSOSW1, ITSOSW2, and ITSOSW3) into one SmartSet called IBM2109. 1. On the NetView topology display, select the switches ITSOSW1, ITSOSW2, and ITSOSW3. See Figure 6-44 for the selected switches. Each symbol can be selected by holding down the Shift key and clicking once on each symbol.
2. Select Submap --> New Smartset from the main menu. The Find window is displayed, as in Figure 6-45.
3. Click the Advanced tab; this allows the switches selected on the topology map to be added to the SmartSet. See Figure 6-46.
4. Click Add Selected Objects to add ITSOSW1, ITSOSW2, and ITSOSW3 to the Combined Functions field (Figure 6-47).
5. Click Create SmartSet. This launches the New SmartSet dialog. We entered the name of our SmartSet as IBM2109, and added a description. See Figure 6-48. Note no spaces are allowed in the SmartSet Name field.
6. At this point, the SmartSet definition is complete. Click the SmartSets tab to verify that the IBM2109 SmartSet was created as shown in Figure 6-49.
3. Clicking on the IBM2109 SmartSet, we find its members ITSOSW1, ITSOSW2, and ITSOSW3, as shown in Figure 6-51.
Note: Symbols on the topology map have links back to their respective objects, since the same symbol can reside in more than one location in NetView. In the case of the switch discussed here, the same symbol in the SmartSet also resides on the IP Internet map. Propagation of status occurs to all symbols regardless of their location on the topology. For example, if there is a problem with the switch, causing it to change to a critical (RED) status, this will be reflected in both the SmartSet and on the IP Internet map.
SmartSets can be used to group your devices using a logical taxonomy for the enterprise. For our setup, we categorized our SAN resources by fabric and operating system, which allows us to easily manage those devices at a high level. Alternatively, we could have grouped the devices by application or business function. We created the following SmartSets, shown in Figure 6-52:
IBM2109 contains all IBM 2109 Fibre Channel switches
SANfabricA_AIX contains all AIX SAN-attached hosts
SANfabricA_HPUX contains all HP-UX SAN-attached hosts
SANfabricA_Solaris contains all Solaris SAN-attached hosts
SANfabricA_Win2k contains all Windows 2000 SAN-attached hosts
TivoliSANManager contains all the Tivoli SAN Manager hosts
Now we can manage our SAN-attached devices from both SAN and LAN perspectives from a single console.
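The grouping idea behind these SmartSets can be sketched as partitioning devices by a chosen attribute. The host names and attributes below are made up for illustration; they are not taken from the lab configuration.

```python
# Sketch: group SAN-attached hosts into SmartSet-like buckets by an
# attribute such as operating system. Names here are illustrative.
from collections import defaultdict

def build_smartsets(devices, key, prefix="SANfabricA_"):
    sets = defaultdict(list)
    for name, attrs in devices.items():
        sets[prefix + attrs[key]].append(name)
    return dict(sets)

devices = {
    "hostA": {"os": "AIX"},
    "hostB": {"os": "Win2k"},
    "hostC": {"os": "AIX"},
}
smartsets = build_smartsets(devices, "os")
```

The same function could group by a "fabric" or "application" attribute instead, matching the alternative taxonomies mentioned above.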
3. After allowing the Data Collection to collect data, we graph it using Tools --> MIB --> Graph Data All. The NetView Graph dialog (Figure 6-54) is displayed while the information is collected; this can take some time, depending on the amount of data returned.
4. A window displays, presenting all MIB instances of the swFCPortTxFrames MIB object (Figure 6-55) for all three switches in the SmartSet. Since the total number of entries is greater than 15, we get a message on the menu bar indicating that Maximum Graph Lines Exceeded. The NetView Graph utility can only graph 15 lines at a time.
5. Next, we need to select the desired instance of the MIB object for each switch that we want to graph. We then clicked Add to add the selected MIB labels to the Lines To Graph panel, then we clicked OK. For this example, we chose the first 5 instances for each of the three switches, shown in Figure 6-56. Click OK to start the graph.
The NetView Graph for the fifteen MIB instances we selected is shown in Figure 6-57.
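The selection constraint above (at most 15 graph lines, so five port instances from each of the three switches) can be sketched as a simple capped selection. The switch and instance labels below are illustrative.

```python
# Sketch: NetView Graph plots at most 15 lines, so with three switches
# we take the first five port instances from each.
MAX_GRAPH_LINES = 15

def pick_lines(instances_by_switch, per_switch):
    lines = []
    for switch, instances in instances_by_switch.items():
        lines.extend(f"{switch}:{i}" for i in instances[:per_switch])
    return lines[:MAX_GRAPH_LINES]     # hard cap, matching the utility

# 16-port switches, as in the lab's IBM 2109 configuration.
ports = {f"ITSOSW{n}": [f"swFCPortTxFrames.{p}" for p in range(1, 17)]
         for n in (1, 2, 3)}
lines = pick_lines(ports, per_switch=5)
```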
Note: iSCSI discovery requires that IP discovery be enabled in the Tivoli NetView that is shipped with IBM Tivoli SAN Manager. Be aware that when you turn on IP discovery, there can be considerable network activity, depending on how many devices are in your IP network. For this reason we advise the use of a seed file.
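The seed file itself is plain text. As a minimal sketch, assuming the common one-entry-per-line format with #-comments (NetView's actual seed file syntax supports richer entries), reading it back looks like this:

```python
# Sketch: read a netmon.seed-style file body and return the nodes to
# discover. Assumes one hostname or IP address per line, with blank
# lines and #-comments ignored.
def read_seed(text):
    nodes = []
    for line in text.splitlines():
        entry = line.split("#", 1)[0].strip()   # drop trailing comments
        if entry:
            nodes.append(entry)
    return nodes

seed = """# SAN switches only
itsosw1.almaden.ibm.com
itsosw2.almaden.ibm.com
9.1.38.156   # ITSOSW3
"""
nodes = read_seed(seed)
```

Restricting discovery to an explicit node list like this is what keeps the IP discovery traffic down.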
Once the seed file is updated and saved, we then need to clear out the NetView databases where the current topology information is stored. Start Server Setup by clicking Options --> Server Setup as in Figure 6-58.
Important: Performing Clear Databases in NetView deletes all previously saved NetView object and topology information only. It does not affect the Tivoli SAN Manager and WebSphere Application Server databases.
Now we want to configure NetView to use the updated seed file. Click the Discovery tab in the Server Setup options window. Under discovery, check Use Seed File, as shown in Figure 6-59, and click OK.
Click the Databases tab. Click the pull-down, select Clear Databases, shown in Figure 6-60, and click OK. This starts the process to clear the databases.
NetView prompts one last time to verify that you want to clear the databases. Click Yes. Figure 6-61 shows the warning message.
Clearing the databases typically takes a minute; however, this varies depending on the size of the NetView databases being cleared. The NetView console automatically shuts down and restarts when complete. See Figure 6-62.
When NetView restarts, it will discover and display the nodes that we defined in our netmon.seed file, shown in Figure 6-63.
To demonstrate the difference in the discovered IP topologies, Figure 6-64 shows the NetView display without using a seed file for discovery. In this case, NetView discovers itself and all other nodes on the subnet.
This completes our demonstration of how existing NetView capabilities can be leveraged to further extend the capabilities of Tivoli SAN Manager.
Chapter 7.
iSCSI components (diagram): application servers and client desktops act as iSCSI initiators; they connect across an IP network to storage targets.
iSCSI uses standard Ethernet switches and routers to move the data from server to storage. It also allows the IP and Ethernet infrastructure to be used for expanding access to SAN storage and extending SAN connectivity across any distance. Figure 7-2 shows a comparison of Fibre Channel to iSCSI.
Figure 7-2 comparison (diagram): in both models a database application issues block I/O to pooled storage; an FC SAN carries it over Fibre Channel, while iSCSI carries it over IP.
The iSCSI MIBs and iSNS MIBs are pre-installed into the c:\usr\ov\snmp_mibs directory so that the NetView MIB browser can be used to query the iSCSI MIBs.
Restriction: Note that IBM Tivoli NetView does not currently support MIB Tool Builder and Data Collections against SNMP V2.
The iSCSI MIB trap definition files are used by Tivoli NetView for event processing.
iSCSI MIBs
Before managing an iSCSI device, the MIBs must be loaded. By default, the MIBs are not loaded into Tivoli NetView at installation time; you have to load them using the NetView MIB loading function. The purpose of loading a MIB is to define the MIB objects so that NetView's applications can use those MIB definitions. Load the iSCSI MIB files one at a time into Tivoli NetView.
iSNS MIB - The Internet Storage Name Service (iSNS) defines a mechanism for IP-based storage devices to register with, and query for, other storage devices in the network. The iSNS MIB is designed to allow SNMP to be used to monitor and manage iSCSI devices.
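The register-and-query idea behind iSNS can be pictured with a toy in-memory name service. This models only the concept, not the iSNS protocol, its message formats, or its MIB; the iSCSI qualified names are invented examples.

```python
# Toy sketch of the iSNS idea: IP storage devices register themselves
# and query the name service for other devices by role.
class TinyNameService:
    def __init__(self):
        self.registry = {}

    def register(self, name, role, address):
        # role would be "initiator" or "target" in iSCSI terms
        self.registry[name] = {"role": role, "address": address}

    def query(self, role):
        return sorted(n for n, e in self.registry.items()
                      if e["role"] == role)

isns = TinyNameService()
isns.register("iqn.example:server1", "initiator", "10.0.0.1")
isns.register("iqn.example:disk1", "target", "10.0.0.9")
targets = isns.query("target")
```

An initiator would use such a query to learn which targets exist before opening iSCSI sessions to them.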
See 6.2.3, Loading MIBs on page 212 for detailed instructions on loading MIBs.
7.4 Summary
In this chapter, we introduced iSCSI and explained how it functions. We also described how IBM Tivoli SAN Manager performs discovery of iSCSI devices.
Chapter 8.
8.1 Overview
First, we describe configuration options for forwarding events to the SNMP managers. We also describe how IBM Director can be integrated via SNMP to Tivoli SAN Manager. Figure 8-1 gives an overview of this chapter.
Use the ITSANM.MIB file from the \misc\utils directory on the installation media. This file should be incorporated in your SNMP management console trap definitions. The MIB file provides only the trap information. If you use the Tivoli NetView console as the SNMP console, perform these steps so that traps are displayed in an appropriate format in the Tivoli NetView event browser: 1. Copy ITSANM.MIB to the c:\temp directory. 2. Run the mib2trap program on the ITSANM.MIB file. Specify the full path name of a writable directory when creating the ITSANM.BAT file. For example, run this command to create the BAT file in the directory c:\temp:
mib2trap c:\temp\ITSANM.MIB c:\temp\ITSANM.BAT
You can name the BAT file anything you want; this example creates a file called ITSANM.BAT. 3. Edit ITSANM.BAT to format the events displayed in NetView. Change the -c option, which is the event type display, from:
-c LOGONLY
to:
-c "Status Events"
Also change the format option to:
-F "$1 $*"
Example 8-1 and Example 8-2 show the ITSANM.BAT file before and after the change.
Tip: There are many traps, so use the Replace All feature in your editor.
4. Run the ITSANM.BAT file. 5. Restart Tivoli NetView and bring up the monitor to see all the events.
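The Replace All edit from step 3 can also be done with a short script. This is a sketch of the -c substitution only; the addtrap lines shown are illustrative stand-ins for the mib2trap output, not its exact contents.

```python
# Sketch: apply the step-3 edit across the whole generated BAT file,
# changing the event type display option on every trap definition.
def fix_trap_options(bat_text):
    return bat_text.replace("-c LOGONLY", '-c "Status Events"')

# Illustrative file content; real mib2trap output has many such lines.
before = ('addtrap -n ITSANM -c LOGONLY ...\n'
          'addtrap -n ITSANM -c LOGONLY ...')
after = fix_trap_options(before)
```

A plain string replacement is enough here because the option text is identical on every generated line, which is exactly why the tip recommends Replace All.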
Type in the IP address, port number, and community name of the SNMP manager console that will receive SNMP traps from the SAN Manager server. After you type in the information, click Add to add the entry to the list.
Note: IBM Tivoli Storage Area Network Manager only supports one SNMP community name. A device can have several community names, but IBM Tivoli Storage Area Network Manager can only communicate with one of those names. Also, if the SNMP community name entered in the command is not a community name on the device, IBM Tivoli Storage Area Network Manager SNMP queries will time out. IBM Tivoli Storage Area Network Manager will not be able to communicate with the device.
If you want to change the community name, follow these steps: 1. Open a Command Prompt window on the Server system. 2. Change to the following directory:
c:\tivoli\itsrm\manager\bin\w32-ix86
3. Enter the following command, where name is the community name you want to use for SNMP queries:
srmcp SANDBParms set SNMPCommunityName=name
For example, to change the SNMP community name to myname, enter the following command:
srmcp SANDBParms set SNMPCommunityName=myname
8.3.1 Event forwarding from IBM Tivoli SAN Manager to IBM Director
After installing IBM Director on your system, and before starting to receive traps from the IBM Tivoli SAN Manager Server, you need to set the trap destination to your IBM Director. The procedure is the same as described in Setting up the SNMP trap destination in SAN Manager on page 262.
The next step is to define the IBM Tivoli SAN Manager Server system in IBM Director. You can do this by accessing IBM Director Console as shown in Figure 8-4.
As you can see from our example, we defined our IBM Tivoli SAN Manager Server system POLONIUM as an SNMP-capable device. If we want to see the SNMP trap events coming from our IBM Tivoli SAN Manager Server, we simply drag the All events task, as shown in Figure 8-4, from the Tasks window to the defined system. A window similar to Figure 8-5 is shown.
As you can see in our example, there is an event showing that managed host TUNGSTEN was shut down. IBM Director allows you to build event filters, which can then be associated with actions, for example, sending an e-mail alert to the system administrator.
Chapter 9.
9.1 Overview
SANs are becoming more and more critical in the corporate infrastructure; therefore, like other IT components, they should be made as highly available as possible. SANs are complex network environments, with potentially hundreds or even thousands of individual devices. Hardware outages cause disruptions to the business environment, leading to lost revenue and reduced customer satisfaction. Minimizing outages due to hardware failures is therefore a goal of SAN management, and one way to do this is by predicting and detecting likely errors before they cause outages. Typically, servers have multiple redundant paths to devices, and determining a root error cause in such an environment is usually problematic. Some of the most important factors complicating root cause analysis are:
Error data can be inconsistent and sparse
Error counter implementations are complex
Error indications can be dispersed from the source - they can propagate across the SAN
Error Detection and Fault Isolation (ED/FI - SAN Error Predictor) is implemented in IBM Tivoli SAN Manager Version 1.2 to provide a way to predict errors on the optical links that connect SAN components (including HBA to switch, switch to switch, and switch to storage connections). ED/FI functions are listed in Figure 9-1.
By using Predictive Failure Analysis (PFA), downtime of SAN components can be significantly decreased, because problematic components can be removed before they fail. This can significantly reduce the operational cost of SANs. The ED/FI function collects data from IBM Tivoli SAN Manager agents, outband and/or inband as available. The polling interval is every 15 minutes. The data is stored in the ITSANMDB database. This data is then analyzed using various statistical methods, and from this future errors are predicted. The predicted errors are presented in the NetView interface by adorning the appropriate icons, as shown in Figure 9-2: an exclamation point is superimposed on the icon representing the device where the error is predicted. A TEC event and an SNMP trap are also generated.
Figure 9-3 shows an example of a failing device, in this case, the host SENEGAL. Although in this case, the icon is actually red, indicating a SAN Manager detected failure, note that typically, adorned icons will still show green, indicating they are available. This is because the ED/FI function is designed to flag potential problems before they have escalated to an actual failure. This allows you to replace hardware preemptively at a convenient time, rather than incurring an unplanned outage due to failure.
Data is collected from the following counters:
FA MIB counters
FE MIB counters
Brocade Switch MIB counters
HBA APIs (Request Port Status, Read Link Status) - inband only
Note: Not all the switch vendors collect data on all the defined counters in the MIB schema. This depends on the particular implementation and adherence to the various standards. At the time of writing, the fullest ED/FI functionality is available on Brocade switches. Fewer counters are available for monitoring on other switch vendors.
Predictive Failure Analysis is built on a stochastic model called the Dispersion Frame Technique (DFT), which was developed and tested at Carnegie Mellon University. The method eliminates complexity through simple and effective pattern recognition of error occurrences. DFT involves a set of rules for predicting failures, based on the proximity of error occurrences to each other in time. ED/FI uses a set of these rules to determine when a set of counters exceeding a threshold indicates an error. While the specific rules are internal to ED/FI, they are used to distinguish normal from abnormal behavior by detecting an increase in error rate and a decrease in the time intervals between error occurrences. An example rule might be to trigger if a counter exceeds a threshold 3 times within a defined interval. When the PFA process sees that counters have changed, it evaluates them along with previous data. If the counter changes meet the criteria of the DFT rules, an indication is created; an Indication Record is created for each port/counter/rule group. These indications are then passed on to the Fault Isolation (FI) process. The FI process analyzes the indications by further filtering the errors. FI also uses topology and attribute information provided by IBM Tivoli SAN Manager, and with this data isolates faults to the specific Fibre Channel (FC) link. If all requirements are met, FI creates a Fault Record. After a defined number of faults occurs (as defined in the FI rules), a Notification Record is created. The Notification Record is presented in NetView by adorning the
corresponding device as shown in Figure 9-3 on page 269. The Notification Record is permanent and can only be removed with explicit user intervention (via the GUI). When a user clears the adornment, a Cleared Record will be created in the ITSANMDB database and the device port will be set to a cleared state. If another fault occurs on the same port it may be immediately upgraded to a Notification. The whole FI flow is shown in Figure 9-5.
Figure 9-5 FI record flow (diagram): FI upgrades Faults to a Notification; user input moves a Notification to Cleared; a further fault on a cleared port is upgraded by FI again.
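The example rule mentioned earlier (trigger if a counter exceeds a threshold 3 times within a defined interval) illustrates the time-proximity idea behind DFT. The sketch below implements only that example rule; the actual ED/FI rule set is internal to the product.

```python
# Sketch of a DFT-style time-proximity rule: raise an indication when
# a counter exceeds its threshold `count` times within `window`.
def indication(times, window, count=3):
    """times: sorted timestamps (e.g. minutes) of threshold crossings."""
    for i in range(len(times) - count + 1):
        # any `count` consecutive crossings inside one window triggers
        if times[i + count - 1] - times[i] <= window:
            return True
    return False

# Three crossings within 25 minutes -> indication; the late fourth
# crossing is irrelevant.
hits = indication([0, 12, 25, 120], window=30)
```

Shrinking intervals between crossings is exactly what makes such a rule fire, which matches the text's description of abnormal behavior.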
Fault Isolation adorns the transmitter side of the link (rather than the receiver), because the transmitter is the most likely faulty component in the group of link, transmitter, cable, and receiver.
Note: (1) Switches cannot be adorned if inband agents are not active, except in the case of cascaded switches using outband management only. (2) Endpoint devices cannot be adorned if outband agents are not active.
Important: Error counters can also change for non-error conditions, including:
Rebooting the system
Configuration changes
Clearing counters manually
Because the Fault Isolation mechanism counts these as error conditions, we recommend disabling Error Detection/Fault Isolation in such cases to avoid spurious adornments.
In this window you can enable or disable ED/FI using the Enable Error Detection and Fault Isolation radio button.
Tip: As stated in the window, it is recommended that you disable the error prediction in case of service actions so that false notifications can be avoided.
In the Rule Set Selection you can see the available rules and which rules are active. The active rules are used in error processing as described in 9.2, Error processing on page 269. To see the notes for a specific rule, select the rule and click View; you will see a window similar to Figure 9-8.
In our example we simulated errors by disabling and enabling a port on the switch ITSOSW1 over a period of time. As well as being displayed graphically as adornments, the errors are also listed under SAN --> ED/FI Configuration in the Properties tab, as shown in Figure 9-10.
This window displays the list of potentially faulty SAN devices, using the following columns:
Clear - check this box to clear the adornment on a particular device.
Time - the time when the error was identified by the FI rules.
Faulted Device - the device which was predicted by FI to be failing. The rule is that the device with the transmitter is marked as failed, as explained in 9.2, Error processing on page 269. If the device has an IBM Tivoli SAN Manager agent installed and running, it appears with its Global Unique Identifier (GUID), similar to the first entry in Figure 9-10. If there is no agent running, or the device is a switch, the device is identified by its node WWN. In our example the fifth entry in Figure 9-10 is a server without an agent and the sixth is a switch.
Faulted Port - if the device has several ports, the WWN of the actual faulting port is displayed here.
Indicated Device - the device which actually detected the errors. It is identified in the same way as the faulted device. Figure 9-11 (which is simply Figure 9-10 scrolled to the right) shows an example.
Indicated Port - if the device has several ports, the WWN of the port on which the errors were detected is displayed.
PD Reference - the reference to Problem Determination guides, which can be used by IBM Support to diagnose the problem (if it is an IBM-supported piece of hardware).
To find the object with the corresponding GUID or port WWN, enter it in the Object Name field. NetView uses both GUIDs and port WWNs for the Object Names. Because a GUID or port is usually uniquely identified by less than the whole numeric string, you can use wildcards rather than the entire string, as shown in Figure 9-12.
In our example we used the last four numbers of the GUID displayed in the first entry shown in Figure 9-10 on page 275. The search string is actually the least significant digits of the GUID, which is truncated in that figure. The full string for the GUID, including the searched string, is displayed in Figure 9-15 on page 279. After entering the search string, click OK. The search results are displayed in Figure 9-13.
If you double-click on a returned object, NetView will open the topology map, highlighting the device, as shown in Figure 9-14. We can see the notification is for the host SENEGAL, which is adorned.
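The wildcard search behaves like suffix matching on the object names. A minimal sketch, with made-up identifiers standing in for the GUIDs and WWNs from the figures:

```python
# Sketch: match NetView object names (GUIDs or port WWNs) by their
# least significant digits, using a leading wildcard as in Figure 9-12.
from fnmatch import fnmatch

def find_objects(names, pattern):
    return [n for n in names if fnmatch(n, pattern)]

# Illustrative identifiers, not the lab's actual GUIDs/WWNs.
objects = [
    "C0.76.79.E8.00.11.22.33.44.55.66.77.88.99.3C.A8",
    "10.00.00.60.69.10.6C.A8",   # a WWN with a similar-looking tail
    "C0.76.79.E8.AA.BB.CC.DD.EE.FF.00.11.22.33.44.55",
]
matches = find_objects(objects, "*3C.A8")
```

As in the text, the trailing digits are usually enough to single out one object, so the wildcard search saves typing the full string.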
Now you can clearly see where the faulted device is located in the SAN, and you can start planning the necessary action to diagnose or repair the faulting device. ED/FI isolates faults only to the link level; therefore, either side of the link or the cable itself might be the faulty component. Before replacing hardware, you should consult your service contracts and product problem determination guides for direction. Cleaning, cable reseating, and diagnostic execution are some of the steps that might be recommended and that lead to a definitive decision on parts repair or replacement. IBM Service can use ED/FI information in conjunction with problem determination guides to advise what part replacements, if any, are necessary. If you can identify a component, you should diagnose the problem and repair or replace the component as soon as possible, before a permanent failure occurs. If you cannot identify a component, at a minimum you should monitor the link for further errors. In environments where high systems availability is a requirement or service level agreements are in place, you can contact service representatives about replacing the Fibre Channel component.
In Figure 9-16 you can see that the selected entry is now removed.
The removal is also reflected in the topology map as shown in Figure 9-17. The host SENEGAL is no longer adorned.
Part 5
Maintenance
In Part 5 we provide information on keeping your IBM Tivoli SAN Manager environment healthy. First we describe how to back up each component, including the application files and the database repository, then we present the logging and tracing facilities for problem diagnosis provided with the product.
Chapter 10.
IBM Tivoli Storage Management Concepts, SG24-4877
IBM Tivoli Storage Manager Implementation Guide, SG24-5416
Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment, SG24-6141
Backing Up DB2 Using Tivoli Storage Manager, SG24-6247
Note: No additional protection is needed for the embedded IBM WebSphere Application Server. This has changed with IBM Tivoli SAN Manager Version 1.2, since it uses the embedded IBM WebSphere Application Server Version 5.0.
Figure 10-2 Tivoli Storage Manager integration with Tivoli SAN Manager
Normal flat files on the Tivoli SAN Manager server can be backed up using the Tivoli Storage Manager Backup/Archive client for Windows 2000. DB2 has already integrated the Tivoli Storage Manager API code to facilitate database backup. DB2 provides its own backup utility, which allows both full database and individual tablespace backups. The backup utility can be set up to use Tivoli Storage Manager as the backup repository, as you will see later. Therefore, the two client types (Backup/Archive client for flat files, API client for DB2 backup) work together to provide full data protection for your Tivoli SAN Manager environment. The API client and the Tivoli Storage Manager Backup/Archive client can run simultaneously on the same DB2 server; however, they are totally separate clients as far as the Tivoli Storage Manager server is concerned, and we will configure them separately.
Figure 10-3 Sample environment: Backing up Tivoli SAN Manager to Tivoli Storage Manager
Here is a summary of the setup steps:
1. Configure the Tivoli Storage Manager server to receive backups from the Tivoli SAN Manager server.
2. Configure the API and Backup/Archive clients on the Tivoli SAN Manager server.
operating system-level backups of the Windows server or AIX V5.1 server which runs the Tivoli SAN Manager code.

We need to specify a management class and copy group within a policy domain for DB2 backups. We recommend defining a separate policy domain for the DB2 backups. We will define a domain called DB2_DOMAIN and register the nodename assigned to the DB2 backup client (in our case, LOCHNESS_DB2) to it.

DB2 places special requirements on the management class. Each DB2 database backup is stored as a unique object in the Tivoli Storage Manager server, by specifying a time stamp as part of the low level qualifier (LL_NAME). This means that the DB2 backups must be manually inactivated. It also means that the management class that the backup objects are bound to should have retention settings that expire the inactivated backup objects immediately. The retention settings for a backup copy group that provide this are RETONLY=0 and VERDELETED=0.

Example 10-1 shows typical Tivoli Storage Manager commands used to define a suitable environment for DB2 backups. We define a policy domain, policy set, management class, and copy groups for the DB2 environment. We activate the policy set and register our client node to the policy domain. We are using a storage pool called BACK_LTO as the destination for our DB2 backups.
Example 10-1 Definition of Tivoli Storage Manager environment for DB2 backups
DEFINE DOMAIN DB2_DOMAIN DESCRIPTION="Domain for DB2 backups" BACKRETENTION=30 ARCHRETENTION=365
DEFINE POLICYSET DB2_DOMAIN DB2_POLICY DESCRIPTION="DB2 BACKUPS Policyset"
DEFINE MGMTCLASS DB2_DOMAIN DB2_POLICY DB2_MGMTCLASS DESCRIPTION="Mgmtclass for DB2 databases" SPACEMGTECHNIQUE=NONE AUTOMIGNONUSE=0 MIGREQUIRESBKUP=YES
DEFINE COPYGROUP DB2_DOMAIN DB2_POLICY DB2_MGMTCLASS DESTINATION=BACK_LTO FREQUENCY=0 VEREXISTS=1 VERDELETED=0 RETEXTRA=0 RETONLY=0 MODE=MODIFIED SERIALIZATION=SHRSTATIC
DEFINE COPYGROUP DB2_DOMAIN DB2_POLICY DB2_MGMTCLASS TYPE=ARCHIVE DESTINATION=ARCHIVEPOOL RETVER=NOLIMIT SERIALIZATION=SHRSTATIC
ASSIGN DEFMGMTCLASS DB2_DOMAIN DB2_POLICY DB2_MGMTCLASS
ACTIVATE POLICYSET DB2_DOMAIN DB2_POLICY
REGISTER NODE LOCHNESS_DB2 LOCHNESS_DB2 DOMAIN=DB2_DOMAIN ARCHDELETE=YES BACKDELETE=YES USERID=NONE
The following parameters were set for the backup copy group:

VEREXISTS=1 to keep only one version of the backup file, as the name of each DB2 backup is unique. (There will never be a newer version of the backup image with the same name.)
VERDELETED=0 so that if the backup file has been deleted (via db2adutl), Tivoli Storage Manager does not keep an inactive version of it.
RETEXTRA=0: this parameter will never be used, as you will never have more than one version of the backup file. To prevent confusion, set it to the same value as RETONLY.
RETONLY=0 so that when a backup image file becomes inactive, it is purged from the Tivoli Storage Manager server at the next expiration.
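The combined effect of VERDELETED=0 and RETONLY=0 can be sketched with a toy model. This is our own simplification for illustration, not Tivoli Storage Manager code; the function name and data shape are invented.

```python
def eligible_for_expiry(versions, verdeleted=0, retonly=0):
    """Toy model of backup copy group retention for files whose active
    version is gone (the db2adutl-inactivated DB2 backup case).
    versions: list of dicts like {"active": False, "days_inactive": n}.
    Returns the inactive versions the next expiration run may purge."""
    inactive = [v for v in versions if not v["active"]]
    keep = inactive[:verdeleted]           # VERDELETED: inactive versions to keep
    # RETONLY: days to retain the last remaining inactive version
    return [v for v in inactive[len(keep):] if v["days_inactive"] >= retonly]

# A DB2 backup image inactivated today is purgeable at the next expiration:
print(eligible_for_expiry([{"active": False, "days_inactive": 0}]))
# With a nonzero RETONLY (for example 30 days) it would be held that long:
print(eligible_for_expiry([{"active": False, "days_inactive": 0}], retonly=30))
```

With both settings at 0, an inactivated DB2 backup object becomes eligible for removal immediately, which is exactly what the unique-name DB2 backup scheme requires.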
DB2 configuration
Now, DB2 needs to be configured to use the right user name, password, and management class. This can be achieved in two ways: either define these parameters within DB2 using the DB2 command line interface, as shown in Example 10-3, or use the information taken from the Tivoli Storage Manager options file and the default settings that were defined in 10.2.3, Tivoli Storage Manager server configuration on page 286.
Example 10-3 DB2 configuration
db2 => update db cfg for ITSANMDB using TSM_MGMTCLASS DB2_MGMTCLASS DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. DB21026I For most configuration parameters, all applications must disconnect from this database before the changes become effective. db2 => update db cfg for ITSANMDB using TSM_OWNER LOCHNESS_DB2 DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. DB21026I For most configuration parameters, all applications must disconnect from this database before the changes become effective. db2 => update db cfg for ITSANMDB using TSM_NODENAME LOCHNESS_DB2 DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. DB21026I For most configuration parameters, all applications must disconnect from this database before the changes become effective. db2 => update db cfg for ITSANMDB using TSM_PASSWORD LOCHNESS_DB2 DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. DB21026I For most configuration parameters, all applications must disconnect from this database before the changes become effective. db2 => get db cfg for ITSANMDB .
.
.
 Number of database backups to retain   (NUM_DB_BACKUPS) = 12
 Recovery history retention (days)     (REC_HIS_RETENTN) = 366
 TSM management class                    (TSM_MGMTCLASS) = DB2_MGMTCLASS
 TSM node name                            (TSM_NODENAME) = LOCHNESS_DB2
 TSM owner                                   (TSM_OWNER) = LOCHNESS_DB2
 TSM password                             (TSM_PASSWORD) = *****
In either case you will need to set up some operating system environment variables so that the Tivoli Storage Manager API is able to find the Tivoli Storage Manager options file and knows where to write log files. These environment variables are shown in Example 10-4.
Tip: We used a different client options file, called DB2_DSM.OPT, for our DB2 environment. To make the DB2 environment use it, you have to define all the DSMI_ variables to the system. If you choose this simpler way, you do not have to add the TSM entries (TSM_MGMTCLASS, TSM_NODENAME, TSM_OWNER, TSM_PASSWORD) to the DB2 configuration of the ITSANMDB database as shown in Example 10-3. If you already have these entries in the DB2 configuration, you can remove them with the following commands:

update db cfg for ITSANMDB using TSM_MGMTCLASS
update db cfg for ITSANMDB using TSM_OWNER
update db cfg for ITSANMDB using TSM_NODENAME
update db cfg for ITSANMDB using TSM_PASSWORD

Otherwise, define the DSMI_ variables as system variables as shown in Example 10-4.
Example 10-4 Tivoli Storage Manager environment variables (system-wide entries)
DSMI_CONFIG=c:\tivoli\tsm\api\db2_dsm.opt DSMI_DIR=c:\tivoli\tsm\api DSMI_LOG=c:\tivoli\tsm\api
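Since a missing DSMI_ variable is a common cause of DB2-to-TSM backup failures, a pre-flight check is a cheap safeguard. The following sketch uses our own hypothetical function name and a simulated environment dictionary; in real use you would pass `os.environ`.

```python
REQUIRED = ("DSMI_CONFIG", "DSMI_DIR", "DSMI_LOG")

def missing_dsmi_vars(env):
    """Return the DSMI_* variables the TSM API would fail to find."""
    return [name for name in REQUIRED if not env.get(name)]

# Simulated environment matching the values in Example 10-4:
env = {
    "DSMI_CONFIG": r"c:\tivoli\tsm\api\db2_dsm.opt",
    "DSMI_DIR": r"c:\tivoli\tsm\api",
    "DSMI_LOG": r"c:\tivoli\tsm\api",
}
print(missing_dsmi_vars(env))                 # nothing missing
print(missing_dsmi_vars({"DSMI_DIR": env["DSMI_DIR"]}))
```

Running such a check before invoking the DB2 backup utility makes the error message explicit instead of leaving the API to fail with a less obvious return code.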
Now, configure DB2 for ONLINE backups if required. This is set by the LOGRETAIN parameter. Example 10-5 shows the commands.
Example 10-5 Configure for online backup
db2 => get db cfg for ITSANMDB . . Log retain for recovery enabled (LOGRETAIN) = OFF . . db2 => update db cfg for ITSANMDB using LOGRETAIN RECOVERY DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. DB21026I For most configuration parameters, all applications must disconnect from this database before the changes become effective. db2 => quit DB20000I The QUIT command completed successfully.
C:\PROGRA~1\SQLLIB\BIN>db2stop force SQL1064N DB2STOP processing was successful. C:\PROGRA~1\SQLLIB\BIN>db2start SQL1063N DB2START processing was successful.
Stop and re-start DB2 to allow the changes to take effect (Example 10-7).
Example 10-7 Stop and start DB2
C:\PROGRA~1\SQLLIB\BIN>db2stop SQL1064N DB2STOP processing was successful. C:\PROGRA~1\SQLLIB\BIN>db2start SQL1063N DB2START processing was successful.
Because the DB2 database files are backed up using DB2, they must be excluded from backup by the normal Backup/Archive client. We excluded all DB2 files except the RECOVERY LOG files. You must update the dsm.opt file located in the C:\Tivoli\tsm\baclient\ directory (Example 10-9).
Example 10-9 baclient dsm.opt file sample
NODENAME          LOCHNESS
PASSWORDACCESS    GENERATE
TCPSERVERADDRESS  banda.almaden.ibm.com
EXCLUDE           C:\DB2\...\*
INCLUDE           C:\DB2\...\*.LOG
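The reason this pair of statements excludes the DB2 files while keeping the log files is that TSM evaluates the include-exclude list bottom-up, first match wins. The sketch below models that evaluation; the pattern-to-regex translation is our own simplification of the TSM rules, in which `...` spans directories and `*` stays within one path component.

```python
import re

def tsm_pattern_to_regex(pattern):
    """Rough translation of a TSM file pattern to a regex:
    '...' spans any number of directories, '*' stays within one
    path component.  This simplifies the real TSM rules."""
    p = re.escape(pattern)
    p = p.replace(re.escape("..."), ".*")      # '...' -> any directories
    p = p.replace(re.escape("*"), r"[^\\]*")   # '*'   -> within one name
    return re.compile("^" + p + "$", re.IGNORECASE)

# Rules in dsm.opt order; TSM evaluates the list bottom-up, first match wins.
rules = [
    ("EXCLUDE", tsm_pattern_to_regex(r"C:\DB2\...\*")),
    ("INCLUDE", tsm_pattern_to_regex(r"C:\DB2\...\*.LOG")),
]

def backed_up(path):
    for action, rx in reversed(rules):         # bottom-up evaluation
        if rx.match(path):
            return action == "INCLUDE"
    return True                                # unmatched files are backed up

print(backed_up(r"C:\DB2\NODE0000\SQL00002\SQLOGDIR\S0000041.LOG"))
print(backed_up(r"C:\DB2\NODE0000\SQL00002\SQLINSLK"))
```

A recovery log such as S0000041.LOG hits the INCLUDE first and is backed up; every other file under C:\DB2 falls through to the EXCLUDE; files outside C:\DB2 match no rule and are backed up normally.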
(C) Copyright IBM Corporation 1990, 2003 All Rights Reserved.

Node Name: BANDA
Please enter your user id <BANDA>:
Please enter password for user id "BANDA":

Node Name: BANDA
Session established with server BANDA: AIX-RS/6000
  Server Version 5, Release 2, Level 0.0
  Data compression forced on by the server
  Server date/time: 06/03/03 14:23:51  Last access: 06/03/03 14:23:30
.... <information omitted>
Session established with server BANDA: AIX-RS/6000
  Server Version 5, Release 2, Level 0.0
  Data compression forced on by the server
  Server date/time: 05/29/03 17:36:25  Last access: 05/29/03 16:06:15
ITSAN Manager backup initiated by TSM scheduler. 05/29/03 16:05:47 The Scheduler is under the control of the TSM Scheduler Daemon 05/29/03 16:05:47 Scheduler has been started by Dsmcad. 05/29/03 16:05:47 Querying server for next scheduled event. 05/29/03 16:05:47 Node Name: BANDA 05/29/03 16:05:47 Session established with server BANDA: AIX-RS/6000 05/29/03 16:05:47 Server Version 5, Release 2, Level 0.0 05/29/03 16:05:47 Data compression forced on by the server 05/29/03 16:05:47 Server date/time: 05/29/03 16:05:47 Last access: 05/29/03 15:59:37 Executing scheduled command now. 05/29/03 16:05:47 Incremental backup of volume '/' 05/29/03 16:05:47 Incremental backup of volume '/usr' 05/29/03 16:05:47 Incremental backup of volume '/var' 05/29/03 16:05:47 Incremental backup of volume '/home' 05/29/03 16:05:47 Incremental backup of volume '/opt'. . ANS1898I ***** Processed 28,000 files ***** . 05/29/03 16:05:57 Normal File--> 0 /opt/tivoli/itsanm/agent/InbandEvents [Sent] 05/29/03 16:05:57 Normal File--> 748 /opt/tivoli/itsanm/agent/agentLog.txt [Sent] 05/29/03 16:05:57 Normal File--> 1,155 /opt/tivoli/itsanm/agent/ibmchanges.txt [Sent] 05/29/03 16:05:57 Normal File--> 2,201 /opt/tivoli/itsanm/agent/ibmchanges.zip [Sent] 05/29/03 16:05:57 Normal File--> 34,799 /opt/tivoli/itsanm/agent/license.txt [Sent] 05/29/03 16:05:57 Normal File--> 8,733 /opt/tivoli/itsanm/agent/log.txt [Sent] 05/29/03 16:05:57 Normal File--> 41 /opt/tivoli/itsanm/agent/setacc.sh [Sent] 05/29/03 16:05:57 Normal File--> 330,209 /opt/tivoli/itsanm/agent/_uninst/uninstall [Sent] 05/29/03 16:05:57 Normal File--> 28,483 /opt/tivoli/itsanm/agent/_uninst/uninstall.dat [Sent] 05/29/03 16:05:58 Normal File--> 4,182,045 /opt/tivoli/itsanm/agent/_uninst/uninstall.jar [Sent]
05/29/03 16:05:58 Directory--> 512 /opt/tivoli/itsanm/agent/bin/aix [Sent] 05/29/03 16:05:58 Directory--> 512 /opt/tivoli/itsanm/agent/bin/aix/en_US [Sent] 05/29/03 16:05:58 Normal File--> 347 /opt/tivoli/itsanm/agent/bin/aix/.toc [Sent] ... <information omitted> Successful incremental backup of '/opt' . 05/29/03 16:06:15 Total number of objects inspected: 28,191 05/29/03 16:06:15 Total number of objects backed up: 4,255 05/29/03 16:06:15 Total number of objects updated: 0 05/29/03 16:06:15 Total number of objects rebound: 0 05/29/03 16:06:15 Total number of objects deleted: 0 05/29/03 16:06:15 Total number of objects expired: 0 05/29/03 16:06:15 Total number of objects failed: 0 05/29/03 16:06:15 Total number of bytes transferred: 54.75 MB 05/29/03 16:06:15 Data transfer time: 0.22 sec 05/29/03 16:06:15 Network data transfer rate: 247,479.37 KB/sec 05/29/03 16:06:15 Aggregate data transfer rate: 2,003.16 KB/sec 05/29/03 16:06:15 Objects compressed by: 23% 05/29/03 16:06:15 Elapsed processing time: 00:00:27
ITSANMstopall
The script ITSANMstopall stops all the applications, including NetView and the Tivoli SAN Manager Server application via WebSphere Application Server. This script calls another script, ITSANMstop (Example 10-11).
Example 10-11 ITSANMstopall script
@REM Stop the Netview Application
@REM ----------------------------
@echo "Stopping Netview"
ovstop
net stop "Tivoli Netview Service"
@REM Stop the ITSANM-Manager
@REM ------------------------
@echo "Stopping the IBM WebSphere Application Server V5 - ITSANM-Manager"
call ITSANMstop.bat
ITSANMstop
The script ITSANMstop stops just the Tivoli SAN Manager WebSphere application (Example 10-12).
ITSANMstartall
The script ITSANMstartall re-starts both NetView and the Tivoli SAN Manager Server WebSphere application (Example 10-13). This script calls another script, ITSANMstart.
Example 10-13 ITSANMstartall script
@REM Start the Netview Application
@REM -----------------------------
@echo "Starting Netview"
net start "Tivoli Netview Service"
ovstart
@REM Start the ITSANM-Manager
@REM ------------------------
@echo "Starting the ITSANM-Manager ..."
call ITSANMstart.bat
ITSANMstart
The script ITSANMstart starts just the Tivoli SAN Manager WebSphere application (Example 10-14).
Example 10-14 ITSANMstart script
@ECHO ON
@REM Start WAS ITSANM-Manager Application
@REM ------------------------------------
@echo "Starting the ITSANM-Manager ..."
net start "IBM WebSphere Application Server V5 - ITSANM-Manager"
"Stopping the ITSANM-Manager"
C:\bkupscripts>call ITSANMstop.bat
"Stopping the ITSANM-Manager"
C:\bkupscripts>net stop "IBM WebSphere Application Server V5 - ITSANM-Manager"
The IBM WebSphere Application Server V5 - ITSANM-Manager service was stopped successfully.
More help is available by typing NET HELPMSG 2186.

C:\bkupscripts>cd C:\Program files\tivoli\tsm\baclient
.
C:\Tivoli\tsm\baclient>dsmc inc
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 0.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: LOCHNESS
Session established with server BANDA: AIX-RS/6000
  Server Version 5, Release 2, Level 0.0
  Server date/time: 05/30/2003 16:45:11  Last access: 05/30/2003 16:20:09

Incremental backup of volume '\\LOCHNESS\C$'
Incremental backup of volume 'SYSTEMOBJECT'
Backup System Object: 'COM+ Database'.
Backup System Object: 'Event Log'.
Backup System Object: 'System and Boot Files'.
Backup System Object: 'Registry'.
Backup System Object: 'RSM Database'.
Backup System Object: 'WMI Database'.
Directory-->                   0 \\lochness\c$\ [Sent]
05/29/2003 16:25:32 Normal File--> 1,032,192 \\lochness\c$\DB2\NODE0000\SQL00002\SQLOGDIR\S0000041.LOG [Sent] 05/29/2003 16:25:34 Normal File--> 1,032,192 \\lochness\c$\DB2\NODE0000\SQL00002\SQLOGDIR\S0000042.LOG [Sent] 05/29/2003 16:25:35 ANS1898I ***** Processed 500 files ***** 05/29/2003 16:25:35 Normal File--> 1,032,192 \\lochness\c$\DB2\NODE0000\SQL00002\SQLOGDIR\S0000043.LOG [Sent] 05/29/2003 16:25:37 Normal File--> 1,032,192 \\lochness\c$\DB2\NODE0000\SQL00002\SQLOGDIR\S0000044.LOG [Sent] 05/29/2003 16:45:32 Directory--> 0 \\lochness\c$\tivoli\itsanm\manager\conf\TIVINV [Sent] 05/29/2003 16:45:32 Normal File--> 30 \\lochness\c$\tivoli\itsanm\manager\conf\ATMS.properties [Sent] 05/29/2003 16:45:32 Normal File--> 11,562 \\lochness\c$\tivoli\itsanm\manager\conf\DataStore.defaults [Sent] 05/29/2003 16:45:32 Normal File--> 17,532 \\lochness\c$\tivoli\itsanm\manager\conf\DataStore.properties [Sent] 05/29/2003 16:45:32 Normal File--> 239 \\lochness\c$\tivoli\itsanm\manager\conf\edfi.properties [Sent] 05/29/2003 16:45:32 Normal File--> 44 \\lochness\c$\tivoli\itsanm\manager\conf\internal.properties [Sent] 05/29/2003 16:45:32 Normal File--> 20,955 . \\lochness\c$\usr\ov\snmp_mibs\ibm-midlevelmgr.mib [Sent] 05/29/2003 17:06:08 Normal File--> 9,032 \\lochness\c$\usr\ov\snmp_mibs\ibm-nv6ksubagent.mib [Sent]
... ... <information omitted> 05/29/2003 17:47:25 --- SCHEDULEREC STATUS BEGIN 05/29/2003 17:47:25 Total number of objects inspected: 70,942 05/29/2003 17:47:25 Total number of objects backed up: 39,536 05/29/2003 17:47:25 Total number of objects updated: 0 05/29/2003 17:47:25 Total number of objects rebound: 0 05/29/2003 17:47:25 Total number of objects deleted: 0 05/29/2003 17:47:25 Total number of objects expired: 0 05/29/2003 17:47:25 Total number of objects failed: 10 05/29/2003 17:47:25 Total number of bytes transferred: 3.10 GB 05/29/2003 17:47:25 Data transfer time: 4,744.76 sec 05/29/2003 17:47:25 Network data transfer rate: 686.56 KB/sec 05/29/2003 17:47:25 Aggregate data transfer rate: 653.76 KB/sec 05/29/2003 17:47:25 Objects compressed by: 0% 05/29/2003 17:47:25 Elapsed processing time: 01:23:02 05/29/2003 17:47:25 --- SCHEDULEREC STATUS END C:\Program files\tivoli\tsm\baclient>cd C:\bkupscripts C:\bkupscripts>ITSANMstartall "Starting Netview" C:\bkupscripts>net start "Tivoli Netview Service" The Tivoli NetView Service service is starting..... The Tivoli NetView Service service was started successfully.
C:\bkupscripts>ovstart Done "Starting the ITSANM-Manager server" C:\bkupscripts>call ITSANMstart.bat "Starting the ITSANM-Manager ..." C:\bkupscripts>net start "IBM WebSphere Application Server V5 - ITSANM-Manager" The IBM WebSphere Application Server V5 - ITSANM-Manager service is starting.... ......... The IBM WebSphere Application Server V5 - ITSANM-Manager service was started successfully. C:\bkupscripts>
Note: Refer to the following documentation for detailed information about DB2 protection and Tivoli Storage Manager integration:
IBM Redbook, Backing Up DB2 Using Tivoli Storage Manager, SG24-6247
IBM DB2 Universal Database - Administration Guide: Implementation - Version 7, SC09-2944 IBM DB2 Universal Database - Command Reference - Version 7, SC09-2951
Offline backup
An offline backup can run only if the database is not currently in use. You must stop the database or at least close all connections. In our case, we do not have to stop the database, since IBM Tivoli SAN Manager is the only application using it; we check this using the DB2 command shown in Example 10-16. We then stop the IBM Tivoli SAN Manager application, which closes all active connections to the ITSANMDB database.
Example 10-16 Active connections to ITSANMDB database
C:\bkupscripts>db2cmd.exe /c /w /i db2 list applications for database itsanmdb

Auth Id  Application Name  Appl. Handle  Application Id            DB Name   # of Agents
-------- ----------------- ------------- ------------------------- --------- -----------
DB2USER1 java.exe          70            *LOCAL.DB2.030603221517   ITSANMDB  1
DB2USER1 java.exe          85            *LOCAL.DB2.030603221532   ITSANMDB  1
DB2USER1 java.exe          86            *LOCAL.DB2.030603221533   ITSANMDB  1
DB2USER1 java.exe          87            *LOCAL.DB2.030603221534   ITSANMDB  1
DB2USER1 java.exe          88            *LOCAL.DB2.030603221535   ITSANMDB  1
DB2USER1 java.exe          89            *LOCAL.DB2.030603221536   ITSANMDB  1
DB2USER1 java.exe          92            *LOCAL.DB2.030603221539   ITSANMDB  1
DB2USER1 java.exe          93            *LOCAL.DB2.030603221540   ITSANMDB  1
DB2USER1 java.exe          94            *LOCAL.DB2.030603221541   ITSANMDB  1
DB2USER1 java.exe          95            *LOCAL.DB2.030603221542   ITSANMDB  1
DB2USER1 java.exe          96            *LOCAL.DB2.030603221543   ITSANMDB  1
C:\bkupscripts>net stop "IBM WebSphere Application Server V5 - ITSANM-Manager" C:\bkupscripts>db2cmd.exe /c /w /i db2 list applications for database itsanmdb SQL1611W No data was returned by Database System Monitor. SQLSTATE=00000
You can see that after stopping the application, message SQL1611W is returned by db2 list applications for database itsanmdb, which means that no connections are active on the database. The backup script, ITSANMBackupOffline, shown in Example 10-17, performs the following operations:
1. Stops the Tivoli SAN Manager WAS application.
2. Runs a backup of the ITSANMDB database.
3. Starts the Tivoli SAN Manager WAS application.
Example 10-17 ITSANMBackupOffline offline backup script for ITSANM database
@REM Stop the Netview Application
@REM ----------------------------
@echo "Stopping Netview"
ovstop

@ECHO ON
@REM Stop the Application ITSANM DB
@REM ------------------------------
call ITSANMstop.bat

@ECHO ON
@REM Get Status and check if Stopped
@REM -------------------------------
net start | findstr /i "ITSANM-Manager"
@if %errorlevel% NEQ 0 GOTO BACKUPDB

:NOTSTOPPED
@ECHO ON
@REM ITSANM not stopped - Backup cannot run
@REM --------------------------------------
@echo "WAS Application ITSANM Not Stopped !!!"
@echo "Backup process cancelled "
exit 1

:BACKUPDB
@ECHO ON
@REM ITSANM is stopped - Backup can run
@REM ----------------------------------
@echo "Backup of ITSANMDB starting ....."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSANMDB USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"

@REM Backup completed - Start ITSANM
@REM -------------------------------
:STARTITSANM
call ITSANMstart.bat

@ECHO ON
@REM Get Status and check if Started
@REM -------------------------------
net start | findstr /i "ITSANM-Manager"
@if %errorlevel% EQU 0 GOTO STARTOK

@REM ITSANM not started
@REM ------------------
@echo "Application ITSANM Not Started !!!"
exit 1

@REM ITSANM started
@REM --------------
:STARTOK
@echo "Application ITSANM started successfully"
@REM Start the Netview Application
@REM -----------------------------
@echo "Starting Netview"
ovstart
exit
C:\bkupscripts>net start | findstr /i "ITSANM-Manager"
"Backup of ITSANMDB starting ....." C:\bkupscripts>C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database IT SANMDB USE TSM Backup successful. The timestamp for this backup image is : 20030604163542 C:\bkupscripts>call ITSANMstart.bat "Starting the ITSANM-Manager ..." C:\bkupscripts>net start "IBM WebSphere Application Server V5 - ITSANM-Manager" The IBM WebSphere Application Server V5 - ITSANM-Manager service is starting.... ......... The IBM WebSphere Application Server V5 - ITSANM-Manager service was started successfully.
C:\bkupscripts>net start | findstr /i "ITSANM-Manager" IBM WebSphere Application Server V5 - ITSANM-Manager "Application ITSANM started successfully" "Starting Netview" C:\bkupscripts>ovstart Done C:\bkupscripts>
Online backup
An online backup can run while applications are still accessing the data. DB2 manages the enqueue process and uses its recovery log to track all changes made to the database while the backup is running. Your database must be configured for online backups (see Example 10-5 on page 289). The database backup procedure, ITSANMBackupOnline, displayed in Example 10-19, includes:
1. List current connections.
2. Run a backup of the ITSANMDB database.
3. List current connections.
Auth Id  Application Name  Appl. Handle  Application Id            DB Name   # of Agents
-------- ----------------- ------------- ------------------------- --------- -----------
DB2USER1 java.exe          20            *LOCAL.DB2.030604153905   ITSANMDB  1
DB2USER1 java.exe          21            *LOCAL.DB2.030604153906   ITSANMDB  1
DB2USER1 java.exe          22            *LOCAL.DB2.030604153907   ITSANMDB  1
DB2USER1 java.exe          23            *LOCAL.DB2.030604153908   ITSANMDB  1
DB2USER1 java.exe          24            *LOCAL.DB2.030604153909   ITSANMDB  1
DB2USER1 java.exe          25            *LOCAL.DB2.030604153910   ITSANMDB  1
DB2USER1 java.exe          26            *LOCAL.DB2.030604153911   ITSANMDB  1
DB2USER1 java.exe          27            *LOCAL.DB2.030604153912   ITSANMDB  1
DB2USER1 java.exe          28            *LOCAL.DB2.030604153913   ITSANMDB  1
DB2USER1 java.exe          29            *LOCAL.DB2.030604153914   ITSANMDB  1
DB2USER1 java.exe          30            *LOCAL.DB2.030604153915   ITSANMDB  1
You can check the status of your backups using the db2adutl command, which is valid only for backups made using Tivoli Storage Manager (Example 10-21).
Example 10-21 db2adutl output
C:\PROGRA~1\SQLLIB\BIN>db2adutl query database ITSANMDB

Query for database ITSANMDB

Retrieving FULL DATABASE BACKUP information.
   1 Time: 20030604105830  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
   2 Time: 20030604105106  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
   3 Time: 20030604103857  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
   4 Time: 20030529161536  Oldest log: S0000055.LOG  Node: 0  Sessions: 1
   5 Time: 20030529143040  Oldest log: S0000055.LOG  Node: 0  Sessions: 1
Retrieving INCREMENTAL DATABASE BACKUP information. No INCREMENTAL DATABASE BACKUP images found for ITSANMDB Retrieving DELTA DATABASE BACKUP information. No DELTA DATABASE BACKUP images found for ITSANMDB Retrieving TABLESPACE BACKUP information. No TABLESPACE BACKUP images found for ITSANMDB Retrieving INCREMENTAL TABLESPACE BACKUP information. No INCREMENTAL TABLESPACE BACKUP images found for ITSANMDB Retrieving DELTA TABLESPACE BACKUP information. No DELTA TABLESPACE BACKUP images found for ITSANMDB Retrieving LOAD COPY information. No LOAD COPY images found for ITSANMDB Retrieving LOG ARCHIVE information. No LOG ARCHIVE images found for ITSANMDB
We find our two latest backups with timestamps 20030604105106 and 20030604103857.
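Because the db2adutl timestamps are plain yyyymmddhhmmss strings, they sort chronologically as text, which makes it easy to script the choice of image for a later `db2 restore ... taken at` command. The parsing function below is our own illustration, not DB2 tooling, and the sample text is a trimmed copy of the Example 10-21 output.

```python
import re

SAMPLE = """\
Retrieving FULL DATABASE BACKUP information.
   1 Time: 20030604105830  Oldest log: S0000004.LOG
   2 Time: 20030604105106  Oldest log: S0000004.LOG
   3 Time: 20030604103857  Oldest log: S0000004.LOG
"""

def backup_timestamps(text):
    """Pull the 14-digit full-backup timestamps out of db2adutl output."""
    return re.findall(r"Time:\s+(\d{14})", text)

stamps = backup_timestamps(SAMPLE)
latest = max(stamps)   # yyyymmddhhmmss strings sort chronologically
print(latest)
```

The `latest` value is what you would paste into the restore command, as done later in Example 10-26.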
We now describe the procedures we used to recover from:
A loss of major Agent files
A loss of major Server files
A loss of the IBM Tivoli SAN Manager database
root@banda> dsmc restore /opt/tivoli/itsanm/agent/* -subdir=yes -replace=yes
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 0.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Restore function invoked.
Node Name: BANDA
Session established with server BANDA: AIX-RS/6000
  Server Version 5, Release 2, Level 0.0
  Data compression forced on by the server
  Server date/time: 06/04/03 14:44:19  Last access: 06/04/03 14:42:07
ANS1247I Waiting for files from the server... Restoring 512 /opt/tivoli/itsanm/agent/_uninst [Done] Restoring 512 /opt/tivoli/itsanm/agent/bin [Done] Restoring 512 /opt/tivoli/itsanm/agent/conf [Done] . Restoring 11,224 /opt/tivoli/itsanm/agent/conf/DataStore.defaults [Done] Restoring 17,113 /opt/tivoli/itsanm/agent/conf/DataStore.properties [Done] Restoring 2,871 /opt/tivoli/itsanm/agent/conf/nativelog.properties [Done] Restoring 542 /opt/tivoli/itsanm/agent/conf/services.properties [Done] Restoring 219 /opt/tivoli/itsanm/agent/conf/setup.properties [Done] Restoring 240 /opt/tivoli/itsanm/agent/conf/srmRoles.properties [Done] Restoring 30 /opt/tivoli/itsanm/agent/conf/user.properties [Done] Restoring 0 /opt/tivoli/itsanm/agent/conf/TIVINV/BTSAGT01_01.SIG [Done] .... <information omitted> Restoring 3,994 /opt/tivoli/itsanm/agent/servlet/logs/localhost_log.20 03-05-28.txt [Done] Restoring 8,390 /opt/tivoli/itsanm/agent/servlet/logs/localhost_log.20 03-05-29.txt [Done] . . Restore processing finished. 
Total number of objects restored: 1,559 Total number of objects failed: 0 Total number of bytes transferred: 55.23 MB Data transfer time: 1.71 sec Network data transfer rate: 32,933.72 KB/sec Aggregate data transfer rate: 2,256.77 KB/sec Elapsed processing time: 00:00:25 root@banda> cd /opt/tivoli/itsanm/agent/bin/aix root@banda> ./tcstart.sh root@banda> Using CLASSPATH: /opt/tivoli/itsanm/agent/lib/classes:/opt/tivoli/itsanm/agent/servlet/common/lib/servlet.ja r:/opt/tivoli/itsanm/agent/lib/jms.jar:/opt/tivoli/itsanm/agent/lib/ServiceManager.jar::/op t/tivoli/itsanm/agent/servlet/bin/bootstrap.jar Using CATALINA_BASE: /opt/tivoli/itsanm/agent/servlet Using CATALINA_HOME: /opt/tivoli/itsanm/agent/servlet Using JAVA_HOME: /opt/tivoli/itsanm/agent/jre root@banda> ps -ef |grep itsanm root 20898 12456 1 14:59:59 pts/1 0:29 /opt/tivoli/itsanm/agent/jre/bin/java -Dtsnm.baseDir=/opt/tivoli/itsanm/agent -Djlog.noLogCmd=true -Djavax.net.ssl.trustStore=/opt/tivoli/itsanm/agent/conf/server.keystore -Djavax.net.ssl.keyStorePassword= -Dtsnm.localPort=9570 -Dtsnm.protocol=http:// -ss1m -classpath /opt/tivoli/itsanm/agent/lib/classes:/opt/tivoli/itsanm/agent/servlet/common/lib/servlet.ja r:/opt/tivoli/itsanm/agent/lib/jms.jar:/opt/tivoli/itsanm/agent/lib/ServiceManager.jar::/op t/tivoli/itsanm/agent/servlet/bin/bootstrap.jar -Dcatalina.base=/opt/tivoli/itsanm/agent/servlet -Dcatalina.home=/opt/tivoli/itsanm/agent/servlet org.apache.catalina.startup.Bootstrap start start root 22020 21676 0 15:00:37 pts/1 0:00 grep itsanm
root@banda> cd /opt/tivoli/itsanm/agent/log
root@banda> tail msgITSANM.log
2003.06.04 15:00:21.550 BTACS0004I Started service SANAgentInbandChangeAgent. java.lang.Class realStartup
2003.06.04 15:00:21.553 BTACS0008I Starting service log (timeout 600 seconds) com.tivoli.sanmgmt.middleware.data.Service startup
2003.06.04 15:00:21.661 BTACS0004I Started service log. java.lang.Class realStartup
2003.06.04 15:00:21.665 BTACS0017I All autostart services have started. com.tivoli.sanmgmt.middleware.TSNMServiceManager startupAllServices
2003.06.04 15:00:21.665 BTACS0024I The properties from file /opt/tivoli/itsanm/agent/conf/setup.properties were successfully read. com.tivoli.sanmgmt.middleware.TSNMServiceManager readConnectionProps
2003.06.04 15:00:21.666 BTACS0013I Monitoring services (monitor interval is 10 seconds). com.tivoli.sanmgmt.middleware.TSNMServiceManager monitor
2003.06.04 15:00:21.980 BTAHQ2942I Heartbeat started, method: agentHeartbeat on HostManager. com.tivoli.sanmgmt.subagent.hostquery.HostQuery run
2003.06.04 15:00:52.388 BTASA1407I The Inband scanner Topology has started. com.tivoli.sanmgmt.subagent.scanner.Scanner invoke
2003.06.04 15:00:52.389 BTASA1407I The Inband scanner Attribute has started. com.tivoli.sanmgmt.subagent.scanner.Scanner invoke
2003.06.04 15:00:52.417 BTASA1407I The Inband scanner Attribute has started. com.tivoli.sanmgmt.subagent.scanner.Scanner invoke
We then checked the SAN -> Configure Agents configuration menu in the NetView interface, shown in Figure 10-6, and found that the agent BANDA is Contacted.
After deleting the directories, we tried to start NetView and IBM Tivoli SAN Manager, but this was unsuccessful, as shown in Figure 10-7 for NetView. We then started NetView from a command prompt (see Example 10-24).
Example 10-24 NetView start from Windows Command window
C:\usr>ovstart 'ovstart' is not recognized as an internal or external command, operable program or batch file.
We launched the Tivoli Storage Manager Backup/Archive client interface and started the restore of the deleted directories (Figure 10-8).
We restarted IBM Tivoli SAN Manager successfully. A new discovery is launched automatically as the inband agents send new data to the manager. As expected, the outband agents do not appear under SNMP Agents, because their configuration on the Server has been lost, as shown in Figure 10-9.
We stopped all services and restored the database. Example 10-26 shows the commands used to restore the ITSANMDB database.
Example 10-26 ITSANMDB restore procedure
C:\PROGRA~1\SQLLIB\BIN>db2adutl query db ITSANMDB

Query for database ITSANMDB

Retrieving FULL DATABASE BACKUP information.
    1 Time: 20030606161023  Oldest log: S0000027.LOG  Node: 0  Sessions: 1
    2 Time: 20030605111502  Oldest log: S0000019.LOG  Node: 0  Sessions: 1
    3 Time: 20030604163542  Oldest log: S0000019.LOG  Node: 0  Sessions: 1
    4 Time: 20030604162311  Oldest log: S0000017.LOG  Node: 0  Sessions: 1
    5 Time: 20030604161510  Oldest log: S0000016.LOG  Node: 0  Sessions: 1
    6 Time: 20030604155946  Oldest log: S0000015.LOG  Node: 0  Sessions: 1
    7 Time: 20030604105830  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
    8 Time: 20030604105106  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
    9 Time: 20030604103857  Oldest log: S0000004.LOG  Node: 0  Sessions: 1
   10 Time: 20030529161536  Oldest log: S0000055.LOG  Node: 0  Sessions: 1
   11 Time: 20030529143040  Oldest log: S0000055.LOG  Node: 0  Sessions: 1
.
.
C:\>db2 list applications for database ITSANMDB
SQL1611W  No data was returned by Database System Monitor.  SQLSTATE=00000
C:\>db2 restore database ITSANMDB use tsm taken at 20030606161023
SQL2539W  Warning!  Restoring to an existing database that is the same as the
backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.
C:\>db2 rollforward db ITSANMDB to 2003-06-06-23.16.00.000000 and STOP

                                 Rollforward Status

 Input database alias                   = ITSANMDB
 Number of nodes have returned status   = 1
 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000027.LOG - S0000027.LOG
 Last committed transaction             = 2003-06-06-23.01.10.000000

DB20000I  The ROLLFORWARD command completed successfully.
C:\PROGRA~1\SQLLIB\BIN>ovstart
Done

C:\PROGRA~1\SQLLIB\BIN>c:\bkupscripts\ITSANMstart.bat
The IBM WebSphere Application Server V5 - ITSANM-Manager service is starting...........
The IBM WebSphere Application Server V5 - ITSANM-Manager service was started successfully.

C:\PROGRA~1\SQLLIB\BIN>
In the ROLLFORWARD command, we specified the point in time to which we wanted to restore the database. The timestamp 2003-06-06-23.16.00.000000 is expressed in Coordinated Universal Time (UTC) and is the time just before we started our SQL DELETE commands.
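The point-in-time string passed to ROLLFORWARD uses the DB2 timestamp format yyyy-mm-dd-hh.mm.ss.nnnnnn, expressed in UTC. As a sketch only (this helper is not part of DB2), such a string can be built from a local timestamp like this:

```python
from datetime import datetime, timezone

def db2_rollforward_timestamp(local_dt: datetime) -> str:
    """Convert a timezone-aware datetime to the DB2 point-in-time
    format in UTC, for example 2003-06-06-23.16.00.000000."""
    utc = local_dt.astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%d-%H.%M.%S.%f")
```

Remember that the value you pass must not be earlier than the end of the backup image you restored, or the rollforward will fail.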
4. Restored all the files on the boot partition (disk C:\) as shown in Figure 10-11.
The restore of the System Objects finished successfully, as shown in Figure 10-13.
6. We rebooted the system. At this point, all our software and configuration files have been restored. We must now restore the IBM Tivoli SAN Manager database (ITSANMDB) to its latest available state.
Note: Refer to the redbook, Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment, SG24-6141, for detailed information on Windows disaster recovery procedures.
db2 => restore db ITSANMDB use TSM taken at 20030529161536
DB20000I  The RESTORE DATABASE command completed successfully.
db2 => connect to ITSANMDB
SQL1117N  A connection to or activation of database "ITSANMDB" cannot be made
because of ROLL-FORWARD PENDING.  SQLSTATE=57019
db2 => rollforward db ITSANMDB to end of logs and stop

                                 Rollforward Status

 Input database alias                   = ITSANMDB
 Number of nodes have returned status   = 1
 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000055.LOG - S0000058.LOG
 Last committed transaction             = 2003-05-29-23.17.35.000000

DB20000I  The ROLLFORWARD command completed successfully.
C:\PROGRA~1\SQLLIB\BIN>
Note that if your DB2 files and directories were never backed up using the standard Backup/Archive client, your DB2 local and system database directories will not be synchronized. You will then have to uncatalog the ITSANMDB database and restore it as a new database, as briefly shown in Example 10-29.
Example 10-29 Database restore output with synchronization
C:\PROGRA~1\SQLLIB\BIN>db2 restore db ITSANMDB use TSM TAKEN AT 20030529161536
SQL1005N  The database alias "ITSANMDB" already exists in either the local
database directory or system database directory.  SQLSTATE=00000

C:\PROGRA~1\SQLLIB\BIN>db2 uncatalog database ITSANMDB
DB20000I  The UNCATALOG DATABASE command completed successfully.
DB21056W  Directory changes may not be effective until the directory cache is
refreshed.

C:\PROGRA~1\SQLLIB\BIN>db2stop
SQL1064N  DB2STOP processing was successful.

C:\PROGRA~1\SQLLIB\BIN>db2start
SQL1063N  DB2START processing was successful.

C:\PROGRA~1\SQLLIB\BIN>db2 restore db ITSANMDB use TSM TAKEN AT 20030529161536
SQL1036C  An I/O error occurred while accessing the database.  SQLSTATE=58030

C:\PROGRA~1\SQLLIB\BIN>db2 restore db ITSANMDB use TSM TAKEN AT 20030529161536 to C into ITSANMDB
DB20000I  The RESTORE DATABASE command completed successfully.

C:\PROGRA~1\SQLLIB\BIN>
The runstats command is not mandatory, since reorgchk can update the statistics itself. Moreover, runstats must be run once for each table, while reorgchk operates on all tables in a single invocation. Example 10-30 shows the output of the reorgchk command on our ITSANMDB database.
Example 10-30 Output of reorgchk
db2 => connect to itsanmdb

   Database Connection Information

 Database server        = DB2/NT 7.2.6
 SQL authorization ID   = DB2ADMIN
 Local database alias   = ITSANMDB
db2 => reorgchk on table all

Doing RUNSTATS ....

Table statistics:

F1: 100 * OVERFLOW / CARD < 5
F2: 100 * TSIZE / ((FPAGES-1) * (TABLEPAGESIZE-76)) > 70
F3: 100 * NPAGES / FPAGES > 80

CREATOR  NAME                    CARD  OV  NP  FP  TSIZE  F1  F2  F3 REORG
--------------------------------------------------------------------------------
DB2USER1 AGENT2SCANASSP            26   0   3   3   8892   0 100 100 ---
DB2USER1 AGENTPEER                 10   0   1   1   1740   0   - 100 ---
DB2USER1 AIXOSPEER                  -                                ---
DB2USER1 CALLBACK                  10   0   1   7   2220   0   9  14 -**
DB2USER1 CLASS_EXT                  -                                ---
DB2USER1 CLASSTABLE                83   0   3   3   8798   0 100 100 ---
DB2USER1 DBSCHEMAVPEER              -                                ---
DB2USER1 DEMOIDPEER               216   0   9  16  18360   0  30  56 -**
DB2USER1 DESCANNERPEER             26   0   3   3   7228   0  89 100 ---
DB2USER1 FABRICPORTPEER            32   1   2   3   4864   3  60  66 -**
DB2USER1 FCENDPORTPEER             12   0   1   2   1728   0  42  50 -**
DB2USER1 FCHUBPORTPEER              -                                ---
. . .
--------------------------------------------------------------------------------

Index statistics:

F4: CLUSTERRATIO or normalized CLUSTERFACTOR > 80
F5: 100 * (KEYS * (ISIZE+8) + (CARD-KEYS) * 4) / (NLEAF * INDEXPAGESIZE) > 50
F6: (100-PCTFREE) * (INDEXPAGESIZE-96) / (ISIZE+12) ** (NLEVELS-2) *
    (INDEXPAGESIZE-96) / (KEYS * (ISIZE+8) + (CARD-KEYS) * 4) < 100

CREATOR  NAME           CARD  LEAF  LVLS  ISIZE  KEYS   F4  F5  F6 REORG
--------------------------------------------------------------------------------
Table: DB2USER1.AGENT2SCANASSP
DB2USER1 RDBPK_60         26     1     1      6    26  100   -   - ---
DB2USER1 RDBUI_27         26     2     2    124    26  100  41 104 -**
Table: DB2USER1.AGENTPEER
DB2USER1 RDBPK_12         10     1     1      6    10  100   -   - ---
DB2USER1 RDBUI_3          10     1     1     41    10  100   -   - ---
. . .
--------------------------------------------------------------------------------

CLUSTERRATIO or normalized CLUSTERFACTOR (F4) will indicate REORG is necessary
for indexes that are not in the same sequence as the base table. When multiple
indexes are defined on a table, one or more indexes may be flagged as needing
REORG. Specify the most important index for REORG sequencing.
The reorgchk command calculates three formulas (F1, F2, F3) for the tables and three formulas (F4, F5, F6) for the indexes to determine if the table or index must be reorganized. Each hyphen displayed in the REORG column indicates that the calculated results were within the set bounds of the corresponding formula, and each asterisk indicates that the calculated result exceeded the set bounds of its corresponding formula. Table reorganization is suggested when the results of the calculations exceed the bounds set by the formula.
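To make the table formulas concrete, they can be reproduced outside DB2. The following is a simplified sketch of F1-F3 only (it assumes a 4 KB table page size and is not the actual reorgchk implementation):

```python
def reorgchk_table_flags(card, overflow, npages, fpages, tsize,
                         pagesize=4096):
    """Return the three-character REORG flag string for a table,
    mirroring formulas F1-F3: '-' = within bounds, '*' = exceeded."""
    f1 = 100 * overflow / card                                   # < 5 expected
    f2 = (100 * tsize / ((fpages - 1) * (pagesize - 76))
          if fpages > 1 else 100)                                # > 70 expected
    f3 = 100 * npages / fpages                                   # > 80 expected
    return "".join(("-" if f1 < 5 else "*",
                    "-" if f2 > 70 else "*",
                    "-" if f3 > 80 else "*"))

# The CALLBACK table from Example 10-30 is flagged for reorganization:
print(reorgchk_table_flags(10, 0, 1, 7, 2220))   # -**
```

Running this with the CALLBACK values from Example 10-30 reproduces the -** flag shown there: few pages actually hold data (F2 and F3 fail), so a reorganization is suggested.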
Attention: Refer to the IBM DB2 Universal Database Administration Guide: Implementation Version 7, SC09-2944 for more information about DB2 performance.
If a reorganization is recommended for a table or an index, it can only be done when no activity is running against the database. This means that IBM Tivoli SAN Manager must be stopped in order to reorganize the tables. Example 10-31 shows the output of a reorg for an ITSANMDB DB2 table, followed by a reorgchk on that table.
Example 10-31 REORG output
db2 => reorg table db2user1.scan2dmoidassp DB20000I The REORG TABLE command completed successfully. db2 => reorgchk on table db2user1.scan2dmoidassp
Doing RUNSTATS ....

Table statistics:

F1: 100 * OVERFLOW / CARD < 5
F2: 100 * TSIZE / ((FPAGES-1) * (TABLEPAGESIZE-76)) > 70
F3: 100 * NPAGES / FPAGES > 80

CREATOR  NAME                    CARD  OV  NP  FP   TSIZE  F1  F2  F3 REORG
--------------------------------------------------------------------------------
DB2USER1 SCAN2DMOIDASSP           345   0  31  31  116955   0  96 100 ---
--------------------------------------------------------------------------------

Index statistics:

F4: CLUSTERRATIO or normalized CLUSTERFACTOR > 80
F5: 100 * (KEYS * (ISIZE+8) + (CARD-KEYS) * 4) / (NLEAF * INDEXPAGESIZE) > 50
F6: (100-PCTFREE) * (INDEXPAGESIZE-96) / (ISIZE+12) ** (NLEVELS-2) *
    (INDEXPAGESIZE-96) / (KEYS * (ISIZE+8) + (CARD-KEYS) * 4) < 100

CREATOR  NAME           CARD  LEAF  LVLS  ISIZE  KEYS   F4  F5  F6 REORG
--------------------------------------------------------------------------------
Table: DB2USER1.SCAN2DMOIDASSP
DB2USER1 RDBPK_104       345     2     2      6   345   94  58  74 ---
DB2USER1 RDBUI_49        345    16     2    122   345   70  68   8 *--
--------------------------------------------------------------------------------

CLUSTERRATIO or normalized CLUSTERFACTOR (F4) will indicate REORG is necessary
for indexes that are not in the same sequence as the base table. When multiple
indexes are defined on a table, one or more indexes may be flagged as needing
REORG. Specify the most important index for REORG sequencing.
Important: It is preferable to reorganize a table according to its most frequently used index. Refer to the DB2 Administration Guide for table and index reorganization.
Finally, rebuild any packages associated with ITSANMDB using the db2rbind command (Example 10-32).
Example 10-32 DB2 rebind
C:\PROGRA~1\SQLLIB>db2rbind ITSANMDB -l C:\ITSANMDB_RBIND.txt all
Rebind done successfully for database 'ITSANMDB'.

C:\PROGRA~1\SQLLIB>
Chapter 11.
11.1 Overview
In the following sections we provide an overview of the log files that are available for the Server, Agent, and Remote Console within IBM Tivoli SAN Manager. We describe the default logging parameters that are set within the product and give a high-level description of the tracing facility. Finally, we describe the SAN Manager Service Tool, which is used for capturing a snapshot of the managed environment. These concepts are listed in Figure 11-1.
Figure 11-1 lists the logging topics covered: Server, Agent, Remote Console, additional NetView logging, and the SAN Error Predictor.
11.2 Logging
Logging for Tivoli SAN Manager is intended to provide information to the end user and is enabled by default. Logging provides information about your system, such as which components are started, or which exceptions and errors are received during an operation. To help you track server activity and monitor the system, the messages are logged in text files. These files can be viewed with a standard editing program, such as Windows WordPad or Notepad. By default, the log files are located in the <install_dir>\log directory on the Manager, Agent, and Remote Console machines. The number, size, type, and format of your message log files are configurable. The following message types are enabled by default:
Informational messages
Warning messages
Error messages
Refer to the IBM Tivoli Storage Area Network Manager User's Guide, SC23-4698 for information on configuring logging.
Windows Manager
mgrlog.txt - the main installation logging file for IBM Tivoli SAN Manager Server is in c:\tivoli\itsanm\manager\mgrlog.txt. See Example 11-1.
Example 11-1 mgrlog.txt for Tivoli SAN Manager
(May 28, 2003 2:17:58 PM), Setup.product.install, com.tivoli.sanmgmt.install.MoveFileProdAct, wrn, MoveFileProdAct: c:\tivoli\itsanm\manager\conf does not exists
(May 28, 2003 2:18:03 PM), Setup.product.install, com.tivoli.sanmgmt.install.MergePropFileProdAct, wrn, MergePropFileProdAct: c:\tivoli\itsanm\manager\conf.bkp/nativelog.properties does not exists
Other installation logs are located in the directory c:\tivoli\itsanm\manager\log\install. Review these in the event of any problems with installation. Example 11-2 shows its contents.
Example 11-2 Installation logs for Tivoli SAN Manager
 Directory of C:\tivoli\itsanm\manager\log\install

06/03/2003  03:54p      <DIR>          .
06/03/2003  03:54p      <DIR>          ..
06/03/2003  03:54p                   0 addWASServiceErr.txt
06/03/2003  03:54p                 272 addWASServiceOut.txt
06/03/2003  03:52p                   0 CreatePortsstderr.log
06/03/2003  03:52p                 100 CreatePortstdout.log
06/03/2003  03:45p                   0 db2createstderr.txt
06/03/2003  03:45p                   0 db2createstdout.txt
06/03/2003  03:46p              63,788 dbcreate.log
06/03/2003  03:52p                   0 encryptSoapErr.txt
06/03/2003  03:52p                   0 encryptSoapOut.txt
06/03/2003  03:45p             110,942 guidInstalllog.txt
06/03/2003  03:45p                   0 guidInstallStderr.txt
06/03/2003  03:45p                   0 guidInstallStdout.txt
06/03/2003  03:52p                   0 launchITSANMstderr.log
06/03/2003  03:54p               6,249 launchITSANMstdout.log
06/03/2003  03:50p                 170 netview.log
06/03/2003  03:51p                  71 OvConfstderr.log
06/03/2003  03:51p              57,230 OvConfstdout.log
06/03/2003  03:54p                   0 rmWASServiceErr.txt
06/03/2003  03:54p                  57 rmWASServiceOut.txt
06/03/2003  03:50p                   0 temp.log
06/03/2003  03:51p                   0 wasInstallstderr.log
06/03/2003  03:51p                   0 wasInstallstdout.log
06/03/2003  03:51p                  44 wasUnInstallstderr.log
06/03/2003  03:51p                   0 wasUnInstallstdout.log
              24 File(s)        238,923 bytes
               2 Dir(s)   710,127,616 bytes free
dbCreate.log - used to log DB2 creation for ITSANMDB. This log is useful if the IBM Tivoli SAN Manager database fails to install. See Example 11-3.
Example 11-3 dbcreate.log for DB2
C:\Tivoli\itsanm\manager\log\install>more dbcreate.log

IBM Tivoli Storage Area Network Manager Database Creation Script
(C) Copyright IBM Corp. 2000, 2001

DB20000I  The CREATE DATABASE command completed successfully.
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.
AIX Manager
The installation log for the IBM Tivoli SAN Manager Server on AIX is /opt/tivoli/itsanm/manager/mgrlog.txt. If you encounter errors during installation, the logs for problem determination can be found in: /opt/tivoli/itsanm/manager/log/install.
Note: This log contains the same logging information as described in Example 11-1.
GUID logging
The Tivoli GUID package is used to resolve a computer's identification. Computers can have multiple domain names, a dynamic IP address that changes, or a host name that changes. The GUID package gives the computer a globally unique identifier (GUID). This ensures that one computer running multiple applications can be uniquely identified. For example, one computer might be running the Tivoli Storage Manager client and the IBM Tivoli Storage Area Network Manager agent. The following logs are used for GUID and can be found in c:\Tivoli\itsanm\manager\log\install. For AIX, the GUID will be created in /opt/tivoli/itsanm/manager/guid/aix.
guidInstalllog.txt
guidInstallStderr.txt
guidInstallStdout.txt
These files contain messages and errors related to installing the GUID package.
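The idea behind the GUID — a stable identifier that survives hostname and IP address changes — can be illustrated with standard UUIDs. This sketch is only an illustration of the concept, not the Tivoli GUID package itself, and the file name is hypothetical:

```python
import uuid
from pathlib import Path

GUID_FILE = Path("tivoli.guid")   # hypothetical storage location

def get_machine_guid() -> str:
    """Return the stored machine GUID, generating one on first call.
    Once written, the same identifier is returned regardless of any
    later hostname or IP address changes."""
    if GUID_FILE.exists():
        return GUID_FILE.read_text().strip()
    guid = str(uuid.uuid4())      # 36-character unique identifier
    GUID_FILE.write_text(guid)
    return guid
```

Because the identifier is persisted locally rather than derived from the hostname or address, every application on the machine resolves to the same identity.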
In Figure 11-2, the Service Manager displayed all the IBM Tivoli SAN Manager services.
Windows
The msgITSANM.log file contains information for the IBM Tivoli SAN Manager Agent. The log can be found in c:\tivoli\itsanm\agent\log. See Example 11-6.
Example 11-6 msgITSANM output
2003.05.28 15:01:11.062 BTACS0024I The properties from file c:\tivoli\itsanm\agent\conf\setup.properties were successfully read. Service readConnectionProps
2003.05.28 15:01:11.156 BTACS0005I Deployed service ConfigService: class=com.tivoli.sanmgmt.dbconnparms.ConfigService, scope=application, autostart=true, static=false, order=1. com.tivoli.sanmgmt.middleware.data.Service deploy
2003.05.28 15:01:11.594 BTACS0005I Deployed service SANAgentHostQuery: class=com.tivoli.sanmgmt.subagent.hostquery.HostQuery, scope=application, autostart=true, static=false, order=2. com.tivoli.sanmgmt.middleware.data.Service deploy
Note: The following agent platforms contain similar agent logs that are located in the respective directories.
Solaris
This log contains information for the IBM Tivoli SAN Manager Agent. The log can be found at /tivoli/itsanm/agent/log.
Linux
This log contains information for the IBM Tivoli SAN Manager Agent. The log can be found at /tivoli/itsanm/agent/log.
AIX
This log contains information for the IBM Tivoli SAN Manager Agent. The log can be found at /tivoli/itsanm/agent/log.
The trap reception flow: SNMP traps from SAN devices are sent to the NetView server on port 162, where they are recorded in trapd.log and the event log.
To enable NetView to use the trapd.log, do the following: 1. From the NetView console, select Options -> Server Setup to bring up the Server Setup window, which lists all the NetView processes. Select trapd from the process list, and use the pull-down menu to select Trap Daemon, as shown in Figure 11-4.
2. This brings up the trap daemon configuration window. Check the box Log Events and Traps, then click OK. See Figure 11-5.
3. NetView now prompts for a stop and start of the daemons as shown in Figure 11-6.
4. NetView shuts down all daemons, then restarts them. See Figure 11-7.
Important: There are documented steps on how to perform maintenance on the trapd.log in Tivoli NetView. Refer to the redbook Tivoli NetView and Friends, SG24-6019.
11.3 Tracing
Tracing is intended for Tivoli Support to diagnose problems. This functionality is disabled by default. Tracing can be dynamically enabled using a command, and each trace can be turned on or off independently. The tracing output file is called traceITSANM.log and is located in the <install_dir>\log directory of the Manager or Agent. By default, up to three traceITSANM.log files can exist, and each of them can grow to 512 KB; however, the file size as well as the number of files are configurable. When traceITSANM.log is full, it is renamed to traceITSANM2.log and new entries are written to a fresh traceITSANM.log. When traceITSANM.log fills again, traceITSANM2.log is renamed to traceITSANM3.log and traceITSANM.log is renamed to traceITSANM2.log. This ensures that traceITSANM.log always contains the latest trace information. When traceITSANM.log fills for a third time, the oldest log (traceITSANM3.log) is discarded and the cycle continues.
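This generation-shuffling rotation can be sketched as follows. Python is not part of the product; the file names follow the text, and the size check is simplified:

```python
from pathlib import Path

MAX_FILES = 3           # default number of trace files
MAX_SIZE = 512 * 1024   # default size limit per file (512 KB)

def rotate(log_dir: Path) -> None:
    """Shift trace generations so traceITSANM.log always holds the
    newest records: generation 2 moves to 3 (the oldest generation 3
    is discarded), 1 moves to 2, then a fresh traceITSANM.log starts."""
    base = log_dir / "traceITSANM.log"
    if not base.exists() or base.stat().st_size < MAX_SIZE:
        return                                   # not full yet
    for n in range(MAX_FILES, 1, -1):            # n = 3, then 2
        older = log_dir / f"traceITSANM{n}.log"
        newer = base if n == 2 else log_dir / f"traceITSANM{n - 1}.log"
        if older.exists():
            older.unlink()                       # discard the oldest
        if newer.exists():
            newer.rename(older)
    base.touch()                                 # start a fresh current log
```

The net effect is that the most recent trace entries are always in traceITSANM.log, and at most three generations are kept.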
Important: By default, all trace loggers are enabled. The filter mode controls the level of tracing (WARN = high-level messages, INFO = detailed messages). Changing the filter mode to INFO can impact system performance. The filter mode should only be changed when recommended by IBM Tivoli Support.
Once the environment is sourced (setenv), we issued the srmcp -u <uid> -p <password> log list -loggers command to list the status of the trace loggers. We show the output of this command in Example 11-9.
Example 11-9 IBM Tivoli SAN Manager loggers
C:\tivoli\itsanm\manager\bin\w32-ix86>srmcp -u db2admin -p itsosj log list -loggers
IBM Tivoli Storage Area Network Manager
Logging Toolkit Command Line Interface - Version 1 Release 1 Level 0 a

State Filter Handlers     Logger
on    ALL    file.message san.indexLog
on    WARN   file.trace   san.ArchiveTableMaintenanceServiceTrace
on    WARN   file.trace   san.schedulerTrace
on    ALL    file.message san.SanAgent_ScannerMsgLogger
on    ALL    file.message san.formatLog
on    WARN   file.trace   san.SanAgent_ScannerTraceLogger
on    ALL    file.message san.schedulerLog
on    WARN   file.trace   san.indexTrace
on    WARN   file.trace   san.StatusCacheTraceLogger
on    WARN   file.trace   san.ReportMgrTrace
on    WARN   file.trace   san.SANlicenseSnmpTrapTrace
on    ALL       file.message san.monitorLog
on    WARN      file.trace   san.monitorTrace
on    ALL       file.message san.ArchiveTableMaintenanceServiceLog
on    ALL       file.message san.ReportMgrLog
on    WARN      file.trace   san.formatTrace
on    WARN      file.trace   san.SANAgentHostQueryTrace
on    WARN      file.trace   san.SanAgent_InbandChangeAgentTraceLogger
on    ALL       file.message san.SANAgentHostQueryLog
on    ALL       file.message san.SanAgent_InbandChangeAgentMsgLogger
on    ALL       file.message san.SanPersistMessageLogger
on    WARN      file.trace   san.SanPersistTraceLogger
on    WARN      file.trace   san.EDFITraceLogger
on    ALL       file.message san.EDFIMsgLogger
on    ALL       file.message san.StatusCacheMsgLogger
on    ALL       file.message san.TSNMServiceManagerLog
on    WARN      file.trace   san.TSNMServiceManagerTrace
on    WARN      file.trace   san.dbparmsTrace
on    ALL       file.message san.dbparmsLog
on    ALL       file.message san.MessagingServiceLog
on    WARN      file.trace   san.MessagingServiceTrace
on    WARN      file.trace   san.eventFactoryTrace
on    ALL       file.message san.licenseServiceLog
on    WARN      file.trace   san.licenseServiceTrace
on    ALL       file.message san.HostMgrMsgLogger
on    WARN      file.trace   san.HostMgrTraceLogger
on    ALL       file.message san.SanEventCorrelatorFactoryMsgLogger
on    WARN      file.trace   san.SanEventCorrelatorFactoryTraceLogger
on    ALL       file.message san.OutbandChangeAgentLogger
on    WARN      file.trace   san.OutbandChangeAgentTraceLogger
on    ALL       file.message san.SanManagerMsgLogger
on    WARN      file.trace   san.SanManagerTraceLogger
on    WARN      file.trace   san.SanManagerHighLevelPerformanceTraceLogger
on    ALL       file.message san.JDBCConnectionPoolLog
on    WARN      file.trace   san.JDBCConnectionPoolTrace
on    ALL       file.message san.DBMsgLogger
on    WARN      file.trace   san.DBTraceLogger
on    ALL       file.message san.ChangeMonitorMsgLogger
on    WARN      file.trace   san.ChangeMonitorTraceLogger
on    ALL       file.message san.tesMsgLogger
on    WARN      file.trace   san.tesTrcLogger
on    ALL       file.message san.SanManagerDaemonMsgLogger
on    WARN      file.trace   san.SanManagerDaemonTraceLogger
on    WARN      file.trace   san.DBAPITrace
on    WARN      file.trace   san.eventTrace
on    ALL       file.message san.eventLog
on    WARN      file.trace   san.SanQueryEngineTraceLogger
on    ALL       file.message san.SanQueryEngineMsgLogger
on    WARN      file.trace   san.LoggingToolkitTraceLogger
on    ALL       file.message san.LoggingToolkitLogger
on    ALL       file.message srm.PolicyManagerLog
on    WARN      file.trace   srm.PolicyManagerTrace
on    INFO                   native.msg.fswp
on    DEBUG_MAX              native.trace.fswp
on    DEBUG_MAX              native.trace.tivguid
on    DEBUG_MAX              native.trace.attributeScanner
on    DEBUG_MAX              native.trace.topologyScanner
on    DEBUG_MAX              native.trace.eventScanner
on    DEBUG_MAX              native.trace.eventAgent
on    DEBUG_MAX              native.trace.brocadeScanner
on    DEBUG_MAX              native.trace.statisticsScanner
Note: For a complete review of messaging, refer to the manual IBM Tivoli Storage Area Network Manager Messages, SC32-0953.
Note: Although recommended, it is not required to shut down the IBM Tivoli SAN Manager application when running the SAN Manager Service Tool.
Once the service tool has completed, it creates ITSANMservice.zip or ITSANMservice.tar in the same directory. The file is typically several megabytes in size. The compressed file contains all the critical files from the log directory as well as the database.
Important: Always open a zip file on the same manager operating system - there is no interchange possible between AIX and Windows managers. For example, if the snapshot was taken from an AIX manager, it can only be imported to another AIX manager.
1. Stop the SAN Manager service.
2. Unpack the ITSANMservice.zip or tar file in an empty directory on the IBM Tivoli SAN Manager server.
3. Modify \tivoli\itsanm\manager\conf\services.properties. Comment out the following services and save the file:
   SANHostMgr
   SANQueryEngine
   DiscoverEngineService
Note: The above services are disabled to prevent the IBM Tivoli SAN Manager from writing to the database at the time of the restore.
In Example 11-10 we show the services.properties file.
Example 11-10 service.properties file with commented out services
SM = com.tivoli.sanmgmt.middleware.TSNMServiceManager
ConfigService = com.tivoli.sanmgmt.dbconnparms.ConfigService application autostart notstatic 1
MessagingService = com.tivoli.sanmgmt.middleware.MessagingService.MessagingService application autostart notstatic 2
SANEvent = com.tivoli.sanmgmt.event.SANEventService application autostart notstatic 3
SANLicense = com.tivoli.sanmgmt.license.SANLicenseService application autostart notstatic 4
#SANHostMgr = com.tivoli.sanmgmt.diskmgr.hostservice.manager.SANDiskMgrHostService application autostart notstatic 5
#SANQueryEngine = com.tivoli.sanmgmt.tsanm.queryengine.QueryEngine application autostart notstatic 6
SANEventCorrelatorFactory = com.tivoli.sanmgmt.tsanm.eventcorrelator.EventCorrelatorFactory application autostart notstatic 7
SANAgentOutbandChangeAgent = com.tivoli.sanmgmt.tsanm.outbandchangeagent.OutbandChangeAgent application autostart notstatic 8
#DiscoverEngineService = com.tivoli.sanmgmt.tsanm.discoverengine.DiscoverEngine application autostart notstatic 9
SANManagerDaemon = com.tivoli.sanmgmt.tsanm.console.SanManagerDaemon application autostart notstatic 10
#PFAService = com.ibm.edfi.pfa.PFAService application autostart notstatic 11
#FIService = com.ibm.edfi.fi.FIService application autostart notstatic 12
log = com.tivoli.sanmgmt.logging.log application autostart nonstatic 13
4. Import the SAN Manager database. Open a DB2 command window by typing db2cmd from a Windows command prompt. Change to the directory where you extracted the snapshot files. Execute the db2move command to restore the database; it will replace the current database.
db2move ITSANMDB IMPORT
Part 6
Chapter 12. Tivoli SAN Manager and TEC
The TEC server (tec_server) master comprises the tec_reception, tec_rule, tec_dispatch, and tec_task processes, together with a reception buffer and an event cache (where events reach Status = PROCESSED), and a RIM connection to the RDBMS event repository tables tec_t_evt_rec_log and tec_t_evt_rep.
A rule base is divided into event class definitions, which define the attributes of an event, and rules, which define what should be done with an event. IBM Tivoli SAN Manager ships only a class definition file (a so-called baroc file) but no rule file. Events can be received either via Tivoli Enterprise Framework mechanisms (which requires some software to be installed on each event sender) or via a socket connection (which only requires that events are sent according to TEC formats). IBM Tivoli SAN Manager sends its events via a socket connection directly to the TEC server. To view the events and assign them to administrators for handling, there is a Java-based program called the TEC Console. This connects to the event repository using Framework mechanisms (RIM) and a helper process called tec_ui_server. It can be configured to show different views for different administrators. Events can be modified graphically.
Component placement: the Tivoli Managed Region (TMR) server TONGA and the Tivoli Enterprise Data Warehouse server PALAU are connected over Ethernet to the Tivoli SAN Manager Agents (on Windows 2000 and AIX), each running the Tivoli Light Client Framework (LCF).
The machines used in the setup are:

TONGA
  Windows 2000 SP3
  Tivoli Management Framework 4.1
  Tivoli Enterprise Console 3.8 FP1
  Tivoli Configuration Manager 4.2
  Tivoli Monitoring 5.1.1 FP3

PALAU
  Windows 2000 SP3
  Tivoli Enterprise Data Warehouse 1.1 FP2

All the other machines in the lab are running the Tivoli Light Client Framework (LCF) code, which is the basis for all Tivoli Management activities.
Example 12-1 Output of wtdumprl when baroc file has not been imported
1~1556~1~1054253423(May 29 17:10:23 2003)
### EVENT ###
PhysicalRelationshipEvent;fromObjectLabel='winzone_1_1';toObjectLabel='bonnie.almaden.ibm.com';toObjectType='Host';state='Normal';msg='The association between SAN SAN 1 Zone winzone_1_1 and port 0 is normal';sub_source='SanManagerService';fromObjectType='Soft Zone';messageId='BTADE1732I';toObjectUniqueId=;uniqueId='L210000060691064CFwinzone_1_1210000E08B023629';eventType='normal';hostname='lochness';fromHighLevelDevice='Not Applicable';source='IBM Tivoli Storage Area Network Manager';entityType='Zone2Port';severity='HARMLESS';toHighLevelDevice='a3.88.da.60.8d.64.11.d7.9c.f1.00.a0.cc.d9.58.33';fromObjectUniqueId='E510000060691064CFwinzone_1_1';origin='9.1.38.167';END
### END EVENT ###
PARSING FAILED
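The raw event shown above is a class name followed by semicolon-separated attribute=value pairs, terminated by END. Parsing this socket format can be sketched as follows (a simplification — it assumes values contain no semicolons, which holds for the event above):

```python
def parse_tec_event(raw: str):
    """Parse a TEC socket-format event string of the form
    ClassName;attr='value';...;END into (class, attribute dict)."""
    parts = [p for p in raw.strip().split(";") if p and p != "END"]
    event_class = parts[0]
    attrs = {}
    for pair in parts[1:]:
        key, _, value = pair.partition("=")
        attrs[key] = value.strip("'")   # drop surrounding quotes
    return event_class, attrs
```

Without the imported class definitions, the TEC server cannot match the class name against its rule base, which is why the reception log shows PARSING FAILED.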
To import the event class definitions, open the Tivoli Desktop and double-click the Event Server icon. In the window (Figure 12-3) you see the defined rule bases, with the active one highlighted by an arrow.
Choose the active rule base and right-click on it. Select Import (Figure 12-4).
Select the check-box Import Class Definitions and enter the fully qualified path to the definitions file. This file is on the IBM Tivoli SAN Manager CD and is called ITSANM_120.baroc. (Our example uses a copy of this file on disk.) The position to insert it depends on how many events you expect to receive from IBM Tivoli SAN Manager and on the hierarchical dependencies inside the class structure. Since the Tivoli SAN Manager classes depend only on the root EVENT, you could put it right after that event class. However, since classes are matched from top to bottom, put the busiest event classes higher in the hierarchy than less busy classes. We put it at the very bottom, because we do not expect very many events (Figure 12-5).
After the class definitions are imported we must compile the rule base to incorporate the changes (as shown in Figure 12-6). To compile, right-click on the active rule base icon and select Compile.
Carefully check the output for any compilation errors. If there were none, load the rule base by right-clicking the active rule base icon and selecting Load. Select the correct option as in Figure 12-7. You must recycle the event server whenever you make any changes to the class definitions. If you only changed rules, then recycling the event server is not necessary.
Stop and start the Event Server by right-clicking its icon on the Tivoli Desktop (Figure 12-8).
In the Configuration dialog there are three folders: Event Groups Consoles Operators First we have to create an Event Group to specify filters to sort out the IBM Tivoli SAN Manager events. Right-click Event Groups and select Create Event Group (Figure 12-10).
Name the Event Group (for example, ITSANM), right-click it and select Create Filter (Figure 12-11).
On the resulting dialog, enter a filter description and select Add Constraint (Figure 12-12).
Choose Class as Attribute and Operator In, then select SANManagerEvent in the Value window (Figure 12-13).
This adds a constraint to our filter ITSANM. If you add multiple constraints, they behave as a boolean AND. If you add more filters to an Event Group, they behave as a boolean OR. You can test whether your filter matches any events by clicking the Test SQL button in Figure 12-12. If there are no events in the TEC repository, you will get zero matching events. You can view the constraint in plain SQL by clicking the little arrow above the Help button in Figure 12-12. The display will be similar to Figure 12-14.
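The combination rule — constraints within a filter AND together, filters within an event group OR together — can be expressed as a short sketch. The data structures here are hypothetical illustrations, not the TEC API:

```python
def constraint_matches(event: dict, attribute: str, values: list) -> bool:
    """One constraint with the 'In' operator: the event attribute
    must be one of the listed values."""
    return event.get(attribute) in values

def filter_matches(event: dict, constraints: list) -> bool:
    # Constraints inside a single filter combine as a boolean AND.
    return all(constraint_matches(event, a, v) for a, v in constraints)

def event_group_matches(event: dict, filters: list) -> bool:
    # Filters inside an event group combine as a boolean OR.
    return any(filter_matches(event, c) for c in filters)

# The ITSANM group above: one filter with one 'Class In' constraint.
itsanm_group = [[("Class", ["SANManagerEvent"])]]
print(event_group_matches({"Class": "SANManagerEvent"}, itsanm_group))  # True
```

Adding a second constraint to the same filter narrows it (AND), while adding a second filter to the group widens it (OR), which matches the behavior described above.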
After creating the Event Group, we must assign it to a Console. We assume that you already have a Console defined, so right-click on it and select Assign Event Group. The menu in Figure 12-15 appears.
Select the appropriate roles and click OK. You will see output similar to Figure 12-16.
Your Console should now have the ITSANM Event Group assigned to it (Figure 12-17).
After configuring the Event Console, you can see the results by selecting Summary Chart View from the Windows menu. This displays the actual event viewer, with all configured event groups (Figure 12-18).
Clicking on a particular event group bar opens the event viewer for that group (Figure 12-19). The upper half of the window shows the events which are assigned to you to solve. You can acknowledge, close, run tasks or view the details of the selected event.
If you select an event and click the Details button, the window in Figure 12-20 opens. It describes in plain text the most important details of the selected event.
To see a complete list of all event attributes, select the Attribute List tab (Figure 12-21). There you can get additional information on where the event originated, when it occurred, when it was received by the TEC server, and other fields.
In the dialog, the only mandatory information is the hostname and port of the TEC server. If the TEC server is running on Windows, the standard port is 5529; for UNIX TEC servers, enter 0. Select Yes to enable TEC logging (Figure 12-23) and click OK.
First, you must decide whether to send events using integrated Tivoli (TME) mechanisms (this requires you to install a Tivoli Managed Node on the IBM Tivoli SAN Manager) or non-Tivoli communications.
The main advantage of the Tivoli integrated method is that it caches the events if the TEC server is down. It also has advantages in firewall configurations, because you can define a single port to cross the firewall and additionally SSL-encrypt the connection. But installing the Managed Node software requires disk and memory on the Manager, and configuration changes in the Tivoli environment, so check with your Tivoli administrators to determine which method to use.
You need to enter the fully qualified hostname of your TEC server (Figure 12-26).
The next question asks for the platform of your TEC server. If you are using UNIX, the port is dynamically assigned using the RPC portmapper. If you are using Windows, there is a fixed port (Figure 12-27).
Specify the TEC port for the Windows TEC server (Figure 12-28).
In the next window you can specify which types of events will be forwarded from NetView on IBM Tivoli SAN Manager to the TEC (Figure 12-29). This updates the trapd.conf file in the NetView installation, so you can forward SNMP events received from monitored SAN devices without any knowledge of the NetView product.
After pressing Next, you can specify, for each NetView SmartSet, which events should be forwarded to TEC. This gives you the flexibility to suppress events from one group of hosts but pass them for another (Figure 12-30).
All the options can be modified later by starting the configuration program again. After clicking Next, the adapter is configured (Figure 12-31).
After you have followed all the instructions, you should soon see events from your IBM Tivoli SAN Manager arriving in TEC.
12.7 Example
In this example, we disconnected a Fibre Channel cable between the host BONNIE and the ITSOSW1 FC switch. After a short while, the IBM Tivoli SAN Manager shows that the connection between the host and the switch is down by changing the color of the connection to red (Figure 12-33).
At the same time, some events are sent to TEC (Figure 12-34). There is one Physical Entity Event, indicating that the host is missing, and three Physical Relationship Events, indicating that the associations between the host and the SAN, the zone, and the switch are missing. Depending on how many LUNs and zones are associated with your host, a single cable fault can produce a large number of events.
As soon as the error has been recovered, the respective normal events are sent to TEC (Figure 12-35).
The rule looks for incoming events matching these event classes of IBM Tivoli SAN Manager:
PhysicalEntityEvent
LogicalEntityEvent
PhysicalRelationshipEvent
LogicalRelationshipEvent
These events must be clearing events (slot eventType equals normal). When such an event is received, the rule fires the following actions:
Look in the event repository. If there are any events of the same classes which are missing events (slot eventType equals missing) AND which have the same uniqueId slot, perform the rest of the actions.
Set the event severity of the missing events to HARMLESS.
Close those events.
Set the event administrator of both events to ITSANM_rule, to easily determine that these events were closed by our rule.
Close the clearing event as well.
Important: The slot uniqueId contains a unique event ID describing all the involved resources. This makes sure that the clearing event points to the originating missing event.
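The rule's correlation logic can be sketched as follows. This is an illustrative Python model only; the real rule is written in the TEC Prolog-based rule language, and the dictionary representation of events is an assumption, although the class names and the eventType/uniqueId slots come from the description above.

```python
# Illustrative model of the ITSANM clearing rule (not the real TEC rule).
CLEARABLE_CLASSES = {
    "PhysicalEntityEvent", "LogicalEntityEvent",
    "PhysicalRelationshipEvent", "LogicalRelationshipEvent",
}

def apply_clearing_rule(incoming, repository):
    """Close matching 'missing' events when a clearing ('normal') event arrives."""
    if incoming["class"] not in CLEARABLE_CLASSES or incoming["eventType"] != "normal":
        return
    for ev in repository:
        if (ev["class"] == incoming["class"]
                and ev["eventType"] == "missing"
                and ev["uniqueId"] == incoming["uniqueId"]
                and ev["status"] != "CLOSED"):
            ev["severity"] = "HARMLESS"         # downgrade the missing event
            ev["status"] = "CLOSED"             # close it
            ev["administrator"] = "ITSANM_rule" # mark who closed it
    incoming["status"] = "CLOSED"               # close the clearing event as well
    incoming["administrator"] = "ITSANM_rule"
```

The key point the sketch illustrates is that only events sharing both the class and the uniqueId slot are closed, so unrelated missing events stay open.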
Paste the sample code into a file named ITSANM.rls and import it into your rule base as described for the baroc file in 12.3, Configuring the Rule Base on page 336. The only differences are that you select Import Rule Set instead of Import Class Definitions (Figure 12-5 on page 338), and that you do not need to recycle the TEC server; just Load and Activate it (Figure 12-7 on page 339).
Chapter 13. IBM Tivoli SAN Manager and Configuration Manager
Inventory enables you to gather and maintain up-to-date inventory asset management information in a distributed environment. This helps system administrators and accounting personnel to manage complex, distributed enterprises. Software Distribution enables you to install, configure, and update software remotely within your network.
Tivoli Configuration Manager also provides the following services:
Activity Planner
Change Manager
Resource Manager
Web Interface
Enterprise Directory Query Facility
Activity Planner enables you to define a group of activities that originate from different applications in an activity plan, submit or schedule the plan for running, and monitor the plan while it runs. Change Manager functions with Activity Planner to support software distribution, inventory, and change management in large networks. It uses reference models to simplify the management of the network environment.
You can use Resource Manager, together with Software Distribution and Inventory, to perform the management operations for pervasive devices.

You can use the Web Interface to install and manage various Tivoli Configuration Manager Web objects. The Web Interface has a server component that pushes software packages, inventory profiles, and reference models from the Tivoli region to the Web Gateway, where they are stored until they are pulled by the Web Interface endpoint.

With enterprise directory integration, you can exploit organizational information that is stored in enterprise directories in order to determine a set of targets for a software distribution or an inventory scan. The Enterprise Directory Query Facility enables you to select a specific directory object, or container of directory objects, as subscribers for a reference model or an activity plan.
We created separate Policy Regions for each Tivoli product. Double-click Inventory Policy Region (Figure 13-2).
Make sure that the Inventory Policy Region contains the InventoryConfig resource as a Managed Resource. To determine if it has been set, right-click the Policy Region and select Managed Resources. The dialog in Figure 13-3 appears.
In our environment, we created the default Query Libraries with the script inventory_query.sh in the bin/generic/inv/SCRIPTS/QUERIES directory of the Tivoli installation directory, and created a Profile Manager called Inventory_default_PM (Figure 13-4). To create a Profile Manager, select Create in the top menu and select Profile Manager.
Double-click the Inventory_default_PM Profile Manager and the following dialog appears (Figure 13-5).
Create an Inventory Profile by clicking Create in the top menu and selecting Profile. Enter the name (P_SoftwareScan in our example), and select InventoryConfig as the Profile type. Then right-click the newly created Profile and select Properties. The window that appears shows the global properties of the Inventory Profile (Figure 13-6).
Since we want a software-only inventory scan, we deselect all hardware-related check boxes. The only ones we need are the PC Software section (Figure 13-7) and the UNIX Software section (Figure 13-8).
There are two ways to collect software information from endpoints. The first is to scan all the files on a machine and compare them to a predefined list, identifying an installed product by the filename and file size of a significant file in the software package. IBM Tivoli SAN Manager ships these so-called Inventory Signature files with the product; they can be found in the conf/TIVINV subdirectory of the installation directory. The signature files are zero bytes in length and are recognized by filename (BTSMGR01_01.SIG for IBM Tivoli SAN Manager Manager Version 1.1 and 1.2). The signatures for IBM Tivoli SAN Manager are already incorporated in the latest inventory signature files, which you can download from the IBM Software support Web site.

The second way to determine installed software is to query the native software repository of the operating system. This gives very fast scans, but relies on the software actually registering itself with the operating system, rather than just copying files to your machine.

For IBM Tivoli SAN Manager, both methods are available; your choice depends on the policies of your IBM Tivoli Configuration Manager environment. In our examples we chose the native software query, so we check the Scan Operating System for Product Information boxes in the dialog (Figure 13-8), not Scan for File Information.
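The signature-matching idea can be sketched in a few lines. This is a minimal illustration only, assuming a product is reported as installed when a file with the signature's filename and size is found; the real scanner's catalog and matching rules are more elaborate. BTSMGR01_01.SIG is the zero-byte signature file named in the text; the product label is ours.

```python
# Minimal sketch of signature-based software detection (illustrative).
def match_signatures(signatures, scanned_files):
    """signatures: {product: (filename, size)}; scanned_files: {filename: size}."""
    found = []
    for product, (fname, fsize) in signatures.items():
        if scanned_files.get(fname) == fsize:   # match by name and size
            found.append(product)
    return found
```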
Close the dialog with the OK button, then distribute the Inventory Profile to your Endpoints: right-click the Profile and select Distribute (Figure 13-9).
This opens a dialog where you can choose the machines on which the inventory scan should occur. Select the machines and click Distribute & Close (Figure 13-10).
You can determine the status of the inventory scan with the Distribution Status console. If it is installed in your environment, its icon appears on the main screen of your Tivoli Desktop (Figure 13-1 on page 359). Double-click it to open the console (Figure 13-11).
In the upper window, select All Distributions and double-click the P_SoftwareScan Inventory Application. In the lower window, select All Nodes. You can see which scans completed successfully, are pending, failed, and so on. When the scans are complete, you can query the collected information. There are many standard queries, but we want to gather only data for IBM Tivoli SAN Manager, so we create a new query by clicking Create -> Query in the menu (Figure 13-12).
Name the Query and select inv_query as the repository; this is the Inventory Database RIM object. The table which contains the native software information is NATIVE_SWARE_VIEW. Select the columns you want and add a filter which says: Column name PACKAGE_NAME LIKE IBM Tivoli Storage Area Network Manager%. This gives you an output of all software packages whose names begin with IBM Tivoli Storage Area Network Manager; the % is the SQL wildcard (Figure 13-13).

Chapter 13. IBM Tivoli SAN Manager and Configuration Manager
At the bottom there is a Run Query button, which runs the query while you are editing it. The output shows all the installed IBM Tivoli SAN Manager products, including agents, manager, and consoles (Figure 13-14).
You can also query the Inventory database with a native DB2 client. This lets you connect Business Intelligence tools or script-based applications. The query feature is very powerful, and there is a lot of other information available. For example, together with the hardware scans, you can determine which Fibre Channel cards are installed and which firmware levels and drivers they are using. The following query showed all the IBM software on the endpoints (Figure 13-15).
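As a sketch of what such a native query could look like, the following builds the SQL statement used above. NATIVE_SWARE_VIEW and PACKAGE_NAME come from the text; the other column names (COMPUTER_ALIAS, PACKAGE_VERSION) are hypothetical placeholders for whatever columns your inventory schema provides.

```python
# Sketch: build the inventory query for use from a native DB2 client.
# Table/filter column come from the text; other columns are assumptions.
def build_itsanm_query(prefix="IBM Tivoli Storage Area Network Manager"):
    # % is the SQL wildcard, so this matches every package name that
    # begins with the given prefix.
    return ("SELECT COMPUTER_ALIAS, PACKAGE_NAME, PACKAGE_VERSION "
            "FROM NATIVE_SWARE_VIEW "
            "WHERE PACKAGE_NAME LIKE '{0}%'".format(prefix))
```

The resulting statement could then be run from the DB2 command line processor or any tool with a DB2 connection.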
We are using the lab setup shown in Figure 12-2 on page 336.
You can build one package for each platform, or all platforms in one. The benefit of separating the packages by operating system is that you avoid downloading all the code to all the endpoints before installation occurs; if temporary space is an issue, you should split into multiple packages. This in turn makes installation tasks slightly more complicated, because you have to group the endpoints by operating system. We give some simple examples here, but if your enterprise has already deployed Configuration Manager, the design rules will be in place, and you should build the packages according to them.
Right-click the package name and select Properties. You get the dialog shown in Figure 13-17. Enter the package version and a title for your package. Leave all the other parameters at their default values.
For the actual installation, we use the silent install procedure described in Silent install of IBM Tivoli Storage Area Network Manager on page 139. First copy the installation media to the hard drive and modify the agent.conf file to suit the environment. After setting the package properties, we add objects to the package. From the window in Figure 13-16, click the Execute program tab as shown in Figure 13-18.
With this action you can distribute files to the endpoint, run the provided script and delete the temporary files again.
Note: Once the installation program finishes, these files are deleted. Be aware that if the setup program spawns other programs and then finishes, the spawned processes cannot access the files and the installation fails.
After selecting the button, the Execute Program Properties dialog appears (Figure 13-19).
Initially the Install tab opens, and you must enter the fully qualified path to the installation setup program. The example shows the installation of the Windows Tivoli SAN Manager agent. This must be the path as it appears after transferring the files to the endpoint, so it could differ from the directory structure on the node where you are building the filepack. Don't include any arguments; they go into the Advanced dialog (Figure 13-20).
In the arguments field, enter the parameters for silent installation. In our case the full installation program is:
setup.exe -silent -options agent.opt
Note that we have not included the fully qualified path to the agent.conf file. Instead, we used the Working Directory entry to point to the option file. Optionally, you can redirect standard output and standard error to files. Close this dialog with the OK button. After specifying the program to execute, we must add the installation files: click Add next to the Co-requisite section in Figure 13-19. Figure 13-21 appears.
Select the source files to be copied to the endpoint and choose the path where they should be copied. Be sure to check the Descend directories box. Click OK to close. This is sufficient for the installation process. Configuration Manager can also perform uninstallation. To configure this, select the Remove tab from Figure 13-19 on page 373. The dialog in Figure 13-22 appears.
This time we do not need any co-requisite files copied to the endpoint; a single command is sufficient to remove the software, as described in the silent installation chapter Silent install of IBM Tivoli Storage Area Network Manager on page 139. The uninstallation program resides in the installation directory of the IBM Tivoli SAN Manager agent, which is specified at installation time in the agent.conf file. We need an argument for the uninstallation program; to open the dialog, click Advanced (Figure 13-23).
The only parameter to specify is -silent. Be sure to add the working directory for the process. We chose to make a single software package for both Windows and AIX machines, so to avoid executing the above program on an AIX machine, you can specify a condition for when to run that action. There is a Condition button at the top right-hand corner of Figure 13-22. Figure 13-24 appears.
Choose os_name from the list box, add an == operator, and enter Windows_NT. This ensures execution only on that platform. Using the same procedure, we added an extra action for the AIX installation, starting from the Execute Program Properties dialog shown in Figure 13-19 on page 373. The actions to define are mainly the same, except for the paths and the setup.aix program. We also added a condition which allows execution only on AIX machines. The ready-to-build software package is shown in Figure 13-25.
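Conceptually, such a condition gates an action as sketched below. Configuration Manager evaluates these conditions itself on the endpoint, so this Python model is purely illustrative; the tuple representation of a condition is an assumption.

```python
# Illustrative model of a per-action condition gating execution by platform.
def should_run(condition, endpoint_vars):
    """condition: (variable, operator, value), e.g. ('os_name', '==', 'Windows_NT')."""
    var, op, value = condition
    if op == "==":
        return endpoint_vars.get(var) == value
    raise ValueError("unsupported operator: " + op)
```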
Save this package to a .sp file on your server and exit the Software Package Editor.
Double-click the object PM_SD_ITSANM to open the Profile Manager and create a Profile with the name of your file package including the version (Figure 13-27).
After you have created the Profile, an empty package icon appears in the Profile Manager. Add any subscribers you want to distribute the package to.
Next we need to import the previously defined Software package to the Profile we just created. Right-click the Profile and choose Import (Figure 13-29).
A dialog appears where you can select the node on which you previously created the package, and the path to the .sp file. With Build, you choose to include all the source files, programs, and actions in one single file (.spb) to be distributed to the target endpoint. Enter the location where you want to store the .spb file; you might want to store it on your software distribution server or on one of your software depot servers. If you are rebuilding it, check the Overwrite box (Figure 13-30).
The icon of the package should now be a sealed package, ready to ship to your targets. To install, right-click the package and choose Install (Figure 13-31).
The install dialog, shown in Figure 13-32, lets you select the endpoints on which to install the software. Our package works on Windows and AIX servers. Additional checks can be made, for example whether the software is already installed or, with the Change Manager feature, whether licensing allows you to install it. For additional information, see the redbook All About IBM Tivoli Configuration Manager V4.2, SG24-6612.
You can also schedule the installation and query inventory to look for hardware or software constraints. To ensure that every host in your SAN environment has a Tivoli SAN Manager agent, you can use the strategies described in Implementing Automated Inventory Scanning and Software Distribution After Auto Discovery, SG24-6626 to discover new nodes via Tivoli NetView, install an endpoint, perform an inventory query, and automatically deploy the IBM Tivoli SAN Manager agent on them.

Another method of identifying hosts to install software on is querying an LDAP directory, like Microsoft Active Directory or IBM Directory, with the Enterprise Directory Query Facility. You could then create a machine group for IBM Tivoli SAN Manager and automatically deploy the software once a machine belongs to the group.

Configuration Manager enables you to remove the software as well. For this function, right-click the package and select Remove (Figure 13-33).
All the other options like verify, clean, and so on are not defined and will not work.
Chapter 14.
(Figure: Tivoli Enterprise Data Warehouse overview. Source applications such as ITM, Inventory, and TEC feed the central data warehouse through their ETL programs, driven by warehouse metadata; reporting tools such as Brio and Business Objects consume the resulting data marts.)
The first step in introducing TEDW is enabling the source applications. This means providing all the tools and customizations necessary to import the source operational data into the central data warehouse. All components needed for that task are collected in warehouse packs for each source application.

An important part of the warehouse packs is the ETL programs (Extract, Transform, and Load). In principle, ETL programs process data in three steps. First they extract the data from a data source. Then the data is validated, transformed, aggregated, and/or cleansed so that it fits the format and needs of the data target. Finally the data is loaded into the target database.

In TEDW there are two types of ETLs. The central data warehouse ETL pulls the data from the source applications and loads it into the central data warehouse; it is also known as the source ETL or ETL1. The second type of ETL is the data mart ETL.
The central data warehouse (CDW) is the database that contains all enterprise-wide historical data (with hour as the lowest granularity). This data store is optimized for the efficient storage of large amounts of data and has a documented format that makes the data accessible to many analysis solutions. The database is organized in a very flexible way, and you can store data from new applications without adding or changing tables. The data mart ETL extracts a subset of historical data from the central data warehouse that contains data tailored to and optimized for a specific reporting or analysis task. This subset of data is used to create data marts. The data mart ETL is also known as target ETL or ETL2.
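The three ETL steps described above can be sketched in a few lines. This is illustrative only: real TEDW ETLs are DB2 Data Warehouse Center processes, and the row format used here is an assumption chosen just to show the extract/transform/load flow.

```python
# Minimal sketch of the extract -> transform -> load flow (illustrative).
def extract(source_rows):
    return list(source_rows)                 # 1. pull data from the source

def transform(rows):
    cleaned = []
    for row in rows:
        if row.get("value") is None:         # 2. validate / cleanse
            continue
        row = dict(row, value=float(row["value"]))  # normalize the format
        cleaned.append(row)
    return cleaned

def load(rows, target):
    target.extend(rows)                      # 3. write into the target database

warehouse = []
load(transform(extract([{"value": "1.5"}, {"value": None}])), warehouse)
```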
A data mart satisfies the needs of a specific department, team, or customer. The format of a data mart is specific to the reporting or analysis tool you plan to use. Each application that provides a data mart ETL creates its data marts in the appropriate format.

TEDW provides a Report Interface (RI) that creates static two-dimensional reports of your data using the data marts. The RI is a role-based Web interface that can be accessed with a Web browser without any additional software installed on the client. You can also use other tools to perform OLAP analysis, business intelligence reporting, or data mining.

The Control server is the system that contains the control database, which holds metadata for Tivoli Enterprise Data Warehouse and from which you manage your data warehouse. The Control server controls communication between itself, the central data warehouse, the data marts, and the Report Interface. It uses the Data Warehouse Center to define the ETL processes and the star schemas used by the data marts; you use the Data Warehouse Center to schedule, maintain, and monitor these processes.

For more information about Tivoli Enterprise Data Warehouse, see Introduction to Tivoli Enterprise Data Warehouse, SG24-6607.
Chapter 15.
(Figure: IBM Tivoli Monitoring architecture. A profile with default or customized attributes is distributed from the TMR and installed on the ITM engines running on NT/W2K and UNIX/Linux endpoints; the engines send heartbeats to the TMR, and collected data rolls up to the data warehouse for display and trend analysis.)
You can use any of these to monitor the basic functions of your operating system. There are numerous additional modules which provide special monitoring capabilities for other software products. These include:
DB2
WebSphere Application Server
Oracle
Microsoft Active Directory
Apache / IIS
If you want in-depth monitoring for your IBM Tivoli Storage Area Network Manager DB2 instance, you can use these additional modules. In the example in this book, we use the shipped Parametric Services monitor to watch the status of the Windows services which are required to run IBM Tivoli Storage Area Network Manager. Additionally, there is a default action to restart stopped services.
Create a profile manager to contain the monitoring profiles. Select Create -> Profile Manager and create a dataless Profile manager. Our example shows a Profile manager called PM_DM_ITSANM (Figure 15-3).
Open the Profile Manager, select Create -> Profile, and choose a Tmw2kProfile (which is the Monitoring profile resource). If this entry doesn't show up in the list, make sure the Tmw2kProfile is in the managed resources list of the Policy Region. The example shows a Profile called P_DM_ITSANM in Figure 15-4.
Double-click the newly created profile and, in the window that appears, click Add with Defaults. This opens a chooser window where you can select the resource model you want to add to your profile. In the Category list box, choose Windows and select Parametric Services (Figure 15-5).
After adding the resource model, we have to edit the model to include the services we want to monitor. Click Edit (Figure 15-6).
In this window, we can adjust the attributes belonging to that resource model. To specify the services to monitor, open the Parameters window (Figure 15-7). You must enter the names of the services exactly as they appear in the Windows Registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. The service names that IBM Tivoli Storage Area Network Manager needs to run are:
DB2
IBMWAS5Service - ITSANM-Manager
SNMPTRAP
NetView
Close this window (Apply Changes and Close) and bring up the next dialog by clicking Indications (from Figure 15-6). As you can see from the definitions, the default action when a service is stopped or has failed is to restart it automatically. A CRITICAL TEC event is also generated (Figure 15-8).
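The resource model's behavior can be sketched as follows. This is illustrative only: the actual checks run inside the ITM engine on the endpoint, and the callback structure is an assumption, although the service names come from the text.

```python
# Sketch of the Parametric Services default behavior: restart a stopped
# or failed service and raise a CRITICAL TEC event (illustrative model).
MONITORED_SERVICES = [
    "DB2",
    "IBMWAS5Service - ITSANM-Manager",
    "SNMPTRAP",
    "NetView",
]

def check_services(get_status, restart, send_tec_event):
    for name in MONITORED_SERVICES:
        if get_status(name) in ("stopped", "failed"):
            restart(name)                                      # default action
            send_tec_event(severity="CRITICAL", service=name)  # TEC indication
```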
To enable TEC events globally for this Profile, and to specify the TEC server to which events are sent, click the Properties menu in the Profile's main window (Figure 15-4 on page 394); the window in Figure 15-9 opens.
Select the Send TEC Events check box and choose the event delivery method and TEC server. TME (Secure) Delivery is in most cases the better choice, because it caches events when the Event Server is temporarily unavailable. Click OK to close windows until you are back in the Profile Manager main window (Figure 15-10). Subscribe the endpoints running the IBM Tivoli Storage Area Network Manager - Manager with Profile Manager -> Subscribers and distribute the Profile using Profile Manager -> Distribute -> Distribute Now.
You can determine if your resource models are running on a particular endpoint by issuing the wdmlseng command on your Tivoli Managed Region (TMR) server, as shown in Example 15-1.
Example 15-1 Determining if resource models are running
bash$ wdmlseng -e lochness Forwarding the request to the engine... The following profiles are running: P_DM_Basic_Win#tonga-region TMW_EventLog TMW_PhysicalDiskModel TMW_Services TMW_TCPIP TMW_MemoryModel TMW_Process TMW_Processor P_DM_ITSANM#tonga-region TMW_ParamServices bash$
For demonstration purposes, we stopped the NetView and SNMPTRAP services on our manager machine. After a few seconds, the following TEC events appeared in the TEC console (Figure 15-11).
IBM Tivoli Monitoring detected that the services were stopped and restarted them accordingly.
Appendix A.
The scanner then proceeds to collect the fcFabricName OID from the FE MIB (see Table A-2, and 3.6.2, Outband management on page 69). The fcFabricName is an object required by Tivoli SAN Manager.
Table A-2 FE MIB
Entry Name | RFC2837 version OID | pre-RFC version
If the OID data cannot be retrieved from the FE-MIB, the scanner proceeds to the FC MIB, where an algorithm runs and derives the fabric name. Tivoli SAN Manager requires all OID data from the FC MIB to draw an accurate topology map. If data is missing, there will be limited functionality and missing topology information. See Table A-3 for a list of the OIDS from the FC MIB used by the Advanced Topology Scanner.
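The fallback sequence can be sketched like this. Illustrative Python only: snmp_get stands in for a real SNMP query against the FE MIB's fcFabricName object, and the OID is passed in as a parameter because it differs between the RFC 2837 and pre-RFC MIB versions (see Table A-2); the FC MIB derivation is represented by an opaque callback.

```python
# Sketch of the scanner's FE MIB -> FC MIB fallback for the fabric name.
def fabric_name(snmp_get, fabric_name_oid, derive_from_fc_mib):
    value = snmp_get(fabric_name_oid)    # try the FE MIB first (required object)
    if value:
        return value                     # fcFabricName retrieved directly
    return derive_from_fc_mib()          # fall back to the FC MIB algorithm
```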
systemURL connUnitGlobalId connUnitDomainId connUnitName connUnitType connUnitLocation connUnitProduct connUnitPrincipal connURL connUnitPortWWN connUnitPortUnitId connUnitPortNodeWwn connUnitPortPhysicalNumber connUnitPortState connUnitLinkPortWwnY connUnitLinkUnitTypeY connUnitLinkPortNumberX connUnitLinkNodeIdY connUnitLinkAgentAddressY connUnitFabricId
1.3.6.1.3.94.1.2 1.3.6.1.3.94.1.6.1.2 1.3.6.1.3.94.1.6.1.11 1.3.6.1.3.94.1.6.1.20 1.3.6.1.3.94.1.6.1.3 1.3.6.1.3.94.1.6.1.24 1.3.6.1.3.94.1.6.1.7 1.3.6.1.3.94.1.6.1.13 1.3.6.1.3.94.1.6.1.10 1.3.6.1.3.94.1.10.1.10 1.3.6.1.3.94.1.10.1.1 1.3.6.1.2.1.8888.1.1.6.1.20 1.3.6.1.3.94.1.10.1.18 1.3.6.1.3.94.1.10.1.6 1.3.6.1.3.94.1.12.1.8 1.3.6.1.3.94.1.12.1.12 1.3.6.1.3.94.1.12.1.4 1.3.6.1.3.94.1.12.1.6 1.3.6.1.3.94.1.12.1.9 1.3.6.1.2.1.8888.1.1.6.1.17 1.3.6.1.2.1.8888.1.1.6.1.5 1.3.6.1.2.1.8888.1.1.8.1.7 1.3.6.1.2.1.8888.1.1.8.1.12 1.3.6.1.2.1.8888.1.1.8.1.3 1.3.6.1.2.1.8888.1.1.8.1.5 1.3.6.1.2.1.8888.1.1.8.1.8 1.3.6.1.2.1.8888.1.1.3.1.2 1.3.6.1.2.1.8888.1.1.3.1.11 1.3.6.1.2.1.8888.1.1.3.1.17 1.3.6.1.2.1.8888.1.1.3.1.3 1.3.6.1.2.1.8888.1.1.3.1.21 1.3.6.1.2.1.8888.1.1.3.1.7 1.3.6.1.2.1.8888.1.1.3.1.13 1.3.6.1.2.1.8888.1.1.3.1.10 1.3.6.1.2.1.8888.1.1.6.1.9
1.3.6.1.3.94.1.2 1.3.6.1.3.94.1.6.1.2 1.3.6.1.3.94.1.6.1.11 1.3.6.1.3.94.1.6.1.20 1.3.6.1.3.94.1.6.1.3 1.3.6.1.3.94.1.6.1.24 1.3.6.1.3.94.1.6.1.7 1.3.6.1.3.94.1.6.1.13 1.3.6.1.3.94.1.10.1.10 1.3.6.1.3.94.1.10.1.10 1.3.6.1.3.94.1.10.1.1
connUnitGlobalId  1.3.6.1.3.94.1.6.1.2  1.3.6.1.2.1.8888.1.1.3.1.2
connUnitState  1.3.6.1.3.94.1.6.1.5  1.3.6.1.2.1.8888.1.1.3.1.5
connUnitStatus  1.3.6.1.3.94.1.6.1.6  1.3.6.1.2.1.8888.1.1.3.1.6
connUnitSensorName  1.3.6.1.3.94.1.8.1.3  1.3.6.1.2.1.8888.1.1.5.1.2
connUnitSensorStatus  1.3.6.1.3.94.1.8.1.4  1.3.6.1.2.1.8888.1.1.5.1.3
connUnitSensorInfo  1.3.6.1.3.94.1.8.1.5  1.3.6.1.2.1.8888.1.1.5.1.4
connUnitSensorMessage  1.3.6.1.3.94.1.8.1.6  1.3.6.1.2.1.8888.1.1.5.1.5
connUnitSensorType  1.3.6.1.3.94.1.8.1.7  1.3.6.1.2.1.8888.1.1.5.1.6
connUnitSensorCharacteristic  1.3.6.1.3.94.1.8.1.8  1.3.6.1.2.1.8888.1.1.5.1.7
connUnitEventUnitIndex  1.3.6.1.3.94.1.11.1.2  1.3.6.1.2.1.8888.1.1.7.1.1
connUnitREventTime  1.3.6.1.3.94.1.11.1.4  1.3.6.1.2.1.8888.1.1.7.1.2
connUnitSEventTime  1.3.6.1.3.94.1.11.1.5  1.3.6.1.2.1.8888.1.1.7.1.3
connUnitEventSeverity  1.3.6.1.3.94.1.11.1.6  1.3.6.1.2.1.8888.1.1.7.1.4
connUnitEventType  1.3.6.1.3.94.1.11.1.7  1.3.6.1.2.1.8888.1.1.7.1.5
connUnitEventObject  1.3.6.1.3.94.1.11.1.8  1.3.6.1.2.1.8888.1.1.7.1.6
connUnitEventDescr  1.3.6.1.3.94.1.11.1.9  1.3.6.1.2.1.8888.1.1.7.1.7
To display the sensor data, we select (click) the Fibre Channel switch, then click SAN -> SAN Properties -> Sensors/Events. The Sensor Event display is shown in Figure A-1.
Appendix B.
DB2 configuration
Example B-2 shows a script you can use to update your DB2 database configuration with parameters related to the Tivoli Storage Manager environment. It also stops and restarts your DB2 instance. This script must be adapted to your environment.
Example: B-2 DB2_TSM_config.bat script
echo Database name : %1 echo Node name : %2 echo Password : %3 db2 update db cfg for %1 db2 update db cfg for %1 db2 update db cfg for %1 db2 update db cfg for %1 db2stop force db2start
net start "Tivoli Netview Service" ovstart @REM Start the ITSANM-Manager @REM ------------------------@echo "Starting the ITSANM-Manager..." call ITSANMstart.bat
@ECHO ON
@REM Get Status and check if Stopped
@REM -------------------------------
net start | findstr /i "ITSANM-Manager"
@if %errorlevel% NEQ 0 GOTO BACKUPDB
:NOTSTOPPED
@ECHO ON
@REM ITSANM not stopped - Backup cannot run
@REM --------------------------------------
@echo "WAS Application ITSANM Not Stopped !!!"
@echo "Backup process cancelled "
exit 1
:BACKUPDB
@ECHO ON
@REM ITSANM is stopped - Backup can run
@REM ----------------------------------
@echo "Backup of ITSANMDB starting ....."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSANMDB USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"
@REM Backup completed - Start ITSANM
@REM -------------------------------
:STARTITSANM
call ITSANMstart.bat
@ECHO ON
@REM Get Status and check if Started
@REM -------------------------------
net start | findstr /i "ITSANM-Manager"
@if %errorlevel% EQU 0 GOTO STARTOK
@REM ITSANM not started
@REM ------------------
@echo "Application ITSANM Not Started !!!"
exit 1
@REM ITSANM started
@REM --------------
:STARTOK
@echo "Application ITSANM started successfully"
@REM Start the Netview Application
@REM -----------------------------
@echo "Starting Netview"
ovstart
exit
Appendix C.
Additional material
This redbook refers to additional material that can be downloaded from the Internet as described below.
Select the Additional materials and open the directory that corresponds with the redbook form number, SG246848.
Advanced Interactive eXecutive American National Standards Institution Application Programming Interface Address Resolution Protocol Automated Tape Library Asynchronous Transfer Mode Berkeley Software Distribution Common Internet File System Common Information Model Dispersion Frame Technique Dynamic Host Configuration Protocol Desktop Management Task Force Domain Name Resolution Dense Wavelength Division Multiplexing Error Detection and Fault Isolation Exterior Gateway Protocol Extended Link Services Enterprise System Connection Enterprise Storage Resource Manager Extract-Transform-Load Fibre Channel Fibre Channel Arbitrated Loop Fibre Channel Framing and Signaling Interface Fibre Channel - Methodologies for Interconnects Fibre Channel Protocol Fibre Channel Switch Fabric Fabric Device Management Interface Fault Isolation Fully Qualified Domain Name Fabric Shortest Path First Full Time Equivalent GigaBit Interface Converter Gigabit Link Module Graphical User Interface Global User ID
HBA  Host Bus Adapter
HSM  Hierarchical Storage Management
HTTP  Hypertext Transfer Protocol
I/O  Input/Output
IBM  International Business Machines Corporation
ICMP  Internet Control Message Protocol
IETF  Internet Engineering Task Force
ISL  Inter Switch Link
ITSO  International Technical Support Organization
JAR  Java Archive
JBOD  Just a Bunch of Disks
JDBC  Java Database Connectivity
JRE  Java Runtime Environment
JVM  Java Virtual Machine
LAN  Local Area Network
LC  Lucent Connector
LTO  Linear Tape Open
LUN  Logical Unit Number
MIB  Management Information Base
MM  Multi-Mode
MOF  Managed Object Format
NAS  Network Attached Storage
NFS  Network File System
NLS  National Language Support
NOS  Network Operating System
OEM  Original Equipment Manufacturer
OID  Object Identifier
PD  Problem Determination
RAID  Redundant Array of Independent Disks
RDBMS  Relational Database Management System
RFC  Request for Comments
RIM  RDBMS Interface Module
RLIR  Registered Link Incident Record
RLS  Read Link Error Status Block
RNID  Request Node Identification Data
RSCN  Registered State Change Notification
SAN  Storage Area Network
SCSI  Small Computer System Interface
SFF  Small Form Factor
SFP  Small Form Factor Pluggable
SM  Single Mode
SMI  Storage Management Initiative
SMIS  Storage Management Initiative Specification
SNIA  Storage Networking Industry Association
SNMP  Simple Network Management Protocol
SRM  Storage Resource Management
SSP  Storage Service Provider
STP  Shielded Twisted Pair
TCP/IP  Transmission Control Protocol/Internet Protocol
TEC  Tivoli Enterprise Console
TEDW  Tivoli Enterprise Data Warehouse
UDP  User Datagram Protocol
UI  User Interface
URL  Uniform Resource Locator
UTC  Coordinated Universal Time
UV  Ultraviolet
WBEM  Web-Based Enterprise Management
XML  eXtensible Markup Language
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 418.
Early Experiences with Tivoli Enterprise Console 3.7, SG24-6015
IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886
Tivoli NetView V6.01 and Friends, SG24-6019
Implementing System Management Solutions using IBM Director, SG24-6188
IBM Tivoli Storage Management Concepts, SG24-4877
IBM Tivoli Storage Manager: Implementation Guide, SG24-5416
Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment, SG24-6141
Backing Up DB2 Using Tivoli Storage Manager, SG24-6247
All About IBM Tivoli Configuration Manager V4.2, SG24-6612
Introduction to Tivoli Enterprise Data Warehouse, SG24-6607
Other resources
These publications are also relevant as further information sources:
IBM Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697
IBM Tivoli Storage Area Network Manager User's Guide, SC23-4698
IBM Tivoli Storage Area Network Manager Messages, SC32-0953
Tivoli NetView for Windows User's Guide, SC31-8888
IBM DB2 Universal Database Administration Guide: Implementation, Version 7, SC09-2944
IBM DB2 Universal Database Command Reference, Version 7, SC09-2951
Referenced Web sites
Brocade
http://www.brocade.com/
QLogic
http://www.qlogic.com
Index
A
Active Directory monitoring 393 agents 29 AIX 31, 70, 72, 89 inittab 111 ANSI 13, 18, 24, 60 Apache monitoring 393 API 15, 17, 74, 285 application availability 29 arbitrated loop 64 archive logs 291 asset management 358 authentication password 116 availability 29 collect daemon 217 configuration for Tivoli SAN Manager 131 configuration information 284 ConnUnitLinkTable 70 ConnUnitPortTable 70 Control server 389 copper 60 core 59 cost cutting 5
D
data mart 388–389 data mining 389 data types 4 Data Warehouse Pack 388 and Tivoli SAN Manager 389 DB2 30, 66, 71, 98, 111, 369 archive logs 291, 312 backup configuration 287 backup environment variables 289 backup with TSM 286 database backup 286 database maintenance 314 database restore 307, 312 indexes 315 install 98 install Fix Pack 99 LOGRETAIN 289 monitoring 393 offline backup 296–297 online backup 289, 296, 299 roll forward restore 312 sample scripts 407 service 98 Tivoli SAN Manager database 106, 284 uncatalog database 313 user exit 291 DB2 commands db2rbind 316 db2start 100, 290 db2stop 100, 290 get db cfg 288–289 reorg 315 reorgchk 314–315 ROLLFORWARD 309 runstats 314 update db cfg 289 DB2 Warehouse 99 DHCP 29, 71 directors 208 Disaster Recovery 6 discovery 11, 32, 168 disk system LUNs 42
B
backup and recovery 6 bandwidth 64–65, 222 baroc file 72, 335, 338 basic fabric management 73 Bluefin 20 boot partition 310 Brocade 131, 210 business intelligence reporting 389
C
cabling 58 cladding 59 coating 60 connectors 61 copper 60 core 59 distance limitations 58 multi-mode 59 plenum rating 60 SC connectors 61 single-mode 59 Carnegie Mellon University 270 CDW 388 CIM 19–21 agent 23 classes 22 object manager 23 CIM-XML 22 Cisco 29 MDS 9000 169 cladding 59 class definition file 335 Cleared Record 271 clustering 71 coating 60
Dispersion Frame Technique 270 Distribution Status console 366 DMTF 20 DNS 71, 87, 89, 98
firmware 89 FSPF 57
G
GBIC 62 growth 4 GUI 31 GUID 136, 138, 276
E
ED/FI 14, 28, 66 ELS 15, 25 Emulex 58 encryption 350 endpoint 370, 392 endpoint devices 17, 33 enterprise-specific MIB 209 environment variables 289 error detection 11, 14–15 ESRM 20 Ethernet 69, 255 ETL 28, 389 Central Data Warehouse 388 data mart 388 ETL programs 388 ETL1 388 ETL2 388 EUSDSetup 89 event forwarding 215, 349 event logging 45, 222 Extract 388 Extract, Transform and Load. See ETL
H
HBA 15, 18, 33, 58, 63, 66, 74, 81, 88, 166 API 15, 17, 68, 74, 88 historical reporting 215 Host Bus Adapter. See HBA hostname 97 HOSTS file 97, 110, 125 HTTP 22, 65, 90 hub 64
I
IBM 348 IBM Director 259, 263 event logging 263 IBM Directory 385 IBM SAN Data Gateway 177 IBM Tivoli Configuration Manager see Tivoli Configuration Manager IBM Tivoli Enterprise Console see TEC IBM Tivoli Enterprise Data Warehouse see TEDW IBM Tivoli Monitoring see Tivoli Monitoring IBM Tivoli NetView. See NetView IBM Tivoli SAN Manager 70 IBM Tivoli Storage Area Network Manager. See Tivoli SAN Manager IBM Tivoli Storage Manager. See Tivoli Storage Manager IBM WebSphere Express. See WebSphere IETF 13, 18, 254–255 IIS monitoring 393 inband discovery 15, 18, 32–33 inband management 16–17, 31, 68 incremental 291 Indication Record 270 Infiniband 25 interoperability 12, 14, 56 inventory 358 inventory profile 362 Inventory Signature files 363 IP network 3, 32, 246 IP network management 35 iSCSI 7, 25, 28, 33, 71, 253–254 adapter 255 Auth MIB 257 discovery 168, 256 driver 255 initiators 254 iSNS MIB 257 MIB 256
F
FA-MIB 70, 212, 402 fault detection 46 fault isolation 11, 14–15, 28, 270 Fault Record 270 FC Management MIB 18 FC Management Server 68 FC_MGMT MIB 257 FC-AL 14, 64 FC-GS-3 24, 68 FC-GS-4 24 FC-MGMT MIB 402 FC-MI 13, 18, 20 FCPortTxFrames 230 FC-SW2 57 FDMI 24 FE-MIB 212, 402 Fibre Alliance 13, 18 Fibre Alliance MIB 257 Fibre Channel 32, 55 Fibre Channel arbitrated loop 64 Fibre Channel attachment 67 Fibre Channel cabling 58 Fibre Channel MIB 210 Fibre Channel network 33 Fibre Channel standards 56 Fibre Channel topologies 63 filesystems 42, 66, 74, 171 firewall 350
NetView discovery 168, 256 SmartSet 169, 235 SNMP 168 targets 254 ISL 57, 65, 163, 173 iSNS 255 MIB 257 ISO 13 ITSANM.MIB 261 ITSANMDB 284
multi-mode fiber 59
N
NAS 7 netmon 246 netstat 88 NetView 29–30, 35, 66, 71, 83, 98, 100, 108, 110, 119, 150, 260, 284, 293, 349, 400 acknowledge 181 Advanced Menu 211 arm threshold 222 child submap area 151 clearing the database 247 copying MIB 211 data collection 216, 232 data collection troubleshooting 224 database 247, 284 database maintenance 224 discovery process 246 enable MIB 210 event browser 127, 129, 261 event forwarding 260, 349 event forwarding to TEC 348 event logging 222, 323 existing installation 128 explorer display 36, 151, 166 graph 223, 244 Graph Properties 233 graphs 207, 215, 225 historical reporting 48, 208, 215, 226 HOSTS file 110, 125 icon display 36 interface 150 iSCSI 256 iSCSI discovery 168, 256 iSCSI SmartSet 169, 235 launch Tivoli Storage Resource Manager 179 loading MIB 212–213 logging 323 Management Page 176 maps 150 MIB applications 228, 230 MIB Browser 214, 230 MIB Data Collection 216, 243 MIB Data Collector 213, 215, 227 MIB Tool Builder 213, 227–228, 234 Navigation Tree 153 netmon daemon 246 Object Properties 153, 175, 284 password 108, 124 performance applications 207 performance data 215, 222 polling 228, 233, 236 properties panel 165 real-time reporting 48, 208, 227, 234 reporting 25, 47 restricting discovery 246 root map 150, 160, 240 rule builder 349 search function 276
J
Java 113, 335, 340 JDBC 99 JNI 58 JRE 29 JVM 3031, 133
L
LAN 58 LC connectors 61 LDAP 385 leaf node 210 Linux 29, 31, 89 Load 388 logical topology display 41 logical volume 44 longwave 59 LUN masking 69 LUNs 7, 42, 69, 74, 166
M
managed hosts 27, 29 Linux 29 management applications 7 MDS 9000 169 mgrlog.txt 318 MIB 15–16, 29, 48, 65, 69, 207, 209, 401 applications 228 definitions 213 enable 213 enable in NetView 210 enterprise-specific 209, 211, 214 iSCSI 256 object ID 210, 216, 227, 401 objects 210, 215, 225, 228, 245 performance objects 213 standard 209 subtree 214 thresholds 215 Tivoli SAN Manager 261 tree structure 210 MIB-II 212, 222 Microsoft Active Directory 385 monitoring 393 MOF 23 monitoring 392 MQSeries 29, 102, 293
seed file 246, 248 Server Setup 247 service 108, 124 SmartSets 160, 168, 235, 243, 256 SmartSets and Data Collection 243 SNMP trap forwarding 259 status propagation 157, 241 submap stack 151 submap window 151 submaps 150, 160 supported MIBs 209 System Configuration view 152 Tivoli Storage Area Network Manager view 152 toolbar menu 177 topology map 222, 225, 246 trap 222 trap daemon 324 trap forwarding 128, 130 trap port 323 trapd.conf 351 trapfrwd daemon 128 traps 126 unacknowledge 181 unmanage object 180 upgrade 133 NetView commands nvsniffer 168, 235, 256 ovaddobj 128 ovstart 128 ovstart snmpcollect 225 ovstatus snmpcollect 224–225 Network Attached Storage. See NAS network bandwidth 222 network management 6, 208 network monitoring 207, 228, 392 network problem resolution 207 network resource allocation 207 NIC 255 non-Tivoli applications 388 Notification Record 271 nslookup 89
point-to-point 63 Policy Regions 359 polling 16–17 polling interval 48, 132 port 104, 112, 122, 127 port statistics 216 Predictive Failure Analysis 28, 268 problem determination 28 profile overview 392 profile manager 360, 393 Prolog 334 protocols 65
Q
QLogic 58, 8889
R
Redbooks Web site 418 Contact us xxv remote console 107 removable media devices 72 report interface 389 reporting 12, 25 repository 89 resource model 392 resource models 392 RFC 210 RIM 334 RLIR 15 RLS 15 RNID 15, 17, 33, 35, 68, 74, 81, 84, 89, 166 root cause analysis 268 RSCN 15 Rule Base 334–335
S
SAN adoption 5 arbitrated loop 14, 64 attributes 66 bandwidth 64–65 basic fabric management 73 cabling 58 Cleared Record 271 components 57 connections 38 discovery 11, 13, 15, 246 endpoint devices 68 event logging 45 events 16, 30, 66 fault detection 46 fault isolation 270 Fault Record 270 heterogeneous support 57 historical reporting 208, 226 inband management 31, 68 interconnects 163
O
object ID 210 OEM 58 OLAP analysis 389 Oracle monitoring 393 outband agents 29 outband discovery 15, 18, 32–33 outband management 16, 69
P
Pentium 70 performance metrics 215 PFA 268 platform administration 6 plenum rating 60
interoperability 56, 65 management 10, 56, 68, 235 management API 15 management costs 5 Management Services 19 management standards 12, 18 monitoring 11, 207, 228 Name Server 17 nameserver 15 navigation 160 Notification Record 271 performance data 215 physical topology 37 physical view 67 point-to-point 63 polling 16–17, 68 problem determination 28 problem resolution 29 protocols 57 real-time reporting 208, 227, 234 reporting 12, 25, 29 root cause analysis 268 standardization 5 standards 10, 12, 27, 57 switch port statistics 216 switched fabric 65 switches 27 topology 8, 12, 16, 37, 55, 63, 150, 161, 163, 241 trunking 65 zones 35, 40 zoning 15, 19, 65 SAN Error Predictor 28 SAN management 4 vendor applications 27, 49 SC connectors 61 scanner 56 SCSI 58 protocol 33, 254 SCSI Inquiry 68 SCSI queries 17 seed file 246, 248 server growth 5 SFF 61 SFP 61–62 shortwave 58 silent install 139 silent uninstall 145 single-mode fibre 59 SmartSet 235 SmartSets 160, 168–169, 235, 256 SNIA 13, 20, 24–25 SNMP 15–16, 18, 29–33, 65, 69, 95, 100, 168, 208, 236, 260 agents 208 collect daemon 217, 223–224 community name 78, 127, 263 console 130 events 30 manager 66, 208, 260 MIB 401
port 129 trap 48, 78, 268 trap destination 89, 262 trap forwarding 126–127, 259 traps to IBM Director 264 socket 335 software distribution 358 software inventory 358 Solaris 31, 89 SRMURL 179 SSL encryption 350 staffing growth 5 standard MIB 209 Stochastic 270 Storage Area Network. See SAN storage consolidation 7 storage growth 4 storage management 3 manual 6 standards 10 swFCPortTxFrames 216, 229, 243–244 swFCRxErrors 224 swFCRxFrames 224 swFCTxErrors 224 swFCTxFrames 226 switch commands agtcfgset 79 agtcfgshow 78 snmpmibcapset 89, 213 switch management 14 switched fabric 64–65 switches 8, 27, 33, 76, 131 administrative rights 131 API 40, 85, 131, 404 display 163 environmentals 173 events 404 firmware 89 login ID 131, 213 management applications 49, 174, 208 MIB 16, 65, 69 nameserver 46, 65, 68–69 performance data 215, 222 port connections 173 port statistics 216 query 66 sensors 173 trap destination 78, 89 trap forwarding 126 zone information 131, 165 SW-MIB 212, 229 systems management issues 6
T
T11 57 tape 68 TCP/IP 254 TEC 30, 66, 157, 268, 334 Assign Event Group 343 baroc file 335
class definition file 335 compile rule base 338 Console 335 Constraint 341 event 397 Event Console 340 Event Filters 340 event format 347 Event Group 340 event processing 334 events from Tivoli SAN Manager 348 Import Class Definitions 336, 338 load rule base 339 RIM 334 Rule Base 334–336 rule processing 354 stop or start event server 339 Test SQL 342 TEC commands wtdumprl 336 tec_dispatch 334 tec_reception 334 tec_rule 334 tec_server 334 tec_task 334 tec_ui_server 335 TEDW 30, 37, 336, 388–389 control server 389 data mart 388 Data Warehouse Pack 388–389 Data Warehouse Pack and Tivoli SAN Manager 389 ETL 388–389 ETL processes 389 ETL programs 388 source applications 388 telnet 65, 213 Tivoli 126 Policy Regions 359 Tivoli Configuration Manager 357–358 Activity Planner 358 Change Manager 358 Enterprise Directory Query Facility 358 Inventory 358 Inventory Profile 362 Inventory Scan 367 Profile Manager 360 Query 367 removing software 385 Resource Manager 358 Software Distribution 358 software distribution 370 software distribution profile 379 software package 370 Web Interface 358 Tivoli Desktop 359, 393 Distribution Status console 366 profile manager 393 Tivoli endpoint 370 Tivoli Enterprise Console see TEC
Tivoli Enterprise Framework 334–335 Tivoli Light Client Framework 336 Tivoli Managed Node 350 Tivoli Managed Region 399 Tivoli Monitor wdmlseng 399 Tivoli Monitoring 391–392 engine 392 Parametric Services 393–394 profile 392 resource models 392 Web Health Console 392 Tivoli NetView see NetView Tivoli SAN Manager 25, 55, 70, 270–271 Agent access password 116 agent address 71 agent backup 291 agent configuration files 291 agent logging 322 agent placement 34, 76 agent restore 302 agent startup 117 agents 15, 17, 29, 31, 66–67, 72, 96, 106, 127, 284, 322 agents installation 111–112 agents uninstall 136 AIX 28, 70 AIX manager install 111 and tape devices 72 application discovery 174 application launch 49, 66, 174 attribute scanner 68, 84 authentication 123 authentication password 107, 284 backup strategies 283 baroc file 72, 335, 338 change device icon 173 change device icon type 154 change device label 154, 162, 173, 181 class definition file 335 Clear History 181 cluster support 71 component placement 67 components 29, 66, 96, 284 configuration 126 configuration information 284 Configure Agents 73, 77, 80, 82, 130, 132, 157, 304 configure management application 175 Configure Manager 132, 157, 181 Connection 173 console 27, 29, 50, 66 console access password 123 console service 125 Data Warehouse Pack 389 database 30, 33, 80, 98, 106, 268, 284 database backup 296 database backup environment variables 289 database maintenance 314 database restore 307 database userid 106
deployment considerations 70 Device Centric View 41–42, 81, 85, 161, 166 device icons 155, 171 device label 81, 171 device properties 171 device support 31, 57 disaster recovery 309 discovery 32, 76, 80–81, 284, 307 display switch connections 173 ED/FI 28, 66 ED/FI Configuration 157 ED/FI Properties 157 event forwarding 215, 348–349 event forwarding to TEC 348 event logging 45 events 30, 66 events to IBM Director 263 fabric ports 173 fault detection 46 filesystem display 42, 171 flat file backup 286 functions 31 GUI 31, 66 GUID 136, 138, 320 high availability 89 historical reporting 48, 208, 215, 226 Host Centric View 41, 43, 81, 85, 161, 167 host display 171 icons 155, 180 Import Class Definitions 338 inband and outband 84 inband management 68, 81, 83 indication record 270 initial poll 132 installation 95 installation directory 103, 114, 121 installation id 98–99, 102, 105, 112, 119 installation log files 111, 117, 125 installation verification 110 Inventory Signature files 363 iSCSI 33, 255 iSCSI discovery 168, 256 ITSANM_120.baroc 338 Launch Application 157, 174 launch Tivoli Storage Resource Manager 157, 179 license 113, 121, 284 license agreement 103 Linux 29 log files 318 logging 73–74, 284 logging service commands 320 logical topology 41 logical views 34, 74, 77, 81, 85, 166–167 logical volume display 44 LUN display 42, 69, 74, 166 managed hosts 27, 29, 66, 72, 167 manager 28–29 message types 318 mgrlog.txt 318
MIB 261 monitoring 393 MQSeries 29 navigation display 44 Navigation Tree 44, 153 NetView 30, 284 NetView console 28 NetView traps 126 Object Properties 45 object status 155, 180 outband agents 29, 130 outband management 69, 76, 80, 401 overview 28 physical topology 37, 160 physical view 67, 161 polling 68, 80, 127, 132 polling interval 284 port selection 104, 114–115, 122, 284 Predictive Failure Analysis 28, 268 pre-installation checks 97, 112, 119 problem determination 28 propagation 157 real-time reporting 48, 208, 227, 234 remote console 29, 31, 66–67, 80, 84, 88, 96, 178, 323 remote console installation 119 remote console logging 323 remote console uninstall 137 reporting 47 repository 66, 68, 98 restore strategies 283, 302 RNID 74, 166 rule file 354 sample scripts 407 sample TEC rule 354 SAN Error Predictor 28 SAN menu 35, 157, 170 SAN Properties 157, 162, 170, 180, 405 SAN view 162 scanners 56, 68, 81 Sensor Events Scanner 401, 404 Sensors/Events 170, 172–173, 405 Server 30, 66–67, 89, 96, 284 Server backup 286 Server installation 96, 102 Server logs 319 Server port 114, 122 Server requirements 70 Server restore 305 Server start 294 Server stop 293 Server uninstall 135–136 service commands 320 services 393 Set Event Destination 157, 262 silent install 139 silent uninstall 145 SNMP agent 31 SNMP community name 263 SNMP trap forwarding 262
start agent service 117 start AIX service 111 status colors 155 status cycle 180 status propagation 157, 241 submap 157 summary display 44 supported platforms 31, 81, 96 switch display 172 switch environmentals 173 symbols 74, 77, 81 TEC event format 347 Tivoli Storage Resource Manager 157 topology management 149 topology map 33, 35, 37, 46, 66, 68, 70, 77, 80, 83–84, 127, 161, 180–181, 222, 241 topology scanner 68–69, 77, 84, 401–402 topology view 163, 284 tracing 317, 326 trap forwarding 126–128, 130, 262 uninstall 135 uninstall AIX server 136 uninstall Windows server 135 unknown device 173 unknown symbols 77 upgrade 133 upgrade agents 135 upgrade remote console 134 WebSphere 28, 102, 284 well-placed agent 35, 73–74 Windows 2000 70 zone display 40, 85, 131, 162, 165 Tivoli SAN Manager commands setenv 320 srmcp 320 srmcp log list 326 srmcp SANDBParms 263 tcstart 117–118 tcstop 117–118, 136 Tivoli Service Level Advisor 28 Tivoli Storage Manager 285 API 285 API password 290 Backup/Archive Client 286, 288, 291, 310 client configuration 288 client options file 289–290, 310 clients 285 copy group 286 disaster preparation and recovery 312 dsm.opt 290 expiration 287 inactivate backups 287 include exclude list 290 incremental backup 291 management class 286, 288 nodenames 288 policy domain 286 RETONLY 287 server 285 server configuration 286
VERDELETED 287 Tivoli Storage Manager commands db2adutl 287, 291, 301 dsmapipw 290 QUERY NODE 288 Tivoli Storage Manager for Databases 285 Tivoli Storage Resource Manager launch 157 launch from NetView 179 topology display 36, 84 Transform 388 trap forwarding 126–128, 130 trapd.conf 351 trapfrwd 128 trapfrwd.conf 128 traps 222 trend analysis 207 TRP-MIB 212 TSM. See Tivoli Storage Manager
U
ultraviolet light 60 UTC 389
V
virtualization 11
W
warehouse 388 warehouse pack 388 WBEM 20–21 wdmlseng 399 Web Health Console 392 WebSphere 28, 30, 71, 90 administration ID 106 monitoring 393 service 110 WebSphere configuration information 284 well-placed agent 35, 73–74 Windows 2000 31, 70, 89 administrative rights 98, 112, 119 boot partition 310 registry 293 SNMP service 100 System Objects 310 Wordpad 318 Windows Explorer 36 Windows NT 89
X
XML 50, 68 XWindows 111
Z
zones 35, 40, 68, 85, 165 zoning 19, 65, 131
Back cover
SG24-6848-01
ISBN 0738499978