NGT113STUD


V9.0
IBM Training
Front cover
Student Notebook
IBM PureFlex System Fundamentals
Course code NGT11 ERC 3.0
December 2013 edition
The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.
Copyright International Business Machines Corporation 2012, 2013.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in many
jurisdictions worldwide:
Active Memory, AIX 6, AIX, BladeCenter, Easy Tier, Electronic Service Agent, Express, FlashCopy,
IBM Flex System, IBM Flex System Manager, IBM PureData, Micro-Partitioning, Power,
POWER Hypervisor, Power Systems, PowerHA, PowerPC, PowerSC, PowerVM, POWER6,
POWER7+, POWER7, PureApplication, PureData, PureFlex, PureSystems, Real-time Compression,
Redbooks, Storwize, System Storage, System x, SystemMirror, Tivoli, VMready, X-Architecture, XIV
Intel, Intel Xeon and Xeon are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of
Oracle and/or its affiliates.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Netezza is a trademark or registered trademark of IBM International Group B.V., an IBM
Company.
Other product and service names might be trademarks of IBM or other companies.
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Copyright IBM Corp. 2012, 2013 Contents iii
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Unit 1. IBM PureSystems and IBM Flex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
IBM PureSystems topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Unique attributes of expert integrated systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
IBM PureSystems enables multiple client initiatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6
IBM PureSystems family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
IBM PureFlex and IBM PureApplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
IBM PureData System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
IBM PureSystems topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
IBM PureApplication System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
The ideal cloud application platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
IBM PureApplication System: The right model to fit your needs . . . . . . . . . . . . . . . . . . . . 1-15
IBM PureSystems patterns of expertise: Three types of patterns . . . . . . . . . . . . . . . . . . . 1-16
IBM PureApplication System: Virtual pattern types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
IBM PureApplication System: Additional patterns of expertise . . . . . . . . . . . . . . . . . . . . . . 1-18
IBM PureSystems topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
IBM PureData System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
IBM PureData System for Hadoop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
IBM PureSystems topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24
What if we could do the same integration in an optimized IT system? . . . . . . . . . . . . . . . . 1-25
IBM PureFlex System: Simplified experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
IBM PureFlex System offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-27
IBM PureFlex System Express (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-28
IBM PureFlex System Express (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-29
IBM PureFlex System Enterprise (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
IBM PureFlex System Enterprise (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-31
IBM Flex System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-32
IBM Flex System Enterprise Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-33
IBM Flex System chassis integration of components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-35
IBM Flex System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-36
IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-37
IBM Flex System Manager v1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-38
IBM Flex System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-39
IBM Flex System compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-40
IBM Flex System: X-Architecture compute nodes positioning . . . . . . . . . . . . . . . . . . . . . . 1-42
IBM Flex System: Power Systems compute node positioning . . . . . . . . . . . . . . . . . . . . . . 1-43
IBM Flex System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-44
IBM Flex System storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-45
IBM Flex System Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-46
IBM Flex System V7000 Storage Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-47
IBM Flex System storage portfolio positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-48
IBM Flex System topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-49
Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-50
IBM Flex System Scalable Switch options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-51
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch . . . . . . . . . . . . . . . . . 1-52
IBM Flex System Fabric SI4093 System Interconnect Module . . . . . . . . . . . . . . . . . . . . . . 1-53
IBM Flex System EN6131 40Gb Ethernet Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-54
Flex System Ethernet module positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-55
Evolutionary and game-changing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-56
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-57
Checkpoint (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-58
Checkpoint (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-59
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-60
Unit 2. IBM Flex System Enterprise Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
At a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Product overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Product overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Product overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Models (type 8721) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Rear view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Chassis component parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Midplane: Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14
Midplane: Rear view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
Compute node insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
Chassis bay numbering (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17
Chassis bay numbering (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-18
Chassis bay numbering (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19
Chassis air filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-20
Hot plug and hot swap components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
IBM Flex System 2100 W power supply option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23
Power supply location and numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24
Power policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-25
Power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Power supply selection matrix (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
Power supply selection matrix (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Power supply selection matrix (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35
Fan module location and numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-36
40 mm fan module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
80 mm fan module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38
Fan logic module location and numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-39
Fan modules: Base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-40
Fan modules: Eight nodes installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41
Fan modules: Maximum configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-42
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43
Front information panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-44
Rear chassis LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-46
Upper and lower cooling apertures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-47
Chassis air flow (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Chassis air flow (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49
Chassis air flow (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-50
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51
I/O module location and numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52
Node I/O connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
I/O adapter and I/O module interconnects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-54
Node to I/O module interconnects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-55
I/O expansion adapter form factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-56
LAN on Motherboard implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
Single node/two I/O adapter interconnects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-58
Installing I/O modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-59
I/O module LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-60
IBM Flex System Enterprise Chassis topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-61
Chassis Management Module location and numbering . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-62
Chassis Management Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-63
Chassis Management Module LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-65
Chassis Management Module ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-66
Chassis management using CMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
CMM capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-68
CMM for IBM Flex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-69
CMM management options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-71
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-72
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-73
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-74
Unit 3. IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
IBM Flex System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
IBM Flex System Manager node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
IBM Flex System Manager node hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
IBM Flex System Manager node: Internal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
IBM Flex System Manager front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
IBM Flex System management network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
IBM Flex System Manager networks separated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
IBM Flex System Manager networks merged . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
IBM Flex System Manager capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
IBM Flex System Manager software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
IBM Flex System Manager v1.2 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
IBM Flex System Manager v1.3 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Flex System Manager management packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
IBM PureFlex System configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
IBM Flex System Manager: Home page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
IBM Flex System Manager: Additional Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
IBM Flex System Manager: Plug-ins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
IBM Flex System Manager: Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
IBM FSM Explorer console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
IBM Flex System Manager hardware map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
VMControl: Automate with system pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
Remote Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
Checkpoint (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Checkpoint (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Checkpoint (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-30
Unit 4. IBM Flex System X-Architecture compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Compute node overview and architecture topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
IBM Flex System x220 Compute Node: At a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Product overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Product overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Product overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
Interior view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Compute node overview and architecture topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
IBM Flex System x222 Compute Node: At a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Product overview (1 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Product overview (2 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Product overview (3 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Product overview (4 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Rear view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
Interior view: Upper server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Interior view: Lower server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Upper server / lower server locking mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Compute node overview and architecture topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
IBM Flex System x240 Compute Node: At a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Product overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27
Product overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-29
Product overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-30
Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-31
Interior view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-32
Compute node overview and architecture topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
IBM Flex System x440 Compute Node: At a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-34
Product overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-35
Product overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-37
Product overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-38
Front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-39
Interior view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-40
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41
IBM Flex System x220 Compute Node: Disk subsystem overview (1 of 2) . . . . . . . . . . . . 4-42
IBM Flex System x220 Compute Node: Disk subsystem overview (2 of 2) . . . . . . . . . . . . 4-43
Optional disk controllers and kits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-44
Drive combinations: In summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-45
IBM Flex System x222 Compute Node: Disk subsystem overview (1 of 2) . . . . . . . . . . . . 4-46
IBM Flex System x222 Compute Node: Disk subsystem overview (2 of 2) . . . . . . . . . . . . 4-47
Optional disk controllers and kits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-48
IBM Flex System x240 Compute Node: Disk subsystem overview (1 of 2) . . . . . . . . . . . . 4-49
IBM Flex System x240 Compute Node: Disk subsystem overview (2 of 2) . . . . . . . . . . . . 4-50
Optional disk controllers/kits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-51
Drive combinations: In summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-53
IBM Flex System x440 Compute Node: Disk subsystem overview (1 of 2) . . . . . . . . . . . . 4-54
IBM Flex System x440 Compute Node: Disk subsystem overview (2 of 2) . . . . . . . . . . . . 4-55
Optional disk controllers/kits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-56
Drive combinations: In summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-58
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-59
IBM Flex System Storage Expansion Node (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-60
IBM Flex System Storage Expansion Node (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-62
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-63
IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (1 of 2) . . . 4-64
IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (2 of 2) . . . 4-65
x220: Processor subsystem overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-66
x222: Processor subsystem overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-67
IBM Flex System x240 Compute Node: Intel Romley-EP platform (1 of 2) . . . . . . . . . . . . 4-68
IBM Flex System x240 Compute Node: Intel Romley-EP platform (2 of 2) . . . . . . . . . . . . 4-69
Processor subsystem overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-70
IBM Flex System x440 Compute Node: Intel Romley-EP platform (1 of 2) . . . . . . . . . . . . 4-71
IBM Flex System x440 Compute Node: Intel Romley-EP platform (2 of 2) . . . . . . . . . . . . 4-72
Processor subsystem overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-73
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-74
Unbuffered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-75
Registered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-76
Load-reduced DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-77
IBM Flex System x220 Compute Node: Unbuffered DIMMs . . . . . . . . . . . . . . . . . . . . . . . 4-78
Registered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-79
Load-reduced DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-81
IBM Flex System x222 Compute Node: Registered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . 4-82
Load-reduced DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-84
IBM Flex System x240 Compute Node: Unbuffered DIMMs . . . . . . . . . . . . . . . . . . . . . . . 4-86
Registered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-87
Load-reduced DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-89
IBM Flex System x440 Compute Node: Unbuffered DIMMs . . . . . . . . . . . . . . . . . . . . . . . 4-90
Registered DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-91
Load-reduced DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-92
Memory modes (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-93
Memory modes (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-95
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-96
Network subsystem overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-97
Network subsystem overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-99
Network subsystem overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-101
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . 4-103
I/O expansion overview (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-104
I/O expansion overview (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-105
I/O expansion overview (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-106
IBM Flex System PCIe Expansion Node (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-107
IBM Flex System PCIe Expansion Node (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-108
IBM Flex System x220 Compute Node: Supported network adapters . . . . . . . . . . . . . . . 4-109
IBM Flex System x222 Compute Node: Supported network adapters . . . . . . . . . . . . . . . 4-110
IBM Flex System x240 Compute Node: Supported network adapters . . . . . . . . . . . . . . . 4-111
IBM Flex System x440 Compute Node: Supported network adapters . . . . . . . . . . . . . . . 4-112
Student Notebook
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
viii IBM PureFlex System Fundamentals Copyright IBM Corp. 2012, 2013
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-113
Standard onboard features overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-114
USB ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-115
Integrated virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-116
Console breakout cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-117
Trusted Platform Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-118
IBM Flex System X-Architecture compute node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-119
Front panel LEDs and controls (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-120
Front panel LEDs and controls (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-121
Front panel LEDs and controls (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-123
Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-124
IMMv2 for IBM X-Architecture compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-125
IMM capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-127
IMM management options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-128
Light path diagnostics (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-129
Light path diagnostics (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-131
Light path diagnostics (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-132
Configuration patterns: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-133
Deploy compute node images: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-135
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-137
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-138
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-139
Unit 5. IBM Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
The Power node essentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
What is Power? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Power Systems family (with a new addition) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Power Systems features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Power is operating system choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
IBM Flex System Power compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Summary of features by form factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
IBM PureFlex POWER7+ compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Power compute nodes comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
IBM Flex System Power compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Memory options and form factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Local storage overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Power nodes: IO adapter options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
I/O adapter location code information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Four-port 10 Gb Ethernet adapter connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Two-port 8 Gb Fibre Channel adapter connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
IBM Flex System Power compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
Managing Power servers: An evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
IBM Flex System Manager: Integrated management appliance . . . . . . . . . . . . . . . . . . . . . 5-25
Flex System Manager: Home and Plug-ins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Power Systems virtualization topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Virtualizing workloads with PowerVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
What is a POWER virtual server? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Virtual server resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-32
Virtual I/O adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
What is a Virtual I/O Server? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
Virtual I/O Server summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Power Systems virtualization topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Creating a partitioned environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
Creating virtual servers and profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
Accessing the virtual server creation wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
Create Virtual Server wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Installing an OS in a virtual server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-44
IBM Power systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-45
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-46
Checkpoint (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
Checkpoint (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-49
Unit 6. IBM Flex System storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
IBM Flex System V7000 Storage Node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
IBM Flex System platform details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
IBM Flex System storage portfolio positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
A complete portfolio of supported IBM storage for PureFlex . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Virtualization: The big picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
IBM Flex System V7000 Storage Node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
IBM Flex System V7000 storage overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
Flex System V7000 storage chassis integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
Flex System V7000 front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
Control enclosure internal components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Flex System V7000 node canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Integrated scalable storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
SAS network cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Expand storage beyond the chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
IBM Flex System V7000 Storage Node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Flex System V7000 installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Flex System V7000 initial setup wizard (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Flex System V7000 initial setup wizard (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-20
Flex System V7000 initial setup wizard (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-21
IBM Flex System V7000 Storage Node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22
Flex System V7000 enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23
Flex System Manager chassis map with Flex System V7000 . . . . . . . . . . . . . . . . . . . . . . 6-24
FSM storage management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
FSM storage management capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
Flex System V7000 GUI (v7.1): Home - Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-27
Home: Overview - Functions icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-28
IBM Flex System V7000 Storage Node topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29
Integration of Storwize V7000 technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
Advanced storage functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-31
Storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-32
Storage scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-33
Storage efficiency: Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-34
Storage efficiency: Easy Tier management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35
Storage efficiency: Real-time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
Storage availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Storage high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38
Storage business continuance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-39
Packaging options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-40
IBM PureFlex System configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-41
Flex System V7000 licensing requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Real-time Compression licensing enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Real-time Compression licensing scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Base storage configuration for PureFlex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-46
Storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-47
Installed devices ready to go . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-48
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-49
Checkpoint (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-50
Checkpoint (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-51
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-52
Unit 7. IBM Flex System networking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
IBM Flex System networking topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
IBM Flex System I/O architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Midplane: Front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
Midplane: Rear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
I/O options: 2S X-Architecture compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
I/O options: 2S Power Systems compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
I/O options: FSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Chassis Management Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Management IP network in CMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
ScSE bay order in IBM Flex System Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
ScSE I/O connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-14
I/O modules connector architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
ScSE I/O architecture: ScSE scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
ScSE I/O architecture: Adapter to ScSE mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-17
ScSE I/O architecture: Two- and four-port adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
IBM Flex System Manager appliance networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
IBM Flex System networking topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
I/O adapters: X-Architecture compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-21
I/O adapters: Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Naming scheme for I/O adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
IBM Flex System EN2024 4-port 1Gb Ethernet Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
IBM Flex System CN4054 10Gb Virtual Fabric Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
IBM Flex System FC3172 2-port 8Gb FC Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
IBM Flex System networking topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Naming scheme for ScSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
IBM Flex System Scalable Switch options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
IBM Flex System EN4093 / EN4093R 10Gb Scalable Switch . . . . . . . . . . . . . . . . . . . . . . 7-31
Connector and cable options for EN4093 and EN4093R . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
IBM Flex System FC3171 8Gb SAN switch and pass-thru . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
FC3171 supported SFP modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
Cable options for FC3171 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
IBM Flex System networking topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-36
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
VLAN tagging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
Stacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
VLAG versus STP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-42
Virtual Router Redundancy Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-43
VMready . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-44
Virtual NICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-45
Unified Fabric Port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-47
Network convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-48
IBM Flex System networking topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-49
Accessing the switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-50
Configure internal management port from the CMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-51
I/O module management console access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-52
Launch browser console from CMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-53
Launch browser console from FSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-54
Browser-based interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-55
Industry standard command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-56
IBM Networking OS command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-58
Flex System I/O module login levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-59
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-60
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-61
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-62
Appendix A. Checkpoint solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Trademarks
The reader should recognize that the following terms, which appear in the content of this training
document, are official trademarks of IBM or other companies:
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in many
jurisdictions worldwide:
Intel, Intel Xeon and Xeon are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of
Oracle and/or its affiliates.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Netezza is a trademark or registered trademark of IBM International Group B.V., an IBM
Company.
Other product and service names might be trademarks of IBM or other companies.
Active Memory, AIX 6, AIX, BladeCenter, Easy Tier, Electronic Service Agent, Express, FlashCopy, IBM Flex System, IBM Flex System Manager, IBM PureData, Micro-Partitioning, Power, POWER Hypervisor, Power Systems, PowerHA, PowerPC, PowerSC, PowerVM, POWER6, POWER7+, POWER7, PureApplication, PureData, PureFlex, PureSystems, Real-time Compression, Redbooks, Storwize, System Storage, System x, SystemMirror, Tivoli, VMready, X-Architecture, XIV
Course description
IBM PureFlex System Fundamentals
Duration: 3 days
Purpose
The IBM PureFlex System is a category of computing that integrates multiple
server architectures, networking, chassis, storage, and system management
capability into a single system that is easy to deploy and manage.
This fundamentals class covers IBM PureSystems, IBM PureFlex System,
and the IBM Flex System Enterprise Chassis along with the IBM
X-Architecture and IBM Power Systems compute nodes, systems
management, networking, and IBM Flex System V7000. Included are
hands-on lab exercises to reinforce the learning principles associated with
the Chassis Management Module, X-Architecture and Power Systems
compute nodes, Flex System Manager, networking, and Flex System V7000.
Audience
This is a base course for individuals who are involved in the planning,
installing, configuring, and upgrading of IBM Systems.
Prerequisites
There are no prerequisites for this course.
Objectives
After completing this course, you should be able to:
Differentiate between IBM PureSystems and IBM PureFlex System
Summarize the features and functions of the IBM Flex System Enterprise
Chassis
Differentiate the characteristics of the IBM Flex System x86 Compute
Node from other IBM System x servers
Differentiate the characteristics of the IBM Flex System Power Compute
Nodes from other IBM Power servers
Select the proper IBM Flex System networking components based on the
solution requirements
Classify the IBM Flex System management options
Critique the storage options available for the IBM Flex System
Agenda
Day 1
Welcome
Unit 1: IBM PureSystems and IBM Flex System
Lab 1: IBM PureFlex System introduction
Unit 2: IBM Flex System Enterprise Chassis
Lab 2: Exploring the Chassis Management Module
Unit 3: IBM Flex System Manager
Lab 3: IBM Flex System Manager navigation
Day 2
Day 1 review
Unit 4: IBM Flex System X-Architecture compute nodes
Lab 4: Exploring the Integrated Management Module II
Unit 5: IBM Power Systems compute nodes
Lab 5: IBM Power Systems compute node
Day 3
Day 2 review
Lab 5: IBM Power Systems compute node (continued)
Unit 6: IBM Flex System storage
Lab 6: Basic IBM Flex System V7000 administration
Unit 7: IBM Flex System networking
Lab 7: IBM Flex System networking: Ethernet switches
Lab 8: IBM Flex System networking: SAN switch
Unit 1. IBM PureSystems and IBM Flex System
What this unit is about
Welcome to IBM PureFlex System Fundamentals. This section is an
overview of IBM PureSystems and IBM PureFlex System.
What you should be able to do
After completing this unit, you should be able to:
Summarize the features of IBM PureSystems and IBM PureFlex System
Identify the major elements of IBM PureSystems and IBM PureFlex
System
Summarize the features of the IBM Flex System
Identify the major elements of the IBM Flex System
Differentiate between the IBM Flex System and traditional IT solutions
Explain how the IBM Flex System will fundamentally change the way IT
solutions are provided
How you will check your progress
Checkpoint questions
Lab exercises
References
IBM Information Center
http://pic.dhe.ibm.com/infocenter/flexsys/information/index.jsp
Figure 1-1. Unit objectives NGT113.0
Notes:
These are the objectives for this unit.
Unit objectives
After completing this unit, you should be able to:
Summarize the features of IBM PureSystems and IBM
PureFlex System
Identify the major elements of IBM PureSystems and IBM
PureFlex System
Summarize the features of the IBM Flex System
Identify the major elements of the IBM Flex System
Differentiate between the IBM Flex System and traditional IT
solutions
Explain how the IBM Flex System will fundamentally change
the way IT solutions are provided
Copyright IBM Corporation 2012, 2013
Figure 1-2. IBM PureSystems topics NGT113.0
Notes:
The IBM PureSystems topics we will cover are:
IBM PureSystems family overview
IBM PureApplication System overview
IBM PureData System overview
IBM PureFlex System
IBM PureFlex System Express
IBM PureFlex System Enterprise
This section is an overview of the IBM PureSystems product family.
IBM PureSystems family overview
IBM PureApplication System overview
IBM PureData System overview
IBM PureFlex System overview
IBM Flex System overview
IBM PureSystems topics
Copyright IBM Corporation 2012, 2013
Figure 1-3. Unique attributes of expert integrated systems NGT113.0
Notes:
The last 100 years have brought dramatic change to the information technology industry. IT has moved from being a specialized tool to being a pervasive influence on nearly every aspect of life. From tabulating machines that simply counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth. And for the last 100 years, IBM has helped customers solve problems.
Today, as the planet becomes smarter, IT is a constant part of business and a constant part of our lives. IBM's expertise in delivering complex solutions across infrastructure, middleware, and applications has helped the planet become smarter. On this smarter planet, organizations seek to extract more real value from their data, business processes, and other key investments. Leaders know that in today's environment, the success of IT can determine the success of the business itself.
The time has come for a new way forward, one that combines the flexibility of general-purpose systems, the elasticity of cloud, and the simplicity of an appliance tuned to the workload. When expertise is integrated throughout your enterprise, the experience and economics of IT will fundamentally change. For example, what if you could improve the productivity of your IT operations staff by up to 20 percent? Or shift another 10 percent of your IT budget from
systems-maintenance initiatives to revenue-generating initiatives? Can you imagine systems designed to be up and running in hours instead of days or weeks? Or systems that require zero downtime when upgrading capacity and that deliver system-wide lifecycle maintenance?
To deliver fully on this economic promise, systems with integrated expertise must possess the following core capabilities:
- Built-in expertise: When embedded expertise and client-proven best practices are captured and automated for you in various deployment forms, you can dramatically improve time-to-value.
- Integration by design: When you deeply tune hardware and software in a ready-to-go, workload-optimized system, it becomes easier to tune to the task.
- Simplified experience: When every part of the IT lifecycle becomes easier with integrated management of your entire system, including a broad, open ecosystem of optimized solutions, business innovation can thrive. You can deliver a leap forward in the IT experience for your customers and colleagues.
Figure 1-4. IBM PureSystems enables multiple client initiatives NGT113.0
Notes:
You can take advantage of IBM PureSystems to address key IT needs in a simple manner. Our customers are asking how they can better consolidate workloads to lower TCO and reduce complexity and sprawl. They are asking how they can tune and automate systems to optimize them for their environment. They are asking how they can deliver capabilities more rapidly to fuel innovation, and they want to do all of this in a way that enables fast, secure, and integrated cloud environments if they choose to go in that direction.
With IBM PureSystems, clients can consolidate systems and application workloads in order to
reduce the total cost of ownership for their IT infrastructure. In doing so they can simplify and
reduce their data center complexity and sprawl. The workloads can be managed intelligently and
controlled from a single point of management and the environment provides dynamic scalability to
help meet service levels.
New capabilities can be delivered rapidly to help improve time to market for new services, and capacity to handle changes in demand can be added cost-effectively and automatically.
Finally, this can all be done in a way that accelerates the use of cloud environments that help
extend current investments through open standards and helps efficiently share IT resources to
improve the economics of IT.
IBM PureSystems enables multiple client initiatives:
- Consolidate: More efficiently consolidate systems and applications to reduce operating expenses
- Optimize: Better tune and automate systems and applications to improve application performance, scalability, and reliability
- Innovate: More rapidly deliver new applications and services to meet new business needs
- Accelerate cloud: More quickly enable secure and integrated cloud environments
Figure 1-5. IBM PureSystems family NGT113.0
Notes:
In April 2012 IBM introduced PureSystems with the first two family members, PureFlex System and
PureApplication System. In November 2012, the PureData System was added to the PureSystems
family.
IBM PureFlex System integrates computing, storage, networking resources and system
management to simplify and accelerate the delivery of infrastructure services. It is ideal for
those clients who want to accelerate deployment and simplify management of infrastructure for
building custom application platforms and application stacks.
IBM PureApplication System is a platform system designed and tuned specifically for
transactional web and database applications. Its workload-aware, flexible platform is designed
to be easy to deploy, customize, safeguard and manage.
IBM PureData System is an expert integrated system that is optimized exclusively for delivering
data services. This system contains compute and storage resources as well as data
management middleware and comes in different models optimized for different transactional,
analytics data workloads.
IBM PureSystems family:
- Data platform: delivers high-performance data services to transactional and analytics applications
- Application platform: built on IBM middleware to accelerate deployment of your choice of applications
- Infrastructure: runs your choice of operating systems, hypervisors, applications, and middleware
- Shared building blocks: X-Architecture and Power compute, virtualized storage, optional management, fabric, and a future-proof chassis
Workload optimized and application optimized: simplifying cloud, big data, and analytics.
(Figure inset: interior view of the IBM triplex door.)
The IBM PureSystems offerings are optimized for performance and virtualized for efficiency, helping to deliver the promise of smarter computing. These systems offer a no-compromise design and come out of the box with expertise encapsulated in patterns that IBM has developed based on decades of client and partner engagements around the world. IBM PureSystems is built for cloud, with built-in flexibility and simplicity and system-level upgradeability. These systems were announced along with a partner program that will help drive full business system solutions from our broad network of ISV partners as well as IBM.
All are equipped with the IBM (blue) triplex doors.
Figure 1-6. IBM PureFlex and IBM PureApplication NGT113.0
Notes:
The IBM PureFlex System is an infrastructure system that provides an integrated computing system, combining servers, enterprise storage, networking, virtualization, and management into a single structure. Its built-in expertise enables organizations to simply manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.
Thousands of client engagements have influenced the creation of workload profiles and images that administrators can quickly download for rapid deployment. The infrastructure system recommends workload placement based on virtual machine compatibility and resource availability. Using built-in virtualization across servers, storage, and networking, the infrastructure system enables automated scaling of resources and true workload mobility.
Because of relentless testing and experimentation, IBM PureFlex System can mitigate IT complexity without compromising the flexibility that many companies need to tune systems to the tasks their business demands. By providing both flexibility and simplicity, IBM PureFlex System can provide extraordinary levels of IT control, efficiency, and operating agility that enable businesses to rapidly deploy IT services at a reduced cost. Moreover, the system is built on decades of expertise, enabling deep integration and central management of the comprehensive, open-choice infrastructure system and dramatically cutting down on the skills and training required for managing and deploying the system.
The IBM PureApplication System is a platform system that integrates a full application platform set of middleware and expertise with the IBM PureFlex System infrastructure system. It is a workload-aware, flexible platform that is designed to be easy to deploy, customize, safeguard, and manage in a traditional or private cloud environment, ultimately providing superior IT economics. With the PureApplication System, organizations can provision their own patterns of software, middleware, and virtual system resources within a unique framework that is shaped by IT best practices and industry standards culled from IBM's experience with clients and a deep understanding of smarter computing. These IT best practices and standards are infused throughout the system.
The slide contrasts the two offerings layer by layer:
- Both systems provide the system infrastructure layer: technology services (processor capacity, network capacity, storage capacity, operating system, and architectural choice) and infrastructure services (power management, storage and VM optimization, virtualization, system management, image management, provisioning, security, and monitoring).
- PureApplication System adds an application platform layer: application services (web application serving, database management, Java platform, and connectivity) and platform services (application optimization, system-wide management, automation and scaling, application provisioning, security, monitoring, life cycle maintenance, and license management).
Figure 1-7. IBM PureData System NGT113.0
Notes:
The IBM PureData System is optimized exclusively for delivering data services in support of transactional or analytics applications that are running on IBM PureFlex System, IBM PureApplication System, or other general-purpose systems. And, with its newest member, IBM PureData System for Hadoop, you can now optimize Hadoop data services for big data analytics and online archive with appliance simplicity. Today's big data challenges for both transactions and analytics are increasing demands on data systems. With the growing volume, velocity, and variety of data available to organizations, there is a need to optimize the performance, efficiency, and simplicity of data systems that manage transactions and help turn information into insight.
The PureData System also simplifies the entire life cycle: it is factory integrated to be data-load ready in hours, and it offers integrated management and support.
Having an integrated system enables delivery of integrated and automated maintenance, eliminating manual steps that cost time and increase the risk of human error.
IBM PureData System highlights:
- Optimized for data services: Hadoop, analytics, and transactional
- Expert integrated: data platform, infrastructure, unified platform management, and built-in expertise
- Workload-optimized performance; data load ready in hours; integrated management; single point of support; automated maintenance in hours, not days
Figure 1-8. IBM PureSystems topics NGT113.0
Notes:
This section is an overview of IBM PureApplication System.
Figure 1-9. IBM PureApplication System NGT113.0
Notes:
The PureApplication System takes integration one step further by building in the capabilities and the middleware that you need to deliver transactional web applications, and it really starts to leverage the notion of application patterns, making it even faster to build or modify applications and deploy them into a Platform as a Service environment.
These are complete, ready-to-go systems. They are pre-optimized for Java, web, and database performance. They are virtualized across the entire stack, with the single point of management, integrated monitoring and maintenance, and repeatable self-service provisioning needed for a cloud environment.
You can move from manual practices to best-practice, pattern-based deployment of applications. The system provides the operating system and runtime resources required to give you policy-based elasticity in a single management view, and it is pre-optimized by IBM's experts, delivering everything IBM has learned about how to run these types of applications and make them manageable, scalable, and elastic: all the things customers expect from a leadership application system.
IBM PureApplication System: a platform system built for cloud that simplifies deployment and management of applications.
- Complete, ready-to-go systems: arrives ready to go with expert integration; pre-optimized for Java, web, and database performance; virtualized across the stack for efficiency; resilient, secure, and scalable infrastructure
- Ready for cloud: repeatable self-service provisioning; integrated and elastic application and data runtimes; application-aware workload management
- Simplify ongoing tasks: single point of management; integrated monitoring and maintenance; easy to integrate with your existing environment
- The shift: from manual, brittle deployment to best-practice, pattern-based deployment; from managing OS, runtime, and resources to policy-based elasticity in a single view; from manual on-site optimizations to pre-optimization by experts
Figure 1-10. The ideal cloud application platform NGT113.0
Notes:
PureApplication System is a cloud application platform that can dramatically accelerate time to value and automate deployment and life cycle management for a broad range of applications. It is a platform solution that is ready to work right out of the box:
- Pre-integrating application server and database services into a single system with compute nodes (both X-Architecture and Power platforms), storage, and networking
- Providing a single management console that offers vertical visibility from the application level down through the infrastructure resources being leveraged by the system
- Building expertise into deployable patterns that are then made available by IBM or our broad partner ecosystem to enable cloud-based service provisioning, in many cases in minutes
The PureApplication System's optimized configurations are designed to provide pre-integrated and optimized business benefits for emerging markets, mid-size, or large enterprises.
The ideal cloud application platform:
- Expert integrated platform for applications: application server, database services, compute (X-Architecture or Power), storage, and networking
- Built-in expertise: infrastructure, platform, and application patterns; pattern-based deployment; catalog of services
- Common cloud platform: user-based self-service; service level management; usage-based reporting; dynamic resource scalability; multi-tenancy; virtualization; automated IT resource provisioning; automated IaaS
- Install, configure, and tune: up and running within 4 hours. Deploy: multi-tier applications in minutes with automatically scaling workloads. Manage: thousands of VMs on a single system
Figure 1-11. IBM PureApplication System: The right model to fit your needs NGT113.0
Notes:
IBM PureApplication System is delivered fully assembled, with compute nodes, storage, and networking hardware already installed, cabled, and tuned. The models share common characteristics:
- Integrated by design, with cloud application platform capabilities
- Built-in expertise, with the ability to deploy pre-built, expert-created patterns
- A simplified experience, through evolving the IT lifecycle and delivering a single management console that provides a complete vertical-stack view of your application runtime environment
There are three models of PureApplication System available: two W1500 Intel X-Architecture models, offered in 25U and 42U rack sizes, and one W1700 Power model, offered only in a 42U rack. The 25U W1500 (mini) model offers a choice of 32 or 64 cores, and both 42U models (W1500 and W1700) offer four configurations, from 96 up to 608 cores. All of the models are managed the same way, have the same management console, and support the same set of software and patterns. All are equipped with the new triplex doors.
These systems can be upgraded from one configuration to the next by buying the appropriate upgrade. Upgrades can be completed without powering down the installed machine.
IBM PureApplication System: the right model to fit your needs
- W1500 (X-Architecture), 25U rack (mini): 32- and 64-core options; 26.4 TB of storage (HDD and SSD)
- W1500 (X-Architecture), 42U rack: 96-, 192-, 384-, and 608-core options; 54.4 TB of storage (HDD and SSD)
- W1700 (Power), 42U rack: 96-, 192-, 384-, and 608-core options; 54.4 TB of storage (HDD and SSD)
All models are integrated by design and share the same built-in expertise and simplified experience. Upgrade to larger configurations in a rack without taking an outage.
Figure 1-12. IBM PureSystems patterns of expertise: Three types of patterns NGT113.0
Notes:
The IBM PureApplication System's set of predefined patterns is based on leading IBM software capabilities. These patterns focus on solutions using IBM middleware. They represent years of experience gathered in production environments, called patterns of expertise. The idea behind this is that IBM delivers a system that has the built-in knowledge of IBM experts about how to best configure and integrate IBM middleware products. There are three types of patterns of expertise:
- Platform patterns bring into play the middleware in addition to the system infrastructure.
- Application patterns then bring in expertise at the business application level.
- Infrastructure patterns cover the base system infrastructure elements such as servers, storage, network, virtualization, and management.
Many patterns are built into the system directly out of the box. Still more are available in a catalog for you to easily purchase and download.
Patterns of expertise span three layers:
- System infrastructure (inherits the capabilities of PureFlex System): integrated server, storage, and network; power management; storage and VM optimization; virtualization; integrated system management; provisioning; security; monitoring; IT lifecycle management; system design
- Application platform (integrates an application platform optimized for enterprise applications): application optimization; system-wide management; automation and scaling; caching and elasticity; application-centric provisioning; usage metering; security; monitoring; application lifecycle management; license management; self-service; data management
- Application patterns from IBM and partners: 100+ ISV business applications; business intelligence; business process management; web experience (Portal)
Figure 1-13. IBM PureApplication System: Virtual pattern types NGT113.0
Notes:
With the IBM PureApplication System, you have the option to build two different pattern models:
virtual systems and virtual applications.
Virtual systems provide an automated model for deploying middleware topology patterns. They
allow you to quickly deploy traditional workloads in a virtualized environment in a repeatable
fashion. Products deployed using the virtual systems model are managed using the existing
management tools provided by those products.
Virtual applications provide a highly automated, policy-based deployment model in which you define
application components and policies that specify the needs of the application. The virtual application
model is application-centric, whereas the virtual system model is middleware topology-centric. The
virtual application model has a highly simplified administrative model, exposing fewer administrative
functions than the virtual system model.
PureApplication System also supports the virtual appliance deployment model, which allows you to
run custom software images of your choosing within the system, though virtual appliances do not
feature the robust management and monitoring features available for virtual systems and virtual
applications.
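The difference between the two pattern models can be made concrete with a sketch. The following is purely illustrative pseudo-configuration — all names and structure are invented for this course and are not actual PureApplication System artifact syntax. The point is the level of abstraction: a virtual application pattern declares what the application needs and lets the system derive the middleware topology, whereas a virtual system pattern enumerates the middleware nodes themselves.

```yaml
# Hypothetical virtual application pattern: application-centric and
# policy-driven; the system derives the middleware topology.
virtual_application:
  name: order-entry
  components:
    - type: web_application          # e.g., a deployable EAR/WAR file
      artifact: order-entry.ear
      policies:
        scaling:
          metric: response_time_ms   # a business-level service policy
          threshold: 800
          instances: { min: 2, max: 10 }
    - type: database
      schema: orders.sql

# Hypothetical virtual system pattern: middleware-topology-centric;
# you enumerate the nodes and manage them with the products' own tools.
virtual_system:
  name: order-entry-topology
  nodes:
    - image: http_server             # one front-end HTTP server
    - image: app_server              # two application server nodes
      count: 2
    - image: db_server               # one database node
```

In the virtual application sketch, elasticity is expressed as a service-level scaling policy; in the virtual system sketch, the node counts are fixed up front and administration follows the traditional model described above.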
IBM PureApplication System virtual pattern types:
- Virtual appliances (standard TCO; existing applications): standard software installation and configuration on an OS; traditional administration and management model; infrastructure-driven elasticity
- Virtual system patterns (improved TCO; virtualized applications): automated deployment of middleware topology patterns; traditional administration and management model; application- and infrastructure-driven elasticity; extend a pattern by creating a custom image
- Virtual application patterns (best TCO; cloud applications): highly automated deployments using expert patterns; business-policy-driven elasticity; built for the cloud environment; leverages elastic workload management services
Figure 1-14. IBM PureApplication System: Additional patterns of expertise NGT113.0
Notes:
These predefined patterns provide the ability to extend your PureApplication System environment into any of these workload areas, in most cases in 60 minutes or less, using the power of IBM's pattern deployment technology.
These additional patterns of expertise cover workload areas including: business process management; mobile development and connectivity (IBM Mobile Application Platform*); asset and facilities management (Maximo Asset Management*); mixed-language application modernization; connectivity, integration, and SOA (DataPower*); application infrastructure; business analytics and data warehousing; information integration and governance; data management; and social collaboration.
* IBM intends to deliver new, separately licensed software patterns.
Figure 1-15. IBM PureSystems topics NGT113.0
Notes:
This section is an overview of IBM PureData System.
Figure 1-16. IBM PureData System NGT113.0
Notes:
IBM PureData System helps clients quickly analyze the influx of data that is being created every day and intelligently use those insights to support specific business goals across their organization, including marketing, sales, and business operations. PureData is an integrated stack of hardware and software with built-in expertise. It delivers greater simplicity to organizations as they look for new, cost-effective ways to find insight in masses of information.
IBM PureData System helps clients reduce complexity, accelerate value, and improve their IT economics. PureData comes in four models that have been designed, integrated, and optimized to deliver data services to today's demanding applications with simplicity, speed, and lower cost.
- PureData System for Hadoop delivers a smarter way to reduce complexity, accelerate time to value, and improve IT economics.
- PureData System for Analytics, which is powered by Netezza technology, is a data warehouse system for high-performance analytics and reporting on large volumes of data. It is an enhanced update to the Netezza appliance; because the Netezza technology is built specifically for analytics, it greatly simplifies both data and system management.
Meeting big data challenges fast and easy:
- System for Transactions: for applications like e-commerce; database cluster services optimized for transactional throughput and scalability
- System for Analytics: for applications like customer analysis; data warehouse services optimized for high-speed, peta-scale analytics and simplicity
- System for Operational Analytics: for applications like real-time fraud detection; operational data warehouse services optimized to balance high-performance analytics and real-time operational throughput
- System for Hadoop: for exploratory analysis and queryable archive; Hadoop data services optimized for big data analytics and online archive with appliance simplicity
- PureData System for Operational Analytics is an operational warehouse system that balances the demands of delivering analytics for real-time decision making in business operations. It handles continuous loading of data, complex data analysis, and 1,000 or more concurrent operational queries.
- PureData System for Transactions is for running highly reliable and scalable transactional databases.
Figure 1-17. IBM PureData System for Hadoop NGT113.0
Notes:
IBM PureData System for Hadoop is built to optimize Hadoop data services for big data analytics
and online archive with appliance simplicity. It delivers enterprise Hadoop capabilities with
easy-to-use analytic tools and visualization for business analysts and data scientists. It comes with
rich developer tools, powerful analytic functions, and exceptional administration and management
capabilities, as well as the latest versions of Hadoop and associated projects. In addition, IBM
PureData System for Hadoop provides extensive capabilities with enhanced big data tools for
monitoring, development, and integration with many more enterprise systems.
IBM PureData System for Hadoop offers simplicity, flexibility, and consumability in a single
integrated system. Following on the principles of Pure Systems, we focused on these three areas in
providing value in the PureData System for Hadoop.
Built in Expertise
Speed to insight with built-in social data, machine data and text analytics accelerators
Speed to value with accelerated deployment
Simplified Experience
No assembly required, data load ready in hours
Exploring and analyzing more types of data
Deploy 8x faster
than custom-built solutions
1
Built-in visualization
to accelerate insight
Unlike big data appliances on the market, PureData
System for Hadoop offers built-in analytic accelerators
2
Single system console
for full system administration
Rapid maintenance updates
with automation
No assembly required, data load ready in hours
Only integrated Hadoop system with built-in
archiving tools
2
Delivered with more robust security
than open source software
Architected for high availability
Provides ability to load data at up to 14TB/hr
1
Based on IBM internal testing and customer feedback. "Custom built clusters" refer to clusters that are not professionally pre-built, pre-tested and optimized. Individual results may vary.
2
Based on current commercially available Big Data appliance product data sheets from large vendors. US ONLY CLAIM.
Accelerate
Big Data
Time to Value
Accelerate
Big Data
Time to Value
Simplify Big Data
Adoption & Consumption
Simplify Big Data
Adoption & Consumption
Implement
Enterprise Class
Big Data
Implement
Enterprise Class
Big Data
IBM PureData System for Hadoop
Copyright IBM Corporation 2012, 2013
Student Notebook
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Copyright IBM Corp. 2012, 2013 Unit 1. IBM PureSystems and IBM Flex System 1-23
V9.0
Uempty
Single system console for full system administration
Rapid maintenance updates with automation
Integration by Design
Hadoop system with built-in archiving tools
Delivered with more robust security than open source software
Architected for high availability
Figure 1-18. IBM PureSystems topics NGT113.0
Notes:
This section is an overview of IBM PureFlex System.
IBM PureSystems family overview
IBM PureApplication System overview
IBM PureData System overview
IBM PureFlex System overview
IBM Flex System overview
IBM PureSystems topics
Figure 1-19. What if we could do the same integration in an optimized IT system? NGT113.0
Notes:
In a traditional IT environment, many times IT directors and administrators are on their own when
attempting to piece together a cohesive and workable solution. This might mean bringing in experts
from many different fields to verify that all the hardware and software proposed in the solution work
seamlessly together.
What if, just as in the smart phone example, all that expertise were already integrated into the
hardware and software? What if the X-Architecture and Power servers already knew what storage
was available and were already attached to it? What if every piece of hardware and software were
seamlessly networked together so all the components could easily communicate with each other?
What if you could use one tool to manage everything?
You do not have to imagine anymore. It is available today from IBM and it is called the IBM
PureFlex System!
Applications, Storage, Networking, Virtualization, Management, Compute, Tools
Flexible and open choice in a fully integrated system
What if we could do the same integration in an
optimized IT system?
Figure 1-20. IBM PureFlex System: Simplified experience NGT113.0
Notes:
The building blocks for the IBM PureSystems family are the IBM Flex System components. The
IBM Flex System components consist of:
IBM Flex System Enterprise Chassis
IBM Flex System Compute Nodes (either X-Architecture or Power)
IBM Flex System V7000 Storage Node
IBM Flex System PCIe and Storage Expansion nodes
IBM Flex System Manager Node with the IBM Flex System Manager software
IBM Flex System network switches
Starts at acquisition: A continuum of value from building blocks to systems
Reduce time, effort, and risk throughout the solution life cycle
IBM PureFlex System: Simplified experience
IBM Flex System building blocks, IBM PureFlex System, IBM PureApplication System
IBM PureFlex System: Pre-configured, pre-integrated infrastructure systems with compute, storage, networking, physical and virtual management, and entry cloud management with integrated expertise
IBM PureApplication System: Pre-configured, pre-integrated platform systems with middleware designed for transactional web applications and enabled for cloud with integrated expertise
Compute nodes: Power 2S/4S, X-Architecture 2S/4S
Storage: Flex System V7000 Storage Node (internal), Storwize V7000 (external)
Chassis: 14 half-wide bays for nodes
Management appliance: Optional
Expansion: PCIe, Storage
Networking: 10/40Gb, FCoE, IB, 8/16Gb FC
Figure 1-21. IBM PureFlex System offerings NGT113.0
Notes:
IBM PureFlex System is a complete, flexible cloud infrastructure system with integrated expertise.
The system integrates and optimizes all compute, storage and networking resources to deliver
infrastructure-as-a-service (IaaS) out of the box. To simplify acquisition of your solution, you can
choose one of two pre-defined and fully integrated, optimized configurations as the starting point:
IBM PureFlex System Express: Designed for small and medium businesses and is the most
affordable entry point for PureFlex System.
IBM PureFlex System Enterprise: Optimized for transactional and database systems and has
built-in redundancy for highly reliable and resilient operation to support your most critical
workloads.
IBM PureFlex System offerings
Express
Enterprise
Infrastructure
for Small and midsize
businesses. Most affordable
entry point
Infrastructure for scalable
cloud deployments.
Redundancy for resilient
operation
Pre-configured, pre-integrated systems with servers, storage and networking
Available in predefined starting points delivered integrated and tested
Flexibility and simplicity
extends from acquisition
to deployment
Figure 1-22. IBM PureFlex System Express (1 of 2) NGT113.0
Notes:
The IBM PureFlex System Express combines advanced IBM hardware and software along with
patterns of expertise and integrates them into an optimized configuration that is simple to acquire
and deploy, so you get fast time to value for your solution.
The table outlines the options within the IBM PureFlex System Express specifications.
IBM PureFlex System Express specifications:
IBM PureFlex System Rack including triplex (blue) door: Optional 42U, 25U, or no rack
IBM Flex System Enterprise Chassis: One
Flex System compute nodes: Selectable (1): p24L, p260, p270, p460, x220, x222, x240, x440
SmartCloud Entry: Default Off
Integrated 10Gb or 1Gb Networking Switch: Selectable option with redundancy
Integrated 16Gb Fibre Channel Switch: Selectable option with redundancy
Integrated IBM Flex System Converged Scalable Switch (FCoE): Selectable option with redundancy
IBM Flex System management node (std/sw): Yes
IBM Flex System Manager Edition: Flex System Manager Base with 1-year service and support
Power supplies (std/max): Two / Six
80 mm fan modules (std/max): Four / Eight
IBM Flex System Chassis Management Module: Two
IBM PureFlex System Express (1 of 2)
Express: Infrastructure for small and midsize businesses; most affordable entry point
Figure 1-23. IBM PureFlex System Express (2 of 2) NGT113.0
Notes:
This table is a continuation of options within the IBM PureFlex System Express specifications.
Express: Infrastructure for small and midsize businesses; most affordable entry point
IBM PureFlex System Express specifications:
IBM Storwize V7000 Disk System or IBM V7000 Storage Node: One required, with a selectable option
IBM Storwize V7000 Software: Required; Base with one-year software maintenance agreement
Supported drive options: V7000 Storage (2x 200GB SSD, 16x 600GB SAS HDD; SCE adds 4x 300GB SAS)
IBM System Networking RackSwitch (Top of Rack): Optional with one chassis; one with more than one chassis
IBM SAN Switch (Top of Rack): Optional with one chassis; one with more than one chassis
Lab Services: 3 days (Intro)
Warranty: Base HW warranty 3-year 9x5, plus Microcode Analysis 3yr/1x, Account Advocate 9x5, and WSU upgrade to 24x7 (selectable)
IBM PureFlex System Express (2 of 2)
Figure 1-24. IBM PureFlex System Enterprise (1 of 2) NGT113.0
Notes:
The IBM PureFlex System Enterprise combines advanced IBM hardware and software along with
patterns of expertise and integrates them into an optimized configuration that is simple to acquire
and deploy, so you get fast time to value for your solution. The Enterprise configuration is optimized for
scalable cloud deployments and has built-in redundancy for highly reliable and resilient operation to
support your critical applications and cloud services.
The table outlines the options within the IBM PureFlex System Enterprise specifications.
Enterprise: Infrastructure system for transactional and database systems; includes redundancy for resilient operations
IBM PureFlex System Enterprise (1 of 2)
IBM PureFlex System Enterprise specifications:
IBM PureFlex System Rack including triplex (blue) door: Optional 42U, 25U, or no rack
IBM Flex System Enterprise Chassis: One / Three
Flex System compute nodes: Selectable (2): p24L, p260, p270, p460, x220, x222, x240, x440
SmartCloud Entry: Default On
Integrated 10Gb Networking Switch: Selectable option with redundancy
Integrated 16Gb Fibre Channel Switch: Selectable option with redundancy
Integrated IBM Flex System Converged Scalable Switch (FCoE): Selectable option with redundancy
IBM Flex System management node (std/sw): Yes
IBM Flex System Manager Edition: Flex System Manager Advanced with 3-year service and support
Power supplies (std/max): Two / Six
80 mm fan modules (std/max): Six / Eight
IBM Flex System Chassis Management Module: Two
Figure 1-25. IBM PureFlex System Enterprise (2 of 2) NGT113.0
Notes:
This table is a continuation of options within the IBM PureFlex System Enterprise specifications.
Enterprise: Infrastructure system for transactional and database systems; includes redundancy for resilient operations
IBM PureFlex System Enterprise (2 of 2)
IBM PureFlex System Enterprise specifications:
IBM Storwize V7000 Disk System or IBM V7000 Storage Node: One required, with a selectable option
IBM Storwize V7000 Software: Required; Base with one-year software maintenance agreement
Supported drive options: V7000 Storage (2x 200GB SSD, 16x 600GB SAS HDD; SCE adds 4x 300GB SAS)
IBM System Networking RackSwitch (Top of Rack): Optional with one chassis; one with more than one chassis
IBM SAN Switch (Top of Rack): Optional with one chassis; one with more than one chassis
Lab Services: 7 days
Warranty: Base HW warranty 3-year 9x5, plus Microcode Analysis 3yr/2x, Account Advocate 9x5, and WSU upgrade to 24x7 (Account Advocate mandatory; others selectable)
Figure 1-26. IBM Flex System topics NGT113.0
Notes:
The IBM Flex System topics we will cover are:
IBM Flex System overview
IBM Flex System platform details including
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System compute nodes
IBM Flex System storage options
IBM Flex System networking
This section is an overview of the IBM Flex System.
IBM Flex System overview
IBM Flex System platform details
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System compute nodes
IBM Flex System storage options
IBM Flex System networking
IBM Flex System topics
Figure 1-27. IBM Flex System Enterprise Chassis NGT113.0
Notes:
The IBM Flex System Enterprise Chassis offers compute, networking, and storage capabilities far
exceeding currently available offerings in the market. With the ability to handle up to 14 compute
nodes, intermixing Power Systems and X-Architecture, the IBM Flex System Enterprise Chassis
provides flexibility and tremendous compute capacity in a 10U package. Flex System is designed to
support the largest memory configurations, with up to 50% more memory for denser virtualization,
networking bandwidths up to 2X higher with half the latency, and up to 3X more dedicated storage in
each compute node than the competition. Flex System supports a broad range of hypervisors and
environments, enabling the choice of tens of thousands of applications.
Additionally, the rear of the chassis accommodates four high speed networking switches
interconnecting compute, networking, and storage through a high performance and scalable
mid-plane. The IBM Flex System Enterprise Chassis is the first chassis capable of supporting 40Gb
Ethernet speeds.
The ground-up design of the IBM Flex System Enterprise Chassis reaches new levels of energy
efficiency through innovations in power, cooling, and air flow. Smarter controls and futuristic
designs allow the IBM Flex System Enterprise Chassis to break free of one-size-fits-all
energy schemes.
Four scalable switch bays
10U chassis, 14 bays
Standard and full width node support
Up to six 2500W or 2100W power supplies in N+N or N+1 configurations
Up to eight cooling fans (scalable)
Integrated chassis management through
Chassis Management Module
IBM Flex System Enterprise Chassis
Figure labels: Front: 10U chassis, 14 node bays (7 full-wide), standard node bays. Rear: power supplies (6x), scalable switch bays, fans, CMM.
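The N+N versus N+1 distinction can be made concrete with a little arithmetic: N+N means half the installed supplies can fail and the survivors still carry the chassis load, while N+1 tolerates a single failure. A minimal sketch, assuming a hypothetical chassis load (the 2500W supply rating is from the text):

```python
# Determine the best power-redundancy scheme a configuration supports.
# Supply wattage comes from the text; the load figure is an assumption.
def redundancy_mode(n_supplies, supply_watts, chassis_load_watts):
    """Return "N+N", "N+1", or "none" for the given configuration."""
    if n_supplies >= 2 and (n_supplies // 2) * supply_watts >= chassis_load_watts:
        return "N+N"   # half the supplies alone can carry the load
    if n_supplies >= 2 and (n_supplies - 1) * supply_watts >= chassis_load_watts:
        return "N+1"   # survives any single supply failure
    return "none"

# Six 2500W supplies against an assumed 7000W load:
print(redundancy_mode(6, 2500, 7000))  # -> N+N (three supplies give 7500W)
# The same load with only four supplies drops to N+1:
print(redundancy_mode(4, 2500, 7000))  # -> N+1
```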
Infrastructure components:
Networking infrastructure supporting multiple fabrics: Ethernet, FCoE, Fibre Channel
Energy efficient cooling and power system
Multi-chassis management
Easy to use with integrated single-point management
Designed to support future advancements in I/O, processors, memory, and storage
The ability to support the demands of tomorrow's workloads is built in, with an I/O
architecture providing choice and flexibility in fabric and speed. With the ability to provide Ethernet,
InfiniBand, FC, FCoE, and iSCSI, the IBM Flex System Enterprise Chassis is uniquely positioned to
meet the growing I/O needs of the IT industry.
With a clean slate design, IBM Flex System allows new levels of integration particularly with the
integration of the Flex System Manager across all physical and virtual resources.
Figure 1-28. IBM Flex System chassis integration of components NGT113.0
Notes:
IBM Flex System chassis delivers high-speed performance complete with integrated servers,
storage, networking, and management all in a single chassis.
Management
Compute
Storage Networking
IBM Flex System chassis integration of
components
Figure 1-29. IBM Flex System topics NGT113.0
Notes:
This section is an overview of the IBM Flex System platform starting with the IBM Flex System
Manager.
IBM Flex System overview
IBM Flex System platform details
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System compute nodes
IBM Flex System storage options
IBM Flex System networking
IBM Flex System topics
Figure 1-30. IBM Flex System Manager NGT113.0
Notes:
IBM Flex System Manager has full, built-in virtualization support of servers, storage, and
networking to speed provisioning and increase resiliency. In addition, it supports open industry
standards such as operating systems, networking and storage fabrics, virtualization, and system
management protocols to easily fit within existing and future data center environments. IBM Flex
System Manager is scalable and extendable with multi-generational upgrades to protect and
maximize IT investments.
Advanced GUI with physical topology maps
Tools to speed deployment
Easy to use with integrated single-point management
Advanced virtualization management through VMControl
Workload migration tools
Standard width compute node
Integrated into chassis
Optimized for the IBM Flex System
environment, with upward integration for
enterprise management
Single point of management for server,
storage, and network
Customizable management profiles to match
your administrative structure
Storage device discovery and coverage in
integrated physical and logical topology
views
IBM Flex System Manager
Figure 1-31. IBM Flex System Manager v1.3 NGT113.0
Notes:
FSM has expanded support to sixteen concurrently managed chassis and 5000 managed
resources.
A utility to manage FSM capacity utilization displays information related to resource utilization
and capacities that may affect FSM performance. This includes monitoring the number of
managed resources, the number of concurrently active users, CPU utilization, I/O utilization,
memory utilization, and disk space.
The FSM supports the additionally announced compute nodes and IOMs.
The FSM has added a display that provides a view of the firmware levels for all chassis
components. Compliance policies are built automatically when new firmware is available.
The number of operating system images that can be loaded onto the FSM for deployment is
expanded from two to five.
Mobile System Management functions are enhanced to include power control, selected LED
controls, and virtual reseat command support.
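The capacity-utilization checks described above amount to comparing current counts against fixed limits. The following sketch shows the idea; the chassis and managed-resource limits come from the text, but the 80% warning threshold and the function shape are invented for illustration.

```python
# Flag managed-capacity metrics nearing their limits, in the spirit
# of the FSM capacity utility described above. The 0.80 threshold is assumed.
LIMITS = {"chassis": 16, "managed_resources": 5000}
WARN_AT = 0.80

def capacity_warnings(current):
    """Return metric names at or above the warning threshold."""
    return [name for name, limit in LIMITS.items()
            if current.get(name, 0) / limit >= WARN_AT]

# 14 of 16 chassis (87.5%) trips the warning; 2100 resources do not.
print(capacity_warnings({"chassis": 14, "managed_resources": 2100}))
```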
IBM Flex System Manager v1.3 features
Support up to 16 chassis concurrently
FSM capacity utilization management
HW currency
Added view to all firmware levels
OS deployment supports five images
Mobile System Management enhancements
IBM Flex System Manager v1.3
Figure 1-32. IBM Flex System topics NGT113.0
Notes:
This section is an overview of the IBM Flex System compute nodes.
IBM Flex System overview
IBM Flex System platform details
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System compute nodes
IBM Flex System storage options
IBM Flex System networking
IBM Flex System topics
Figure 1-33. IBM Flex System compute nodes NGT113.0
Notes:
The IBM Flex System x220 Compute Node, machine type 7906, is the next generation
cost-optimized compute node designed for less demanding workloads and low-density
virtualization. The x220 is efficient and equipped with flexible configuration options and advanced
management to run a broad range of workloads.
The IBM Flex System x222 Compute Node is a high-density dual-server offering that is designed
for virtualization, dense cloud deployments, and hosted clients.
The IBM Flex System x240 Compute Node is a high-performance server that offers outstanding
performance for virtualization with new levels of CPU performance and memory capacity, and
flexible configuration options.
The IBM Flex System x440 Compute Node is a high-density four socket server, optimized for
high-end virtualization, mainstream database deployments, memory-intensive and high
performance environments.
The IBM Flex System p260, p270, p24L, and p460 Compute Nodes are servers based on IBM
Power architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis
units to provide a high-density, high-performance compute node environment, using advanced
IBM Flex System compute nodes
Compute node choices: Leading edge compute technologies deliver an open architecture, operating system, and hypervisor choice.
IBM Flex System x220, x222, x240, x440
IBM Flex System p24L, p260, p270, p460
IBM Flex System PCI Expansion Node
IBM Flex System Storage Expansion Node
processing technology. The nodes support IBM AIX, IBM i, or Linux operating environments and are
designed to run a wide variety of workloads.
The IBM Flex System PCIe Expansion Node provides the ability to attach additional PCI Express
cards, such as High IOPS SSD adapters, fabric mezzanine cards, and next-generation graphics
processing units (GPU), to supported IBM Flex System compute nodes. It can be attached to the
x220 and x240 compute nodes.
The IBM Flex System Storage Expansion Node is a locally attached storage node that is dedicated
and directly attached to a single half-wide compute node. The Storage Expansion Node provides
storage capacity for Network Attach Storage (NAS) workloads, providing flexible storage to match
capacity, performance, and reliability needs. It can be attached to the x220 and x240 compute
nodes.
Figure 1-34. IBM Flex System: X-Architecture compute nodes positioning NGT113.0
Notes:
This chart provides an outlook as to how the X-Architecture compute nodes are positioned within
the IBM Flex System environment.
IBM Flex System: X-Architecture compute nodes positioning
Workload spectrum (chart axis): Infrastructure Applications (file/print/collaboration), Business Applications (SAP, ERP, small DB), Database/Consolidation, Enterprise Performance
x220 (2-socket): Standard platform, dual processor, optional high-value features, energy efficient. Versatile, easy to use, optimized for performance, power, and cooling; flexible design for virtualized and native workloads; great price/performance; no-compromise blade; cost-optimized platform.
x222 (2-socket): Virtual desktop computing, flexible I/O, dense cloud deployments. Double-dense computing and virtualization; optimized for solution cost; efficient infrastructure utilization; flexible I/O to integrate into existing datacenters.
x240 (2-socket): IvyBridge enhanced platform, dual processor, high-density virtualization, standard high-value features. High-density, high-performance node optimized for virtualization; 24 DIMM slots; integrated HW RAID 0, 1 and hot-swap HDDs; performance optimized with flexible I/O capability.
x440 (4-socket): Price-performance optimized; mainstream database, high-end virtualization, and memory-intense workloads. High-performance compute node with leadership compute, memory, and I/O capacity; 48 DIMMs for memory-intense workloads; performance optimized with leadership I/O capacity.
Figure 1-35. IBM Flex System: Power Systems compute node positioning NGT113.0
Notes:
This chart provides an outlook as to how the Power Systems are positioned within the IBM Flex
System environment.
Workload spectrum (chart axis): Infrastructure Applications (file/print/collaboration), Business Applications (SAP, ERP, small DB), Database/Consolidation, Enterprise Performance
p24L (Linux only, 2 sockets, 8 to 16 cores): Optional high-value features; competitive with X-Architecture. Compute node that runs industry-standard Linux from Red Hat or SUSE and exploits the advanced hardware and software capabilities of POWER7 to provide high qualities of service for Linux workloads.
p260 (AIX/IBM i/Linux, 2 sockets, 4 to 16 cores): Flexible configuration; leadership virtualization; application workloads. Highly flexible node with large memory capacity, outstanding performance, industrial-strength virtualization, and workload-optimizing capabilities. Ideal for small-to-midsize database servers and consolidation of virtualized application servers.
p270 (AIX/IBM i/Linux, 2 sockets, 24 cores only): Built-in dual VIOS; leadership virtualization; cloud ready. Combines secure, reliable computing and energy-efficient virtualization, with workload-optimizing capabilities that enable companies to get the most out of their systems by increasing utilization and performance while helping to reduce infrastructure and energy costs.
p460 (AIX/IBM i/Linux, 4 sockets, 16 to 32 cores): Flexible configuration; leadership virtualization; demanding database. High-performance, reliable, secure system that is cloud-enabled. Outstanding offering for mid-market clients; ideal for server consolidation or a high-performing database server.
Processor generations: POWER7, POWER7+ DCM, POWER7+ SCM
IBM Flex System: Power Systems compute node positioning
Figure 1-36. IBM Flex System topics NGT113.0
Notes:
This section is an overview of the IBM Flex System storage options.
IBM Flex System overview
IBM Flex System platform details
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System compute nodes
IBM Flex System storage options
IBM Flex System networking
IBM Flex System topics
Figure 1-37. IBM Flex System storage NGT113.0
Notes:
IBM Flex System offers a broad range of storage capabilities that include:
IBM Flex System Storage Expansion Node
- Mixed configs (SSDs and HDDs)
- Expansion to 12 additional hot swap drives
- Shared storage (x220 and x240 compute nodes only)
Integrated IBM Flex System V7000 Storage Node
Simplifies storage administration
Virtualizes for higher storage utilization
Balances high performance and cost for mixed workloads
IBM offers a wide variety of storage solutions that will work with PureFlex such as DS3500, XIV, and
so on.
IBM Flex System offers a broad range of
storage capabilities:
IBM Flex System Storage Expansion
Node
Mixed configs (SSDs and HDDs)
Expansion to 12 additional hot swap drives
Integrated IBM Flex System V7000
storage node
Simplifies storage administration
Virtualizes for higher storage utilization
Balances high performance and cost for mixed
workloads
Direct attached storage options
Shared storage
IBM Flex System storage
Protects data and minimizes downtime
Figure 1-38. IBM Flex System Storage Expansion Node NGT113.0
Notes:
The IBM Flex System Storage Expansion Node (SEN) is a storage enclosure that attaches to the
IBM Flex System x220 and x240 single-wide compute nodes, providing additional direct-attach
local storage. The Storage Expansion Node provides storage capacity for Network Attach Storage
(NAS) workloads, providing flexible storage to match capacity, performance and reliability needs.
The Storage Expansion Node connects directly to supported compute nodes via a PCIe 3.0
interface to the compute node's interposer connector (also known as the everything-to-everything
or ETE connector). It also supports 12 hot-swap 2.5-inch drives, accessible via a sliding tray.
IBM Flex System Storage Expansion Node
Dedicated direct attached storage for the 2S
Intel x220 and x240 compute nodes
Not supported on the x222 or any Power
compute nodes
Increase storage in following workloads:
Network Attach Storage (NAS)
Video Security/Surveillance
Transactional Data Server
Features:
Connects through a PCIe 3.0 interface to the compute node's expansion connector (ETE)
Up to 12 hot-swap 2.5-inch SAS/SATA HDDs or SSDs
Accessible via the sliding drive tray
Supports RAID 0/1/5/10/50
JBOD also supported
Optional RAID 6 and 60 with a Features on Demand upgrade
IBM Flex System x240 with a Storage Expansion Node attached
IBM Flex System Storage Expansion Node
Up to 19.2 TB of storage
Faster recovery with hot-swap disk capability
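How much of the 19.2 TB maximum remains usable depends on the RAID level chosen. A rough sketch, assuming twelve 1.6 TB drives (the per-drive size is inferred from the 19.2 TB figure, not stated in the text) and textbook capacity rules for each level:

```python
# Usable capacity of a 12-drive enclosure under common RAID levels.
# Drive size is an inference from the 19.2 TB raw maximum above.
def usable_tb(drives, tb_per_drive, level):
    raw = drives * tb_per_drive
    if level == "RAID0":
        return raw                     # striping only, no redundancy
    if level in ("RAID1", "RAID10"):
        return raw / 2                 # mirrored pairs halve capacity
    if level == "RAID5":
        return raw - tb_per_drive      # one drive's worth of parity
    if level == "RAID6":
        return raw - 2 * tb_per_drive  # two drives' worth of parity
    raise ValueError(f"unhandled level: {level}")

for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
    print(level, round(usable_tb(12, 1.6, level), 1), "TB")
```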
Figure 1-39. IBM Flex System V7000 Storage Node NGT113.0
Notes:
The IBM Flex System V7000 consists of a set of drive enclosures, the IBM Flex System V7000
Control Enclosure, and the IBM Flex System V7000 Expansion Enclosure. The IBM Flex System
offers a broad range of storage capabilities that include:
Direct attached storage options
- Up to 24, 2.5-inch internal drives
- Supports mixed configs SAS/NL-SAS hot-swap HDDs or SSDs
Control enclosure contains embedded SSDs for caching applications
For storage management, the IBM Flex System V7000 Storage Node system can have two
management IP addresses
- One for the 1 Gbps internal management network (required)
- One on the first 10 Gbps Ethernet port in the configuration node canister (optional)
Flex System V7000 supports host connectivity through the following optional host interface cards that
connect to the Flex System Enterprise Chassis midplane and its switch modules:
- 10Gb Converged Network Adapter (CNA) 2 Port Daughter Card for FCoE and iSCSI fabric
connections
- 8Gb Fibre Channel (FC) 4 Port Daughter Card for Fibre Channel fabric connections
IBM Flex System V7000 Storage Node
Integrated storage system (double high / full wide)
Shared storage (M/T 4939) control enclosure and
expansion enclosure
Dual canister enclosures:
Supports up to 24 hot swap internal SFF
HDDs/SSDs
Scalable up to 240 HDDs (960 HDDs with 4
system cluster) within/external to the chassis
Clustered systems support up to 960 SFF drives
and 4x the bandwidth in IBM Flex System
Host protocol options:
8Gb FC, 1/10Gb iSCSI, & 10Gb FCoE
Supports IBM Flex System compute nodes across
multiple chassis
Customer installable and maintainable
Control Enclosure
Expansion Enclosure
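The drive counts on this slide follow from simple multiplication, which the sketch below spells out. It assumes every enclosure is fully populated with 24 SFF drives; the 240-per-system and four-system-cluster figures are from the slide.

```python
# Drive-count arithmetic behind the V7000 Storage Node scaling claims.
DRIVES_PER_ENCLOSURE = 24       # SFF drives per control/expansion enclosure
MAX_DRIVES_PER_SYSTEM = 240     # single-system limit from the slide
CLUSTERED_SYSTEMS = 4           # maximum cluster size from the slide

# One control enclosure plus this many expansion enclosures per system,
# assuming all enclosures are fully populated:
max_expansions = MAX_DRIVES_PER_SYSTEM // DRIVES_PER_ENCLOSURE - 1
print(max_expansions)  # -> 9

# Clustering four systems multiplies the drive count:
print(CLUSTERED_SYSTEMS * MAX_DRIVES_PER_SYSTEM)  # -> 960, matching the slide
```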
Figure 1-40. IBM Flex System storage portfolio positioning NGT113.0
Notes:
This chart provides an outlook of the flexible storage options and how they are positioned within the
IBM Flex System environment.
IBM Flex System storage portfolio positioning

The original chart positions the storage options against workload types, ranging from boot and file/print (not shared) through file share/NAS and small databases, up to high-performance, internally shared, and clustered storage:

- Internal storage (1-2 drives): HDD/SSD; RAID 0, 1
- Storage Expansion Node (1-12 drives): RAIDed direct-attach storage; RAID 0, 1, 10, 5, and 6; JBOD-only mode; suited to distributed databases, entry-level NAS, and caching
- Flex System V7000 Storage Node (1-240 drives): internal shared block storage; automatic clustering, zoning, pooling, discovery, and inventory; RAID 0, 1, 10, 5, and 6; FCoE, 10Gb Ethernet, 8Gb Fibre Channel, and iSCSI connectivity; Easy Tier; clustering; JBOD support
- IBM Storwize V7000 and IBM Storwize V7000 Unified (1-240+ drives): external shared block storage; traditional storage implementation flexibility; fits in racks without a chassis; RAID 0, 1, 10, 5, and 6; FCoE, 10Gb Ethernet, 8Gb Fibre Channel, and iSCSI connectivity; Easy Tier; clustering; JBOD support
Figure 1-41. IBM Flex System topics NGT113.0
Notes:
This section is an overview of the IBM Flex System networking.
IBM Flex System topics
- IBM Flex System overview
- IBM Flex System platform details
- IBM Flex System Enterprise Chassis
- IBM Flex System Manager
- IBM Flex System compute nodes
- IBM Flex System storage options
- IBM Flex System networking
Figure 1-42. Networking NGT113.0
Notes:
The IBM Flex System provides many networking options. The IBM Flex System Enterprise Chassis supports up to four scalable switches. Some switches can scale to greater port capacity, which allows customers to pay only for what they need. Many switches support Features on Demand (FoD), which provides the capability to enable additional ports without physically replacing the switch.
Networking

IBM networking offerings:
- Four flexible and open scalable switch modules per chassis
- Supports multiple protocols: Ethernet, FCoE, iSCSI, Fibre Channel, and InfiniBand
- Choice of a full Layer 2/3 switch for the highest flexibility, down to a simple connectivity module for easy setup and management
- Capable of providing up to 16 virtual switch partitions per chassis
- Features on Demand (FoD) port upgrades for switches
- Flexible and simple connection to existing infrastructure; no rip and replace
- Integrated with Flex System Manager for a single point of management

Networking in the system infrastructure:
- Simplifies network deployment through integrated management
- Reduces network complexity through convergence and intelligent fabric monitoring
- Improves network performance through uncompromised I/O throughput
- Fits with existing infrastructure and scales with customer I/O needs
Figure 1-43. IBM Flex System Scalable Switch options NGT113.0
Notes:
These are the scalable switch options that are available for the IBM Flex System Enterprise Chassis.
IBM Flex System Scalable Switch options
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
IBM Flex System EN4091 10Gb Ethernet Pass-thru
IBM Flex System Fabric SI4093 System Interconnect Module
IBM Flex System Fabric EN4093 10Gb Scalable Switch
IBM Flex System Fabric EN4093R 10Gb Scalable Switch
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
IBM Flex System EN6131 40Gb Ethernet Switch
IBM Flex System FC3171 8Gb SAN Switch
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
IBM Flex System IB6131 InfiniBand Switch
Figure 1-44. IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch NGT113.0
Notes:
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch provides unmatched
scalability, performance, convergence, and network virtualization, while also delivering innovations
to help address a number of networking concerns today and providing capabilities that will help you
prepare for the future.
The switch offers full Layer 2/3 switching as well as FCoE Full Fabric and Fibre Channel NPV
Gateway operations to deliver a truly converged integrated solution, and it is designed to install
within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help clients
migrate to a 10Gb or 40Gb converged Ethernet infrastructure and offers virtualization features like
Virtual Fabric and VMready.
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

Physical features:
- 3 x SPARs (switch partitions)
- 42 x 10Gb internal ports
- 2 x 10Gb SFP+ uplinks
- 12 x Omni port SFP+ uplinks (10GbE or 4/8Gb FC)
- 2 x 40Gb QSFP+ uplinks
- 1 x RJ45 management port
- 2 x internal management ports
- 1 x mini-USB RS-232 serial port

FoD scalability:
- Base: 14 x 10Gb internal ports, 2 x 10Gb SFP+ uplink ports, 6 x external SFP+ Omni ports (not specific ports)
- Upgrade 1: additional 14 x 10Gb internal ports, 2 x 40Gb QSFP+ uplink ports
- Upgrade 2: additional 14 x 10Gb internal ports, additional 6 x external SFP+ Omni ports

Features:
- IBM Networking OS
- Layer 2/3 Ethernet functionality
- Easy Connect Mode
- Virtual Fabric / Unified Fabric Port
- VMready
- FCoE Full Fabric / Fibre Channel NPV Gateway (FCF)
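The FoD scheme above enables ports cumulatively: each upgrade adds to whatever is already licensed. A minimal sketch of that accounting, using the port counts listed above (this is an illustration of the licensing model only, not an IBM tool; all names are made up for this example):

```python
# Toy model of CN4093 Features on Demand (FoD) port activation.
# Port counts come from the product description above; this illustrates
# the cumulative licensing scheme and is not an IBM utility.

FOD_LEVELS = [
    # (name, internal 10Gb, 10Gb SFP+ uplinks, Omni ports, 40Gb QSFP+ uplinks)
    ("Base",      14, 2, 6, 0),
    ("Upgrade 1", 14, 0, 0, 2),
    ("Upgrade 2", 14, 0, 6, 0),
]

def enabled_ports(upgrades_installed):
    """Return port counts enabled by Base plus N upgrades (0, 1, or 2)."""
    totals = {"internal": 0, "sfp_uplink": 0, "omni": 0, "qsfp_uplink": 0}
    for _name, internal, sfp, omni, qsfp in FOD_LEVELS[: upgrades_installed + 1]:
        totals["internal"] += internal
        totals["sfp_uplink"] += sfp
        totals["omni"] += omni
        totals["qsfp_uplink"] += qsfp
    return totals

# With both upgrades, all 42 internal and all 12 Omni ports are active.
print(enabled_ports(2))
```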
Figure 1-45. IBM Flex System Fabric SI4093 System Interconnect Module NGT113.0
Notes:
The SI4093 is a transparent network device that eliminates network administration concerns of
Spanning Tree Protocol configuration/interoperability, VLAN assignments, and avoidance of
possible loops. Its operation is invisible to the upstream network.
By emulating a host NIC to the data center core, it accelerates the provisioning of VMs by
eliminating the need to configure the typical access switch parameters.
The SI4093 also offers increased security and performance advantages when configured in VLAN-aware mode. It does not force communications upstream into the network, which reduces latency and generates less network traffic.
IBM Flex System Fabric SI4093 System Interconnect Module

Physical features:
- 3 x SPARs (switch partitions)
- 42 x 10Gb internal ports
- 14 x 10Gb SFP+ uplinks
- 2 x 40Gb QSFP+ uplinks
- 1 x RJ45 management port
- 2 x internal management ports
- 1 x mini-USB RS-232 serial port

FoD scalability:
- Base: 14 x 10Gb internal ports, 10 x 10Gb SFP+ uplink ports
- Upgrade 1: additional 14 x 10Gb internal ports, 2 x 40Gb QSFP+ uplink ports
- Upgrade 2 (Upgrade 1 is a prerequisite): additional 14 x 10Gb internal ports, additional 4 x 10Gb SFP+ uplink ports

Features:
- IBM Networking OS
- Support with ISCLI and System Networking Switch Center
- Layer 2 Ethernet functionality
- No Spanning Tree
- Switch-independent vNIC
- FCoE transit switch operations
- Transparent mode / local-domain mode (VLAN aware)
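Note that, unlike the CN4093, Upgrade 2 on the SI4093 requires Upgrade 1 to be installed first. A toy sketch of that prerequisite rule (illustrative only, not IBM software; the function name is made up for this example):

```python
# Illustrate the SI4093 FoD prerequisite rule described above:
# Upgrade 2 cannot be applied unless Upgrade 1 is already installed.

def valid_upgrade_set(upgrades):
    """upgrades: set of installed FoD upgrade names, e.g. {'upgrade1'}."""
    if "upgrade2" in upgrades and "upgrade1" not in upgrades:
        return False
    return True

print(valid_upgrade_set({"upgrade1", "upgrade2"}))  # True
print(valid_upgrade_set({"upgrade2"}))              # False
```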
Figure 1-46. IBM Flex System EN6131 40Gb Ethernet Switch NGT113.0
Notes:
The IBM Flex System EN6131 40Gb Ethernet Switch, in conjunction with the EN6132 40Gb Ethernet adapter, offers the performance that you need to support 40Gb connectivity end to end. It would
benefit environments that operate clustered databases, parallel processing, transactional services,
and high-performance embedded I/O applications by reducing task completion time and lowering
the cost per operation. This switch offers 14 internal and 18 external 40Gb Ethernet ports that
enable a non-blocking network design. It supports all Layer 2 functions so servers can
communicate within the chassis without going to a Top-of-Rack (ToR) switch, which helps improve
performance and latency.
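The non-blocking claim can be sanity-checked with simple arithmetic: at the port level the switch is non-blocking because its aggregate uplink bandwidth (18 x 40Gb) exceeds its aggregate server-facing bandwidth (14 x 40Gb). A minimal sketch of that check, using the port counts stated above:

```python
# Check the non-blocking claim for the EN6131 using the stated port counts:
# at this level of analysis, a switch is non-blocking when external (uplink)
# bandwidth is at least equal to internal, server-facing bandwidth.

INTERNAL_PORTS = 14    # 40Gb ports to compute nodes
EXTERNAL_PORTS = 18    # 40Gb QSFP+ uplink ports
PORT_SPEED_GBPS = 40

internal_bw = INTERNAL_PORTS * PORT_SPEED_GBPS   # 560 Gbps
external_bw = EXTERNAL_PORTS * PORT_SPEED_GBPS   # 720 Gbps

print(f"internal {internal_bw} Gbps, external {external_bw} Gbps, "
      f"non-blocking: {external_bw >= internal_bw}")
```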
IBM Flex System EN6131 40Gb Ethernet Switch

Physical features:
- 1 x SPAR (switch partition)
- 14 x 40Gb internal ports
- 18 x 40Gb QSFP+ uplink ports
- 1 x 1Gb internal management port
- 1 x RJ45 management port
- 1 x mini-USB RS-232 serial port

FoD scalability: N/A

Features:
- MLNX-OS
- Layer 2 Ethernet functionality
Figure 1-47. Flex System Ethernet module positioning NGT113.0
Notes:
This chart provides an overview of how the scalable switch Ethernet and converged options are positioned within the IBM Flex System environment.
Flex System Ethernet module positioning

The original chart positions the modules against workload types, ranging from simple connectivity and interoperability with the upstream network and infrastructure applications (file/print/collaboration), through simple management, virtualization, cloud computing, and scalability, up to high performance, enterprise applications, telecommunications, and low-latency/high-bandwidth applications:

- EN2092 1Gb Ethernet Scalable Switch: 1Gb connectivity with 10Gb uplinks for an easy transition to 10Gb; Ethernet and iSCSI support
- EN4091 10Gb Ethernet Pass-thru: unmanaged device; non-blocking 10Gb connectivity to the upstream network; Ethernet and FCoE support
- SI4093 System Interconnect Module: simple, low-touch connectivity module with simple setup and management; 10Gb performance; scalable fabric with 40Gb uplinks and an easy transition to 40Gb; Virtual Fabric support reduces I/O cost and complexity; Ethernet, iSCSI, and FCoE (transit) support
- EN4093R 10Gb Scalable Switch: high-performance 10GbE connections with Layer 2/3 function; scalable fabric with 40Gb uplinks and an easy transition to 40Gb; Enhanced Virtual Fabric for reduced I/O cost and complexity; OpenFlow SDN support; Ethernet, iSCSI, and FCoE support
- CN4093 10Gb Converged Scalable Switch: 10Gb convergence within the chassis; connects to existing LAN and SAN networks with native FC ports and FCF support for the storage node; Layer 2/3 function; scalable fabric with 40Gb uplinks and an easy transition to 40GbE; Enhanced Virtual Fabric for reduced I/O cost and complexity; Ethernet and FCoE support
- EN6131 40Gb Ethernet Switch: high-performance, low-latency, high-bandwidth 40Gb Ethernet with eighteen uplinks
Figure 1-48. Evolutionary and game-changing NGT113.0
Notes:
In summary, the IBM Flex System is evolutionary and game-changing at the same time. You have
seen how it integrates multiple server architectures, networking, storage and system management
capability into a single system that is easy to deploy and manage. IBM Flex System has full built-in
virtualization support of servers, storage, and networking to speed provisioning and increase
resiliency. In addition, it supports open industry standards, such as operating systems, networking
and storage fabrics, virtualization, and system management protocols, to easily fit within existing
and future data center environments. IBM Flex System is scalable and extendable with
multi-generation upgrades to protect and maximize IT investments.
Evolutionary and game changing

IBM Flex System:
- Integrating flexibility and simplicity
- Providing control, efficiency, agility
- Integrated systems and management with no compromises
- Breakthrough management and storage integration
- High-bandwidth integrated networking
- Built-in expertise of thousands of data center optimizations
Figure 1-49. Glossary NGT113.0
Notes:
This slide presents a glossary of terms used in this unit.
Glossary
- Expert integrated systems
- IBM PureSystems
- IBM PureFlex System
- IBM PureApplication System
- IBM PureData System
- IBM PureData System for Hadoop
- IBM PureData for Analytics
- IBM PureData System for Operational Analytics
- IBM PureData System for Transactions
- IBM Flex System
- IBM Flex System Enterprise Chassis
- IBM Flex System x220 Compute Node
- IBM Flex System x222 Compute Node
- IBM Flex System x240 Compute Node
- IBM Flex System x440 Compute Node
- IBM Flex System p24L Compute Node
- IBM Flex System p260 Compute Node
- IBM Flex System p270 Compute Node
- IBM Flex System p460 Compute Node
- IBM Flex System Manager Node
- IBM Flex System Manager
- IBM Flex System V7000 Storage Node
- IBM Flex System Storage Expansion Node
- Platform management
Figure 1-50. Checkpoint (1 of 2) NGT113.0
Notes:
Checkpoint (1 of 2)

1. Which of the following IBM PureApplication System X-Architecture models offers 96 - 608 core options?
   a. IBM PureApplication W1500 (25U)
   b. IBM PureApplication W1500 (42U)
   c. IBM PureApplication W1700 (42U)
   d. Both the IBM PureApplication W1500 (42U) and IBM PureApplication W1700 (42U)
2. Which of the following IBM PureFlex offerings is shipped standard with the IBM Flex System Manager Advanced?
   a. IBM Flex System Express
   b. IBM Flex System Enterprise
   c. IBM Flex System Chassis

Write down your answers here:
1.
2.
Figure 1-51. Checkpoint (2 of 2) NGT113.0
Notes:
Checkpoint (2 of 2)

3. Which of the following is an example of an IBM PureApplication platform pattern of expertise?
   a. Operating system provisioning to compute nodes
   b. SAP application deployment
   c. Web application deployment
   d. Automated configuration of compute nodes
4. Which of the following storage options provides direct-attach local storage to a compute node?
   a. IBM Flex System Storage Expansion Node
   b. IBM Flex System Storwize V7000 Unified
   c. IBM Flex System V7000 Storage Node
   d. IBM Flex System Storwize V7000

Write down your answers here:
3.
4.
Figure 1-52. Unit summary NGT113.0
Notes:
Unit summary

Having completed this unit, you should be able to:
- Summarize the features of IBM PureSystems and IBM PureFlex System
- Identify the major elements of IBM PureSystems and IBM PureFlex System
- Summarize the features of the IBM Flex System
- Identify the major elements of the IBM Flex System
- Differentiate between the IBM Flex System and traditional IT solutions
- Explain how the IBM Flex System will fundamentally change the way IT solutions are provided
Unit 2. IBM Flex System Enterprise Chassis
What this unit is about
This unit is an overview of the IBM Flex System Enterprise Chassis.
What you should be able to do
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System Enterprise Chassis
Identify the major elements of the IBM Flex System Enterprise Chassis
Explain the power features of the IBM Flex System Enterprise Chassis
Explain the cooling features of the IBM Flex System Enterprise Chassis
Explain the management features of the IBM Flex System Enterprise
Chassis
How you will check your progress
Checkpoint questions
Figure 2-1. Unit objectives NGT113.0
Notes:
Unit objectives

After completing this unit, you should be able to:
- Summarize the features of the IBM Flex System Enterprise Chassis
- Identify the major elements of the IBM Flex System Enterprise Chassis
- Explain the power features of the IBM Flex System Enterprise Chassis
- Explain the cooling features of the IBM Flex System Enterprise Chassis
- Explain the management features of the IBM Flex System Enterprise Chassis
Figure 2-2. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
IBM Flex System Enterprise Chassis topics

The topics that are covered are:
- IBM Flex System Enterprise Chassis overview and architecture
- IBM Flex System Enterprise Chassis components
  - Power supplies
  - Fan modules
  - Fan logic module
  - Front information panel
  - Cooling
- IBM Flex System Enterprise Chassis I/O architecture
- Chassis Management Module

This section covers the IBM Flex System Enterprise Chassis overview and architecture.
Figure 2-3. At a glance NGT113.0
Notes:
The IBM Flex System Enterprise Chassis is the foundation of the Flex System offering, which
features 14 standard (half-width) Flex System form factor compute node bays in a 10U chassis that
delivers high-performance connectivity for your integrated compute, storage, networking, and
management resources.
The chassis is designed to support multiple generations of technology, and offers independently
scalable resource pools for higher usage and lower cost per workload. With the ability to handle up to 14 nodes, supporting the intermixing of IBM Power Systems and Intel x86, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10U package. Additionally, the rear of the chassis accommodates four high-speed I/O bays that can accommodate up to 40Gb Ethernet, 16Gb Fibre Channel, or 56Gb InfiniBand. By interconnecting compute nodes, networking, and storage through a high-performance, scalable midplane, the Enterprise Chassis can support the latest high-speed networking technologies.
The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency through
innovations in power, cooling, and air flow. Simpler controls and futuristic designs allow the Enterprise Chassis to break free of one-size-fits-all energy schemes.
At a glance

- Support for technology advancements: current and future high-performance processors, memory, I/O, and storage
- Fast, faster, fastest: faster internal bus speeds, faster I/O speeds
- Energy efficiency: high-efficiency components, sophisticated control systems
- Support for Intel and Power Systems nodes in the same chassis
- Large portfolio of switch and I/O adapter choices and speeds
- Integrated management node and chassis management elements

System specifications:
- 10U in size
- 14 horizontal, half-width bays
- Support for x86 and Power Systems compute nodes
- IBM Flex System Manager Node
- Includes Chassis Management Module
- Up to six 2500 W or 2100 W power supplies
- Up to four I/O modules
- Two 40 mm fan modules
- Up to eight 80 mm fan modules
- Two fan logic modules
The ability to support the demands of tomorrow's workloads is built in with a new I/O architecture, which provides choice and flexibility in fabric and speed.
Ethernet, InfiniBand, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI, the
Enterprise Chassis is uniquely positioned to meet the growing and future I/O needs of large and
small businesses.
Figure 2-4. Product overview (1 of 3) NGT113.0
Notes:
IBM Flex System, a category of computing and the next generation of Smarter Computing, is
anchored by the IBM Flex System Enterprise Chassis. This platform offers intelligent workload
deployment and management for maximum business agility. This chassis delivers high-speed
performance complete with integrated servers, storage, and networking for multi-chassis
management in data center compute environments. Furthermore, its flexible design can meet the
needs of varying workloads with independently scalable IT resource pools for higher utilization and
lower cost per workload. While increased security and resiliency protect vital information and
promote maximum uptime, the integrated, easy-to-use management system reduces setup time
and complexity, providing a quicker path to ROI.
The IBM Flex System Enterprise Chassis is a rack-optimized, 10U modular design enclosure that
holds up to 14 half-width nodes. It features:
High-efficiency 2500 W or 2100 W power supplies.
Hot-swap 80 mm and 40 mm redundant fans.
Support for x86 and Power compute nodes.
Product overview (1 of 3)

IBM Flex System Enterprise Chassis:
- 10U server platform
- Front access components
- Seven (full-width) horizontally oriented bays
- Height of each bay is 57 mm
- Each chassis bay supports either:
  > Two half-width nodes
  > One full-width node
- Half-width node limited to single height
- Full-width node supports double height
A Chassis Management Module (CMM) that gives you control over the solutions at the chassis
level, simplifying installation and management of your installation. A second management
module is optional.
Support for up to four traditional fabrics using networking switches, storage switches, or pass-through devices. The chassis also supports up to four switches, scalable to eight logical switches, and supports protocols such as Ethernet, Fibre Channel, FCoE, iSCSI, and InfiniBand.
The IBM Flex System Enterprise Chassis is a rack-optimized, 10U modular design enclosure that holds up to 14 half-width nodes. The seven full-width node bays are horizontally oriented and are 57 mm in height (pitch). Each compute node bay supports either:
Two half-width nodes (approx. 217 mm or 8.5 inch wide)
One full-width node (approx. 435 mm or 17 inch wide)
The half-width node is limited to single height (one half-bay x 57 mm). The full-width node
supports double-height (one full-bay x 114 mm).
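The bay rules above can be expressed as a small capacity model: seven full-width bays give 14 half-width slots; a full-width node takes a whole bay (two slots); and a full-width, double-height node takes two bays (four slots). A toy sketch (illustrative only; the names and helpers are made up for this example, not configuration software):

```python
# Toy capacity model for the Enterprise Chassis front bays.
# The chassis has 7 full-width bays, each holding two half-width slots,
# for 14 half-width node positions in total (from the text above).

HALF_WIDTH_SLOTS = 14

# Slots consumed per node form factor:
#   half-width, single-height node  -> 1 slot
#   full-width, single-height node  -> 2 slots (one whole bay)
#   full-width, double-height node  -> 4 slots (two whole bays)
SLOT_COST = {"half": 1, "full": 2, "full_double": 4}

def slots_used(nodes):
    """nodes: mapping of form factor name -> quantity installed."""
    return sum(SLOT_COST[ff] * qty for ff, qty in nodes.items())

def fits(nodes):
    """True when the mix of nodes fits in one chassis."""
    return slots_used(nodes) <= HALF_WIDTH_SLOTS

# Example: 6 half-width compute nodes plus one double-height storage node.
config = {"half": 6, "full_double": 1}
print(slots_used(config), fits(config))   # 10 slots used, fits
```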
Figure 2-5. Product overview (2 of 3) NGT113.0
Notes:
The IBM Flex System Enterprise Chassis rear access components consist of:
Four vertically oriented, I/O module bays
Two Chassis Management Module bays
Two fan logic modules
Two 40 mm fan module bays
Eight 80 mm fan module bays
Six power supply bays
Product overview (2 of 3)

Rear access components:
- Four vertically oriented I/O module bays
- Two Chassis Management Module bays
- Two fan logic modules
- Two 40 mm fan module bays
- Eight 80 mm fan module bays
- Six power supply bays
Figure 2-6. Product overview (3 of 3) NGT113.0
Notes:
IBM Flex System Enterprise Chassis major components include the following:
Chassis Management Modules
IBM Flex System Manager Node
Compute nodes
Storage nodes
Expansion nodes
I/O modules
Power supply modules
Fan distribution cards
Fan modules
Fan logic modules
Front and rear LED panel
Product overview (3 of 3)

IBM Flex System Enterprise Chassis major components include:
- Chassis Management Modules
- IBM Flex System Manager Node
- Compute nodes
- Storage nodes
- Expansion nodes
- I/O modules
- Power supply modules
- Fan distribution cards
- Fan modules
- Fan logic modules
- Front and rear LED panel
Figure 2-7. Models (type 8721) NGT113.0
Notes:
As shown in the table, model 8721 of the IBM Flex System Enterprise Chassis includes the
following components:
One IBM Flex System Enterprise Chassis
One Chassis Management Module
Either two 2500 W power supplies or two 2100 W power supplies depending on model
Four 80 mm Fan Modules
Two 40 mm Fan Modules
One Console Breakout Cable
Two C19 to C20 2M Power Cables
One Rack Mount Kit
The table lists the current models of the IBM Flex System Enterprise Chassis, the included
components, and the quantity of each.
Models (type 8721)

8721-A1x   8721-LRx   Description
1          1          IBM Flex System Enterprise Chassis
1          1          Chassis Management Module
2          0          2500 W power supply
0          2          2100 W power supply
4          4          80 mm fan module
2          2          40 mm fan module
1          1          Console Breakout Cable
2          2          C19 to C20 2M power cables
1          1          Rack mount kit
Figure 2-8. Front view NGT113.0
Notes:
This shows the IBM Flex System Enterprise Chassis from the front. The front of the chassis has 14
horizontal bays (half-width) with removable dividers that allow nodes and future elements to be
installed within the chassis. The nodes can be installed with the chassis powered. In addition to the
14 node bays, the front of the chassis has the following features:
The front information panel that is on the lower left of the chassis
Lower airflow inlet apertures that provide air cooling for switches, Chassis Management
Module, and power supplies
Upper airflow inlet apertures that provide cooling for power supplies
The chassis employs a die-cast mechanical bezel for rigidity. This kind of chassis construction allows for tight tolerances between the nodes, shelves, and chassis bezel to ensure accurate location and mating of connectors to the midplane. The chassis midplane and I/O adapters are
described in further detail later.
The IBM Flex System Enterprise Chassis is 10U in height (440 mm), 447 mm in width, and 800 mm
in depth.
The graphic highlights examples of both a half-width node and full-width node and the location of
the front information panel.
Front view

(Figure labels: front information panel; chassis bays containing a half-width node; chassis bays containing a full-width node; dimensions: 10U (440 mm) high, 447 mm wide, 800 mm deep)
Figure 2-9. Rear view NGT113.0
Notes:
This shows the IBM Flex System Enterprise Chassis from the rear. The following components can
be installed in the rear of the chassis:
Up to two Chassis Management Modules (or CMMs).
Up to six 2500 W or 2100 W power supplies.
Mixing of 2500 W and 2100 W power supplies within a chassis is not supported.
Up to six fan modules consisting of four 80 mm fan modules and two 40 mm fan modules.
Additional fan modules can be installed to a total of ten modules.
Up to four I/O modules
Rear view

(Figure labels: four I/O module bays; two 40 mm fan bays; eight 80 mm fan bays; six power supply bays; two Chassis Management Module bays)
Figure 2-10. Chassis component parts NGT113.0
Notes:
This shows the component parts of the chassis, with the shuttle removed. The shuttle forms the
rear of the chassis where the I/O modules, power supplies, fan modules, and Chassis Management
Modules are installed. The shuttle is removed only to gain access to the midplane or fan distribution
cards in the rare event of a service action.
Within the chassis, a personality card holds VPD and other information relevant to the particular
chassis. This card can be replaced only under service action, and is not normally accessible. The
personality card is attached to the midplane as shown in the graphic.
Chassis component parts
Figure 2-11. Midplane: Front view NGT113.0
Notes:
This 3-D mechanical drawing shows the front of the midplane of the IBM Flex System Enterprise
Chassis. The midplane is the circuit board that interconnects the compute nodes from the front of
the chassis, and I/O modules, fan modules, and power supply modules from the rear of the chassis.
The midplane is located within the chassis and, if required, can be accessed by removing the shuttle assembly. Removing the midplane is rare and only necessary in the case of a service action.
The midplane is passive, which is to say that there are no electronic components on it. The
midplane has apertures to allow air to pass through and when no node is installed in a standard
node bay, the air damper is closed totally for that bay, providing highly efficient cooling.
Highlighted here are the midplane connectors for power, management, and I/O adapters.
As was previously mentioned, the midplane of the IBM Flex System Enterprise Chassis is passive.
In other words, it contains no electronic components. This is in contrast to the midplane of the IBM
BladeCenter chassis.
Midplane: Front view

(Figure labels: node power connectors; management connectors; I/O adapter connectors)
Student Notebook
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Copyright IBM Corp. 2012, 2013 Unit 2. IBM Flex System Enterprise Chassis 2-15
V9.0
Uempty
Figure 2-12. Midplane: Rear view NGT113.0
Notes:
This 3-D mechanical drawing shows the rear of the midplane of the IBM Flex System Enterprise
Chassis. The drawing shows the location of the I/O module connectors, the Chassis Management
Module connectors, the power supply connectors, the personality card connector, and the fan
power and signal connectors.
The personality card holds VPD and other information relevant to the particular chassis.
Figure 2-13. Compute node insertion NGT113.0
Notes:
The compute nodes and IBM Flex System Manager are inserted into the front of the IBM Flex
System Enterprise Chassis after removing any installed bay fillers. The other components are
inserted into the rear of the chassis. The graphic shows removal of a bay filler and insertion of a
compute node into the front of the chassis.
If a full-width node needs to be inserted into the chassis, the chassis shelf also needs to be
removed. If a full-width, double-height node needs to be inserted, two chassis shelves must be
removed.
Figure 2-14. Chassis bay numbering (1 of 3) NGT113.0
Notes:
The numbering of the IBM Flex System Enterprise Chassis node bays is based on half-width bays,
starting at the lower left and moving to the upper right. For example, the lower-left bay is bay
number 1; the upper-right bay is bay number 14.
Chassis bay numbering (1 of 3): fourteen half-width bays
  Bay 13 | Bay 14
  Bay 11 | Bay 12
  Bay  9 | Bay 10
  Bay  7 | Bay  8
  Bay  5 | Bay  6
  Bay  3 | Bay  4
  Bay  1 | Bay  2
Figure 2-15. Chassis bay numbering (2 of 3) NGT113.0
Notes:
In this example, two full-width nodes are installed in the chassis, in what would normally be
referenced as bays 3 and 4 and bays 7 and 8. However, the correct designation of a full-width
node is its lowest-numbered bay: in this case, bay 3 and bay 7.
Another important point to notice is that the left-hand bay just above the full-width nodes does not
change its numbering (in this case, bays 5 and 9).
Understanding the correct numbering of chassis bays is critical when managing the chassis
through either the Chassis Management Module or IBM Flex System Manager.
Chassis bay numbering (2 of 3): ten half-width / two full-width
  Bay 13 | Bay 14
  Bay 11 | Bay 12
  Bay  9 | Bay 10
  Bay  7 (full-width)
  Bay  5 | Bay  6
  Bay  3 (full-width)
  Bay  1 | Bay  2
Figure 2-16. Chassis bay numbering (3 of 3) NGT113.0
Notes:
This example mixes half-width nodes, full-width nodes, and full-width, double-height nodes. Even
when mixing nodes of these types, the numbering rules remain the same: the node is numbered
using the lowest bay number. In this case, the full-width node is bay number 3 and the full-width,
double-height node is bay number 7.
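These numbering rules can be illustrated with a short Python sketch. The helper names are
hypothetical (not part of any IBM tooling); the model simply encodes the rule that bays run 1 to 14
from the lower left to the upper right and that a node is designated by the lowest bay it occupies.

```python
def full_width_bays(lowest_bay, double_height=False):
    """Half-width bay numbers occupied by a full-width node.

    Bays are numbered 1-14 from the lower left to the upper right, so a
    full-width node spans an odd (left-column) bay and the even bay to
    its right. The bay directly above bay n is bay n + 2, so a
    double-height node also takes the next two bays up.
    """
    bays = [lowest_bay, lowest_bay + 1]
    if double_height:
        bays += [lowest_bay + 2, lowest_bay + 3]
    return bays

def node_designation(occupied_bays):
    """A node is always designated by the lowest bay it occupies."""
    return min(occupied_bays)

print(node_designation(full_width_bays(3)))    # 3 (occupies bays 3 and 4)
print(full_width_bays(7, double_height=True))  # [7, 8, 9, 10]
```

This matches the examples above: a full-width node in bays 3 and 4 is designated bay 3, and a
full-width, double-height node starting at bay 7 consumes bays 7 through 10.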
Chassis bay numbering (3 of 3): eight half-width / one full-width / one full-width, double-height
  Bay 13 | Bay 14
  Bay 11 | Bay 12
  Bay  7 (full-width, double-height)
  Bay  5 | Bay  6
  Bay  3 (full-width)
  Bay  1 | Bay  2
Figure 2-17. Chassis air filter NGT113.0
Notes:
There is an optional airborne contaminate filter (P/N 43W9055) that can be fitted to the front of the
chassis as shown in the graphic.
Figure 2-18. Hot plug and hot swap components NGT113.0
Notes:
The IBM Flex System Enterprise Chassis follows IBM's consistent color-coding scheme for touch
points and hot-swap components.
Touch points are blue and are found on the fillers that cover empty fan and power supply bays, on
the handles of nodes, and on other items that are not hot-swap.
Hot-swap components have orange touch points. Orange tabs are found on fan modules, fan logic
modules, power supplies, and I/O module handles. Orange designates items that are hot-swap
and can be both removed and replaced while the chassis is powered.
Nodes can be plugged into the chassis while the chassis is powered, but a node should be
powered off prior to removal.
Notes from table:
a - Node should be powered off (in standby) before removal.
b - I/O module may require reconfiguration, and removal is disruptive to any communications
taking place.
Hot plug and hot swap components
  Component          | Hot plug | Hot swap
  Node               | Yes      | No (a)
  I/O module         | Yes      | Yes (b)
  40 mm fan pack     | Yes      | Yes
  80 mm fan pack     | Yes      | Yes
  Power supply       | Yes      | Yes
  Fan logic module   | Yes      | Yes
Figure 2-19. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the IBM Flex System Enterprise Chassis power supplies.
IBM Flex System Enterprise Chassis topics
- IBM Flex System Enterprise Chassis overview and architecture
- IBM Flex System Enterprise Chassis components
  - Power supplies
  - Fan modules
  - Fan logic module
  - Front information panel
  - Cooling
- IBM Flex System Enterprise Chassis I/O architecture
- Chassis Management Module
Figure 2-20. IBM Flex System 2100 W power supply option NGT113.0
Notes:
In November 2012, IBM announced a power supply option for the IBM Flex System Enterprise
Chassis: a 2100 W power supply. The 2100 W power supplies provide a more cost-effective
solution for deployments with lower power demands. The 2100 W power supplies also have the
advantage that they draw a maximum of 11.8 A, as opposed to the 13.8 A of the 2500 W power
supply. This means that on a 30 A supply, which is UL derated to 24 A when using a PDU,
two 2100 W supplies may be connected to the same PDU with 0.4 A remaining. Thus, for the
30 A UL-derated PDU deployments that are common in North America, the 2100 W power supply
may be advantageous.
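The PDU arithmetic above can be sketched in a few lines of Python. This is a simplified model for
illustration only: the 0.8 UL derating factor and the per-supply amperages are taken from the
paragraph above, and the function name is hypothetical.

```python
import math

def supplies_per_circuit(breaker_amps, derate_factor, supply_max_amps):
    """Maximum number of power supplies on one PDU circuit.

    UL derating limits continuous draw to a fraction of the breaker
    rating (0.8 for a 30 A North American circuit gives 24 A usable).
    """
    usable_amps = breaker_amps * derate_factor
    return math.floor(usable_amps / supply_max_amps)

# 2100 W supply draws at most 11.8 A; 2500 W supply draws 13.8 A.
print(supplies_per_circuit(30, 0.8, 11.8))  # 2 (23.6 A used, 0.4 A spare)
print(supplies_per_circuit(30, 0.8, 13.8))  # 1
```

This reproduces the comparison above: two 2100 W supplies fit on one derated 30 A circuit, while
only one 2500 W supply does.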
The 2100 W power modules support N+1 redundant power only and do not support the IBM Power
Systems compute nodes.
IBM Flex System 2100 W power supply option
- Enables a lower price point for chassis configurations with reduced power requirements
- Maintains full support of networking and integrated storage options
Figure 2-21. Power supply location and numbering NGT113.0
Notes:
A maximum of six power supplies may be installed within the IBM Flex System Enterprise Chassis.
The 2500 W power supplies are rated at 2500 watts output at 200 VAC, with over-subscription to
3538 watts output at 200 VAC. The power supplies also contain two independently powered 40 mm
cooling fans that draw power from the midplane, not from the power supply itself.
The 2100 W power supplies are rated at 2100 watts output at 200-240 VAC. Similar to the 2500 W
unit, this power supply also supports over-subscription; the 2100 W unit can run at up to 2895 W
for short durations. As with the 2500 W units, the 2100 W supplies include two 40 mm cooling fans
within the power supply assembly that are powered independently from the midplane.
Both the 2500 W and 2100 W power supplies are 80 PLUS Platinum certified. 80 PLUS is a
performance specification for power supplies used within servers and computers. To meet the
Platinum level for this class of supply, the power supply must have a power factor (PF) of 0.95 or
greater at 50% of rated load, and efficiency equal to or greater than 90% at 20% of rated load,
94% at 50% of rated load, and 91% at 100% of rated load. The specification has several grades:
Bronze, Silver, Gold, and Platinum.
Further information on 80 PLUS can be found at http://www.80PLUS.org.
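The thresholds above can be captured in a small compliance check. This is an illustrative sketch:
the threshold table is transcribed from the paragraph above (not from the 80 PLUS specification
text itself), and the function name is hypothetical.

```python
# Minimum efficiency by load fraction, as described above for this supply class.
PLATINUM_EFFICIENCY = {0.20: 0.90, 0.50: 0.94, 1.00: 0.91}
MIN_POWER_FACTOR_AT_50 = 0.95

def meets_platinum(measured_eff, pf_at_50):
    """measured_eff maps load fraction to measured efficiency (0..1)."""
    if pf_at_50 < MIN_POWER_FACTOR_AT_50:
        return False
    return all(measured_eff.get(load, 0.0) >= required
               for load, required in PLATINUM_EFFICIENCY.items())

print(meets_platinum({0.20: 0.91, 0.50: 0.945, 1.00: 0.92}, pf_at_50=0.96))  # True
print(meets_platinum({0.20: 0.89, 0.50: 0.945, 1.00: 0.92}, pf_at_50=0.96))  # False
```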
Power supply location and numbering: six power supply bays (bays 1 to 6) at the rear of the chassis.
Figure 2-22. Power policies NGT113.0
Notes:
There are five power management policies that may be selected to dictate how the chassis is
protected in the case of potential power module or supply failures. These policies are configured
within the Chassis Management Module graphical interface. They are:
AC power source redundancy: Power is allocated under the assumption that no throttling of the
nodes is allowed should a power supply fault occur. This is an N+N configuration.
AC power source redundancy with compute node throttling allowed: Power is allocated under
the assumption that throttling of the nodes is allowed should a power supply fault occur. This is
an N+N configuration.
Power module redundancy: Maximum input power is limited to one less than the number of
power modules when more than one power module is present. One power module can fail
without affecting compute node operation. Multiple power module failures can cause the chassis
to power off. Some compute nodes may not be able to power on if doing so would exceed the
power policy limit.
Power module redundancy with compute node throttling allowed: This can be described as
over-subscription mode. Operation in this mode assumes that a node's load can be reduced, or
throttled, to the continuous load rating within a specified time following the loss of one or more
power supplies. The power supplies can exceed their continuous rating of 2500 W for short
periods. This is an N+1 configuration.
Basic power management: This allows the total output power of all power supplies to be used.
When operating in this mode, there is no power redundancy. If a power supply should fail, or an
AC feed to one or more supplies is lost, the entire chassis may shut down. There is no power
throttling.
If for any reason there is not enough DC power available to meet the installed load demand in the
chassis, the Chassis Management Module automatically begins powering down devices to
reduce the load demand.
There are also two power capping policies; the chassis runs with one or the other setting:
No power capping: Maximum input power is determined by the active power redundancy
policy.
Static capping: This sets an overall chassis limit on the maximum input power. In a situation
where powering on a component would cause the limit to be exceeded, the component is
prevented from powering on.
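A simplified model shows how the redundancy policies trade usable capacity for protection. This
sketch ignores throttling, over-subscription, and capping, and the function name is illustrative,
not part of any IBM tooling.

```python
def available_power(num_supplies, watts_per_supply, policy):
    """Usable chassis power budget under a redundancy policy.

    N+N reserves half the supplies for a failed AC feed, N+1 reserves
    one supply, and basic power management reserves none.
    """
    if policy == "N+N":
        usable = num_supplies // 2
    elif policy == "N+1":
        usable = num_supplies - 1
    elif policy == "basic":
        usable = num_supplies
    else:
        raise ValueError("unknown policy: %s" % policy)
    return usable * watts_per_supply

# Six 2500 W supplies under each policy:
print(available_power(6, 2500, "N+N"))    # 7500
print(available_power(6, 2500, "N+1"))    # 12500
print(available_power(6, 2500, "basic"))  # 15000
```

The model makes the trade-off concrete: basic power management exposes the full output of all
supplies, but a single failure can then take the chassis down.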
Figure 2-23. Power supply NGT113.0
Notes:
The power supplies are 80 PLUS Platinum certified and are rated at either 2500 or 2100 watts
output at 200 VAC. The power supplies also contain two independently powered 40 mm cooling
fans that draw power from the midplane, not from the power supply itself.
Both the DC and AC signals from the power supplies are monitored, which allows the Chassis
Management Module to track them accurately. The integral power supply fans are not dependent
upon the power supply being functional; they operate and are powered independently from the
midplane.
The graphic shows a power supply and the location of the removal latch, pull handle, and AC
power, DC power, and fault LEDs. On the rear of the power supply there is a C20 inlet socket for
connection to power cables, such as a C19-C20 power cable which can connect to a suitable IBM
DPI rack PDU.
The rear LEDs are:
AC Power: When lit green, this LED indicates power is being supplied to the PSU inlet.
DC Power: When lit green, this LED indicates DC power is being supplied to the chassis
midplane.
Fault: When lit amber, this LED indicates a fault with the PSU.
Before you remove any power supplies, ensure that the remaining power supplies have sufficient
capacity to power the Enterprise Chassis. Power usage information can be found in the Chassis
Management Module web interface.
All power supply outputs are combined into a single 12.2 V DC power domain within the chassis.
This domain distributes power to each of the compute nodes, I/O modules, and ancillary
components through the Enterprise Chassis midplane.
Figure 2-24. Power supply selection matrix (1 of 3) NGT113.0
Notes:
The table provides guidelines regarding the number of x220 and x222 compute nodes that can be
installed in a chassis. The actual number of x220 and x222 compute nodes that can be installed in
a chassis depends on:
The TDP power rating for the processors that are installed in the x220 and x222
The number of power supplies installed in the chassis
The capacity of the power supplies installed (2100 W or 2500 W)
The power redundancy policy used (N+1 or N+N)
For more guidance, use the Power Configurator, found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
As you can see, a full complement of compute nodes at any TDP rating is supported if all six
power supplies are installed and an N+1 power policy is selected.
In the table:
N+1 or N+N: The power redundancy policy used
Power supply selection matrix (1 of 3)

                    2100 W power supplies installed     2500 W power supplies installed
                    N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3     N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3
                    6 PSUs  5 PSUs  4 PSUs  6 PSUs      6 PSUs  5 PSUs  4 PSUs  6 PSUs
x220 TDP processor rating
  50 W              14      14      14      14          14      14      14      14
  60 W              14      14      14      14          14      14      14      14
  70 W              14      14      14      14          14      14      14      14
  80 W              14      14      14      14          14      14      14      14
  95 W              14      14      14      14          14      14      14      14
x222 TDP processor rating
  50 W              14      14      13      14          14      14      14      14
  60 W              14      14      12      13          14      14      14      14
  70 W              14      14      11      12          14      14      14      14
  80 W              14      14      10      11          14      14      13      14
  95 W              14      13       9      10          14      14      12      13
N=number: Number of power supplies required to meet the minimum power requirements
For example, "N+1, N=5, 6 power supplies" should be read as: chassis power requirements are
based on 5 power supplies, and using an N+1 power policy requires a total of 6 power supplies
installed.
Gray: No restriction to the number of x220 or x222 compute nodes that are installable
White: Some bays must be left empty in the chassis
The following assumptions are made:
All compute nodes are the same and fully configured
Throttling and over-subscription are enabled
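For automation or planning scripts, guideline tables like this one can be transcribed into a
lookup. The sketch below encodes only the x222 rows shown above; it is a partial, illustrative
transcription with hypothetical names, and actual sizing should always be confirmed with the
Power Configurator.

```python
# (psu_watts, policy) -> {tdp_watts: max x222 nodes}. Columns where the
# table shows no restriction (14 nodes) are omitted and default to 14.
X222_LIMITS = {
    (2100, "N+1,N=4"): {95: 13},
    (2100, "N+1,N=3"): {50: 13, 60: 12, 70: 11, 80: 10, 95: 9},
    (2100, "N+N,N=3"): {60: 13, 70: 12, 80: 11, 95: 10},
    (2500, "N+1,N=3"): {80: 13, 95: 12},
    (2500, "N+N,N=3"): {95: 13},
}

def max_x222_nodes(psu_watts, policy, tdp_watts):
    """Maximum installable x222 nodes per the matrix above."""
    return X222_LIMITS.get((psu_watts, policy), {}).get(tdp_watts, 14)

print(max_x222_nodes(2100, "N+1,N=3", 95))  # 9
print(max_x222_nodes(2500, "N+1,N=5", 95))  # 14 (no restriction)
```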
Figure 2-25. Power supply selection matrix (2 of 3) NGT113.0
Notes:
The table provides guidelines regarding the number of x240 and x440 compute nodes that can be
installed in a chassis. The actual number of x240 and x440 compute nodes that can be installed in
a chassis depends on:
The TDP power rating for the processors that are installed in the x240 and x440
The number of power supplies installed in the chassis
The capacity of the power supplies installed (2100 W or 2500 W)
The power redundancy policy used (N+1 or N+N)
For more guidance, use the Power Configurator, found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
As you can see, a full complement of compute nodes at any TDP rating is supported if all six
power supplies are installed and an N+1 power policy is selected.
In the table:
N+1 or N+N: The power redundancy policy used
Power supply selection matrix (2 of 3)

                    2100 W power supplies installed     2500 W power supplies installed
                    N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3     N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3
                    6 PSUs  5 PSUs  4 PSUs  6 PSUs      6 PSUs  5 PSUs  4 PSUs  6 PSUs
x240 TDP processor rating
  60 W              14      14      14      14          14      14      14      14
  70 W              14      14      13      14          14      14      14      14
  80 W              14      14      13      13          14      14      14      14
  95 W              14      14      12      12          14      14      14      14
  115 W             14      14      11      12          14      14      14      14
  130 W             14      14      11      11          14      14      13      14
  135 W             14      14      10      11          14      14      13      14
x440 TDP processor rating
  95 W              7       7       6       6           7       7       7       7
  115 W             7       7       5       6           7       7       7       7
  130 W             7       7       5       5           7       7       6       7
N=number: Number of power supplies required to meet the minimum power requirements
For example, "N+1, N=5, 6 power supplies" should be read as: chassis power requirements are
based on 5 power supplies, and using an N+1 power policy requires a total of 6 power supplies
installed.
Gray: No restriction to the number of x240 or x440 compute nodes that are installable
White: Some bays must be left empty in the chassis
The following assumptions are made:
All compute nodes are the same and fully configured
Throttling and over-subscription are enabled
Figure 2-26. Power supply selection matrix (3 of 3) NGT113.0
Notes:
The table provides guidelines regarding the number of Power Systems compute nodes, FSM, and
V7000 storage nodes that can be installed in a chassis. The actual number of Power Systems
compute nodes, FSM, and V7000 storage nodes that can be installed in a chassis depends on:
The TDP power rating for the processors that are installed in the Power Systems compute
nodes
The number of power supplies installed in the chassis
The capacity of the power supplies installed (2100 W or 2500 W)
The power redundancy policy used (N+1 or N+N)
For more guidance, use the Power Configurator, found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
As you can see, a full complement of any of these nodes is supported if all six power supplies
are installed and an N+1 power policy is selected.
In the table:
Power supply selection matrix (3 of 3)

                    2100 W power supplies installed     2500 W power supplies installed
                    N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3     N+1,N=5 N+1,N=4 N+1,N=3 N+N,N=3
                    6 PSUs  5 PSUs  4 PSUs  6 PSUs      6 PSUs  5 PSUs  4 PSUs  6 PSUs
  p24L all models   14      12      9       10          14      14      12      13
  p260 all models   14      12      9       10          14      14      12      13
  p270 all models   14      12      9       9           14      14      12      12
  p460 all models   7       6       4       5           7       7       6       6
  FSM               2       2       2       2           2       2       2       2
  V7000             3       3       3       3           3       3       3       3
N+1 or N+N: The power redundancy policy used
N=number: Number of power supplies required to meet the minimum power requirements
For example, "N+1, N=5, 6 power supplies" should be read as: chassis power requirements are
based on 5 power supplies, and using an N+1 power policy requires a total of 6 power supplies
installed.
Gray: No restriction to the number of nodes that are installable
White: Some bays must be left empty in the chassis
The following assumptions are made:
All compute nodes are the same and fully configured
Throttling and over-subscription are enabled
Figure 2-27. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the IBM Flex System Enterprise Chassis fan modules and fan logic modules.
Figure 2-28. Fan module location and numbering NGT113.0
Notes:
The IBM Flex System Enterprise Chassis supports up to ten hot pluggable fan modules consisting
of two 40 mm fan modules and eight 80 mm fan modules.
A chassis can operate with a minimum of six hot-swap fan modules installed, consisting of four 80
mm fan modules and two 40 mm fan modules.
The fan modules plug into the chassis and connect to the fan distribution cards. The 80 mm fan
modules may be added as required to support chassis cooling requirements.
The graphic shows the location and numbering of the IBM Flex System Enterprise Chassis fan
module bays.
Fan module location and numbering: fan module bays 1 to 10 at the rear of the chassis (the
40 mm fan modules occupy bays 5 and 10 at the top).
Figure 2-29. 40 mm fan module NGT113.0
Notes:
The 40 mm fan modules are located at the top of the chassis and are installed in fan module bays 5
and 10. They are used to distribute cooling to:
I/O modules
Chassis Management Modules
Each 40 mm fan module actually contains two 40 mm fans side by side. The LEDs on the back of
the fan module are:
Power on (green)
Fault (amber)
Figure 2-30. 80 mm fan module NGT113.0
Notes:
The 80 mm fan modules are located in two columns in the rear of the chassis and are installed in
fan module bays 1 to 4 and 6 to 9. They are used to distribute cooling to any components installed
in the 14 chassis bays.
Each 80 mm fan module contains two 80 mm fans, back to back at each end of the module, which
are counter rotating.
Both fan module types have an EMC (electromagnetic compatibility) mesh screen on the rear
internal face of the module. This design also benefits the airflow by providing a laminar flow
through the screen, which reduces turbulence of the exhaust air and improves the efficiency of
the overall fan assembly.
Laminar flow is a smooth flow of air, sometimes called streamline flow; its opposite is turbulent
flow. The design of the whole fan assembly (the fan blade design, the size of and distance
between the fans, and the EMC mesh screen) ensures a highly efficient fan design that provides
the best cooling for the lowest energy input.
The LEDs on the back of the fan module are:
Power on (green)
Fault (amber)
Figure 2-31. Fan logic module location and numbering NGT113.0
Notes:
There are two fan logic modules included within the IBM Flex System Enterprise Chassis, as
shown in the graphic. Fan logic modules are multiplexers for the internal I2C bus, which is used
for signaling communications between hardware components within the chassis. Each fan pack is
accessed through a dedicated I2C bus, switched by the fan mux card, from each Chassis
Management Module. The fan logic module switches the I2C bus to each individual fan pack and
can be used by the Chassis Management Module to determine multiple parameters, such as fan RPM.
There is a fan logic module for each side of the chassis. The left fan logic module accesses the
left fan modules, and the right fan logic module accesses the right fan modules. Fan presence
indication for each fan pack is read by the fan logic module. Power and fault LEDs are also
controlled by the fan logic module.
The graphic shows the location and numbering of the IBM Flex System Enterprise Chassis fan logic
modules.
Figure 2-32. Fan modules: Base configuration NGT113.0
Notes:
The fan modules are populated depending on the nodes installed. To support the base
configuration and up to four nodes, a chassis ships with four 80 mm fans and two 40 mm fans
preinstalled.
The minimum configuration of 80 mm fans is four, which will provide cooling for a maximum of four
nodes. This is shown in the graphic and is the base configuration.
If there are insufficient fan modules for the number of nodes that are installed, the nodes might be
throttled.
Figure 2-33. Fan modules: Eight nodes installed NGT113.0
Notes:
With six 80 mm fans installed, another four nodes can be supported within the chassis, to a
maximum of eight, as shown in the graphic.
Figure 2-34. Fan modules: Maximum configuration NGT113.0
Notes:
To cool more than eight nodes, all fans must be installed as shown in the graphic.
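The 80 mm fan population rules from the base, eight-node, and maximum configurations can be
summarized in a short sketch. The helper name is illustrative, not IBM tooling.

```python
def fans_80mm_required(node_count):
    """Minimum 80 mm fan modules for a given node count: four fans
    cool up to four nodes, six fans up to eight nodes, and all eight
    are needed beyond that. The two 40 mm fan modules are always
    present for the I/O modules and Chassis Management Modules."""
    if not 0 <= node_count <= 14:
        raise ValueError("chassis has 14 node bays")
    if node_count <= 4:
        return 4   # base configuration as shipped
    if node_count <= 8:
        return 6
    return 8       # fully populated fan bays

print(fans_80mm_required(4))   # 4
print(fans_80mm_required(8))   # 6
print(fans_80mm_required(14))  # 8
```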
Figure 2-35. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the IBM Flex System Enterprise Chassis front information panel.
Figure 2-36. Front information panel NGT113.0
Notes:
The following items are displayed on the front information panel of the IBM Flex System Enterprise
Chassis:
White backlit IBM logo: When lit, this logo indicates that the chassis is powered.
Locate LED: When lit solid (blue), this LED indicates the location of the chassis. When
flashing, it indicates that a condition occurred that caused the CMM to flag that the chassis
needs attention.
Check error log LED: When lit (amber), this LED indicates that a noncritical event occurred.
This event might be an incorrect I/O module inserted into a bay, or a power requirement that
exceeds the capacity of the installed power modules.
Fault LED: When lit (amber), this LED indicates that a critical system error occurred. This error
can be an error in a power module or a system error in a node.
The graphic shows the location of the front information panel on the IBM Flex System Enterprise
Chassis.
Figure 2-37. Rear chassis LEDs NGT113.0
Notes:
The graphic shows the LEDs on the rear of the chassis.
Figure 2-38. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the IBM Flex System Enterprise Chassis cooling.
Figure 2-39. Upper and lower cooling apertures NGT113.0
Notes:
The flow of air within the IBM Flex System Enterprise Chassis follows a front-to-back cooling path: cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. There are two cooling zones for the nodes, a left zone and a right zone. Cooling is scaled up as required, based upon which node bays are populated.
Air is drawn in both through the front node bays and through the front airflow inlet apertures that are located at the top and bottom of the chassis. When a node is removed from a bay, an airflow damper in the midplane closes, so no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the insertion opens the damper, which allows for cooling of the node in that bay.
The graphic shows the upper and lower cooling apertures highlighted.
For proper cooling, each bay in the front or rear of the chassis must contain either a device or a filler.
Figure 2-40. Chassis air flow (1 of 3) NGT113.0
Notes:
Various fans are present in the chassis to assist with efficient cooling. The fans are of both 40 mm and 80 mm types and are contained within hot-pluggable fan modules. The power supplies also have two integrated, independently powered 40 mm fans.
The graphic shows the cooling paths for the nodes, where air is drawn in from the front of the chassis. The airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, then through openings in the midplane and into a plenum chamber. Each plenum is isolated from the other, thus providing separate left and right cooling zones. The 80 mm fan packs in each zone then combine to move warm air from the plenum to the rear of the chassis.
In the case of a full-width node, the airflow within the node is not segregated, as it spans both airflow zones.
The graphic shows a chassis with the outer casing removed for clarity, to show the airflow path through the chassis as described. As can be seen, there is no airflow through the chassis midplane where a node is not installed, because the air damper is opened only when a node is inserted in that bay.
Figure 2-41. Chassis air flow (2 of 3) NGT113.0
Notes:
The graphic shows the path of air from the upper and lower airflow inlet apertures to the power
supplies.
Figure 2-42. Chassis air flow (3 of 3) NGT113.0
Notes:
The graphic shows the airflow from the lower inlet aperture to the 40 mm fan modules, which provide cooling for the I/O modules and Chassis Management Modules installed in the rear of the chassis. The right 40 mm fan module cools the right pair of I/O modules, while the left 40 mm fan module cools the left pair. Each 40 mm fan module has a pair of counter-rotating fans for redundancy.
Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the lower openings in the CMM and I/O modules, where it provides cooling for these components, passes through, and is drawn out the top of the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly.
Upon hot-swap removal of a failed 40 mm fan pack, the removal exposes an opening in the bay to the 80 mm fan packs located below, and a backflow damper within the fan bay closes. The backflow damper prevents hot air from entering the system from the rear of the chassis. The 80 mm fan packs then cool the switch modules and the CMM while the fan pack is being replaced.
Notice that the form factor of the I/O modules and Chassis Management Modules allows for airflow directly through the components for greater cooling capacity.
Figure 2-43. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the IBM Flex System Enterprise Chassis I/O architecture.
Figure 2-44. I/O module location and numbering NGT113.0
Notes:
The IBM Flex System Enterprise Chassis can accommodate a total of four I/O modules which are
installed vertically into the rear of the chassis.
The graphic shows the location and numbering of the IBM Flex System Enterprise Chassis I/O
modules.
Figure 2-45. Node I/O connectors NGT113.0
Notes:
The graphic shows the I/O connector area of a typical half-width and full-width compute node. The fabric connector routes the LAN on motherboard (LOM) signaling to the chassis midplane. If desired, the fabric connector may be removed and an I/O adapter card installed into I/O connector 1. A second I/O connector is also available; an I/O adapter card may occupy either I/O connector, dependent upon the overall chassis configuration.
Also noted here is the management connector, which connects the compute node to the chassis internal management network.
If LAN on Motherboard (LOM) support is to be used in a compute node, then an I/O adapter card cannot be inserted in the I/O connector where the compute node fabric connector exists.
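The connector layout described above can be summarized in a few lines. The sketch below is an illustration written for these notes (the function name and dictionary keys are invented, not an IBM API): half-width nodes expose two I/O connectors and full-width nodes four, with the LOM occupying connector 1 or connectors 1 and 3, respectively.

```python
# Connector layout per node width, as described in the text:
# half-width nodes have two I/O connectors (LOM uses connector 1);
# full-width nodes have four (LOM uses connectors 1 and 3).
def node_connectors(width):
    layouts = {
        "half": {"io_connectors": [1, 2], "lom_connectors": [1]},
        "full": {"io_connectors": [1, 2, 3, 4], "lom_connectors": [1, 3]},
    }
    if width not in layouts:
        raise ValueError("width must be 'half' or 'full'")
    return layouts[width]
```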
Node I/O connectors
Each compute node connects to the midplane by way of I/O connectors.
In half-width nodes, two I/O connectors
In full-width nodes, four I/O connectors
One management connector
In some systems, an embedded 10 Gb Virtual Fabric Adapter is available
LAN on Motherboard (LOM) adapter
Connected to midplane using compute node fabric connector
Uses I/O connector 1 (half-width) or I/O connectors 1 and 3 (full-width)
Figure 2-46. I/O adapter and I/O module interconnects NGT113.0
Notes:
I/O modules 1 and 2 connect to the integrated LAN on motherboard (LOM) controller (if the node
has a LOM and the LOM connector is installed in the node), otherwise I/O modules 1 and 2 connect
to the I/O adapter installed in I/O connector 1 on each compute node.
I/O modules 3 and 4 connect to the I/O adapter that is installed within I/O connector 2 on the node.
These modules provide external connectivity, as well as connecting internally to each of the nodes
within the chassis.
The graphic shows the connections from the compute nodes on the left, to the I/O modules on the
right. The compute node in chassis bay 1 (Node Bay 1 in the graphic) shows that when shipped
with a LOM, the LOM connector provides the link from the compute node to the midplane. Note that
some nodes do not ship with the LOM.
If required, this LOM connector may be removed and an I/O expansion adapter installed. This is shown in the graphic as node bay 2.
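The wiring rule above can be captured in a short sketch. This is an illustration for these notes, not an IBM tool: it maps a node's I/O connector number to the chassis I/O module bays it reaches.

```python
# I/O modules 1 and 2 serve the LOM or the adapter in I/O connector 1;
# I/O modules 3 and 4 serve the adapter in I/O connector 2.
def io_module_bays(io_connector):
    wiring = {1: (1, 2), 2: (3, 4)}
    if io_connector not in wiring:
        raise ValueError("a half-width node has I/O connectors 1 and 2")
    return wiring[io_connector]
```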
Copyright IBM Corporation 2012, 2013
with LOM
LOM
With I/O adapter
Four lanes (KX-4)
or 4 10 Gbps lanes (KR)
Node
bay 2
Node
bay 1
LOM
LOM connector (removed when I/O
adapter is installed on the Node)
14 internal groups (of
four lanes each), one
to each node
LOM not used
I/O module 1
I/O module 3
I/O module 2
I/O module 4
Node
Bay 14
Node
bay 14
I/O adapter and I/O module interconnects
Figure 2-47. Node to I/O module interconnects NGT113.0
Notes:
A total of two I/O expansion adapters (A1 and A2 in the graphic) can be plugged into a half-width node. Up to four I/O adapters can be plugged into a full-width node.
Each I/O adapter has two connectors: one connects to the compute node's system board (a PCI Express Molex plug connection), and the second is a high-speed interface that mates to the midplane when the node is installed into a bay in the chassis.
As shown in the graphic, each of the links from the I/O adapter to the midplane is in fact four links wide. Exactly how many links are employed on each I/O adapter depends on the design of the adapter and the number of ports that are wired.
The graphic shows nodes 1, 2, 3, and 14, each with I/O adapters A1 and A2, connected to I/O modules 1 through 4. Each line between an I/O adapter and a switch is four links.
Figure 2-48. I/O expansion adapter form factor NGT113.0
Notes:
The IBM Flex System Enterprise Chassis supports a common form factor for all I/O expansion
adapters that are installed in compute nodes. This is the only form factor that the IBM Flex System
Enterprise Chassis supports. It is 96.7 mm x 84.8 mm in size and has the following connectors:
One PCIe connector: a Molex 216-pin connector that connects to the I/O expansion port on the compute node
One midplane connector
The graphic shows both connectors along with the guide block that allows easy installation.
Any I/O expansion adapter can be installed in any open I/O expansion port in any compute node.
Figure 2-49. LAN on Motherboard implementation NGT113.0
Notes:
The graphic shows how the integrated 2-port 10 Gb LOM connects through a LOM connector to the midplane on a compute node. This implementation provides a pair of 10 Gb lanes. Each lane connects to a 10 Gb switch or 10 Gb pass-thru module installed in the I/O module bays in the rear of the chassis.
The graphic shows the LOM ports P1 and P2 connected through the LOM connector to I/O module bays 1 and 2, each over a 10 Gbps KR lane. (LOM: LAN on Motherboard)
Figure 2-50. Single node/two I/O adapter interconnects NGT113.0
Notes:
A node with two standard I/O connectors and an I/O adapter card with two ports is shown in the graphic. For a two-port implementation with 14 nodes installed and each node requiring dual ports, the I/O modules would need sufficient internal ports enabled. In this case, each I/O module in bays 1 and 2 would need 14 internal ports.
The graphic shows a half-width node with a two-port I/O adapter (ports P1 and P2) in slot 1 and an I/O adapter in slot 2, connected to the internal ports (P1 through P8) of I/O modules 1 through 4.
Figure 2-51. Installing I/O modules NGT113.0
Notes:
There are four I/O module bays to the rear of the chassis. To insert an I/O module into a bay, the I/O
filler must first be removed.
The graphic shows removing an I/O filler and inserting an I/O module into the chassis using the two
handles.
Figure 2-52. I/O module LEDs NGT113.0
Notes:
I/O module status LEDs are located at the bottom of the module when it is inserted into the IBM Flex System Enterprise Chassis. All modules share three consistent status LEDs, as shown in the graphic:
OK (power): When this LED is lit (green), the switch is on. When this LED is off and the amber switch error LED is lit, there is a critical alert. If the amber LED is also off, the switch is off.
Identify: This blue LED can be used to physically identify the switch by illuminating it through the management software.
Switch error: When this LED is lit (amber), it indicates a POST failure or critical alert, and the system-error LED on the chassis is also lit. When this LED is off and the green OK LED is lit, the switch is working correctly. If the green LED is also off, the switch is off.
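The two-LED truth table described above can be sketched as follows; this is an illustration written for these notes, not IBM firmware:

```python
# Status decoding for an I/O module's green OK LED and amber error LED,
# following the truth table in the text above. When the amber LED is
# lit, the chassis system-error LED is also lit.
def io_module_status(ok_green_on, error_amber_on):
    if error_amber_on:
        return "POST failure or critical alert"
    if ok_green_on:
        return "switch on and working correctly"
    return "switch off"
```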
Figure 2-53. IBM Flex System Enterprise Chassis topics NGT113.0
Notes:
This section covers the Chassis Management Module.
Figure 2-54. Chassis Management Module location and numbering NGT113.0
Notes:
The IBM Flex System Enterprise Chassis can accommodate a total of two Chassis Management
Modules which are installed vertically into the rear of the chassis.
The graphic shows the location and numbering of the Chassis Management Modules.
Figure 2-55. Chassis Management Module NGT113.0
Notes:
The Chassis Management Module (CMM) provides a single point of management for a chassis, plus the networking path for remote keyboard-video-mouse (KVM) capability for compute nodes within the IBM Flex System Enterprise Chassis. The CMM communicates with the management controller in each compute node to provide system monitoring, event recording, and alerts, and to manage the chassis, its devices, and the compute nodes. It is a hot-swappable module that provides basic system management functions for all devices installed in the chassis. The CMM is required in the Enterprise Chassis and is central to its management.
An IBM Flex System Enterprise Chassis comes with at least one CMM and supports up to two for redundancy. The first is installed into CMM bay 1 (and is the default, primary CMM); the second, if used, is installed into CMM bay 2. If one CMM fails, the second CMM detects the inactivity and activates itself to take control of the system without any disruption.
The Chassis Management Module provides the following functions:
Power control
Fan management
Chassis and compute node initialization
Switch management
Diagnostics
Resource discovery and inventory management
Resource alerts and monitoring management
Chassis and compute node power management
Network management
Figure 2-56. Chassis Management Module LEDs NGT113.0
Notes:
The IBM Flex System Chassis Management Module (CMM) has LEDs and controls that you can
use to obtain status information and restart the CMM:
Reset button: Use this button to restart the Chassis Management Module. Insert a straightened paper clip into the reset button pinhole to restart the CMM. If you push the paper clip in all the way and hold it for more than 5 seconds (for example, 10 to 15 seconds), the CMM is reset to the default factory configuration and restarts. Be sure to save your current configuration before you reset the CMM.
Power-on LED: When this LED is lit (green), it indicates that the CMM has power.
Active LED: When this LED is lit (green), it indicates that the CMM is actively controlling the
chassis. Only one CMM actively controls the chassis. If two CMMs are installed in the chassis,
this LED is lit on only one CMM.
Fault LED: When this LED is lit (yellow), an error has been detected in the CMM. When the
error LED is lit, the chassis fault LED is also lit.
Ethernet port link (RJ-45) LED: When this LED is lit (green), it indicates that there is an active
connection through the remote management and console (Ethernet) port to the management
network.
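The reset-button behavior described above can be summarized as: a short press restarts the CMM, while a press held longer than 5 seconds restores factory defaults before restarting. The sketch below is illustrative only (not CMM firmware):

```python
# Reset-button behavior per the text above: holding the recessed button
# more than 5 seconds performs a factory reset before restarting.
def reset_action(hold_seconds):
    if hold_seconds > 5:
        return "reset to factory defaults and restart"
    return "restart"
```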
Figure 2-57. Chassis Management Module ports NGT113.0
Notes:
The CMM has the following connectors:
USB connection. This may be used for insertion of a USB media key, for tasks such as firmware
updates.
10/100/1000 Mbps RJ45 Ethernet connection. For connection to a management network. The
CMM can be managed via this Ethernet port.
Serial port (mini-USB). This is reserved for engineering debug use.
Figure 2-58. Chassis management using CMM NGT113.0
Notes:
The Chassis Management Module supports a web-based graphical user interface that provides a
way to perform chassis management functions within a web browser. The CMM web-based
graphical user interface communicates with the management program to perform chassis
management tasks. You can also perform management functions through the CMM command-line
interface (CLI). Both the web-based and CLI interfaces are accessible via the single RJ45 Ethernet
connector on the CMM.
The CMM has the following default IPv4 settings:
IP address: 192.168.70.100
Subnet: 255.255.255.0
User ID: USERID (all capital letters)
Password: PASSW0RD (all capital letters, with a zero instead of the letter O)
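The factory defaults listed above can be collected for a first-time connection. In this illustrative sketch (written for these notes), the HTTPS URL scheme for the web interface is an assumption; check the CMM documentation for the exact access method:

```python
# Factory-default CMM settings from the text above, gathered for
# first-time access. The HTTPS scheme is an assumption for
# illustration, not a documented detail from this course material.
CMM_DEFAULTS = {
    "ip": "192.168.70.100",
    "netmask": "255.255.255.0",
    "user": "USERID",        # all capital letters
    "password": "PASSW0RD",  # zero instead of the letter O
}

def cmm_web_url(defaults=CMM_DEFAULTS):
    return "https://" + defaults["ip"]
```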
The graphic shows two chassis, each with a CMM, connected to the management network.
Figure 2-59. CMM capabilities NGT113.0
Notes:
The Chassis Management Module provides management for the IBM Flex System Enterprise Chassis and its components.
The CMM is used for monitoring status, vital product data (VPD), and events related to chassis components.
Various server and chassis tasks, such as power on, power off, remote access, and firmware updates, can be performed using the CMM.
The graphic shows the CMM at the center of its capabilities: firmware updates, chassis status, server and chassis tasks, managing chassis events, remote control, chassis VPD information, chassis problem areas, and chassis service data.
Figure 2-60. CMM for IBM Flex System NGT113.0
Notes:
Through an embedded firmware stack, the CMM implements functions to monitor, control, and
provide external user interfaces to manage all chassis resources. Some of the main functions are:
Define login IDs and passwords
Configure security settings such as data encryption and user account security
Select recipients for alert notification of specific events
Monitor the status of the compute nodes and other components
Find chassis component information
Discover other chassis on the network and enable access to them
Control the chassis, compute nodes, and other components
Access the I/O modules to configure them
Change the startup sequence in a compute node
Set the date and time
CMM for IBM Flex System
Provides system management functions for all devices installed in an
Enterprise chassis
Communicate with the management controller in each compute node
Each chassis supports up to two Chassis Management Modules in an active-passive configuration
CMM is used to:
Configure recipients for alert notification of specific events
Monitor the status of the compute nodes and other components
Find chassis component information
Discover other chassis on the network and enable access to them
Control the chassis, compute nodes, and other components
Access the I/O modules to configure them
Change the startup sequence in a compute node
Set the date and time
Use a remote console for the compute nodes
Enable multi-chassis monitoring
Set power policies and view power consumption history for chassis components
Provides user repository for compute node service processors
Use a remote console for the compute nodes
Enable multi-chassis monitoring
Set power policies and view power consumption history for chassis components
Provides user repository for compute node service processors
Figure 2-61. CMM management options NGT113.0
Notes:
The CMM user interface (UI) provides an easy-to-use interface for performing various chassis management tasks. The CMM UI has six main management tabs:
System Status: Provides the overall status of the chassis and its components (compute nodes, FSM, network switches, fans, power modules).
Multi-Chassis Monitor: Views other chassis available on the network.
Events: Manages events related to the chassis.
Service and Support: Collects details about the chassis configuration and events, which are used by service and support engineers.
Chassis Management: Provides tasks including management IP configuration, remote control, power operations, chassis component status, and power module management.
Mgt Module Management: Performs tasks including user account management, CMM configuration, and CMM restart.
Figure 2-62. Glossary NGT113.0
Notes:
This slide presents a glossary of terms used in this topic.
Glossary
IBM Flex System Enterprise
Chassis
Half-width node
Full-width node
Chassis Management Module
IBM Flex System Manager Node
Midplane
Chassis bay numbering
Power management policies
Power supply N+N configurations
Power supply N+1 configurations
Front information panel
Upper and lower cooling apertures
I/O connectors
LAN on Motherboard (LOM)
I/O module naming scheme
Figure 2-63. Checkpoint NGT113.0
Notes:
Write down your answers here:
1.
2.
3.
Checkpoint
1. The IBM Flex System Enterprise Chassis supports which of the
following combinations of compute nodes?
a. Four full-width compute nodes
b. Fourteen half-width compute nodes
c. Three full-width compute nodes and five half-width compute nodes
d. All of the above
2. In an N+N power configuration using 2100 W power supplies, the
minimum number of power supplies required to support four x240
compute nodes is:
a. Two
b. Three
c. Four
d. Six
3. In a base configuration of the IBM Flex System Enterprise Chassis,
four 80 mm fans are installed that support up to (blank) half-width
compute nodes.
a. Two
b. Four
c. Six
d. Eight
Figure 2-64. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Summarize the features of the IBM Flex System Enterprise Chassis
Identify the major elements of the IBM Flex System Enterprise Chassis
Explain the power features of the IBM Flex System Enterprise Chassis
Explain the cooling features of the IBM Flex System Enterprise Chassis
Explain the management features of the IBM Flex System Enterprise Chassis
Unit 3. IBM Flex System Manager
What this unit is about
This unit provides an overview of the IBM Flex System Manager node and
management software.
What you should be able to do
After completing this unit, you should be able to:
Summarize the IBM Flex System Manager features
Predict when the IBM Flex System Manager is required
Classify the software management features of the IBM Flex System
Manager
Provide direction on the network topology design for the IBM Flex System
Manager
How you will check your progress
Checkpoint questions
Lab exercises
References
IBM Flex System Information Center:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
Figure 3-1. Unit objectives NGT113.0
Notes:
After completing this unit, you should be able to:
Summarize the IBM Flex System Manager features
Predict when the IBM Flex System Manager is required
Classify the software management features of the IBM Flex
System Manager
Provide direction on the network topology design for the IBM
Flex System Manager
Unit objectives
Figure 3-2. IBM Flex System management NGT113.0
Notes:
The IBM PureFlex System is a self-contained managed system that can comprise multiple
Enterprise Chassis. Systems management of a single chassis or of multiple chassis requires
support for several management paradigms. The following paradigms are supported:
Management controllers, such as Flexible Service Processors (FSP) or Integrated
Management Modules (IMM), provide basic out-of-band management for individual compute
nodes
Network switches have their own management interfaces for performing various network
configuration tasks
Storage systems have their own management interfaces for performing storage configuration tasks
The Chassis Management Module (CMM) provides base management for the chassis and its
components
The IBM Flex System Manager (FSM) is a management appliance that provides overall
management of all chassis components
Multiple chassis management
Virtualization management
Automation management
Status management
Management appliance
User management
Service and support
Network management
Storage management
IBM Flex System Manager (FSM)
Single chassis management
Chassis components status
Chassis components mgmt
Power policies
Chassis
Events
Service and support
User configuration
Remote access
Chassis Management Module (CMM)
IMM for Intel compute node
FSP for Power compute node
Out-of-band (OOB) mgmt and monitoring
Node
Management controller
Network
Network manager
Storage
Storage manager
GUI and CLI for switch mgmt
Network config tasks
GUI for storage mgmt
Storage config tasks
IBM Flex System management
Figure 3-3. IBM Flex System Manager node NGT113.0
Notes:
The IBM Flex System Manager node is a key component for the management and operations of the
chassis and its components. The IBM Flex System Manager management software runs on a
special Intel-based compute node within the Enterprise Chassis. It is preloaded with easy-to-use
management functions that are aided by the use of many task wizards. The IBM Flex System
Manager manages all the available resources within the chassis including Intel servers, Power
servers, storage systems and network switches. The IBM Flex System Manager offers
management tasks for physical resources, virtual resources, user administration, license
administration, and network fabric.
When a Flex System Manager management node is installed in an IBM Flex System Enterprise
Chassis, it is the main access point for managing the chassis and the chassis components. It
obtains information directly from each managed node rather than using the Chassis Management
Module as an intermediate aggregator.
The IBM Flex System Manager runs in a special Intel-based compute
node.
All basic and advanced functions preloaded as an appliance
One Flex System Manager can manage up to four Flex System Enterprise
Chassis (expanded to sixteen as of management software version 1.3)
Integrated X-Architecture and Power servers, storage, and network management
Includes management for full Power System compute node functionality
Virtualization management including resource pools
Robust security
Advanced management capabilities available through priced add-ons
Upward integration into Tivoli and other third-party enterprise managers
IBM Flex System Manager node
Figure 3-4. IBM Flex System Manager node hardware NGT113.0
Notes:
The IBM Flex System Manager node runs on a specially modified X-Architecture compute node.
The IBM Flex System Manager node hardware comes preconfigured with processor, memory,
storage and network access. The IBM Flex System Manager node is inserted into one of the slots in
the IBM Flex System chassis.
Feature: IBM Flex System Manager
Processor: Intel Xeon Processor E5-2650 (8C, 2.0 GHz, 20 MB cache, 1600 MHz, 95 W)
Number of processors: One
Memory: Eight x 4 GB PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
SAS controller: One LSI 2004 SAS controller
Disk: One x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD; two x IBM 200 GB SATA 1.8" MLC SSD
LAN on Motherboard (LOM): Embedded 10 Gb Virtual Fabric Ethernet controller
ETE (Everything to Everything) connector: Management network adapter
I/O connector: None
IBM Flex System Manager node hardware
Figure 3-5. IBM Flex System Manager node: Internal NGT113.0
Notes:
The internal architecture of the Flex System Manager node is very similar to that of the
X-Architecture compute node. All parts are clearly labeled. The main differences between the FSM
node and the X-Architecture compute node are the management network adapter, which gives
the Flex System Manager the ability to handle both data and management traffic, and a different
level of firmware.
IBM Flex System Manager node: Internal
Figure 3-6. IBM Flex System Manager front panel NGT113.0
Notes:
You can press the power button on the front of the FSM node to start the FSM node. The power
button works only if local power control is enabled for the FSM. After the FSM node is powered on
initially, and the FSM software is configured, you can use the web interface to turn the FSM node
off.
Wait until the power LED on the FSM node flashes slowly before you press the power button. While
the service processor in the FSM node is initializing and synchronizing with the management
software, the power-on LED flashes rapidly, and the power-control button on the FSM does not
respond. This process can take approximately 90 seconds after the FSM node has been installed.
While the FSM node is starting, the power LED on the front of the FSM node is lit and does not
flash. Connect the IBM Flex System console breakout cable to the KVM connector. It provides a
local connection for a keyboard, mouse and console. The USB connector connects a USB device to
the FSM node.
There are various activity and status LEDs on the front panel for SSD, HDD, identity, check log, and
fault indications.
Power on FSM (similar to any compute node)
Wait until the power button LED is flashing slowly before pressing the power
button.
Button is not responsive until the LED is solid. Flashing LED indicates node is
starting IMM2 and doing low level boot and diagnostics.
A solid power button LED indicates that the initial IMM boot completed and the FSM is
continuing to start the embedded OS and management functions.
Monitor Check Log LED and Fault LED for additional error indications.
Front panel (left to right): SSD LEDs, power button/LED, identify LED, USB connector,
KVM connector, HDD activity LED, HDD status LED, check log LED, fault LED
IBM Flex System Manager front panel
Figure 3-7. IBM Flex System management network NGT113.0
Notes:
One IBM Flex System Manager node can manage up to sixteen chassis. The IBM Flex System
Manager node is installed in one of the chassis frames. The Chassis Management Module (CMM)
connects to a management network through a top-of-rack switch. The IBM Flex System Manager
node uses the external network connection through the CMM to access the other chassis.
Internally, the CMM has a 1Gb Ethernet layer 2 switch with dedicated links to all 14 node bays, all
four switch bays, and the second CMM, if installed. These connections are all point-to-point,
ensuring dedicated bandwidth. The 1Gb links are full-duplex, fixed speed (not auto-negotiate) links.
FSM Management
Node
IBM Flex System management network
Figure 3-8. IBM Flex System Manager networks separated NGT113.0
Notes:
The Flex System Manager can connect to the management network and the data network while
maintaining separation between these networks. The 1Gb management network is only accessible
by each node's management controller (IMMv2 or FSP), each switch module's management
interfaces, the CMM, and the IBM Flex System Manager (FSM) management appliance. If you
set up the Flex System Manager to separate the management network from the data network, you
will need to define two LAN adapters during the Flex System Manager setup. You might also need
to define more advanced routing definitions if you wish to have paths to the Flex System Manager
using the data network as well as the management network.
The Everything to Everything adapter (ETE) provides a small L2 switch for the IMM and ETH0
connection to the management network. The ETH1 interface (two 10Gb Ethernet ports) is
connected to the I/O modules that are installed in bay 1 and bay 2.
A setup option is available where the user may define that these two networks are to be merged
into one network.
IBM Flex System Manager networks separated
IBM Flex System Enterprise Chassis
IBM Flex System System x Power Systems
Manager compute node compute node
IMM eth0 eth1
ETE LOM IMM LOM FSP ETH
CMM
Switch
Port Mgmt I/O module bay1/2
Management network Data network
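As a generic illustration of the separated-network setup described above, the sketch below models a dual-homed management appliance that decides which interface a destination is reached through. The subnet values are hypothetical examples, not FSM defaults, and this is a conceptual sketch rather than the product's actual routing logic.

```python
import ipaddress

# Hypothetical subnets chosen for illustration only; real values come
# from the site's network plan, not from the FSM product.
MGMT_NET = ipaddress.ip_network("192.168.70.0/24")  # eth0, management network
DATA_NET = ipaddress.ip_network("10.10.0.0/16")     # eth1, data network

# Separation only works if the two subnets do not overlap.
assert not MGMT_NET.overlaps(DATA_NET)

def interface_for(dest: str) -> str:
    """Decide which interface a destination address is reached through."""
    addr = ipaddress.ip_address(dest)
    if addr in MGMT_NET:
        return "eth0"
    if addr in DATA_NET:
        return "eth1"
    # Anything else needs an explicit routing definition, as the notes
    # mention for paths to the FSM over either network.
    return "default"

print(interface_for("192.168.70.15"))  # eth0
print(interface_for("10.10.3.8"))      # eth1
```

Defining two LAN adapters during setup corresponds to the two non-overlapping networks above; merging the networks (next figure) collapses this decision to a single adapter.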
Figure 3-9. IBM Flex System Manager networks merged NGT113.0
Notes:
If you set up the Flex System Manager to combine the management network and the data network,
then you only need to define one LAN adapter in the Flex System Manager. This setup simplifies
the routing definitions needed in the Flex System Manager, however, there may be concerns
related to the security of the management resources.
The Everything to Everything adapter (ETE) provides a small L2 switch for the IMM and ETH0
connection to the management network.
IBM Flex System System x Power Systems
Manager compute node compute node
IMM eth0
ETE IMM LOM FSP ETH
CMM
Switch
Port Mgmt I/O module bay1/2
Management network Data network
IBM Flex System Enterprise Chassis
IBM Flex System Manager networks merged
Figure 3-10. IBM Flex System Manager capabilities NGT113.0
Notes:
The IBM Flex System Manager node is pre-loaded with software management functions that
provide systems management for the chassis and its resources. The IBM Flex System Manager
(FSM) main features include monitoring and problem determination, hardware management,
network management, storage management, virtualization and workload management, chassis and
chassis resource management, remote console support, inventory management, firmware
compliance and update management, hardware health status, administration management and
security management.
FSM
Updates
Status &
Health
Virtual
Systems
Inventory
Storage
Discovery
Automation
Network
IBM Flex System Manager capabilities
Figure 3-11. IBM Flex System Manager software NGT113.0
Notes:
When a Flex System Manager management node is installed in an IBM Flex System Enterprise
Chassis, it is the main access point for managing the chassis and the chassis components. It
obtains information directly from each managed node rather than using the Chassis Management
Module as an intermediate aggregator.
The FSM has many wizards, in-context help screens, and short learning modules to speed user
productivity. It offers functions to monitor resources, establish thresholds, and provide
hardware health status notifications. Resources can be navigated intuitively from a task or
resource perspective, allowing the user to drill down into a resource for more detail and to
view the information in a table or graphical perspective. The FSM provides automatic detection
of issues in the chassis environment through event setup that triggers alerts and actions. It
also manages firmware and software updates by setting compliance policies for key resource elements.
Simplified setup
Start, manage, learn
Monitor resources
Health summary, alerts,
thresholds, updates, service,
and support
Visualize/navigate physical
and virtual relationships
Intuitive drilldown and views,
topology map, finger tip
troubleshooting
Automate responses
Custom actions and filters,
configure, edit, relocate,
automation plans
Manage firmware and
software updates
Set policies to track and
automate firmware and software
compliance
IBM Flex System Manager software
Figure 3-12. IBM Flex System Manager v1.2 features NGT113.0
Notes:
Centralized user management with FSM offers a single user authentication registry for all Chassis
Management Modules and compute node service processors in a management domain.
The management software can assign a unique local address (ULA) for each chassis component.
By choosing this option, you ensure that all chassis components have routable IP addresses, and
you enable automatic discovery of those components.
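The management software's exact ULA derivation is not documented here; the sketch below only illustrates the general format of a locally assigned unique local address prefix as standardized in RFC 4193 (fd00::/8 plus a 40-bit pseudo-random Global ID), which is the address class such chassis-component addresses belong to.

```python
import os
import ipaddress

def make_ula_prefix() -> ipaddress.IPv6Network:
    """Build a locally assigned ULA /48 prefix per RFC 4193."""
    # First octet 0xfd marks a locally assigned ULA (fc00::/7 with the L bit set).
    global_id = os.urandom(5)  # 40 pseudo-random bits
    prefix_bytes = bytes([0xFD]) + global_id + bytes(10)  # 16-byte address
    return ipaddress.IPv6Network((ipaddress.IPv6Address(prefix_bytes), 48))

prefix = make_ula_prefix()
print(prefix)  # e.g. fd3c:91a2:55e0::/48 (random Global ID)
```

Subnet and interface identifiers are then appended below the /48, giving every chassis component a routable address within the management domain without consuming globally routable address space.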
The IBM FSM Explorer provides a resource-based view of the environment and enables you to
perform some systems-management tasks. It is the basis of the future FSM user interface.
Configuration Patterns enable you to configure x86 compute nodes for settings related to local
storage, network adapters, boot order, the Integrated Management Module (IMM), and the Unified
Extensible Firmware Interface (UEFI).
Deploy Compute Node Images provides a way to deploy operating systems and virtualization
images to multiple bare metal x86 systems. This function is limited to deploying VMware ESXi and
RHEL Kernel-based Virtual Machine (KVM) hypervisors.
Centralized user management
Unique local address assignment
IBM FSM Explorer
Resource based view of environment
Configuration patterns
Deploy compute node images
Guided storage setup
Manage System Storage page
Support for Flex System V7000
Support for DVS 5000V and 802.1Qbg
Mobile System Management application
IBM Flex System Manager v1.2 features
The Manage System Storage page on the Home page Additional Setup tab provides a roadmap for
preparing storage subsystems and storage network switches to be managed by FSM. The Flex
System V7000 is supported as of FSM version 1.2.
Network topology is collected continually in the background without any user action required.
Network management includes the integration of 802.1Qbg support for VMware through the IBM
System Networking Distributed Virtual Switch 5000V.
The Mobile System Management application is a free tool for mobile devices and enables one to
monitor the IBM Flex System hardware remotely.
Google Play in the Android operating system - https://play.google.com/store
iTunes in the Apple iOS - https://itunes.apple.com/us/app/apple-store/id375380948?mt=8
BlackBerry App World - http://appworld.blackberry.com/webstore/?
Figure 3-13. IBM Flex System Manager v1.3 features NGT113.0
Notes:
FSM has expanded the number of concurrently managed chassis support to sixteen chassis and
5000 managed resources.
A capacity utilization utility displays information about resource utilization and capacities
that might affect FSM performance. Monitored items include the number of managed resources,
the number of concurrently active users, CPU utilization, I/O utilization, memory utilization,
and disk space.
The FSM supports the newly announced compute nodes and I/O modules (IOMs).
The FSM has added a display that provides a view of the firmware levels for all chassis
components. Compliance policies are built automatically when new firmware is available.
Operating system image deployment is expanded from two to five images that can be loaded
onto the FSM.
Mobile System Management functions are enhanced to include power control, selected LED
controls, and virtual reseat command support.
Support up to 16 chassis concurrently
FSM capacity utilization management
HW currency
Added view to all firmware levels
OS deployment supports 5 images
Mobile System Management enhancements
IBM Flex System Manager v1.3 features
Figure 3-14. Flex System Manager management packaging NGT113.0
Notes:
Flex System Manager includes a preinstalled software stack. This software enables you to manage
the hardware resources in up to sixteen IBM Flex System Enterprise Chassis and will support up to
5000 managed elements. Flex System Manager management software is licensed on a per
managed chassis basis.
There are two feature levels of Flex System Manager. There is a base level and an advanced level.
The base level of Flex System Manager includes health status, resource monitoring, hardware
management, security management, administration support, managing network resources,
managing storage resources and managing virtual resources. The advanced level of Flex System
Manager includes the ability to manage virtual system images and optimize virtual resource pools.
When a chassis is managed, the Flex System Manager will automatically discover the components
in the chassis.
IBM Flex System Manager base
Support for up to sixteen managed chassis
Support for up to 5,000 managed elements
Auto-discovery of managed elements
Overall health status
Monitoring and availability
Hardware management
Security management
Administration
Network management (Network Control)
Storage management (Storage Control)
Virtual resource life cycle management (VMControl Express)
IBM Flex System Manager advanced
Image management (VMControl Standard)
Pool management (VMControl Enterprise)
Flex System Manager management packaging
Figure 3-15. IBM PureFlex System configuration NGT113.0
Notes:
The Express configuration is designed for small and medium businesses, and is the most
affordable entry point for PureFlex Systems. It is packaged with the Flex System Manager base
with a one-year software subscription and support license.
The Enterprise configuration is optimized for transactional and database systems and has built-in
redundancy for highly reliable and resilient operation to support your most critical workloads. It is
packaged with the Flex System Manager Advanced with a three-year software subscription and
support license.
If a Power compute node is present in an IBM Flex System Enterprise Chassis, then either the IBM
Flex System Manager node is used for management, or an HMC/IVM facility is required. A Power
node cannot be concurrently managed by IVM/HMC and an IBM Flex System Manager node.
IBM PureFlex System Express
Flex System Manager with one-year service and support
IBM PureFlex System Enterprise
Flex System Manager advanced with three-year service and support
When a Power compute node is present
IBM Flex System Manager
HMC
IVM
IBM PureFlex System configuration
Figure 3-16. IBM Flex System Manager: Home page NGT113.0
Notes:
The Flex System Manager packages several wizards to assist in accomplishing many management
tasks. The Initial Setup tab on the FSM home page lists initial tasks for the FSM operation. These
tasks include:
Acquire and install updates for the FSM itself
View the available chassis and FSMs in the environment, as well as select which chassis set
will be managed by an FSM
Configure basic settings for chassis components including access, inventory collection, and
node configurations
Deploy operating systems to bare metal compute nodes
Acquire and install updates for the CMM, compute nodes, storage nodes, and I/O modules
Launch the resource based IBM FSM Explorer user interface
Before the FSM can update the resources above, they must be discovered and access to them
requested for FSM management operations.
IBM Flex System Manager: Home page
Figure 3-17. IBM Flex System Manager: Additional Setup NGT113.0
Notes:
The Additional Setup tab on the FSM home page lists several more initial tasks for the FSM in order
to prepare it for production use. These tasks include:
Setup of the Electronic Service Agent for reporting serviceable hardware events directly to IBM
Configure and change the FSM user registry
Manage the accounts for users and groups
Setup automatic checking of updates for the managed systems
Define and deploy configuration settings for operating systems, servers and switches
Deploy management agents on selected systems
Manage the Features on Demand keys
Manage the system storage in the environment
Manage the network devices in the environment
IBM Flex System Manager: Additional Setup
Figure 3-18. IBM Flex System Manager: Plug-ins NGT113.0
Notes:
The Plug-ins tab on the FSM home page lists specific categories of management tasks that are
available for FSM operation. These tasks include managing resource discovery, inventory
collection, system health and status, system updates, event driven task automation, system
configuration, remote access, storage, and virtualization.
Each entry on this page provides an active link to a page where the user can accomplish the
selected task. Basic and advanced tasks are preloaded as part of the management appliance.
IBM Flex System Manager: Plug-ins
Figure 3-19. IBM Flex System Manager: Administration NGT113.0
Notes:
The FSM Administration tab provides links to configure and manage the FSM. This page supplies
links to:
Manage the power state of the FSM
Manage the FSM firmware
Configure various environment aspects including the user registry, SMTP, high availability
settings, date and time, and network definitions
Support service operations, including backup/restore and setting up Electronic Service Agent
Access security tasks such as working with passwords, user accounts and user authority roles
View FSM server status and access the FSM command line interface
Manage the Features on Demand keys
IBM Flex System Manager: Administration
Figure 3-20. IBM FSM Explorer console NGT113.0
Notes:
The IBM FSM Explorer console provides an alternate view of your resources and helps you
manage your systems-management environment. IBM FSM Explorer provides a resource-based
view of your environment with intuitive navigation of those resources. Some additional features are
as follows:
View basic information about resources just by hovering over them, with no need to click
through for details
Use standard browser features to navigate between pages, such as the browser's back
and forward buttons, or bookmark pages in order to return to them easily
Work on multiple pages at one time by having those pages open in separate browser tabs
Copy and paste the URL for a page and send it to a co-worker in e-mail or instant message. The
co-worker can copy and paste the URL into their browser and, after authenticating to the server,
view the same page
IBM FSM Explorer console
Figure 3-21. IBM Flex System Manager hardware map NGT113.0
Notes:
The FSM Explorer hardware map offers graphical front and rear views of the chassis.
The components that populate the chassis are easily identifiable. This view provides several
information overlays for managing various aspects of the chassis components, including
hardware status, firmware compliance and notification, hardware access states, highlighted
front panel LEDs, component properties, and configuration patterns. If you hover over a
resource with the mouse, additional pop-up information is displayed. Tasks can be
executed on chassis components from this view.
IBM Flex System Manager hardware map
Figure 3-22. VMControl: Automate with system pools NGT113.0
Notes:
IBM FSM VMControl Enterprise Edition is a licensed feature of VMControl. With VMControl
Enterprise Edition, you can complete the following tasks:
Create server system pools, which enable you to consolidate your resources and workloads into
distinct and manageable groups.
Deploy virtual appliances into server system pools.
Manage server system pools, including adding hosts or additional storage space and monitoring
the health of the resources and the status of the workloads in them.
Group storage systems together using storage system pools to increase resource utilization
and automation.
Manage storage system pools by adding storage, editing the storage system pool policy, and
monitoring the health of the storage resources.
Intelligent virtual server placement
services
Dynamic workload mobility
Integrated storage and network
management
Automation policy control for workloads
Advise: VMControl recommends actions and
requires confirmation
Automate: VMControl automates actions
Availability automations
Automate relocation of virtual workloads in
response to predicted host system failures
without disruption
Restart virtual workloads when a host fails
Automate remote restart of virtual workloads
in response to host failures with minimal
disruption
Performance automations
Allows pool to spread VMs for optimum
performance
VMControl: Automate with system pools
Figure 3-23. Remote Control NGT113.0
Notes:
The Remote Control application in the FSM management software can manage X-Architecture
compute nodes as a local console. Remote Control is a Java Web Start application that requires the
IBM or Oracle/Sun Java Runtime Environment (JRE) plug-in, version 6.0, update 18 or later. Obtain
and install the JRE plug-in before using the Remote Control application. Remote Control in the FSM
software is available only for X-Architecture compute nodes. You cannot access IBM Power
Systems compute nodes with Remote Control. Instead, you can open a terminal console to any
virtual server on a Power Systems compute node.
The thumbnail area of the Remote Control panel displays small window views of all compute node
sessions that are currently managed through the Remote Control. You can display multiple
compute node sessions and move between compute node sessions by clicking a thumbnail, which
displays the compute node console in the video session area. The thumbnail section is resizable.
The toolbar area provides for power on/off control, remote mount of local files, and soft key
definitions.
Remote Control
Figure 3-24. Glossary NGT113.0
Notes:
Check your understanding of the terms used in this unit.
IBM Flex System Manager Base
IBM Flex System Manager Advanced
Chassis manager
VMControl
Management network
Data network
IBM FSM Explorer
Glossary
Figure 3-25. Checkpoint (1 of 3) NGT113.0
Notes:
Write down your answers here:
1.
2.
Checkpoint (1 of 3)
1. Management of multiple IBM Flex System chassis is done by
which component?
a. Chassis Management Module
b. Flex System Manager
c. Integrated Management Module
d. Flexible Service Processor
2. True or False: The IBM Flex System Manager is cabled
directly to a top of rack switch for connection to the
management network.
Figure 3-26. Checkpoint (2 of 3) NGT113.0
Notes:
Write down your answers here:
3.
4.
Checkpoint (2 of 3)
3. Virtual resource management is a function of which IBM Flex System
management component?
a. Integrated Management Module
b. Flexible Service Processor
c. Chassis Management Module
d. Flex System Manager
4. Which of the following statements is correct about the hardware
components of the IBM Flex System Manager node?
a. The node is based on the p260 Compute Node
b. The node includes two 200 GB SAS drives
c. The node includes 32 GB of RAM memory
d. The node includes a Fibre Channel mezzanine card
Figure 3-27. Checkpoint (3 of 3) NGT113.0
Notes:
Write down your answers here:
5.
6.
Checkpoint (3 of 3)
5. True or False: Optimization of virtual resources with pool
management for a Power compute node in the IBM Flex
System is only available with the IBM Flex System Manager
Advanced feature.
6. If an IBM Flex System Manager has configured an ETH1
network adapter, which of the following is correct?
a. The FSM management and data networks are intended to be
separated.
b. The FSM management and data networks are intended to be
merged.
c. There is FSM access only to the management network.
d. There is FSM access only to the data network.
Figure 3-28. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Summarize the IBM Flex System Manager features
Predict when the IBM Flex System Manager is required
Classify the software management features of the IBM Flex
System Manager
Provide direction on the network topology design for the IBM
Flex System Manager
Unit summary
Unit 4. IBM Flex System X-Architecture compute
nodes
What this unit is about
This topic covers the IBM Flex System X-Architecture compute nodes.
What you should be able to do
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System X-Architecture compute
nodes
Distinguish the major elements of the IBM Flex System X-Architecture
compute nodes
Recognize the processor subsystem features of the IBM Flex System
X-Architecture compute nodes
Recognize the memory subsystem features of the IBM Flex System
X-Architecture compute nodes
Recall the management features of the IBM Flex System X-Architecture
compute nodes
How you will check your progress
Checkpoint questions
Figure 4-1. Unit objectives NGT113.0
Notes:
After completing this unit, you will be able to:
Summarize the features of the IBM Flex System X-Architecture Compute Nodes
Distinguish the major elements of the IBM Flex System X-Architecture Compute Nodes
Recognize the processor subsystem features of the IBM Flex System X-Architecture Compute
Nodes
Recognize the memory subsystem features of the IBM Flex System X-Architecture Compute
Nodes
Recall the management features of the IBM Flex System X-Architecture Compute Nodes
Unit objectives
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System X-Architecture
compute nodes
Distinguish the major elements of the IBM Flex System X-
Architecture compute nodes
Recognize the processor subsystem features of the IBM Flex
System X-Architecture compute nodes
Recognize the memory subsystem features of the IBM Flex
System X-Architecture compute nodes
Recall the management features of the IBM Flex System X-
Architecture compute nodes
Figure 4-2. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
The topics we will cover are:
Compute node overview and architecture
Server subsystems
- Disk subsystem
Storage expansion
- Processor subsystem
- Memory subsystem
- Network subsystem
I/O expansion
Standard onboard features
Systems management
This section covers the compute node overview and architecture.
IBM Flex System X-Architecture
compute node topics
Compute node overview and
architecture
Server subsystems
Disk subsystem
Storage expansion
Processor subsystem
Memory subsystem
Network subsystem
I/O expansion
Standard onboard features
Systems management
Figure 4-3. Compute node overview and architecture topics NGT113.0
Notes:
The topics we will cover are:
IBM Flex System x220 Compute Node
IBM Flex System x222 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System x440 Compute Node
This section covers the IBM Flex System x220 Compute Node.
Compute node overview and architecture topics
IBM Flex System x220
Compute Node
IBM Flex System x222
Compute Node
IBM Flex System x240
Compute Node
IBM Flex System x440
Compute Node
Figure 4-4. IBM Flex System x220 Compute Node: At a glance NGT113.0
Notes:
The IBM Flex System x220 Compute Node, machine type 7906, is the next generation
cost-optimized compute node designed for less demanding workloads and low-density
virtualization. The x220 is efficient and equipped with flexible configuration options and advanced
management to run a broad range of workloads.
IBM Flex System x220 Compute Node: At a glance
Advanced technology
Intel E5-2400 series
processors
Up to 2.4GHz, 8C, and QPI
at 8.0 GTps
LRDIMM technology
Maximize memory
Up to 384GB per node
Networking options
Integrated Dual-Port 1Gb
Broadcom Ethernet
Controller (some models)
Multiple fabrics and speeds
to choose from
System specifications
2x Xeon E5-2400 series (4C/6C/8C)
Intel QPI technology at up to 8.0 GTps
Up to 12 DDR3 DIMMs at up to 1600MHz
Up to 384GB memory capacity
IBM Active Memory including memory mirroring and
memory sparing
Two PCI-E Gen3 x8 I/O adapter slots
Integrated Dual-Port 1Gb Broadcom Ethernet
Controller (some models)
Internal USB for embedded hypervisor
IMMv2 and UEFI
Interoperability with IBM Flex System Manager Node
Figure 4-5. Product overview (1 of 3) NGT113.0
Notes:
The IBM Flex System x220 Compute Node is a high-availability, scalable compute node optimized
to support the next-generation microprocessor technology. With a balance between cost and
system features, the x220 is an ideal platform for general business workloads. It is part of IBM Flex
System, a category of computing that integrates multiple server architectures, networking, storage
and system management capability into a single system that is easy to deploy and manage. IBM
Flex System has full built-in virtualization support of servers, storage, and networking to speed
provisioning and increase resiliency. In addition, it supports open industry standards, such as
operating systems, networking and storage fabrics, virtualization, and system management
protocols, to easily fit within existing and future data center environments. IBM Flex System is
scalable and extendable with multi-generation upgrades to protect and maximize IT investments.
This slide summarizes some of the features of the IBM Flex System x220 Compute Node including:
Up to two Intel Xeon E5-2400 series processors providing up to 16 processor cores per system
Models include 4GB of DDR3 memory
- Six DIMM slots are available per installed processor, for 12 DIMM slots when two processors
are installed.
Product overview (1 of 3)
Intel Xeon two-socket server
Features Intel Xeon E5-2400
series 4C, 6C, or 8C
processors
Up to 20MB L3 cache
Intel QPI technology at up to
8.0 GTps
12 DIMM slots total
4GB standard (1 x 4GB DIMMs)
Up to 12 using LRDIMMs
Up to 12 using RDIMMs
Up to 12 using UDIMMs
Six DIMM sockets per
processor
Maximum memory
384GB using LRDIMMs
12 x 32GB
192GB using RDIMMs
12 x 16GB
48GB using UDIMMs
12 x 4GB
Up to 1600MHz DDR3 SDRAM
memory
1600MHz up to two DIMMs per
channel (DPC)
Active Memory including
memory mirroring and memory
sparing
- LRDIMM, UDIMM and RDIMM memory types supported.
- Maximum memory capacity
384GB using LRDIMMs (12 x 32GB)
192GB using RDIMMs (12 x 16GB)
48GB using UDIMMs (12 x 4GB)
- Up to 1600MHz DDR3 SDRAM memory
- Active Memory including memory mirroring and memory sparing supported.
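The maximum-memory figures above are slots-times-largest-DIMM arithmetic. As a quick check, here is a minimal Python sketch; the `max_memory_gb` helper name is illustrative, not an IBM tool:

```python
def max_memory_gb(dimm_slots, dimm_size_gb):
    """Capacity when every slot holds the largest supported DIMM."""
    return dimm_slots * dimm_size_gb

# x220 with both processors installed: 6 DIMM slots per processor = 12 slots
assert max_memory_gb(12, 32) == 384  # LRDIMMs (32GB largest supported)
assert max_memory_gb(12, 16) == 192  # RDIMMs (16GB largest supported)
assert max_memory_gb(12, 4) == 48    # UDIMMs (4GB largest supported)
```

The same arithmetic applies to the other compute nodes in this unit; only the slot count and largest supported DIMM size change.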
Figure 4-6. Product overview (2 of 3) NGT113.0
Notes:
Some additional features of the IBM Flex System x220 Compute Node include:
Two I/O slots
- Two PCI-E Gen3 x8 slots
- I/O slot 1 contains 1Gb Ethernet controller (some models)
Power and cooling
- Supplied by Flex Chassis
- Supports IBM Active Energy Manager
Standard integrated ServeRAID C105 SATA storage controller
- Entry level with software RAID capabilities
Non-RAID configurations are not supported with this storage controller
- Two 2.5-inch hot-swap SAS / SATA / SSD disk drives
- No hard drives standard
Product overview (2 of 3)
Two I/O slots
Two PCI-E Gen3 x8 slots
I/O slot 1 contains 1Gb Ethernet
controller (some models)
Power and cooling
Supplied by IBM Flex Enterprise
Chassis
Supports IBM Active Energy
Manager
Standard integrated ServeRAID
C105 SATA storage controller
Entry level with software RAID
capabilities
Non-RAID not supported
Two 2.5-inch hot-swap SAS / SATA /
SSD disk drives
No hard drives standard
RAID levels 0 and 1
Maximum storage
2TB (2 x 1TB)
ServeRAID H1135 (optional)
Entry level hardware RAID controller
Supports RAID 0 and 1
ServeRAID M5115 (optional)
Enterprise level, advanced hardware
RAID controller
Supports RAID 0/1/10/5/50
RAID 6/60 (optional)
- RAID levels 0 and 1
- Maximum storage
2TB (2 x 1TB)
- ServeRAID H1135 (optional)
Entry level hardware RAID controller
RAID levels 0 and 1
- ServeRAID M5115 (optional)
Enterprise level, advanced hardware RAID controller
Supports RAID 0/1/10/5/50
- RAID 6/60 (optional)
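The usable-capacity difference between the two RAID levels the C105 supports can be sketched with a simplified model (the function name is ours; this is not controller firmware behavior):

```python
def usable_capacity_tb(drives, drive_size_tb, raid_level):
    """Usable capacity under a simple RAID 0 / RAID 1 model."""
    if raid_level == 0:            # striping: all raw capacity is usable
        return drives * drive_size_tb
    if raid_level == 1:            # mirroring: each pair stores one copy
        return (drives // 2) * drive_size_tb
    raise ValueError("ServeRAID C105 supports only RAID 0 and 1")

# The 2TB maximum quoted above assumes RAID 0 across two 1TB drives
assert usable_capacity_tb(2, 1.0, 0) == 2.0
# The same pair mirrored (RAID 1) yields 1TB usable
assert usable_capacity_tb(2, 1.0, 1) == 1.0
```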
Figure 4-7. Product overview (3 of 3) NGT113.0
Notes:
The ports on the IBM Flex System x220 Compute Node include:
One Console Breakout Cable connector (one serial, two USB ports, one video), and three USB
(one front and two internal).
Product overview (3 of 3)
Ports
One console breakout cable connector (one serial, two USB ports,
one video), and three USB (one front and two internal)
Figure 4-8. Front view NGT113.0
Notes:
The front view of the IBM Flex System x220 Compute Node contains (front left to right) a USB
connector, an NMI control, the port to connect the Console Breakout Cable, two 2.5-inch hard drive
bays, the Power button and Power LED, the Identify LED, the Check log LED, and the Fault LED.
The overall physical dimensions of the x220 are: 492mm x 217mm x 56mm (LxWxH) and it weighs
approximately 14.11lbs when fully configured.
Front view
USB port
Console Breakout Cable port
Power button / LED
Hard disk drive activity LED
Hard disk drive status LED
Identify LED
Check log LED
Fault LED
NMI control
Figure 4-9. Interior view NGT113.0
Notes:
The image highlights the components of the IBM Flex System x220 Compute Node including the
Light path diagnostics panel, two Intel Xeon E5-2400 series processors, two PCIe I/O connectors,
integrated 1Gb Ethernet controller (some models), 12 DDR3 memory slots, and the hot-swap
backplane that connects to the front access hard drives.
Interior view
Hot-swap drive bay backplane
Processor 2 and 6 memory DIMMs
I/O connector 1
Fabric connector
Light path diagnostic panel
Processor 1 and 6 memory DIMMs
I/O connector 2
Expansion connector
Optional ServeRAID H1135
USB port 2
Broadcom Ethernet
USB port 1
Figure 4-10. Compute node overview and architecture topics NGT113.0
Notes:
This section covers the IBM Flex System x222 Compute Node.
Compute node overview and architecture topics
IBM Flex System x220
Compute Node
IBM Flex System x222
Compute Node
IBM Flex System x240
Compute Node
IBM Flex System x440
Compute Node
Figure 4-11. IBM Flex System x222 Compute Node: At a glance NGT113.0
Notes:
The IBM Flex System x222 Compute Node is a high-density dual-server offering that is designed
for virtualization, dense cloud deployments, and hosted clients. The x222 has two independent
servers in one mechanical package, which means that the x222 has a double-density design that
allows up to 28 servers to be housed in a single 10U Flex System Enterprise Chassis.
The two servers are independent and cannot be combined to form a single four-socket system.
For clarity throughout this topic, the terms IBM Flex System x222 Compute Node, compute
node, and x222 refer to features and functions that apply to the compute node as a
whole (in other words, affecting both upper and lower servers). The term server refers to features
and functions of just the upper or just the lower server.
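The double-density arithmetic behind the 28-server figure can be checked with a short sketch. It assumes 14 half-wide node bays in the 10U Enterprise Chassis (a chassis detail not stated on this slide):

```python
# Assumptions: 14 half-wide node bays per 10U Enterprise Chassis,
# two independent servers per x222 compute node.
BAYS_PER_CHASSIS = 14
SERVERS_PER_NODE = 2
CHASSIS_HEIGHT_U = 10

servers_per_chassis = BAYS_PER_CHASSIS * SERVERS_PER_NODE
assert servers_per_chassis == 28   # matches the figure quoted above

# Density expressed per rack unit
assert servers_per_chassis / CHASSIS_HEIGHT_U == 2.8
```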
IBM Flex System x222 Compute Node: At a glance
Advanced technology
Two independent servers in
one chassis bay
Intel E5-2400 series
processors
Up to 2.4 GHz, 8C, and QPI
at 8.0 GTps
Maximize memory
Up to 384 GB per node
Networking options that
include:
Integrated Dual-Port 10Gb
Virtual Fabric Adapter (one
per server)
Multiple fabrics and speeds
to choose from
System specifications
Each compute node supports:
2x Xeon E5-2400 series (4C/6C/8C)
Intel QPI technology at up to 8.0 GTps
Up to 12 DDR3 DIMMs at up to 1600MHz
Up to 384GB memory capacity
IBM Active Memory including memory mirroring and
memory sparing
Integrated Dual-Port 10Gb Virtual Fabric Adapter (one
per server)
One PCI-E Gen3 x16 I/O adapter slot (shared between
servers)
Internal USB for embedded hypervisor
IMMv2 and UEFI
Interoperability with IBM Flex System Manager Node
Figure 4-12. Product overview (1 of 4) NGT113.0
Notes:
The IBM Flex System x222 Compute Node is a high-density dual-server offering that is designed
for virtualization, dense cloud deployments, and hosted clients. The x222 has two independent
servers in one mechanical package, which means that the x222 has a double-density design that
allows up to 28 servers to be housed in a single 10U Flex System Enterprise Chassis. With a
balance between cost and system features, the x222 is an ideal platform for dense workloads, such
as virtualization.
This slide summarizes some of the features of the IBM Flex System x222 Compute Node including:
Two independent servers in one mechanical package to maximize computing capacity
Up to two Intel Xeon E5-2400 series processors per server, providing up to 16 processor cores
per server (up to 32 cores per compute node)
Models include 16GB of DDR3 memory
- Six DIMM slots are available per installed processor, for 12 DIMM slots when two processors
are installed
- LRDIMM and RDIMM memory types supported. UDIMMs are not supported.
Product overview (1 of 4)
Each server supports:
Intel Xeon two-socket server
Features Intel Xeon E5-2400 series
4C, 6C, or 8C processors
Up to 20MB L3 cache
Intel QPI technology at up to 8.0 GTps
12 DIMM slots total
16GB standard (2 x 8GB DIMMs)
Up to 12 using LRDIMMs
Up to 12 using RDIMMs
UDIMMs are not supported
Six DIMM sockets per processor
Maximum memory
384GB using LRDIMMs
12 x 32GB
192GB using RDIMMs
12 x 16GB
Up to 1600MHz DDR3 SDRAM
memory
1600MHz up to two DIMMs per
channel (DPC)
Active Memory including memory
mirroring and memory sparing
- Maximum memory capacity
384GB using LRDIMMs (12 x 32GB)
192GB using RDIMMs (12 x 16GB)
Up to 1600MHz DDR3 SDRAM memory
- Active Memory including memory mirroring and memory sparing supported.
Figure 4-13. Product overview (2 of 4) NGT113.0
Notes:
Some additional features of the IBM Flex System x222 Compute Node include:
Two I/O slots
- Both I/O adapters installed are shared between both servers
- I/O slot 1 contains Embedded 10Gb Virtual Fabric Adapter
One two-port LAN on Motherboard (LOM) per server
- I/O slot 2
Shared PCI-E Gen3 x16 slot
Only special I/O adapters are supported
Power and cooling
Supplied by Flex Chassis
Supports IBM Active Energy Manager
Product overview (2 of 4)
Two I/O slots
Both I/O adapters shared
between both servers
I/O slot 1 contains Embedded
10Gb Virtual Fabric Adapter
One two-port LAN on
Motherboard (LOM) per server
I/O slot 2
Shared PCI-E Gen3 x16 slot
Only special I/O adapters are
supported
Power and cooling
Supplied by IBM Flex
Enterprise Chassis
Supports IBM Active Energy
Manager
Upper server
Lower server
I/O connector 2
I/O connector 1
Figure 4-14. Product overview (3 of 4) NGT113.0
Notes:
Some additional features of the IBM Flex System x222 Compute Node include:
Standard integrated 6Gbps SATA storage controller
- Provided by C600 Platform Controller Hub (PCH)
- Hardware RAID not supported
- One 2.5-inch hot-swap SATA / SSD disk drive per server
- No hard drives standard
- Maximum storage per server
1TB (1 x 1TB)
- Flex System SSD Expansion Kit (optional)
Supports two 1.8-inch hot-swap SSD disk drives per server
Maximum storage per server
- 400GB (2 x 200GB)
Product overview (3 of 4)
Standard integrated 6 Gbps
SATA storage controller
Provided by C600 Platform
Controller Hub (PCH)
Hardware RAID not supported
One 2.5-inch hot-swap SATA / SSD
disk drive per server
No hard drives standard
Maximum storage per server
1TB (1 x 1TB)
Flex System SSD Expansion Kit
(optional)
Supports two 1.8-inch hot-swap SSD
disk drives per server
Maximum storage per server
400GB (2 x 200GB)
Lower server
Upper server
Figure 4-15. Product overview (4 of 4) NGT113.0
Notes:
The ports on the IBM Flex System x222 Compute Node include (per server):
One Console Breakout Cable connector (one serial, two USB ports, one video), and three USB
(one front and two internal).
Product overview (4 of 4)
Ports (per server)
One console breakout cable connector (one serial, two USB ports,
one video), and three USB (one front and two internal)
Figure 4-16. Front view NGT113.0
Notes:
The front view of the IBM Flex System x222 Compute Node contains (front left to right) USB
connectors, ports to connect the Console Breakout Cable, two 2.5-inch hard drive bays (one each
for the upper and lower servers), the Power buttons and Power LEDs, and the LED panels.
The overall physical dimensions of the x222 are: 492mm x 217mm x 56mm (LxWxH) and it weighs
approximately 18lbs when fully configured.
Front view
USB ports
Console Breakout Cable ports
Power buttons / LEDs
Upper hard disk drive bay
Lower hard disk drive bay
Upper server
LED panels
Lower server
Figure 4-17. Rear view NGT113.0
Notes:
The rear view of the IBM Flex System x222 Compute Node includes the power connector, I/O
connector 1, I/O connector 2, the management connector, and the mechanism to lock the upper
and lower servers together.
Rear view
Power connector
Management connector
Upper / lower server locking mechanism
I/O connector 2
I/O connector 1
Figure 4-18. Interior view: Upper server NGT113.0
Notes:
The image highlights the components of the upper server of the IBM Flex System x222 Compute
Node including the two internal USB ports, two Intel Xeon E5-2400 series processors, the I/O fabric
connector, the node connector, 12 DDR3 memory slots, the HDD / SSD drive bay, and the upper /
lower server locking mechanism.
Interior view: Upper server
Two internal USB ports
Processor 2 and 6 memory DIMMs
I/O fabric connector
Processor 1 and 6 memory DIMMs
Node connector
HDD / SSD drive bay
Upper / lower server locking mechanism
Figure 4-19. Interior view: Lower server NGT113.0
Notes:
The image highlights the components of the lower server of the IBM Flex System x222 Compute
Node including the two internal USB ports, two Intel Xeon E5-2400 series processors, the I/O
expansion connector, the node connector, 12 DDR3 memory slots, the HDD / SSD drive bay, the
management connector, and the fabric connector.
Interior view: Lower server
Two internal USB ports
Processor 2 and 6 memory DIMMs
Node connector
Processor 1 and 6 memory DIMMs
I/O expansion connector
HDD / SSD drive bay
Fabric connector
Management connector
Figure 4-20. Upper server / lower server locking mechanism NGT113.0
Notes:
This series of images shows the locking mechanism that joins the upper and lower servers of the
x222.
Upper server / lower server locking mechanism
Figure 4-21. Compute node overview and architecture topics NGT113.0
Notes:
This section covers the IBM Flex System x240 Compute Node.
Compute node overview and architecture topics
IBM Flex System x220
Compute Node
IBM Flex System x222
Compute Node
IBM Flex System x240
Compute Node
IBM Flex System x440
Compute Node
Figure 4-22. IBM Flex System x240 Compute Node: At a glance NGT113.0
Notes:
The IBM Flex System x240 Compute Node is a high-performance server that offers outstanding
performance for virtualization with new levels of CPU performance and memory capacity, and
flexible configuration options. The x240 Compute Node is an efficient server designed to run a
broad range of workloads, armed with advanced management capabilities allowing you to manage
your physical and virtual IT resources from a single pane of glass.
IBM Flex System x240 Compute Node: At a glance
Advanced technology
Intel E5-2600 series
processors
Up to 3.3GHz, 8C, and QPI
at 8.0 GTps
LRDIMM technology
Maximize memory
Up to 768GB per node
Networking options
Integrated 10Gb Virtual
Fabric Adapter (some
models)
Multiple fabrics and speeds
to choose from
System specifications
2x Xeon E5-2600 series (2C/4C/6C/8C)
Intel QPI technology at up to 8.0 GTps
Up to 24 DDR3 DIMMs at up to 1600MHz
Up to 768GB memory capacity
IBM Active Memory including memory mirroring and
memory sparing
Two PCI-E Gen3 x16 I/O adapter slots
Integrated 10Gb Virtual Fabric Adapter (some models)
Internal USB for embedded hypervisor
IMMv2 and UEFI
Interoperability with IBM Flex System Manager Node
Figure 4-23. Product overview (1 of 3) NGT113.0
Notes:
The IBM Flex System x240 Compute Node is a high-performance server that offers outstanding
performance for virtualization with new levels of CPU performance and memory capacity, and
flexible configuration options. It is part of IBM PureFlex System, a new category of computing that
integrates multiple server architectures, networking, storage and system management capability
into a single system that is easy to deploy and manage. IBM PureFlex System has full built-in
virtualization support of servers, storage, and networking to speed provisioning and increase
resiliency. In addition, it supports open industry standards, such as operating systems, networking
and storage fabrics, virtualization, and system management protocols, to easily fit within existing
and future data center environments. IBM PureFlex System is scalable and extendable with
multigenerational upgrades to protect and maximize IT investments.
This slide summarizes some of the features of the IBM Flex System x240 Compute Node including:
Up to two Intel Xeon E5-2600 series processors providing up to 16 processor cores per system
Models include 8GB of DDR3 memory
- 12 DIMM slots are available per installed processor, for 24 DIMM slots when two processors
are installed.
Product overview (1 of 3)
Intel Xeon two-socket server
Features Intel Xeon E5-2600 series
2C, 4C, 6C, or 8C processors
Up to 20MB L3 cache
Intel QPI technology at up to 8.0
GTps
24 DIMM slots total
8GB standard (2 x 4GB DIMMs)
Up to 24 using LRDIMMS
Up to 24 using RDIMMs
Up to 16 using UDIMMs
12 DIMM sockets per processor
Maximum memory
768GB using LRDIMMs
24 x 32GB
384GB using RDIMMs
24 x 16GB
64GB using UDIMMs
16 x 4GB
Up to 1600MHz DDR3 SDRAM
memory
1600MHz up to two DIMMs per
channel (DPC)
Active Memory including memory
mirroring and memory sparing
- UDIMM, RDIMM, and LRDIMM memory types supported.
- Maximum memory capacity
768GB using LRDIMMs (24 x 32GB)
384GB using RDIMMs (24 x 16GB)
64GB using UDIMMs (16 x 4GB)
- Up to 1600MHz DDR3 SDRAM memory
- Active Memory including memory mirroring and memory sparing supported.
Figure 4-24. Product overview (2 of 3) NGT113.0
Notes:
Some additional features of the IBM Flex System x240 Compute Node include:
Two I/O slots
- Two PCI-E Gen3 x16 slots
- I/O slot 1 contains 10Gb Virtual Fabric Adapter (some models)
Power and cooling
- Supplied by Flex Chassis
- Supports IBM Active Energy Manager
Standard integrated LSI 2004 SAS storage controller
- Two 2.5-inch hot-swap SAS / SATA / SSD disk drives
- No hard drives standard
- RAID levels 0 and 1
- Maximum storage
3.2TB (2 x 1.6TB)
- ServeRAID M5115 (optional)
Supports RAID 0/1/10/5/50
- RAID 6/60 (optional)
Figure 4-25. Product overview (3 of 3) NGT113.0
Notes:
The ports on the IBM Flex System x240 Compute Node include:
One Console Breakout Cable connector (one serial, two USB ports, one video), and three USB
(one front and two internal).
Figure 4-26. Front view NGT113.0
Notes:
The front view of the IBM Flex System x240 Compute Node contains (front left to right) a USB
connector, a NMI control, the port to connect the Console Breakout Cable, two 2.5-inch SAS hard
drive bays, the Power button and Power LED, the Identify LED, the Check log LED, and the Fault
LED.
The overall physical dimensions of the x240 are: 492.7mm x 215.5mm x 55.5mm (LxWxH) and it
weighs approximately 15.6lbs when fully configured.
Front view callouts: USB port; console breakout cable port; power button/LED; hard disk drive activity LED; hard disk drive status LED; identify LED; check log LED; fault LED; NMI control.
Figure 4-27. Interior view NGT113.0
Notes:
The image highlights the components of the IBM Flex System x240 Compute Node including the
Light path diagnostics panel, two Intel Xeon E5-2600 series processors, two PCIe I/O connectors,
integrated 10Gb Virtual Fabric Adapter (some models), 24 DDR3 memory slots, and the hot-swap
backplane that connects to the front access hard drives.
Interior view callouts: hot-swap drive bay backplane; processor 2 and its 12 memory DIMMs; I/O connector 1; fabric connector; light path diagnostic panel; processor 1 and its 12 memory DIMMs; I/O connector 2; expansion connector; integrated 10Gb LOM (some models).
Figure 4-28. Compute node overview and architecture topics NGT113.0
Notes:
This section covers the IBM Flex System x440 Compute Node.
Compute node overview and architecture topics
- IBM Flex System x220 Compute Node
- IBM Flex System x222 Compute Node
- IBM Flex System x240 Compute Node
- IBM Flex System x440 Compute Node
Figure 4-29. IBM Flex System x440 Compute Node: At a glance NGT113.0
Notes:
A building block for the IBM PureFlex System family, the IBM Flex System x440 Compute Node is a
four-socket Intel Xeon processor-based server optimized for high-end virtualization, mainstream
database deployments, and memory-intensive high performance environments. It is
price-performance optimized with a wide range of processors, memory, and I/O options to help you
match system capabilities and cost to workloads without compromise. With a dense design, the
Flex System x440 Compute Node can help reduce floor space used, and lower data center power
and cooling costs.
IBM Flex System x440 Compute Node: At a glance
Advanced technology
- Intel E5-4600 series processors
- Up to 2.9GHz, 8C, and QPI at 8.0 GTps
- LRDIMM technology
Maximize memory
- Up to 1.5TB per node
Networking options
- Two integrated 10Gb Virtual Fabric Adapters (some models)
- Multiple fabrics and speeds to choose from
System specifications
- 4x Xeon E5-4600 series (4C/6C/8C)
- Intel QPI technology at up to 8.0 GTps
- Up to 48 DDR3 DIMMs at up to 1600MHz
- Up to 1.5TB memory capacity
- IBM Active Memory including memory mirroring
- Four PCI-E Gen3 x16 I/O adapter slots
- Two integrated 10Gb Virtual Fabric Adapters (some models)
- Internal USB for embedded hypervisor
- IMMv2 and UEFI
- Interoperability with IBM Flex System Manager Node
Figure 4-30. Product overview (1 of 3) NGT113.0
Notes:
The IBM Flex System x440 Compute Node is a high-performance server that offers outstanding
performance for virtualization with new levels of CPU performance and memory capacity, and
flexible configuration options. It is part of IBM PureFlex System, a category of computing that
integrates multiple server architectures, networking, storage and system management capability
into a single system that is easy to deploy and manage. IBM PureFlex System has full built-in
virtualization support of servers, storage, and networking to speed provisioning and increase
resiliency. In addition, it supports open industry standards, such as operating systems, networking
and storage fabrics, virtualization, and system management protocols, to easily fit within existing
and future data center environments. IBM PureFlex System is scalable and extendable with
multigenerational upgrades to protect and maximize IT investments.
This slide summarizes some of the features of the IBM Flex System x440 Compute Node including:
Up to four Intel Xeon E5-4600 series processors providing up to 32 processor cores per system
- Configurations with three processors are not supported
Models include 8GB of DDR3 memory
Product overview (1 of 3)
Intel Xeon four-socket server
- Features Intel Xeon E5-4600 series 4C, 6C, or 8C processors
- Up to 20MB L3 cache
- Intel QPI technology at up to 8.0 GTps
48 DIMM slots total
- 8GB standard (1 x 8GB DIMM)
- Up to 48 using LRDIMMs or RDIMMs
- UDIMMs are not supported
- 12 DIMM sockets per processor
Maximum memory
- 1.5TB using LRDIMMs (48 x 32GB)
- 768GB using RDIMMs (48 x 16GB)
Up to 1600MHz DDR3 SDRAM memory
- 1600MHz at up to two DIMMs per channel (DPC)
Active Memory including memory mirroring
- 12 DIMM slots are available per processor installed: 48 DIMM slots if all four processors are
installed.
- RDIMM and LRDIMM memory types supported. UDIMM is not supported.
- Maximum memory capacity
1.5TB using LRDIMMs (48 x 32GB)
768GB using RDIMMs (48 x 16GB)
- Up to 1600MHz DDR3 SDRAM memory
1600MHz up to two DIMMs per channel (DPC)
- Active Memory including memory mirroring supported.
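The 48-slot total and 1.5TB ceiling follow from the same slot arithmetic. A sketch (the variable names are illustrative; the slot count and module size come from the notes above):

```python
# x440 DIMM-slot and capacity arithmetic from the product overview above.
PROCESSORS = 4
SLOTS_PER_PROCESSOR = 12          # 12 DIMM slots per installed processor
LARGEST_LRDIMM_GB = 32            # largest supported LRDIMM module

total_slots = PROCESSORS * SLOTS_PER_PROCESSOR       # 48 slots total
max_memory_gb = total_slots * LARGEST_LRDIMM_GB      # 1536 GB
max_memory_tb = max_memory_gb / 1024                 # 1.5 TB

print(total_slots, "slots,", max_memory_gb, "GB =", max_memory_tb, "TB")
```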
Figure 4-31. Product overview (2 of 3) NGT113.0
Notes:
Some additional features of the IBM Flex System x440 Compute Node include:
Four I/O slots
- Four PCI-E Gen3 x16 slots
- I/O slots 1 and 3 contain 10Gb Virtual Fabric Adapter (some models)
Power and cooling
- Supplied by Flex Chassis
- Supports IBM Active Energy Manager
Standard integrated LSI 2004 SAS storage controller
- Two 2.5-inch hot-swap SAS / SATA / SSD disk drives
- No hard drives standard
- RAID levels 0 and 1
- Maximum storage
3.2TB (2 x 1.6TB)
- ServeRAID M5115 (optional)
Supports RAID 0/1/10/5/50
- RAID 6/60 (optional)
Figure 4-32. Product overview (3 of 3) NGT113.0
Notes:
The ports on the IBM Flex System x440 Compute Node include:
One Console Breakout Cable connector (one serial, two USB ports, one video), and three USB
(one front and two internal).
Figure 4-33. Front view NGT113.0
Notes:
The front view of the IBM Flex System x440 Compute Node contains (front left to right) a USB
connector, a NMI control, the port to connect the Console Breakout Cable, two 2.5-inch SAS hard
drive bays, the Power button and Power LED, the Identify LED, the Check log LED, and the Fault
LED.
The overall physical dimensions of the x440 are: 492.7mm x 453.3mm x 55.5mm (LxWxH) and it
weighs approximately 27lbs when fully configured.
Front view callouts: USB port; console breakout cable port; power button/LED; hard disk drive activity LED; hard disk drive status LED; identify LED; check log LED; fault LED; NMI control.
Figure 4-34. Interior view NGT113.0
Notes:
The image highlights the components of the IBM Flex System x440 Compute Node including the
Light path diagnostics panel, four Intel Xeon E5-4600 series processors, four PCIe I/O connectors,
integrated 10Gb Virtual Fabric Adapter (some models), 48 DDR3 memory slots, and the hot-swap
drive bays that connects to the front access hard drives.
Interior view callouts: hot-swap drive bays; I/O adapters 1 (top) to 4 (bottom); light path diagnostic panel; processors 4 and 2, each with 12 memory DIMMs; processors 3 and 1, each with 12 memory DIMMs; USB ports.
Figure 4-35. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node disk subsystem.
IBM Flex System X-Architecture compute node topics
- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
  - Standard onboard features
  - Systems management
Figure 4-36. IBM Flex System x220 Compute Node: Disk subsystem overview (1 of 2) NGT113.0
Notes:
The x220 ships standard with the ServeRAID C105 storage controller (supporting software RAID
levels 0 and 1). It supports up to two hot-swap 2.5-inch SAS, NL SAS, NL SATA, or SSD hard disk
drives. Optional controllers available are:
ServeRAID H1135 (supporting hardware RAID levels 0 and 1).
ServeRAID M5115 (supporting RAID levels 0, 1, 10, 5, and 50 with optional support of RAID 6
and 60).
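The RAID levels named above trade capacity for redundancy differently. A simplified sketch of usable capacity per level (equal-size drives, no controller overhead; the function is illustrative, not controller firmware behavior):

```python
# Usable capacity, in whole-drive units, for n equal-size drives.
# Simplified model: ignores controller metadata and hot spares.
def usable_drives(level, n):
    if level == 0:        # striping, no redundancy
        return n
    if level == 1:        # mirrored pair
        return n // 2
    if level == 10:       # striped mirrors, n even
        return n // 2
    if level == 5:        # single parity, n >= 3
        return n - 1
    if level == 6:        # double parity, n >= 4
        return n - 2
    raise ValueError("unsupported RAID level: %s" % level)

# Two 1.6TB SSDs in RAID 0 give the 3.2TB maximum quoted for these nodes:
print(usable_drives(0, 2) * 1.6)   # 3.2
```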
IBM Flex System x220 Compute Node: Disk subsystem overview (1 of 2)
Standard ServeRAID C105 SATA storage controller
- Supports software RAID levels 0 and 1
ServeRAID H1135 (optional)
- Entry level hardware RAID controller
- Supports RAID 0 and 1
ServeRAID M5115 (optional)
- Enterprise level, advanced hardware RAID controller
- Supports RAID 0/1/10/5/50
- RAID 6/60 (optional)
Supports hot-swap SAS, NL SAS, NL SATA, or SSD drives
Standard configuration
- Two 2.5-inch hot-swap drive bays, accessible from the front of the system
- No hard disks installed
Figure 4-37. IBM Flex System x220 Compute Node: Disk subsystem overview (2 of 2) NGT113.0
Notes:
The supported disk configurations and maximum storage capacity of the IBM Flex System x220
Compute Node include the following:
Two 2.5-inch HS disks
- SAS: 2.4TB (2 x 1.2TB)
- NL SAS: 2TB (2 x 1TB)
- NL SATA: 2TB (2 x 1TB)
- SSD: 3.2TB (2 x 1.6TB)
Four 1.8-inch SSDs
- 2TB (4 x 512GB)
Two 2.5-inch drive bays and four 1.8-inch SSDs
- 2.5-inch capacities as above, plus 2TB of 1.8-inch SSDs (4 x 512GB)
Eight 1.8-inch SSDs
- 4TB (8 x 512GB)
Figure 4-38. Optional disk controllers and kits NGT113.0
Notes:
The IBM Flex System x220 Compute Node optional disk controllers include:
ServeRAID H1135
Entry level hardware RAID supporting levels 0 and 1
ServeRAID M5115
Enterprise level, advanced hardware RAID supporting levels 0/1/5/10/50
- Optional RAID 6/60 and encryption
Optional kits include:
ServeRAID M5100 Series Enablement Kit for IBM Flex System x220
ServeRAID M5100 Series IBM Flash Kit for IBM Flex System x220
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220
Figure 4-39. Drive combinations: In summary NGT113.0
Notes:
The table shows the kits required for each combination of drives. For example, if you plan to install
eight 1.8-inch SSDs, you will need the M5115 controller, the Flash kit, and the SSD Expansion kit.
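The kit requirements could be captured as a lookup table for scripted configuration checks. Only the combination worked through above is filled in; the remaining rows, and the key and kit-name strings themselves, are illustrative placeholders for the table's content:

```python
# Hypothetical lookup from desired drive combination to required hardware.
# Only the combination described in the text is populated; other rows
# would come from the drive-combinations summary table.
REQUIRED_KITS = {
    "eight 1.8-inch SSDs": {
        "ServeRAID M5115 controller",
        "M5100 Series Flash Kit",
        "M5100 Series SSD Expansion Kit",
    },
}

def kits_for(combination):
    return REQUIRED_KITS[combination]

print(sorted(kits_for("eight 1.8-inch SSDs")))
```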
Figure 4-40. IBM Flex System x222 Compute Node: Disk subsystem overview (1 of 2) NGT113.0
Notes:
The x222 has two 2.5-inch simple-swap drive bays that are accessible from the front of the unit,
with one bay for each of the servers. Each server offers a 6Gbps SATA controller that is
implemented by the Intel C600 chipset. RAID functionality is not provided by the chipset and must
be implemented by the operating system.
The 2.5-inch drive bays support SATA hard disk drives (HDDs) or SATA solid-state drives (SSDs).
The x222 does not support the IBM Flex System Storage Expansion Node.
IBM Flex System x222 Compute Node: Disk subsystem overview (1 of 2)
Standard 6Gbps SATA storage controller provided by the Intel C600 chipset
- No optional storage controllers supported
- No hardware RAID support; RAID must be implemented by the installed NOS
- No IBM Flex System Storage Expansion Node support
Supports simple-swap SATA or hot-swap SSD drives
Standard configuration
- Two 2.5-inch simple-swap drive bays, one assigned to each server
- Accessible from front of system
- No hard disks installed
- Each 2.5-inch HDD bay can alternatively hold two 1.8-inch SSDs
Figure 4-41. IBM Flex System x222 Compute Node: Disk subsystem overview (2 of 2) NGT113.0
Notes:
The supported disk configurations and maximum storage capacity of each server include the
following:
One 2.5-inch simple-swap disk
- SATA: 1TB (1 x 1TB)
- SSD: 256GB (1 x 256GB)
Two 1.8-inch SSDs
- 400GB (2 x 200GB)
The storage configurations of the two servers in the x222 must be identical.
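Because the two servers' storage must match, a provisioning script might validate a proposed order before placing it. A minimal sketch (the function name and the configuration tuples are illustrative):

```python
# The x222's two servers must have identical storage configurations.
def validate_x222_storage(upper, lower):
    """Each config is a tuple of (drive_type, size_gb) entries."""
    if sorted(upper) != sorted(lower):
        raise ValueError("x222 servers must have identical storage configurations")
    return True

# Matching configurations pass:
print(validate_x222_storage(
    (("1.8-inch SSD", 200), ("1.8-inch SSD", 200)),
    (("1.8-inch SSD", 200), ("1.8-inch SSD", 200)),
))
```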
Figure 4-42. Optional disk controllers and kits NGT113.0
Notes:
The table shows the available 1.8-inch and 2.5-inch drive options along with the expansion kits
available for the x222. The x222 has two 2.5-inch simple-swap drive bays that are accessible from
the front of the unit, with one bay for each of the servers. The 2.5-inch drive bays support SATA
hard disk drives (HDDs) or SATA solid-state drives (SSDs). Each server in the x222 optionally
supports 1.8-inch SSDs by first installing the Flex System SSD Expansion Kit into the 2.5-inch bay.
This kit then provides two 1.8-inch hot-swap drive bays. The storage configurations of the two
servers in the x222 must be identical.
Optional disk controllers and kits

Part number | Feature code | Description | Maximum supported per server

1.8-inch drives and expansion kit:
00W0366 | A3HV | IBM Flex System SSD Expansion Kit (converts the 2.5-inch bay into two 1.8-inch bays) | 1
00W1120 | A3HQ | IBM 100GB SATA 1.8-inch MLC Enterprise SSD | 2
49Y6119 | A3AN | IBM 200GB SATA 1.8-inch MLC Enterprise SSD | 2

2.5-inch drives:
90Y8974 | A369 | IBM 500GB 7.2K 6Gbps SATA 2.5-inch G2 SS HDD | 1
90Y8979 | A36A | IBM 1TB 7.2K 6Gbps SATA 2.5-inch G2 SS HDD | 1
90Y8984 | A36B | IBM 128GB SATA 2.5-inch MLC Enterprise Value SSD for Flex System x222 | 1
90Y8989 | A36C | IBM 256GB SATA 2.5-inch MLC Enterprise Value SSD for Flex System x222 | 1
90Y8994 | A36D | IBM 100GB SATA 2.5-inch MLC Enterprise SSD for Flex System x222 | 1
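For scripted configuration checks, the options above can be expressed as a dictionary keyed by part number (data transcribed from the table; the helper function name is illustrative):

```python
# x222 drive and expansion-kit options, keyed by IBM part number.
# Each value is (feature code, description, maximum supported per server).
X222_DRIVE_OPTIONS = {
    "00W0366": ("A3HV", "IBM Flex System SSD Expansion Kit", 1),
    "00W1120": ("A3HQ", "IBM 100GB SATA 1.8-inch MLC Enterprise SSD", 2),
    "49Y6119": ("A3AN", "IBM 200GB SATA 1.8-inch MLC Enterprise SSD", 2),
    "90Y8974": ("A369", "IBM 500GB 7.2K 6Gbps SATA 2.5-inch G2 SS HDD", 1),
    "90Y8979": ("A36A", "IBM 1TB 7.2K 6Gbps SATA 2.5-inch G2 SS HDD", 1),
    "90Y8984": ("A36B", "IBM 128GB SATA 2.5-inch MLC Enterprise Value SSD", 1),
    "90Y8989": ("A36C", "IBM 256GB SATA 2.5-inch MLC Enterprise Value SSD", 1),
    "90Y8994": ("A36D", "IBM 100GB SATA 2.5-inch MLC Enterprise SSD", 1),
}

def max_per_server(part_number):
    return X222_DRIVE_OPTIONS[part_number][2]

print(max_per_server("00W1120"))   # 2
```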
Figure 4-43. IBM Flex System x240 Compute Node: Disk subsystem overview (1 of 2) NGT113.0
Notes:
The IBM Flex System x240 Compute Node provides internal storage flexibility with either:
Up to two 2.5-inch drive bays
Up to four 1.8-inch SSDs
Up to two 2.5-inch drive bays and up to four 1.8-inch SSDs
Up to eight 1.8-inch SSDs
The x240 ships standard with the LSI 2004 storage controller. Although the LSI 2004 chip itself
supports RAID levels 0, 1, 10, and 1E, the x240 supports only RAID levels 0 and 1 with this
standard controller.
It supports up to two hot-swap 2.5-inch SAS, NL SAS, NL SATA, or SSD hard disk drives. One
optional controller is available:
ServeRAID M5115 (supporting RAID levels 0, 1, 10, 5, and 50 with optional support of RAID 6 and
60).
Figure 4-44. IBM Flex System x240 Compute Node: Disk subsystem overview (2 of 2) NGT113.0
Notes:
The maximum storage capacity of the IBM Flex System x240 Compute Node includes the following
configurations:
Two 2.5-inch HS disks
- SAS: 2.4TB (2 x 1.2TB)
- NL SAS: 2TB (2 x 1TB)
- NL SATA: 2TB (2 x 1TB)
- SSD: 3.2TB (2 x 1.6TB)
Four 1.8-inch SSDs
- 2TB (4 x 512GB)
Two 2.5-inch drive bays and four 1.8-inch SSDs
- 2.5-inch capacities as above, plus 2TB of 1.8-inch SSDs (4 x 512GB)
Eight 1.8-inch SSDs
- 4TB (8 x 512GB)
Figure 4-45. Optional disk controllers/kits NGT113.0
Notes:
The IBM Flex System x240 Compute Node optional disk controllers include:
ServeRAID M5115
Features of the ServeRAID M5115 storage controller include:
Eight internal 6Gbps SAS/SATA ports
PCI Express 3.0 x8 host interface
6Gbps throughput per port
800MHz dual-core IBM PowerPC processor with LSI SAS2208 6Gbps RAID on Chip
(ROC) controller
Support for RAID levels 0, 1, 10, 5, 50 standard; support for RAID 6 and 60 with optional
upgrade using 90Y4411
Optional onboard 1GB data cache (DDR3 running at 1333MHz) with optional flash
backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.
Support for SAS and SATA HDDs and SSDs
Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives
in the same array (drive group) not recommended
Optional support for SSD performance acceleration with MegaRAID FastPath and SSD
caching with MegaRAID CacheCade Pro 2.0
Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per one
drive group, and up to 32 physical drives per one drive group
Support for logical unit number (LUN) sizes up to 64TB
Configurable stripe size up to 1MB
Compliant with Disk Data Format (DDF) configuration on disk (COD)
S.M.A.R.T. support
MegaRAID Storage Manager management software
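A proposed M5115 layout can be sanity-checked against the limits listed above. A minimal sketch (the key names and function are illustrative; the numeric limits come from the feature list):

```python
# ServeRAID M5115 configuration limits, from the feature list above.
M5115_LIMITS = {
    "virtual_drives": 64,             # up to 64 virtual drives
    "drive_groups": 128,              # up to 128 drive groups
    "virtual_drives_per_group": 16,   # up to 16 virtual drives per group
    "physical_drives_per_group": 32,  # up to 32 physical drives per group
    "lun_tb": 64,                     # LUN sizes up to 64TB
    "stripe_kb": 1024,                # stripe size up to 1MB
}

def within_limits(config):
    """config maps any of the keys above to a proposed value."""
    return all(config[key] <= M5115_LIMITS[key] for key in config)

print(within_limits({"virtual_drives": 8, "physical_drives_per_group": 8}))  # True
```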
The ServeRAID M5115 attaches to the I/O adapter 1 connector and can be attached even if the
Compute Node Fabric Connector is installed (used to route the Embedded 10Gb Virtual Fabric
Adapter to bays 1 and 2). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O
adapter slot 1.
Optional kits include:
ServeRAID M5100 Series Enablement Kit for IBM Flex System x240
ServeRAID M5100 Series IBM Flash Kit for IBM Flex System x240
ServeRAID M5100 Series IBM Flash Kit v2 for IBM Flex System x240
- This updated kit provides support for the latest high-performance SSDs.
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240
Figure 4-46. Drive combinations: In summary NGT113.0
Notes:
The table shows the ServeRAID M5115 hardware kits required for each combination of drives. For
example, if you plan to install eight 1.8-inch SSDs, you will need the M5115 controller, the Flash Kit,
and the SSD Expansion kit.
Figure 4-47. IBM Flex System x440 Compute Node: Disk subsystem overview (1 of 2) NGT113.0
Notes:
The IBM Flex System x440 Compute Node provides internal storage flexibility with either:
Up to two 2.5-inch drive bays
Up to four 1.8-inch SSDs
Up to two 2.5-inch drive bays and up to four 1.8-inch SSDs
Up to eight 1.8-inch SSDs
The x440 ships standard with the LSI 2004 storage controller. Although the LSI 2004 chip itself
supports RAID levels 0, 1, 10, and 1E, the x440 supports only RAID levels 0 and 1 with this
standard controller.
It supports up to two hot-swap 2.5-inch SAS, NL SAS, NL SATA, or SSD hard disk drives. One
optional controller is available:
ServeRAID M5115 (supporting RAID levels 0, 1, 10, 5, and 50 with optional support of RAID 6
and 60).
Figure 4-48. IBM Flex System x440 Compute Node: Disk subsystem overview (2 of 2) NGT113.0
Notes:
The maximum storage capacity of the IBM Flex System x440 Compute Node includes the following
configurations:
Two 2.5-inch HS disks
- SAS: 2.4TB (2 x 1.2TB)
- NL SAS: 2TB (2 x 1TB)
- NL SATA: 2TB (2 x 1TB)
- SSD: 3.2TB (2 x 1.6TB)
Four 1.8-inch SSDs
- 2TB (4 x 512GB)
Two 2.5-inch drive bays and four 1.8-inch SSDs
- 2.5-inch capacities as above, plus 2TB of 1.8-inch SSDs (4 x 512GB)
Eight 1.8-inch SSDs
- 4TB (8 x 512GB)
Figure 4-49. Optional disk controllers/kits NGT113.0
Notes:
The IBM Flex System x440 Compute Node optional disk controllers include:
ServeRAID M5115
- Features of the ServeRAID M5115 storage controller include:
Eight internal 6Gbps SAS/SATA ports
PCI Express 3.0 x8 host interface
6Gbps throughput per port
800MHz dual-core IBM PowerPC processor with LSI SAS2208 6Gbps RAID on Chip
(ROC) controller
Support for RAID levels 0, 1, 10, 5, 50 standard; support for RAID 6 and 60 with optional
upgrade using 90Y4411
Optional onboard 1GB data cache (DDR3 running at 1333MHz) with optional flash
backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
Support for SAS and SATA HDDs and SSDs
Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives
in the same array (drive group) not recommended
Optional support for SSD performance acceleration with MegaRAID FastPath and SSD
caching with MegaRAID CacheCade Pro 2.0
Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per one
drive group, and up to 32 physical drives per one drive group
Support for logical unit number (LUN) sizes up to 64TB
Configurable stripe size up to 1MB
Compliant with Disk Data Format (DDF) configuration on disk (COD)
S.M.A.R.T. support
MegaRAID Storage Manager management software
Optional kits include:
ServeRAID M5100 Series Enablement Kit for IBM Flex System x440
ServeRAID M5100 Series IBM Flex System Flash Kit for x440
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440
Figure 4-50. Drive combinations: In summary NGT113.0
Notes:
The table shows the ServeRAID M5115 hardware kits required for each combination of drives. For
example, if you plan to install eight 1.8-inch SSDs, you will need the M5115 controller, the Flash Kit,
and the SSD Expansion kit.
Figure 4-51. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers compute node storage expansion.
IBM Flex System X-Architecture compute node topics
- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
- Standard onboard features
- Systems management
Figure 4-52. IBM Flex System Storage Expansion Node (1 of 2) NGT113.0
Notes:
The x220 and x240 support the attachment of the IBM Flex System Storage Expansion Node, which
allows an additional 12 hot-swap 2.5-inch HDDs or SSDs to be attached locally to the compute
node. The Storage Expansion Node provides storage capacity for Network Attached Storage (NAS)
workloads, offering flexible storage to match capacity, performance, and reliability needs.
The Storage Expansion Node has the following features:
Connects directly to supported compute nodes via a PCIe 3.0 interface to the compute node's
interposer connector (also known as the everything-to-everything or ETE connector)
Support for 12 hot-swap 2.5-inch drives, accessible via a sliding tray
Support for 6Gbps SAS and SATA drives, both HDDs and SSDs
Based on an LSI SAS2208 6Gbps RAID on Chip (ROC) controller
Supports RAID 0, 1, 5, 10, and 50 as standard. JBOD also supported. Optional RAID 6 and 60
with a Features on Demand upgrade
Optional 512MB or 1GB cache with cache-to-flash super capacitor offload
IBM Flex System Storage Expansion Node (1 of 2)
- Locally attached storage node
  - Dedicated storage
  - Directly attached to a single half-wide compute node
- The IBM Flex System Storage Expansion Node features:
  - Connection is via a PCIe 3.0 interface to the compute node's expansion connector (everything-to-everything connector)
  - Support for 12 hot-swap 2.5-inch drives
  - Support for 6Gbps SAS and SATA drives, both HDDs and SSDs
  - Supports RAID 0/1/5/10/50
    - JBOD also supported
  - Optional RAID 6 and 60 with a Features on Demand upgrade
(Slide images: IBM Flex System Storage Expansion Node (right) connected to an IBM Flex System x220 (left); IBM Flex System Storage Expansion Node with drive tray partially extended)
Includes an expansion shelf to physically support the Storage Expansion Node and its compute
node attached together
Internal and external light path diagnostics
Optional Feature on Demand upgrades for RAID 6, 60 support and SSD performance and
caching enablers
Optional support for SSD performance acceleration and SSD caching with Features on
Demand upgrades
Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per one drive
group, and up to 32 physical drives per one drive group
Support for logical unit number (LUN) sizes up to 64TB
Configurable stripe size up to 1MB
Compliant with Disk Data Format (DDF) configuration on disk (COD)
S.M.A.R.T. support
Managed through the IMMv2 management processor on the compute node and with the
MegaRAID Storage Manager management software
The use of the Storage Expansion Node requires that the x220 and x240 compute nodes have both
processors installed.
Figure 4-53. IBM Flex System Storage Expansion Node (2 of 2) NGT113.0
Notes:
The IBM Flex System Storage Expansion Node connects to a standard-width compute node using
the interposer cable, which plugs into the expansion connector on the compute node. This link
forms a PCIe 3.0 x8 connection between Processor 2 on the compute node and the LSI RAID
controller in the Expansion Node. The result is that the compute node sees the disks in the
expansion node as locally attached. Management of the Storage Expansion Node is via the IMM2
located on the compute node.
The expansion connector in the x220 and x240 compute nodes is routed through processor 2.
Therefore, processor 2 must be installed in the compute node.
The table lists the hard drives that are supported, their drive sizes, and the maximum storage
capacity (if the largest size drive is used).
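The maximum-capacity column in the table is simply the 12 drive bays multiplied by the largest supported drive size, which a few lines of arithmetic confirm (the drive names here are shorthand labels, not official part descriptions):

```python
# Maximum capacity = 12 drive bays x largest supported drive size (TB).
BAYS = 12
largest_drive_tb = {
    "SAS 10K HDD": 1.2,          # largest of 300GB/600GB/900GB/1.2TB
    "SAS SSD Enterprise": 1.6,   # largest of 200GB/400GB/800GB/1.6TB
    "NL SATA 7.2K": 1.0,         # largest of 250GB/500GB/1TB
    "SATA SSD Enterprise": 0.8,  # largest of 200GB/400GB/800GB
}
for drive, tb in largest_drive_tb.items():
    print(f"{drive}: {BAYS * tb:.1f}TB maximum")
```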
IBM Flex System Storage Expansion Node (2 of 2)
- Connected by way of an interposer cable from the compute node ETE connector
- ETE connector signaling is routed through Processor 2 of the compute node
  - Therefore Processor 2 must be installed
(Slide image: IBM Flex System Storage Expansion Node architecture)

Supported hard drives             Supported sizes                        Maximum capacity
SAS 10K hard disk drives          300GB, 600GB, 900GB, and 1.2TB         14.4TB
SAS SSD - Enterprise              200GB, 400GB, 800GB, and 1.6TB         19.2TB
NL SATA (7.2K)                    250GB, 500GB, and 1TB                  12TB
SATA SSD - Enterprise             200GB, 400GB, and 800GB                9.6TB
SAS (10K self-encrypting drives)  1.2TB                                  14.4TB
SATA SSD - Enterprise Value       120GB, 240GB, 256GB, 480GB, and 800GB  9.6TB
Hybrid SAS (10K SSD)              600GB                                  7.2TB
Figure 4-54. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node processor subsystem.
IBM Flex System X-Architecture compute node topics
- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
- Standard onboard features
- Systems management
Figure 4-55. IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (1 of 2) NGT113.0
Notes:
The x220 and x222 compute nodes feature the Intel Romley-EN platform which incorporates the
Intel Sandy Bridge micro-architecture including the Xeon E5-2400 series processors and the Intel
C600 (Patsburg B) Platform Controller Hub (PCH). Some features of the Romley-EN platform
include:
The Xeon E5-2400 series processor has models with two, four, six, or eight cores per
processor and up to 16 threads per socket. The processors have up to 20MB of shared L3
cache and support Hyper-Threading and Turbo Boost Technology 2.0 (depending on the processor model).
The Intel C600 (Patsburg B) Platform Controller Hub (PCH).
One QuickPath Interconnect (QPI) link that runs at up to 8 GT/s
PCI Express 3.0 with 24 lanes per processor socket
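To put the lane count in perspective: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so each lane carries just under 1 GB/s in each direction. The arithmetic below is the standard PCIe calculation, not an IBM-published figure:

```python
# PCIe 3.0 per-lane and aggregate throughput (one direction).
GTPS = 8e9            # 8 GT/s raw signaling rate per lane
ENCODING = 128 / 130  # 128b/130b: 128 payload bits per 130 line bits
bytes_per_s_per_lane = GTPS * ENCODING / 8
lanes = 24            # lanes per processor socket on Romley-EN
aggregate_gb_s = lanes * bytes_per_s_per_lane / 1e9
print(f"per lane: {bytes_per_s_per_lane / 1e9:.3f} GB/s")  # ~0.985 GB/s
print(f"{lanes} lanes: {aggregate_gb_s:.1f} GB/s")         # ~23.6 GB/s
```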
IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (1 of 2)
- Intel Sandy Bridge microarchitecture
  - Based on Intel Nehalem microarchitecture
- Features of the Intel Romley-EN platform include:
  - Intel Xeon E5-2400 series processors
    - Scales from two to eight cores
  - Intel C600 Platform Controller Hub (PCH)
  - One QuickPath Interconnect (QPI) link at up to 8.0 GTps
  - PCI Express 3.0
    - 24 lanes per processor socket
Figure 4-56. IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (2 of 2) NGT113.0
Notes:
Additional features of the Romley-EN platform include:
Support for UDIMM, RDIMM, and LRDIMM memory types.
Three memory channels that support up to two DIMMs per channel (DPC).
Memory module speeds up to 1600MHz.
IBM Flex System x220 and x222 Compute Nodes: Intel Romley-EN platform (2 of 2)
- DDR3 memory
  - UDIMMs, RDIMMs, and LRDIMMs supported
  - Three memory channels
    - Up to two DIMMs per channel (DPC)
  - Speeds up to 1600MHz
Figure 4-57. x220: Processor subsystem overview NGT113.0
Notes:
The IBM Flex System x220 Compute Node features Intel Xeon E5-2400 series quad-, six-, and
eight-core processors. The table lists the models available, processor speed during normal
operation, the core count and L3 cache size per processor, QPI link speed, maximum memory
speed, and whether or not the processor supports Turbo Boost and SMT.
x220: Processor subsystem overview
The IBM Flex System x220 Compute Node features:
- Intel Xeon E5-2400 series quad-, six-, or eight-core processors
- Intel QPI at up to 8.0 GTps
- Turbo Boost Technology 2.0
- Intel SMT technology
- Intel Advanced Vector Extensions (AVX)

Processor model  Processor SKUs                 Core count / L3 cache          QPI speed     Max memory speed  Turbo Boost / SMT
Advanced         E5-2450, E5-2470               Eight cores / 20MB             8.0 GTps      1600MHz           Yes / Yes
Standard         E5-2420, E5-2430, E5-2440      Six cores / 15MB               7.2 GTps      1333MHz           Yes / Yes
Basic            E5-2403, E5-2407               Four cores / 10MB              6.4 GTps      1066MHz           No / Yes
Low power        E5-2450L, E5-2448L, E5-2430L,  Four to eight cores / 10-20MB  6.4-8.0 GTps  1333-1600MHz      Yes / Yes
                 E5-2428L, E5-2418L
Other            Intel Pentium 1403             Two cores / 5MB                N/A           1066MHz           N/A
Figure 4-58. x222: Processor subsystem overview NGT113.0
Notes:
The IBM Flex System x222 Compute Node features Intel Xeon E5-2400 series quad-, six-, and
eight-core processors. The table lists the models available, processor speed during normal
operation, the core count and L3 cache size per processor, QPI link speed, maximum memory
speed, and whether or not the processor supports Turbo Boost and SMT.
x222: Processor subsystem overview
The IBM Flex System x222 Compute Node features:
- Intel Xeon E5-2400 series quad-, six-, or eight-core processors
- Intel QPI at up to 8.0 GTps
- Turbo Boost Technology 2.0
- Intel SMT technology
- Intel Advanced Vector Extensions (AVX)

Processor model  Processor SKUs                 Core count / L3 cache          QPI speed     Max memory speed  Turbo Boost / SMT
Advanced         E5-2450, E5-2470               Eight cores / 20MB             8.0 GTps      1600MHz           Yes / Yes
Standard         E5-2420, E5-2430, E5-2440      Six cores / 15MB               7.2 GTps      1333MHz           Yes / Yes
Basic            E5-2403, E5-2407               Four cores / 10MB              6.4 GTps      1066MHz           No / Yes
Low power        E5-2450L, E5-2448L, E5-2430L,  Four to eight cores / 10-20MB  6.4-8.0 GTps  1333-1600MHz      Yes / Yes
                 E5-2428L, E5-2418L
Figure 4-59. IBM Flex System x240 Compute Node: Intel Romley-EP platform (1 of 2) NGT113.0
Notes:
The IBM Flex System x240 Compute Node type 8737 features the Intel Romley-EP platform which
incorporates the Intel Sandy Bridge micro-architecture including the Xeon E5-2600 series
processors and the Intel C600 (Patsburg B) Platform Controller Hub (PCH). Some features of the
Romley-EP platform include:
The Xeon E5-2600 series processor has models with two, four, six, or eight cores per
processor and up to 16 threads per socket. The processors have up to 20MB of shared L3
cache and support Hyper-Threading and Turbo Boost Technology 2.0 (depending on the processor model).
The Intel C600 (Patsburg B) Platform Controller Hub (PCH).
Two QuickPath Interconnect (QPI) 1.1 links that run at up to 8 GTps
PCI Express 3.0 with 40 lanes per processor socket.
IBM Flex System x240 Compute Node: Intel Romley-EP platform (1 of 2)
- Intel Sandy Bridge microarchitecture
  - Based on Intel Nehalem microarchitecture
- Features of the Intel Romley-EP platform include:
  - Intel Xeon E5-2600 series processors
    - Scales from two to eight cores
  - Intel C600 Platform Controller Hub (PCH)
  - Two QuickPath Interconnect (QPI) 1.1 links at up to 8.0 GTps
  - PCI Express 3.0
    - Forty lanes per processor socket
Figure 4-60. IBM Flex System x240 Compute Node: Intel Romley-EP platform (2 of 2) NGT113.0
Notes:
Additional features of the Romley-EP platform include:
Support for UDIMM, RDIMM, and LRDIMM memory types.
Four memory channels that support up to three DIMMs per channel (DPC).
Memory module speeds up to 1600MHz.
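Those channel and DPC figures fix both the DIMM slot count and the peak memory bandwidth of a two-socket node. The sketch below uses standard DDR3 arithmetic (DDR3-1600 moves 8 bytes per transfer per channel, i.e. PC3-12800); the totals are derived values, not IBM-published specifications:

```python
# DIMM slots and peak DDR3 bandwidth for a two-socket Romley-EP node.
SOCKETS, CHANNELS_PER_SOCKET, DPC = 2, 4, 3
dimm_slots = SOCKETS * CHANNELS_PER_SOCKET * DPC
print(dimm_slots)   # 24 DIMM slots in total

# DDR3-1600: 1600e6 transfers/s x 8 bytes per transfer, per channel.
gb_s_per_channel = 1600e6 * 8 / 1e9  # 12.8 GB/s (PC3-12800)
peak_gb_s = SOCKETS * CHANNELS_PER_SOCKET * gb_s_per_channel
print(peak_gb_s)    # 102.4 GB/s across all eight channels
```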
IBM Flex System x240 Compute Node: Intel Romley-EP platform (2 of 2)
- DDR3 memory
  - UDIMMs, RDIMMs, and LRDIMMs supported
  - Four memory channels
    - Up to three DIMMs per channel (DPC)
  - Speeds up to 1600MHz
Figure 4-61. Processor subsystem overview NGT113.0
Notes:
The IBM Flex System x240 Compute Node features Intel Xeon E5-2600 series dual-, quad-, six-,
and eight-core processors. The table lists the models available, processor speed during normal
operation, the core count and L3 cache size per processor, QPI link speed, maximum memory
speed, and whether or not the processor supports Turbo Boost and SMT.
Processor subsystem overview
The IBM Flex System x240 Compute Node features:
- Intel Xeon E5-2600 series dual-, quad-, six-, or eight-core processors
- Intel QPI at up to 8.0 GTps
- Turbo Boost Technology 2.0
- Intel SMT technology
- Intel Advanced Vector Extensions (AVX)

Processor model  Processor SKUs                       Core count / L3 cache         QPI speed     Max memory speed  Turbo Boost / SMT
Advanced         E5-2650, E5-2658, E5-2660, E5-2665,  Eight cores / 20MB            8.0 GTps      1600MHz           Yes / Yes
                 E5-2670, E5-2680, E5-2690
Standard         E5-2620, E5-2630, E5-2640            Six cores / 15MB              7.2 GTps      1333MHz           Yes / Yes
Basic            E5-2603, E5-2609                     Four cores / 10MB             6.4 GTps      1066MHz           No / Yes
Low power        E5-2650L, E5-2648L, E5-2630L         Six to eight cores / 15-20MB  7.2-8.0 GTps  1333-1600MHz      Yes / Yes
Special purpose  E5-2667, E5-2643, E5-2637            Two to eight cores / 5-15MB   6.4-8.0 GTps  1600MHz           Varies
Figure 4-62. IBM Flex System x440 Compute Node: Intel Romley-EP platform (1 of 2) NGT113.0
Notes:
The IBM Flex System x440 Compute Node type 7917 features the Intel Romley-EP platform which
incorporates the Intel Sandy Bridge micro-architecture including the Xeon E5-4600 series
processors and the Intel C600 (Patsburg B) Platform Controller Hub (PCH). Some features of the
Romley-EP platform include:
The Xeon E5-4600 series processor has models with two, four, six, or eight cores per
processor and up to 16 threads per socket. The processors have up to 20MB of shared L3
cache and support Hyper-Threading and Turbo Boost Technology 2.0 (depending on the processor model).
The Intel C600 (Patsburg B) Platform Controller Hub (PCH).
Two QuickPath Interconnect (QPI) 1.1 links that run at up to 8 GTps
PCI Express 3.0 with 40 lanes per processor socket.
IBM Flex System x440 Compute Node: Intel Romley-EP platform (1 of 2)
- Intel Sandy Bridge microarchitecture
  - Based on Intel Nehalem microarchitecture
- Features of the Intel Romley-EP platform include:
  - Intel Xeon E5-4600 series processors
    - Scales from two to eight cores
  - Intel C600 Platform Controller Hub (PCH)
  - Two QuickPath Interconnect (QPI) 1.1 links at up to 8.0 GTps
  - PCI Express 3.0
    - Forty lanes per processor socket
Figure 4-63. IBM Flex System x440 Compute Node: Intel Romley-EP platform (2 of 2) NGT113.0
Notes:
Additional features of the Romley-EP platform include:
Support for UDIMM, RDIMM, and LRDIMM memory types.
Four memory channels that support up to three DIMMs per channel (DPC).
Memory module speeds up to 1600MHz.
IBM Flex System x440 Compute Node: Intel Romley-EP platform (2 of 2)
- DDR3 memory
  - UDIMMs, RDIMMs, and LRDIMMs supported
  - Four memory channels
    - Up to three DIMMs per channel (DPC)
  - Speeds up to 1600MHz
Figure 4-64. Processor subsystem overview NGT113.0
Notes:
The IBM Flex System x440 Compute Node features Intel Xeon E5-4600 series quad-, six-, and
eight-core processors. The table lists the models available, processor speed during normal
operation, the core count and L3 cache size per processor, QPI link speed, maximum memory
speed, and whether or not the processor supports Turbo Boost and SMT.
Processor subsystem overview
The IBM Flex System x440 Compute Node features:
- Intel Xeon E5-4600 series quad-, six-, or eight-core processors
- Intel QPI at up to 8.0 GTps
- Turbo Boost Technology 2.0
- Intel SMT technology
- Intel Advanced Vector Extensions (AVX)

Processor model  Processor SKUs             Core count / L3 cache         QPI speed  Max memory speed  Turbo Boost / SMT
Advanced         E5-4650, E5-4640           Eight cores / 20MB            8.0 GTps   1600MHz           Yes / Yes
Standard         E5-4620, E5-4617, E5-4610  Six to eight cores / 15-16MB  7.2 GTps   1333MHz           Yes / Yes
Basic            E5-4607, E5-4603           Four to six cores / 10-12MB   6.4 GTps   1066MHz           No / Yes
Low power        E5-4650L                   Eight cores / 20MB            8.0 GTps   1600MHz           Yes / Yes
Figure 4-65. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node memory subsystem.
IBM Flex System X-Architecture compute node topics
- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
- Standard onboard features
- Systems management
Figure 4-66. Unbuffered DIMMs NGT113.0
Notes:
One memory type the compute nodes support is unbuffered DIMMs (UDIMMs). In contrast to
RDIMMs, which use registers to isolate the memory controller from the DRAMs, UDIMMs attach
directly to the memory controller and therefore do not introduce a delay (hence slightly better
performance). The disadvantage is limited drive capability: because of electrical loading, only a
small number of DIMMs can be connected together on the same memory channel. This results in
fewer supported DIMMs overall, fewer DIMMs per channel (DPC), and lower total system memory
capacity than RDIMM systems.
UDIMMs have the lowest latency and lowest power usage. They also have the lowest overall
capacity.
Unbuffered DIMMs
- Unbuffered DIMM (UDIMM) modules
  - Attach directly to memory controller
    - Produces slightly better performance
  - Limited capacity due to electrical loading
    - Fewer DIMMs per channel
    - Overall lower total system memory capacity
- Compute node support
  - x220: Supported
  - x222: Not supported
  - x240: Supported
  - x440: Supported
- In summary
  - Lowest latency
  - Lowest power usage
  - Lowest overall capacity
Figure 4-67. Registered DIMMs NGT113.0
Notes:
Another memory type the compute nodes support is registered DIMMs (RDIMMs), the mainstream
module solution for servers and other applications that demand heavy data throughput, high
density, and high reliability. RDIMMs use registers to isolate the
memory controller address, command, and clock signals from the DRAMs, which leads to a lighter
electrical load. Therefore, more DIMMs can be interconnected and larger memory capacity is
possible. The register does, however, typically impose a clock or more of delay, meaning that
registered DIMMs often have slightly longer access times than their unbuffered counterparts.
In general, RDIMMs have the best balance of capacity, reliability, and workload performance with
maximum performance up to 1600MHz (at 2 DPC).
Registered DIMMs
- Registered DIMM (RDIMM) modules
  - Registers isolate the DRAMs from the memory controller
    - Produces slightly longer access times
  - Lighter electrical loading
    - More DIMMs per channel
    - Greater total system memory capacity
- Compute node support
  - x220: Supported
  - x222: Supported
  - x240: Supported
  - x440: Supported
- In summary
  - Best balance of capacity, reliability, and performance
  - Larger memory capacity
  - Higher memory speeds
Figure 4-68. Load-reduced DIMMs NGT113.0
Notes:
A third memory type the compute nodes support is load-reduced DIMMs (LRDIMMs). LRDIMMs are
very similar to RDIMMs in that they use memory buffers to isolate the memory controller address,
command, and clock signals from the individual DRAMS on the DIMM. Load-reduced DIMMs take
the buffering a step further by buffering the memory controller data lines from the DRAMs also.
In essence, all signaling between the memory controller and the LRDIMM is now intercepted by the
memory buffers on the LRDIMM module. This allows additional ranks to be added to each LRDIMM
module without sacrificing signal integrity. This also means fewer actual ranks are seen by the
memory controller (for example, a 4R LRDIMM has the same look as a 2R RDIMM).
The additional buffering that LRDIMMs provide greatly reduces the electrical load on the system,
allowing the system to operate at a higher overall memory speed for a given capacity, or at a
higher overall memory capacity for a given memory speed.
LRDIMMs allow for maximum system memory capacity and the highest performance for system
memory capacities above 384GB. They are well suited for system workloads that require maximum
memory such as virtualization and databases.
Load-reduced DIMMs
- Load-reduced DIMM (LRDIMM) modules
  - Similar to RDIMMs
  - Also buffer data lines from the memory controller
  - Lighter electrical load
  - Allows additional ranks to be added to DIMM without sacrificing signal integrity
    - Example: A 4R LRDIMM has the same look as a 2R RDIMM
- Compute node support
  - x220: Supported
  - x222: Supported
  - x240: Supported
  - x440: Supported
Figure 4-69. IBM Flex System x220 Compute Node: Unbuffered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x220 Compute Node using UDIMMs are:
If one processor is installed, up to six slots can be used.
If two processors are installed, up to 12 slots can be used.
Maximum memory configurations include:
Single-rank DIMMs with one processor: 12GB (6 x 2GB). With two processors: 24GB
(12 x 2GB)
Dual-rank DIMMs with one processor: 24GB (6 x 4GB). With two processors: 48GB (12
x 4GB)
Maximum memory at maximum speed (1333MHz):
With one processor: 12GB (3 x 4GB). With two processors: 24GB (6 x 4GB).
The table shows the total memory capacity of the server with 12 DIMMs installed and varying the
rank and size of the DIMMs.
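The maximums listed above all follow from the same arithmetic: usable slots (six per processor, from three channels times two DIMMs per channel) multiplied by the DIMM size. A quick check:

```python
# x220 UDIMM maximums: capacity = processors x slots per processor x DIMM size.
SLOTS_PER_PROCESSOR = 6  # 3 memory channels x 2 DIMMs per channel

def max_memory_gb(processors, dimm_gb):
    return processors * SLOTS_PER_PROCESSOR * dimm_gb

print(max_memory_gb(1, 2))  # single-rank 2GB UDIMMs, one processor -> 12
print(max_memory_gb(2, 2))  # single-rank 2GB UDIMMs, two processors -> 24
print(max_memory_gb(2, 4))  # dual-rank 4GB UDIMMs, two processors -> 48
```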
IBM Flex System x220 Compute Node: Unbuffered DIMMs
- Valid memory configurations
  - One processor installed: up to six slots
  - Two processors installed: up to 12 slots
- Maximum configurations
  - Single-rank DIMMs
    - One processor: 12GB (6 x 2GB)
    - Two processors: 24GB (12 x 2GB)
  - Dual-rank DIMMs
    - One processor: 24GB (6 x 4GB)
    - Two processors: 48GB (12 x 4GB)
- Maximum memory at maximum speed (1333MHz)
  - One processor: 12GB (3 x 4GB)
  - Two processors: 24GB (6 x 4GB)

Specification                  Single rank: 49Y1403 (2GB)  Dual rank: 49Y1404 (4GB)
Rated speed / rated voltage    1333MHz / 1.35V             1333MHz / 1.35V
Operating voltage              1.35V or 1.5V               1.35V or 1.5V
Maximum quantity               12                          12
Largest DIMM                   2GB                         4GB
Maximum memory capacity        24GB                        48GB
Maximum memory at rated speed  12GB                        24GB
Maximum speed at 1 DPC         1333MHz                     1333MHz
Maximum speed at 2 DPC         1066MHz                     1066MHz
Figure 4-70. Registered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x220 Compute Node using RDIMMs are:
If one processor is installed, up to 6 slots can be used.
If two processors are installed, up to 12 slots can be used.
Maximum memory configurations include:
Single-rank DIMMs
- One processor: 24GB (6 x 4GB)
- Two processors: 48GB (12 x 4GB)
Dual-rank DIMMs
- One processor: 96GB (6 x 16GB)
- Two processors: 192GB (12 x 16GB)
Quad-rank DIMMs
- One processor: 96GB (6 x 16GB)
- Two processors: 192GB (12 x 16GB)
Maximum memory at maximum speed (1600MHz)
- One processor: 96GB (6 x 16GB)
- Two processors: 192GB (12 x 16GB)
The table shows the total memory capacity of the server with 12 DIMMs installed and varying the
rank and size of the DIMMs.

Registered DIMMs

Part numbers:
- Single rank: 49Y1406 (4GB, 1333MHz/1.35V); 49Y1559 (4GB, 1600MHz/1.5V)
- Dual rank: 49Y1407 (4GB), 49Y1397 (8GB), 49Y1563 (16GB), all 1333MHz/1.35V; 90Y3178 (4GB), 90Y3109 (8GB), 00D4968 (16GB), all 1600MHz/1.5V
- Quad rank: 49Y1400 (16GB, 1066MHz/1.35V)

Specification                  Single rank    Dual rank      Quad rank
Operating voltage              1.35V or 1.5V  1.35V or 1.5V  1.35V or 1.5V
Maximum quantity               12             12             12
Largest DIMM                   4GB            16GB           16GB
Maximum memory capacity        48GB           192GB          192GB
Maximum memory at rated speed  48GB           192GB          N/A
Maximum speed at 1 or 2 DPC    1333MHz*       1333MHz*       800MHz

* 1600MHz for the 1600MHz/1.5V parts.
Figure 4-71. Load-reduced DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x220 Compute Node using LRDIMMs are:
If one processor is installed, up to 6 slots can be used.
If two processors are installed, up to 12 slots can be used.
Maximum memory configurations include:
Quad-rank DIMMs
- One processor: 192GB (6 x 32GB)
- Two processors: 384GB (12 x 32GB)
The table shows the total memory capacity of the server with 12 DIMMs installed and varying the
rank and size of the DIMMs.
Load-reduced DIMMs
- Valid memory configurations
  - One processor installed: up to six slots
  - Two processors installed: up to 12 slots
- Maximum configurations
  - Quad-rank DIMMs
    - One processor: 192GB (6 x 32GB)
    - Two processors: 384GB (12 x 32GB)
- Maximum memory at maximum speed (1333MHz)
  - One processor: 96GB (3 x 32GB)
  - Two processors: 192GB (6 x 32GB)

Specification                  Quad rank: 90Y3105 (32GB)
Rated speed / rated voltage    1333MHz / 1.35V
Operating voltage              1.35V      1.5V
Maximum quantity               12         12
Largest DIMM                   32GB       32GB
Maximum memory capacity        384GB      384GB
Maximum memory at rated speed  N/A        192GB
Maximum speed at 1 DPC         1066MHz    1333MHz
Maximum speed at 2 DPC         1066MHz    1066MHz
Figure 4-72. IBM Flex System x222 Compute Node: Registered DIMMs NGT113.0
Notes:
The table shows some characteristics of the x222 memory subsystem with single- and dual-rank
RDIMMs installed. In summary:
Valid memory configurations (per server)
- One processor installed: up to six slots
- Two processors installed: up to 12 slots
Maximum configurations (per server)
- Single-rank DIMMs
One processor: 24GB (6 x 4GB)
Two processors: 48GB (12 x 4GB)
- Dual-rank DIMMs
One processor: 96GB (6 x 16GB)
Two processors: 192GB (12 x 16GB)
IBM Flex System x222 Compute Node: Registered DIMMs

Part numbers:
- Single rank: 49Y1406 (4GB, 1333MHz/1.35V); 49Y1559 (4GB, 1600MHz/1.5V)
- Dual rank: 49Y1407 (4GB), 49Y1397 (8GB), 49Y1563 (16GB), all 1333MHz/1.35V; 90Y3178 (4GB), 90Y3109 (8GB), 00D4968 (16GB), all 1600MHz/1.5V

Specification                  Single rank    Dual rank
Operating voltage              1.35V or 1.5V  1.35V or 1.5V
Maximum quantity               12             12
Largest DIMM                   4GB            16GB
Maximum memory capacity        48GB           192GB
Maximum memory at rated speed  48GB           192GB
Maximum speed at 1 or 2 DPC    1333MHz*       1333MHz*

* 1600MHz for the 1600MHz/1.5V parts.
Maximum memory at maximum speed (1600MHz)
- One processor: 96GB (6 x 16GB)
- Two processors: 192GB (12 x 16GB)
All maximum values shown in the table (maximum quantity, maximum memory capacity, and
maximum memory at rated speed) are for two processors installed. When one processor is
installed, reduce the maximum values shown by half.
Figure 4-73. Load-reduced DIMMs NGT113.0
Notes:
The table shows some characteristics of the x222 memory subsystem with quad-rank LRDIMMs
installed. In summary:
Valid memory configurations (per server)
- One processor installed: up to six slots
- Two processors installed: up to 12 slots
Maximum configurations (per server)
- Quad-rank DIMMs
One processor: 192GB (6 x 32GB)
Two processors: 384GB (12 x 32GB)
Maximum memory at maximum speed (1333MHz)
- One processor: 96GB (3 x 32GB)
- Two processors: 192GB (6 x 32GB)
Load-reduced DIMMs

Valid memory configurations (per server)
- One processor installed: up to six slots
- Two processors installed: up to 12 slots
Maximum configurations
- Quad-rank DIMMs
  One processor: 192 GB (6 x 32 GB)
  Two processors: 384 GB (12 x 32 GB)
Maximum memory at maximum speed (1333 MHz)
- One processor: 96 GB (3 x 32 GB)
- Two processors: 192 GB (6 x 32 GB)

Specification                LRDIMMs (quad rank; 90Y3105 32GB; rated 1333MHz / 1.35V)
Operating voltage            1.35V            1.5V
Maximum quantity             12               12
Largest DIMM                 32GB             32GB
Maximum memory capacity      384GB            384GB
Maximum memory, rated speed  N/A              192GB
Maximum speed at 1 DPC       1066MHz          1333MHz
Maximum speed at 2 DPC       1066MHz          1066MHz
All maximum values shown in the table (maximum quantity, maximum memory capacity, and
maximum memory at rated speed) are for two processors installed. When one processor is
installed, reduce the maximum values shown by half.
At an operating voltage of 1.35V, the maximum operating speed of the memory is 1066MHz at
either 1 DPC or 2 DPC. Since this is below the rated speed of 1333MHz, the maximum memory at
rated speed is shown as N/A. At an operating voltage of 1.5V, the memory can operate at the
maximum rated speed, 1333MHz, but at 1 DPC only. Hence the maximum memory at rated speed
is 192GB (not 384GB).
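The N/A and 192GB entries follow mechanically from the operating-speed rows. A sketch of that reasoning (the speed values are copied from the table; the six-channel count is an assumption consistent with the "6 x 32GB at rated speed" figure above, and the function name is illustrative):

```python
# Quad-rank LRDIMM operating speed (MHz) by (operating voltage, DIMMs per
# channel), copied from the table above.
SPEED = {
    (1.35, 1): 1066, (1.35, 2): 1066,
    (1.5, 1): 1333,  (1.5, 2): 1066,
}
RATED_MHZ = 1333
DIMM_GB = 32
CHANNELS = 6  # assumption: two processors, three memory channels each

def max_memory_at_rated_speed(voltage):
    """Largest configuration that still runs at the rated 1333MHz, or None (N/A)."""
    dpcs = [dpc for (v, dpc), mhz in SPEED.items()
            if v == voltage and mhz == RATED_MHZ]
    return max(dpcs) * CHANNELS * DIMM_GB if dpcs else None

assert max_memory_at_rated_speed(1.5) == 192    # 1 DPC only: 6 x 32GB
assert max_memory_at_rated_speed(1.35) is None  # never reaches 1333MHz
```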
Figure 4-74. IBM Flex System x240 Compute Node: Unbuffered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x240 Compute Node using UDIMMs are:
If one processor is installed, up to eight slots can be used - slots 3, 6, 7, and 10 are not
supported.
If two processors are installed, up to 16 slots can be used - slots 3, 6, 7, 10, 15, 18, 19, and 22
are not used.
Maximum memory configurations include:
Dual-rank DIMMs with one processor: 32GB (8 x 4GB). With two processors: 64GB (16 x 4GB)
The table shows the total memory capacity of the server with 16 DIMMs installed and varying the
rank and size of the DIMMs.
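The slot restrictions above can be captured as a quick check. A minimal sketch (the slot numbers are the ones listed in the notes; the helper name is illustrative):

```python
# x240 UDIMM population: these DIMM slots are never used with UDIMMs.
UNSUPPORTED_SLOTS = {
    1: {3, 6, 7, 10},
    2: {3, 6, 7, 10, 15, 18, 19, 22},
}
SLOTS_WIRED = {1: 12, 2: 24}  # 12 DIMM slots per installed processor

def usable_udimm_slots(processors):
    """Number of DIMM slots actually usable with UDIMMs."""
    return SLOTS_WIRED[processors] - len(UNSUPPORTED_SLOTS[processors])

assert usable_udimm_slots(1) == 8    # 8 x 4GB = 32GB maximum
assert usable_udimm_slots(2) == 16   # 16 x 4GB = 64GB maximum
```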
IBM Flex System x240 Compute Node: Unbuffered DIMMs

Valid memory configurations
- One processor installed: up to eight slots (DIMM slots 3, 6, 7, and 10 not used)
- Two processors installed: up to 16 slots (DIMM slots 3, 6, 7, 10, 15, 18, 19, and 22 not used)
Maximum configurations
- Dual-rank DIMMs
  One processor: 32 GB (8 x 4 GB)
  Two processors: 64 GB (16 x 4 GB)
Maximum memory at maximum speed (1333 MHz)
- One processor: 32 GB (8 x 4 GB)
- Two processors: 64 GB (16 x 4 GB)

Specification                UDIMMs (dual rank; 49Y1404 4GB; rated 1333MHz / 1.35V)
Operating voltage            1.35V            1.5V
Maximum quantity             16               16
Largest DIMM                 4GB              4GB
Maximum memory capacity      64GB             64GB
Maximum memory, rated speed  64GB             64GB
Maximum speed at 1 DPC       1333MHz          1333MHz
Maximum speed at 2 DPC       1333MHz          1333MHz
Maximum speed at 3 DPC       Not supported    Not supported
Figure 4-75. Registered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x240 Compute Node using RDIMMs are:
If one processor is installed, up to 12 slots can be used. However, if quad-rank DIMMs are used,
only 8 slots can be populated.
If two processors are installed, up to 24 slots can be used. However, if quad-rank DIMMs are
used, only 16 slots can be populated.
Maximum memory configurations include:
- Single-rank DIMMs
One processor: 48GB (12 x 4GB)
Two processors: 96GB (24 x 4GB)
- Dual-rank DIMMs
One processor: 192GB (12 x 16GB)
Two processors: 384GB (24 x 16GB)
- Quad-rank DIMMs
Registered DIMMs

Single-rank RDIMMs: 49Y1405 (2GB) and 49Y1406 (4GB) rated 1333MHz / 1.35V;
                    49Y1559 (4GB) rated 1600MHz / 1.5V
Operating voltage            1.35V            1.5V             1.5V (1600MHz)
Maximum quantity             16               24               24
Largest DIMM                 4GB              4GB              4GB
Maximum memory capacity      64GB             96GB             96GB
Maximum memory, rated speed  64GB             64GB             64GB
Maximum speed at 1 DPC       1333MHz          1333MHz          1600MHz
Maximum speed at 2 DPC       1333MHz          1333MHz          1600MHz
Maximum speed at 3 DPC       Not supported    1066MHz          1066MHz

Dual-rank RDIMMs: 49Y1407 (4GB), 49Y1397 (8GB), 49Y1563 (16GB) rated 1333MHz / 1.35V;
                  90Y3178 (4GB), 90Y3109 (8GB), 00D4968 (16GB) rated 1600MHz / 1.5V
Operating voltage            1.35V            1.5V             1.5V (1600MHz)
Maximum quantity             16               24               24
Largest DIMM                 16GB             16GB             16GB
Maximum memory capacity      256GB            384GB            384GB
Maximum memory, rated speed  256GB            256GB            256GB
Maximum speed at 1 DPC       1333MHz          1333MHz          1600MHz
Maximum speed at 2 DPC       1333MHz          1333MHz          1600MHz
Maximum speed at 3 DPC       Not supported    1066MHz          1066MHz

Quad-rank RDIMMs: 49Y1400 (16GB) rated 1066MHz / 1.35V
Operating voltage            1.35V            1.5V
Maximum quantity             8                16
Largest DIMM                 16GB             16GB
Maximum memory capacity      128GB            256GB
Maximum memory, rated speed  N/A              128GB
Maximum speed at 1 DPC       800MHz           1066MHz
Maximum speed at 2 DPC       Not supported    800MHz
Maximum speed at 3 DPC       Not supported    Not supported
One processor: 128GB (8 x 16GB)
Two processors: 256GB (16 x 16GB)
The table shows the total memory capacity of the server with 8, 16, and 24 DIMMs installed and
varying the rank and size of the DIMMs.
Figure 4-76. Load-reduced DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x240 Compute Node using LRDIMMs are:
If one processor is installed, up to 12 slots can be used.
If two processors are installed, up to 24 slots can be used.
Maximum memory configurations include:
Quad-rank DIMMs
- One processor: 384GB (12 x 32GB)
- Two processors: 768GB (24 x 32GB)
The table shows the total memory capacity of the server with 24 DIMMs installed and varying the
rank and size of the DIMMs.
Load-reduced DIMMs

Valid memory configurations
- One processor installed: up to 12 slots
- Two processors installed: up to 24 slots
Maximum configurations
- Quad-rank DIMMs
  One processor: 384 GB (12 x 32 GB)
  Two processors: 768 GB (24 x 32 GB)
Maximum memory at maximum speed (1333 MHz)
- One processor: 256 GB (8 x 32 GB)
- Two processors: 512 GB (16 x 32 GB)

Specification                LRDIMMs (quad rank; 49Y1567 16GB, 90Y3105 32GB; rated 1333MHz / 1.35V)
Operating voltage            1.35V            1.5V
Maximum quantity             24               24
Largest DIMM                 32GB             32GB
Maximum memory capacity      768GB            768GB
Maximum memory, rated speed  N/A              512GB
Maximum speed at 1 DPC       1066MHz          1333MHz
Maximum speed at 2 DPC       1066MHz          1333MHz
Maximum speed at 3 DPC       1066MHz          1066MHz
Figure 4-77. IBM Flex System x440 Compute Node: Unbuffered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x440 Compute Node using UDIMMs are:
If one processor is installed, up to eight slots can be used.
If two processors are installed, up to 16 slots can be used.
If four processors are installed, up to 32 slots can be used.
Maximum memory configurations include:
Dual-rank DIMMs:
- With one processor: 32GB (8 x 4GB).
- With two processors: 64GB (16 x 4GB).
- With four processors: 128GB (32 x 4GB)
Maximum memory at maximum speed (1333MHz):
- With one processor: 32GB (8 x 4GB).
- With two processors: 64GB (16 x 4GB).
- With four processors: 128GB (32 x 4GB).
The table shows the total memory capacity of the server with 32 DIMMs installed and varying the
rank and size of the DIMMs.
IBM Flex System x440 Compute Node: Unbuffered DIMMs

Valid memory configurations
- One processor installed: up to eight slots
- Two processors installed: up to 16 slots
- Four processors installed: up to 32 slots
Maximum configurations
- Dual-rank DIMMs
  One processor: 32 GB (8 x 4 GB)
  Two processors: 64 GB (16 x 4 GB)
  Four processors: 128 GB (32 x 4 GB)
Maximum memory at maximum speed (1333 MHz)
- One processor: 32 GB (8 x 4 GB)
- Two processors: 64 GB (16 x 4 GB)
- Four processors: 128 GB (32 x 4 GB)

Specification                UDIMMs (dual rank; 49Y1404 4GB; rated 1333MHz / 1.35V)
Operating voltage            1.35V
Maximum quantity             32
Largest DIMM                 4GB
Maximum memory capacity      128GB
Maximum memory, rated speed  128GB
Maximum speed at 1 DPC       1333MHz
Maximum speed at 2 DPC       1333MHz
Maximum speed at 3 DPC       Not supported
Figure 4-78. Registered DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x440 Compute Node using RDIMMs are:
If one processor is installed, up to 12 slots can be used.
If two processors are installed, up to 24 slots can be used.
If all four processors are installed, all 48 slots can be used.
Maximum memory configurations include:
- Single-rank DIMMs
One processor: 48GB (12 x 4GB)
Two processors: 96GB (24 x 4GB)
Four processors: 192GB (48 x 4GB)
- Dual-rank DIMMs
One processor: 192GB (12 x 16GB)
Two processors: 384GB (24 x 16GB)
Four processors: 768GB (48 x 16GB)
The table shows the total memory capacity of the server with 48 DIMMs installed and varying the
rank and size of the DIMMs.
Registered DIMMs

                             Single rank                      Dual rank
Specification                1333MHz / 1.35V  1600MHz / 1.5V  1333MHz / 1.35V  1600MHz / 1.5V
Part numbers                 49Y1406 (4GB)    49Y1559 (4GB)   49Y1407 (4GB)    90Y3109 (8GB)
                                                              49Y1397 (8GB)    00D4968 (16GB)
                                                              49Y1563 (16GB)
Operating voltage            1.35V            1.5V            1.35V            1.5V
Maximum quantity             48               48              48               48
Largest DIMM                 4GB              4GB             16GB             16GB
Maximum memory capacity      192GB            192GB           768GB            768GB
Maximum memory, rated speed  128GB            128GB           512GB            512GB
Maximum speed at 1 DPC       1333MHz          1600MHz         1333MHz          1600MHz
Maximum speed at 2 DPC       1333MHz          1600MHz         1333MHz          1600MHz
Maximum speed at 3 DPC       1066MHz (1.5V)   1066MHz         1066MHz (1.5V)   1066MHz
Figure 4-79. Load-reduced DIMMs NGT113.0
Notes:
Valid memory configurations of the IBM Flex System x440 Compute Node using LRDIMMs are:
If one processor is installed, up to 12 slots can be used.
If two processors are installed, up to 24 slots can be used.
If four processors are installed, up to 48 slots can be used.
Maximum memory configurations include:
- Quad-rank DIMMs
One processor: 384GB (12 x 32GB)
Two processors: 768GB (24 x 32GB)
Four processors: 1.5TB (48 x 32GB)
The table shows the total memory capacity of the server with 48 DIMMs installed and varying the
rank and size of the DIMMs.
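The headline 1.5TB figure is simply all 48 slots filled with 32GB LRDIMMs; a one-liner confirms the arithmetic and the unit conversion (12 slots per processor, as listed above; the function name is illustrative):

```python
SLOTS_PER_PROCESSOR = 12  # x440: 12 DIMM slots per installed processor
DIMM_GB = 32              # largest quad-rank LRDIMM

def x440_lrdimm_max_gb(processors):
    """Maximum x440 memory when every slot holds a 32GB LRDIMM."""
    return processors * SLOTS_PER_PROCESSOR * DIMM_GB

assert x440_lrdimm_max_gb(1) == 384
assert x440_lrdimm_max_gb(2) == 768
assert x440_lrdimm_max_gb(4) == 1536        # 1536GB = 1.5TB
assert x440_lrdimm_max_gb(4) / 1024 == 1.5  # binary TB conversion
```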
Load-reduced DIMMs

Valid memory configurations
- One processor installed: up to 12 slots
- Two processors installed: up to 24 slots
- Four processors installed: up to 48 slots
Maximum configurations
- Quad-rank DIMMs
  One processor: 384 GB (12 x 32 GB)
  Two processors: 768 GB (24 x 32 GB)
  Four processors: 1.5 TB (48 x 32 GB)
Maximum memory at maximum speed (1333 MHz)
- One processor: 256 GB (8 x 32 GB)
- Two processors: 512 GB (16 x 32 GB)
- Four processors: 1 TB (32 x 32 GB)

Specification                LRDIMMs (quad rank; 49Y1567 16GB, 90Y3105 32GB; rated 1333MHz / 1.35V)
Operating voltage            1.35V
Maximum quantity             48
Largest DIMM                 32GB
Maximum memory capacity      1.5TB
Maximum memory, rated speed  1TB
Maximum speed at 1 DPC       1333MHz (1.5V)
Maximum speed at 2 DPC       1333MHz (1.5V)
Maximum speed at 3 DPC       1066MHz
Figure 4-80. Memory modes (1 of 2) NGT113.0
Notes:
The compute nodes support three memory modes:
Independent channel mode
Rank-sparing mode
Mirrored-channel mode
These modes can be selected in the UEFI setup.
Independent channel mode
This is the default mode for DIMM population. DIMMs are populated starting with the last DIMM
connector on each channel, then one DIMM per channel, distributed equally across the channels
and processors. In this memory mode, the operating system uses the full amount of memory
installed, and no redundancy is provided.
Memory DIMMs must be installed in the correct order, starting with the last physical DIMM
socket of each channel. The DIMMs can be installed without matching sizes, but this is not
recommended because it could affect optimal memory performance.
Rank-sparing mode
Memory modes (1 of 2)

The compute nodes support three memory modes:
- Independent channel mode
  - Default mode for DIMM population
  - Install last DIMM connector on channel first
  - One DIMM per channel across all channels and processors
  - Full amount of memory installed is available
  - No redundancy
- Rank-sparing mode
  - One DIMM rank is held in reserve as a spare of the other ranks on the same channel
  - Not used or counted as active memory
  - If an error occurs:
    - Failed rank contents copied to spare rank
    - Failed rank taken offline
    - Spare rank put online
In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the
same channel. The spare rank is held in reserve and is not used as active memory. The spare
rank must have identical or larger memory capacity than all the other active memory ranks on
the same channel. After an error threshold is surpassed, the contents of that rank are copied to
the spare rank. The failed rank of memory is taken offline, and the spare rank is put online and
used as active memory in place of the failed rank.
The memory DIMM installation sequence when using rank-sparing mode is identical to independent
channel mode.
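The failover sequence described above can be sketched as a toy state transition. This is purely illustrative (the class and method names are invented); the real mechanism lives in the memory controller, not in software:

```python
class SparedChannel:
    """Toy model of rank sparing on one memory channel."""

    def __init__(self, active_ranks, spare_rank_id):
        self.active = dict(active_ranks)  # rank id -> contents
        self.spare_id = spare_rank_id     # reserved, not active memory

    def rank_failed(self, rank_id):
        # Error threshold surpassed: copy the failing rank's contents to the
        # spare rank, take the failed rank offline, bring the spare online.
        contents = self.active.pop(rank_id)
        self.active[self.spare_id] = contents
        self.spare_id = None  # no spare remains after one failover

ch = SparedChannel({1: "data-A", 2: "data-B"}, spare_rank_id=3)
ch.rank_failed(1)
assert ch.active == {2: "data-B", 3: "data-A"}
```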
Figure 4-81. Memory modes (2 of 2) NGT113.0
Notes:
The third memory mode the compute nodes support is mirrored-channel mode.
Mirrored-channel mode
In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in
capacity, type, and rank count. The channels are grouped in pairs with each channel in the
same group receiving the same data. One channel is used as a backup of the other, which
provides redundancy. The memory contents on channel 0 are duplicated in channel 1, and the
memory contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and
channel 1 must be the same size and type. The DIMMs in channel 2 and channel 3 must be the
same size and type. The effective memory that is available to the system is only half of what is
installed.
Because memory mirroring is handled in hardware, it is operating system-independent.
The figure shows the E5-2600 series processor with the four memory channels and which channels
are mirrored when operating in mirrored-channel mode.
In a two processor configuration, memory must be identical across the two processors to enable the
memory mirroring feature.
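The pairing rules above reduce to a validity check plus a halving. A sketch with illustrative names (channel pairing 0/1 and 2/3 per the text):

```python
MIRROR_PAIRS = ((0, 1), (2, 3))  # channel 0 backed by 1, channel 2 by 3

def effective_mirrored_gb(channel_gb):
    """Usable memory in mirrored-channel mode; channel_gb maps channel -> GB."""
    for primary, backup in MIRROR_PAIRS:
        if channel_gb.get(primary, 0) != channel_gb.get(backup, 0):
            raise ValueError("DIMMs in mirrored channels must match")
    # The backup copy is not visible to the operating system.
    return sum(channel_gb.values()) // 2

# 64GB installed across four channels -> 32GB of usable, mirrored memory.
assert effective_mirrored_gb({0: 16, 1: 16, 2: 16, 3: 16}) == 32
```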
Memory modes (2 of 2)

- Mirrored-channel mode
  - Memory installed in pairs
    - Must be identical DIMMs
  - Memory channels grouped in pairs
  - One channel used as backup of the other
    - Channel 0 mirrored to channel 1
    - Channel 2 mirrored to channel 3
  - Effective memory is half of installed memory
Figure 4-82. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node network subsystem.
IBM Flex System X-Architecture compute node topics

- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
  - Standard onboard features
  - Systems management
Figure 4-83. Network subsystem overview (1 of 3) NGT113.0
Notes:
Some models of the IBM Flex System x220 Compute Node include an Embedded 1Gb Ethernet
controller (or LOM) built into the system board. Each x220 model that includes the LOM will have a
90 degree I/O connector (Compute Node Fabric Connector) installed in I/O connector 1 (and
physically screwed onto the system board) to provide connectivity to the Enterprise Chassis
midplane.
The Embedded 1Gb Ethernet controller has the following features:
Broadcom BCM5718 based
Dual-port Gigabit Ethernet controller
PCIe 2.0 x2 host bus interface
Supports Wake on LAN
Supports Serial over LAN
Supports IPv6
TCP/IP Offload Engine (TOE) is not supported.
Network subsystem overview (1 of 3)

Some models of the x220 include:
- Embedded 1 Gb Ethernet controller
  - LAN on Motherboard (LOM) adapter
  - Connected to midplane using Compute Node Fabric Connector
  - Uses I/O connector 1
Important note
- x220 models without the Embedded 1Gb Ethernet controller have no other Ethernet
  connections to midplane
- An I/O adapter must be installed in I/O connector 1 or I/O connector 2 to provide
  network connectivity
(Graphics: Compute Node Fabric Connector; location of Compute Node Fabric Connector)
Models without the Embedded 1Gb Ethernet controller do not include any other Ethernet
connections to the Enterprise Chassis midplane. Therefore, for those models, an I/O adapter must
be installed in either I/O connector 1 or I/O connector 2 to provide network connectivity between the
server and the chassis midplane and ultimately to the network switches.
Figure 4-84. Network subsystem overview (2 of 3) NGT113.0
Notes:
All models of the IBM Flex System x222 Compute Node include an Embedded 10Gb Virtual Fabric
Adapter (VFA) Ethernet LAN on Motherboard (or LOM) built into the system board. The lower
server will have a 90 degree I/O connector (Compute Node Fabric Connector) connected to I/O
connector 1 (and physically screwed onto the system board) to provide connectivity to the
Enterprise Chassis midplane.
The Embedded 10Gb VFA is based on the Emulex BladeEngine 3 (BE3), which is a single-chip,
dual-port 10 Gigabit Ethernet (10GbE) Ethernet Controller. These are some of the features of the
Embedded 10Gb VFA:
PCI-Express Gen2 x8 host bus interface
Supports multiple virtual NIC (vNIC) functions
TCP/IP Offload Engine (TOE enabled)
SR-IOV capable
RDMA over TCP/IP capable
iSCSI and FCoE upgrade offering through FoD
Network subsystem overview (2 of 3)

All models of the x222 include:
- Embedded 10 Gb Virtual Fabric Adapter (VFA)
  - LAN on Motherboard (LOM) adapter
  - Connected to midplane using Compute Node Fabric Connector on lower server
  - Uses I/O connector 1
Important note
- Embedded 10 Gb Virtual Fabric Adapter is shared between upper and lower servers
  - Two ports per server
  - Routed to switch bays 1 and 2
- Both switches require Features on Demand (FoD) Upgrade 1 enabled
  - If not, upper server will not have Ethernet connectivity
(Graphics: location of Compute Node Fabric Connector; VFA network connections to
switch bays 1 and 2)
- Two FoD licenses are required, one for each server in the x222 compute node.
You must have Upgrade 1 enabled in the two switches. Without this feature upgrade, the upper
server will not have any Ethernet connectivity.
Figure 4-85. Network subsystem overview (3 of 3) NGT113.0
Notes:
Some models of the IBM Flex System x240 and x440 Compute Nodes include an Embedded 10Gb
Virtual Fabric Adapter (or LOM) built into the system board. Each x240 or x440 model that includes
the LOM will have a 90 degree I/O connector (Compute Node Fabric Connector) installed in I/O
connector 1 (x240) or I/O connectors 1 and 3 (x440), physically screwed onto the system board,
to provide connectivity to the Enterprise Chassis midplane.
The Embedded 10Gb Virtual Fabric Adapter is based on the Emulex BladeEngine 3 (BE3), which is
a single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller. Some of the features of
the Embedded 10Gb Virtual Fabric Adapter include:
PCI-Express Gen2 x8 host bus interface
Supports multiple virtual NIC (vNIC) functions
TCP/IP Offload Engine (TOE enabled)
SR-IOV capable
RDMA over TCP/IP capable
iSCSI and FCoE upgrade offering through Features on Demand (FoD)
Network subsystem overview (3 of 3)

Some models of the x240 and x440 include:
- Embedded 10 Gb Virtual Fabric Adapter
  - LAN on Motherboard (LOM) adapter
  - Connected to midplane using Compute Node Fabric Connector
    - x240: uses I/O connector 1
    - x440: uses I/O connectors 1 and 3
Important note
- Compute nodes without the embedded 10 Gb Virtual Fabric Adapter have no other
  Ethernet connections to midplane
- I/O adapters must be installed in I/O connectors to provide network connectivity
(Graphic: location of compute node fabric connector)
Compute nodes without the Embedded 10Gb Virtual Fabric Adapter do not include any other
Ethernet connections to the Enterprise Chassis midplane. Therefore, for those models, an I/O
adapter must be installed in an I/O connector to provide network connectivity between the server
and the chassis midplane and ultimately to the network switches.
Figure 4-86. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node I/O expansion options.
IBM Flex System X-Architecture compute node topics

- Compute node overview and architecture
- Server subsystems
  - Disk subsystem
  - Storage expansion
  - Processor subsystem
  - Memory subsystem
  - Network subsystem
  - I/O expansion
  - Standard onboard features
  - Systems management
Figure 4-87. I/O expansion overview (1 of 3) NGT113.0
Notes:
The x220 and x240 compute nodes have two I/O expansion connectors for attaching I/O adapter
cards. There is also another expansion connector designed for future expansion options. The I/O
expansion connectors are very high-density 216-pin Molex PCIe connectors. Installing I/O
adapter cards allows the x220 and x240 to connect with switch modules in the IBM Flex System
Enterprise Chassis.
Any supported I/O adapter card can be installed in either I/O connector; however, the
configuration must be consistent across all compute nodes within the chassis.
The graphic shows the location of both I/O connectors and the location of the Compute Node Fabric
Connector.
I/O expansion overview (1 of 3)

The x220 and x240 include two I/O expansion connectors:
- Very high-density 216-pin Molex PCIe connectors
- Ethernet, Fibre Channel, and InfiniBand adapter cards available
- Any supported I/O adapter card can be installed in either I/O connector
  - Must be consistent across all compute nodes and within chassis
- If Embedded LOM installed, cannot use I/O connector 1
(Graphics: location of I/O connectors; location of Compute Node Fabric Connector)
Figure 4-88. I/O expansion overview (2 of 3) NGT113.0
Notes:
In addition to the Embedded 10GbE VFAs on each server, the x222 supports one additional I/O
adapter that is shared between the two servers and is routed to the I/O Modules that are installed in
bays 3 and 4 of the chassis. The shared I/O adapter is mounted in the lower server, as shown in the
figures. The adapter has two host interfaces, one on either side, for connecting to the servers. Each
host interface is PCI Express 3.0 x16. Adapters are shared between the two servers, with half the
ports routing to each server. Fibre Channel and InfiniBand adapter cards are available in a special
form factor for the x222.
The graphics show the underside of the new form factor I/O adapter cards used in the x222 and the
location of I/O connector 2 on the lower server (with I/O adapter 2 removed and installed).
The x222 does not support the IBM Flex System PCIe Expansion Node.
I/O expansion overview (2 of 3)

The x222 includes one additional I/O expansion connector:
- Very high-density 216-pin Molex PCIe connectors on underside (lower server
  connection) and topside (upper server connection)
- Fibre Channel and InfiniBand adapter cards available
  - Special form factor for x222
- Adapters are shared between the two servers, with half the ports routing to each server
- No IBM Flex System PCIe Expansion Node support
(Graphics: special form factor I/O adapter, underside; location of I/O adapter 2,
not installed and installed)
Figure 4-89. I/O expansion overview (3 of 3) NGT113.0
Notes:
The IBM Flex System x440 Compute Node has four I/O expansion connectors for attaching I/O
adapter cards. There is also another expansion connector designed for future expansion options.
The I/O expansion connectors are very high-density 216-pin Molex PCIe connectors. Installing
I/O adapter cards allows the x440 to connect with switch modules in the IBM Flex System
Enterprise Chassis.
Any supported I/O adapter card can be installed in any I/O connector; however, the
configuration must be consistent across all compute nodes within the chassis.
The graphic shows the location of all four I/O connectors.
I/O expansion overview (3 of 3)

The x440 includes four I/O expansion connectors:
- Very high-density 216-pin Molex PCIe connectors
- Ethernet, Fibre Channel, and InfiniBand adapter cards available
- Any supported I/O adapter card can be installed in either I/O connector
  - Must be consistent across all compute nodes and within chassis
- If embedded 10Gb Virtual Fabric Adapters installed, cannot use I/O connectors 1 and 3
(Graphic: location of I/O connectors, one (top) to four (bottom))
Figure 4-90. IBM Flex System PCIe Expansion Node (1 of 2) NGT113.0
Notes:
The IBM Flex System PCIe Expansion Node provides the ability to attach additional PCI Express
cards, such as High IOPS SSD adapters, fabric mezzanine cards, and next-generation graphics
processing units (GPU), to supported IBM Flex System compute nodes. This capability is ideal for
many applications that require high performance I/O, special telecommunications network
interfaces, or hardware acceleration using a PCI Express GPU card. The PCIe Expansion Node
supports up to four PCIe adapters and two additional Flex System I/O expansion adapters.
IBM Flex System PCIe Expansion Node (1 of 2)

Locally attached PCIe node:
- Additional I/O expansion adapters or additional PCIe adapters
- Directly attached to a single half-wide compute node
The IBM Flex System PCIe Expansion Node supports:
- Up to two I/O expansion adapters
- Up to two full-height full-length or full-height half-length PCIe adapters
- Up to two low-profile PCIe adapters
(Graphics: IBM Flex System PCIe Expansion Node; PCIe Expansion Node (right)
connected to an IBM Flex System x240 (left))
Figure 4-91. IBM Flex System PCIe Expansion Node (2 of 2) NGT113.0
Notes:
The PCIe Expansion Node connects to a standard-width compute node using the interposer cable
which plugs into the expansion connector and interposer connector on the compute node and PCIe
Expansion Node respectively. This link forms a PCIe 2.0 x16 connection between the compute
node and the PCIe switch in the PCIe Expansion Node. The PCIe switch has connections to the six
PCIe slots in the Expansion Node:
PCIe 2.0 x16 connections to the two full-length full-height PCIe slots
PCIe 2.0 x8 connections to the two low-profile PCIe slots
PCIe 2.0 x16 connections to the two Flex System I/O expansion slots (labeled I/O 3 and I/O 4 in
the graphic)
In compute nodes, such as the x220 and x240, I/O expansion slot 1 and 2 in the server operate at
PCIe 3.0 speeds. However, I/O expansion slots 3 and 4 in the PCIe Expansion Node (and also the
four standard PCIe slots) operate at PCIe 2.0 speeds.
The expansion connector in the x220 and x240 compute nodes is routed through processor 2.
Therefore, processor 2 must be installed in the compute node.
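The lane widths above set the bandwidth available to each slot, and the single x16 host link caps the aggregate. A rough calculation (the ~500MB/s-per-lane figure for PCIe 2.0 after 8b/10b encoding is a general rule of thumb, not from this text; slot names are illustrative):

```python
MB_PER_LANE = 500  # approx. usable PCIe 2.0 bandwidth per lane, per direction

# Slot widths in the PCIe Expansion Node, per the notes above.
SLOT_LANES = {
    "full-height slot 1": 16, "full-height slot 2": 16,
    "low-profile slot 1": 8,  "low-profile slot 2": 8,
    "I/O expansion 3": 16,    "I/O expansion 4": 16,
}

def slot_bandwidth_gbs(slot):
    """Approximate per-direction bandwidth of one slot, in GB/s."""
    return SLOT_LANES[slot] * MB_PER_LANE / 1000

assert slot_bandwidth_gbs("full-height slot 1") == 8.0
assert slot_bandwidth_gbs("low-profile slot 1") == 4.0

# All six slots funnel through the single PCIe 2.0 x16 link back to the
# compute node, so ~8 GB/s is the aggregate ceiling regardless of slot count.
HOST_LINK_GBS = 16 * MB_PER_LANE / 1000
assert HOST_LINK_GBS == 8.0
```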
IBM Flex System PCIe Expansion Node (2 of 2)

- Connected via interposer cable from compute node ETE connector
  - ETE connector signaling routed through Processor 2 of compute node
  - Therefore Processor 2 must be installed
- PCIe adapter card support
  - Full-height cards
  - Low-profile cards
  - Half-length cards
  - Full-length cards
- Maximum capacities (six PCIe slots)
  - Up to two Flex System I/O slots, and
  - Up to four low-profile cards
  - Up to two full-height cards
  - One full-height double-wide card
(Graphic: IBM Flex System PCIe Expansion Node architecture)
Figure 4-92. IBM Flex System x220 Compute Node: Supported network adapters NGT113.0
Notes:
The table lists the supported I/O adapter cards for the IBM Flex System x220 Compute Node. All
adapters listed can also be installed in the PCIe Expansion Node. The Maximum supported
column indicates the number of adapters that can be installed in the server and in the PCIe
Expansion Node (PEN).
IBM Flex System x220 Compute Node: Supported network adapters

I/O adapter card description                            Ports   Maximum supported
Ethernet adapter cards
  IBM Flex System EN6132 2-port 40Gb Ethernet Adapter     2         2
  IBM Flex System CN4054 10Gb Virtual Fabric Adapter      4         2
  IBM Flex System EN4132 2-port 10Gb Ethernet Adapter     2         2
  IBM Flex System EN2024 4-port 1Gb Ethernet Adapter      4         2
Fibre Channel adapter cards
  IBM Flex System FC5022 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC5052 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC5054 4-port 16Gb FC Adapter           4         2
  IBM Flex System FC5172 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC3172 2-port 8Gb FC Adapter            2         2
  IBM Flex System FC3052 2-port 8Gb FC Adapter            2         2
InfiniBand adapter cards
  IBM Flex System IB6132 2-port FDR InfiniBand Adapter    2         2
Figure 4-93. IBM Flex System x222 Compute Node: Supported network adapters NGT113.0
Notes:
The x222 supports only Ethernet scalable switches with at least the first internal port upgrade
enabled or Fibre Channel switches with dynamic port assignment. The table shows which Ethernet,
Fibre Channel, and InfiniBand switches are supported.
IBM Redbooks Product Guide for the IBM Flex System FC5022 SAN Scalable Switch can be found
at the following website: http://www.redbooks.ibm.com/abstracts/tips0870.html.
A switch upgrade is not required to activate the necessary InfiniBand ports. However, to run the
ports at FDR speeds, the FDR upgrade 90Y3462 is required.
The following switches are not supported by the x222 because they do not provide enough internal
ports to connect to both servers in the x222 Compute Node:
- IBM Flex System EN4091 10Gb Ethernet Pass-thru Module
- IBM Flex System FC3171 8Gb SAN Switch
- IBM Flex System FC3171 8Gb SAN Pass-thru
- IBM Flex System EN6131 40Gb Ethernet Switch
IBM Flex System x222 Compute Node: Supported network adapters

Adapter: Embedded 10 GbE Virtual Fabric Adapter
  Supported switches (minimum required switch upgrades):
  - EN2092 1Gb Ethernet Scalable Switch (49Y4294): Upgrade 1 (90Y3562)
  - EN4093 10Gb Scalable Switch (49Y4270): Upgrade 1 (49Y4798)
  - EN4093R 10Gb Scalable Switch (95Y3309): Upgrade 1 (49Y4798)
  - CN4093 10Gb Converged Scalable Switch (00D5823): Upgrade 1 (00D5845) or Upgrade 2 (00D5847)
  - SI4093 System Interconnect Module (95Y3313): Upgrade 1 (95Y3318)
Adapter: FC5024D 4-port 16Gb FC Adapter
  Supported switches:
  - FC5022 16Gb SAN Scalable Switch (88Y6374)
  - FC5022 24-port 16Gb SAN Scalable Switch 4-port (00Y3324)
  - FC5022 24-port 16Gb ESB SAN Scalable Switch (90Y9356)
  Switch port licenses can be used for internal or external ports. Additional ports may be
  needed depending on your configuration. See the FC5022 Product Guide.
Adapter: IB6132D 2-port FDR InfiniBand Adapter
  Supported switch:
  - IB6131 InfiniBand Switch (90Y3450): none required (see the FDR note above)
Figure 4-94. IBM Flex System x240 Compute Node: Supported network adapters NGT113.0
Notes:
The table lists the supported I/O adapter cards for the IBM Flex System x240 Compute Node. All
adapters listed can also be installed in the PCIe Expansion Node. The Maximum supported
column indicates the number of adapters that can be installed in the server and in the PCIe
Expansion Node (PEN).
IBM Flex System x240 Compute Node: Supported network adapters

I/O adapter card description                            Ports   Maximum supported (x240 / PEN)
Ethernet adapter cards
  IBM Flex System EN6132 2-port 40Gb Ethernet Adapter     2         2 / None
  IBM Flex System CN4022 2-port 10Gb Converged Adapter    2         2 / 2
  IBM Flex System CN4054 10Gb Virtual Fabric Adapter      4         2 / 2
  IBM Flex System EN4132 2-port 10Gb Ethernet Adapter     2         2 / 2
  IBM Flex System EN2024 4-port 1Gb Ethernet Adapter      4         2 / 2
Fibre Channel adapter cards
  IBM Flex System FC5022 2-port 16Gb FC Adapter           2         2 / 2
  IBM Flex System FC5052 2-port 16Gb FC Adapter           2         2 / 2
  IBM Flex System FC5054 4-port 16Gb FC Adapter           4         2 / 2
  IBM Flex System FC5172 2-port 16Gb FC Adapter           2         2 / 2
  IBM Flex System FC3172 2-port 8Gb FC Adapter            2         2 / 2
  IBM Flex System FC3052 2-port 8Gb FC Adapter            2         2 / 2
InfiniBand adapter cards
  IBM Flex System IB6132 2-port FDR InfiniBand Adapter    2         2 / 2
Figure 4-95. IBM Flex System x440 Compute Node: Supported network adapters NGT113.0
Notes:
The table lists the supported I/O adapter cards for the IBM Flex System x440 Compute Node.
IBM Flex System x440 Compute Node: Supported network adapters

I/O adapter card description                            Ports   Maximum supported
Ethernet adapter cards
  IBM Flex System EN6132 2-port 40Gb Ethernet Adapter     2         4
  IBM Flex System CN4022 2-port 10Gb Converged Adapter    2         4
  IBM Flex System CN4054 10Gb Virtual Fabric Adapter      4         4
  IBM Flex System EN4132 2-port 10Gb Ethernet Adapter     2         4
  IBM Flex System EN2024 4-port 1Gb Ethernet Adapter      4         4
Fibre Channel adapter cards
  IBM Flex System FC5022 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC5052 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC5054 4-port 16Gb FC Adapter           4         2
  IBM Flex System FC5172 2-port 16Gb FC Adapter           2         2
  IBM Flex System FC3172 2-port 8Gb FC Adapter            2         2
  IBM Flex System FC3052 2-port 8Gb FC Adapter            2         2
InfiniBand adapter cards
  IBM Flex System IB6132 2-port FDR InfiniBand Adapter    2         2
Figure 4-96. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers the compute node standard onboard features.
IBM Flex System X-Architecture
compute node topics
Compute node overview and
architecture
Server subsystems
Disk subsystem
Storage expansion
Processor subsystem
Memory subsystem
Network subsystem
I/O expansion
Standard onboard features
Systems management
Figure 4-97. Standard onboard features overview NGT113.0
Notes:
The compute nodes come with the following standard onboard features:
- USB ports
- Console breakout cable
- Trusted Platform Module (TPM)
Standard onboard features overview
- Compute nodes come with the following standard onboard features:
  - USB ports
  - Console breakout cable
  - Trusted Platform Module (TPM)
Figure 4-98. USB ports NGT113.0
Notes:
Each compute node has one external USB port on the front of the compute node. All systems also
support an option that provides two internal USB ports (the IBM Flex System USB Enablement Kit),
used primarily for attaching USB hypervisor keys.
USB ports
- Each compute node includes one external USB port.
- IBM Flex System USB Enablement Kit
  - Adds two internal USB ports used for attaching USB hypervisor keys
Figure 4-99. Integrated virtualization NGT113.0
Notes:
All compute nodes offer USB flash drive options preloaded with versions of VMware ESXi. This is an
embedded version of VMware ESXi and is fully contained on the flash drive, without requiring any
disk space. The USB memory key plugs into one of the USB ports on the optional IBM Flex System
USB Enablement Kit. The kit offers two ports and enables you to install two memory keys. If you do,
both devices will be listed in the boot menu allowing you to boot from either device or to set one as a
backup in case the first one gets corrupted.
ESXi is an embedded version of VMware ESX 5.1. Its footprint is small (around 32 MB) because it
does not include the Linux-based Service Console; instead it relies on management tools such as
VirtualCenter, the Remote Command-Line Interface, and CIM for standards-based, agentless
hardware monitoring.
To install an embedded hypervisor, you must install the IBM Flex System USB Enablement Kit.
If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the USB Enablement Kit
cannot be installed: both kits include a special memory baffle, and the two baffles cannot be
installed at the same time.
Integrated virtualization
- IBM-branded 2 GB USB flash device
- Embedded hypervisor (VMware ESXi) preinstalled on the USB flash device
- The USB flash device becomes part of the system firmware
- Simplifies management of hardware resources and virtual machine hosts
- Provides rapid virtualization deployment
- Enhanced out-of-box plug-and-play customer experience
- Simple and intuitive start-up experience for the new virtualization user
(Figure: hypervisor USB flash device installed in the IBM Flex System USB Enablement Kit)
Figure 4-100. Console breakout cable NGT113.0
Notes:
All compute nodes connect to local video, USB keyboard and USB mouse devices by connecting
the Console Breakout Cable. The Console Breakout Cable connects to a connector on the front
bezel of the compute nodes. The Console Breakout Cable also provides a serial connector.
Console breakout cable
- The IBM Flex System Enterprise Chassis ships with one console breakout cable standard.
- Used for connecting to local video, USB keyboard, and USB mouse
- Also provides a serial connector
Figure 4-101. Trusted Platform Module NGT113.0
Notes:
Trusted computing is an industry initiative that provides a combination of secure software and
secure hardware to create a trusted platform. It is a specification that increases network security by
building unique hardware IDs into computing devices.
The compute nodes implement TPM Version 1.2 support.
The Trusted Platform Module (TPM) in the compute nodes is one of the three layers of the trusted
computing initiative.
Trusted Platform Module
- All compute nodes include support for Trusted Platform Module (TPM) 1.2.
- TPM 1.2 increases security by building unique hardware IDs into computing devices.
- Trusted computing layers and their implementation:
  - Level 1: Tamper-proof hardware, used to generate trustable keys: Trusted Platform Module
  - Level 2: Trustable platform: UEFI, Intel processor
  - Level 3: Trustable execution: Operating system, drivers
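As an illustration (generic Linux commands, not specific to these compute nodes), you can check whether the operating system has detected the TPM through the standard sysfs location:

```shell
# Look for TPM devices the kernel has registered under sysfs.
# /sys/class/tpm is the standard location; it may be absent if no TPM is
# present, the TPM is disabled in UEFI, or the driver is not loaded.
if [ -d /sys/class/tpm ] && [ -n "$(ls -A /sys/class/tpm 2>/dev/null)" ]; then
  echo "TPM detected: $(ls /sys/class/tpm)"
else
  echo "No TPM device registered"
fi
```

On systems where the TPM is enabled in UEFI, this typically reports a device such as `tpm0`.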
Figure 4-102. IBM Flex System X-Architecture compute node topics NGT113.0
Notes:
This section covers compute node systems management.
IBM Flex System X-Architecture
compute node topics
Compute node overview and
architecture
Server subsystems
Disk subsystem
Storage expansion
Processor subsystem
Memory subsystem
Network subsystem
I/O expansion
Standard onboard features
Systems management
Figure 4-103. Front panel LEDs and controls (1 of 3) NGT113.0
Notes:
The front of the compute node includes several LEDs and controls that assist in systems
management. They include a hard disk drive activity LED, status LEDs, as well as power, identify,
check log, fault, and light path diagnostic LEDs. The slide shows the location of the LEDs and
controls on the front of the x240.
Front panel LEDs and controls (1 of 3)
(Figure: x240 front panel callouts: USB port, console breakout cable port, power button / LED,
hard disk drive activity LED, hard disk drive status LED, identify LED, check log LED, fault LED,
NMI control)
Figure 4-104. Front panel LEDs and controls (2 of 3) NGT113.0
Notes:
The following describes the compute node front panel LEDs:
Power (Green): This LED lights solid when system is powered up. When the compute node is
initially plugged into a chassis, this LED is off. If the power on button is pressed, the IMM blinks
this LED until it determines if the compute node is permitted to power up. If the compute node is
permitted to power up, the IMM powers the compute node on and turns this LED on solid. If the
compute node is not permitted to power up, the IMM turns off this LED and turns on the
information LED. When this button is pressed with the compute node out of the chassis, the
light path LEDs are lit.
Location LED (Blue): A user can use this LED to locate the compute node in the chassis by
requesting it to blink from the chassis management module console. The IMM blinks this LED
when instructed to by the Chassis Management Module. This LED only functions when the
compute node is powered on.
Check error log LED (Yellow): The IMM will turn this LED on when a condition occurs that
prompts the user to check the system error log in the Chassis Management Module.
Front panel LEDs and controls (2 of 3)
- Power (Green):
  - Off: No power to compute node.
  - On, fast blink: Compute node has power; Chassis Management Module is in discovery mode (handshake).
  - On, slow blink: Compute node has power; power is in stand-by mode.
  - On, solid: Compute node has power and is operational.
- Location (Blue): A user can use this LED to locate the compute node in the chassis by
  requesting it to blink from the Chassis Management Module console.
- Check error log (Yellow): The IMM turns this LED on when a condition occurs that prompts the
  user to check the system error log in the Chassis Management Module.
- Fault (Yellow): This LED lights solid when a fault has been detected somewhere on the compute node.
- Hard disk drive activity (Green): Each hot-swap hard disk drive has an activity LED; when this
  LED is flashing, the drive is in use.
- Hard disk drive status (Yellow): When this LED is lit, the drive has failed. If an optional
  IBM ServeRAID controller is installed in the server, slow flashing (one flash per second)
  indicates the drive is being rebuilt; rapid flashing (three flashes per second) indicates the
  controller is identifying the drive.
Fault LED (Yellow): This LED lights solid when a fault has been detected somewhere on the
compute node. If this indicator is on, then the general fault indicator on the chassis front panel
should also be on.
Hard disk drive activity LED (Green): Each hot-swap hard disk drive has an activity LED, and
when this LED is flashing, it indicates that the drive is in use.
Hard disk drive status LED (Yellow): When this LED is lit, it indicates that the drive has failed. If
an optional IBM ServeRAID controller is installed in the server, when this LED is flashing slowly
(one flash per second), it indicates that the drive is being rebuilt. When the LED is flashing
rapidly (three flashes per second), it indicates that the controller is identifying the drive.
Figure 4-105. Front panel LEDs and controls (3 of 3) NGT113.0
Notes:
The following describes the compute node front panel controls:
Power on / off button (recessed, with power LED): If the compute node is off, pressing this
button causes the compute node to power up and start loading. When the compute node is on,
pressing this button causes a graceful shutdown of the individual compute node so that it is
safe to remove. This includes shutting down the operating system (if possible) and removing
power from the compute node. If an operating system is running, the button may have to be
held for approximately four seconds to initiate the shutdown. This button should be protected
from accidental activation and should be grouped with the power LED.
NMI (recessed; it can only be accessed by using a small pointed object): Causes an NMI for
debugging purposes.
Front panel LEDs and controls (3 of 3)
- Power on / off button (recessed, with power LED):
  - If off, pressing this button causes the system to power up and start loading.
  - If on, pressing this button causes a graceful shutdown of the operating system and removes
    power from the system.
- NMI (recessed; can only be accessed by using a small pointed object): Causes an NMI for
  debugging purposes.
Figure 4-106. Systems management NGT113.0
Notes:
All compute nodes include the following systems management features:
- Integrated Management Module II (IMM2)
  - New version of IMM with additional functionality
  - Supports the IPMI 2.0 standard
- Unified Extensible Firmware Interface (UEFI)
  - Next generation of system BIOS
- Trusted Platform Module (TPM) 1.2 support
- Light path diagnostics
  - Allows quick and easy identification of component failures
- Integration with IBM Flex System Manager Node and IBM Flex System Enterprise Chassis
  Management Module
Systems management
All compute nodes include the following systems management features:
- Integrated Management Module II (IMM2)
  - New version of IMM with additional functionality
  - Supports the IPMI 2.0 standard
- Unified Extensible Firmware Interface (UEFI)
  - Next generation of system BIOS
- Trusted Platform Module (TPM) 1.2 support
- Light path diagnostics
  - Allows quick and easy identification of component failures
- Integration with IBM Flex System Manager Node and IBM Flex System Enterprise Chassis
  Management Module
(Figure: IMM2)
Figure 4-107. IMMv2 for IBM X-Architecture compute nodes NGT113.0
Notes:
The Integrated Management Module v2 (IMMv2) is the next generation of the IMMv1 (first released
in the Nehalem-EP class of products). It is present on all Intel Romley-based platforms and is a
complete rework of hardware and firmware. The IMMv2 enhancements include a more responsive
user interface, faster power-on, and increased remote presence performance.
The IMMv2 incorporates a new web user interface providing a common look and feel across all
IBM System x software products. In addition to the new interface, the following provides a list of
other major enhancements over IMM Version 1:
- It provides faster CPU and memory
- IMMv2 is manageable northbound from outside the chassis and enables consistent management
  and scripting with System x rack servers
- It offers remote presence with increased color depth and resolution for more detailed server
  video. Remote presence supports ActiveX clients in addition to Java clients. Increased RDOC
  capacity (~50 MB) provides convenience for remote software installations
- No IMMv2 reset is required on configuration changes. The changes become effective
  immediately without an IMMv2 reboot
IMMv2 for IBM X-Architecture compute nodes
- Next generation of IMM1 with new hardware, firmware, look, and feel
- Faster CPU and memory
- Enhanced remote presence with increased color depth and ActiveX client
- Improved system power-on and boot time
- No IMMv2 reset required on configuration changes
- Addition of a Syslog alerting mechanism provides users with an alternative to email and SNMP traps
- More detailed information for UEFI-detected events enables easier problem determination and
  fault isolation
- Separate audit and event logs
- Significant security enhancements
- Hardware management of non-volatile storage
- Faster Ethernet over USB
- 1 Gb Ethernet management capability
- Improved system power-on and boot time
- More detailed information for UEFI-detected events enables easier problem determination and
  fault isolation
- User interface meets accessibility standards (CI-162 compliant)
- Separate audit and event logs
- Trusted IMM with significant security enhancements (CRTM/TPM, signed updates, authentication
  policies, and so on)
- Simplified update/flashing mechanism
- Addition of a Syslog alerting mechanism provides users with an alternative to email and SNMP traps
- Support for Features on Demand (FoD) enablement of server functions, option card features,
  and System x solutions and applications
- First Failure Data Capture: one-button web click initiates data collection and download
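Because the Syslog alerting mechanism uses the standard syslog protocol, the forwarded events can be received by any ordinary syslog collector. A minimal receiving-side sketch is shown below as an rsyslog configuration fragment; the IMM address 192.0.2.10 and the log file path are placeholders for your environment, and the IMM itself is pointed at the collector through its own alert settings:

```
# /etc/rsyslog.conf fragment (illustrative): accept syslog from the IMMv2 over UDP 514
module(load="imudp")
input(type="imudp" port="514")
# Write messages from the (hypothetical) IMM address to a dedicated file
if $fromhost-ip == '192.0.2.10' then /var/log/imm2-events.log
```

Keeping IMM events in a dedicated file makes it easy to correlate hardware alerts with the separate audit and event logs mentioned above.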
Figure 4-108. IMM capabilities NGT113.0
Notes:
The Integrated Management Module (IMM) provides basic management for the Intel compute node.
The IMM is used for monitoring system status, vital product data (VPD) and events related to a
single Intel compute node. Various server tasks like power on, power off, remote access, and
firmware update can be performed using IMM.
IMM capabilities
(Figure: IMM2 capability wheel: firmware updates, system status, server tasks, manage events,
remote control, VPD info, problem areas, service data)
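Because the IMM2 implements the IPMI 2.0 standard, these server tasks can also be driven from a remote management station with a generic IPMI client such as ipmitool. A sketch follows; the IP address and credentials are placeholders for your environment:

```shell
IMM_IP=192.0.2.10     # hypothetical IMM2 management address
IMM_USER=USERID       # replace with your IMM credentials
IMM_PASS=PASSW0RD

# Query power state and basic chassis status over the IPMI 2.0 "lanplus" transport
ipmitool -I lanplus -H "$IMM_IP" -U "$IMM_USER" -P "$IMM_PASS" chassis status

# List the system event log, then power the node on
ipmitool -I lanplus -H "$IMM_IP" -U "$IMM_USER" -P "$IMM_PASS" sel list
ipmitool -I lanplus -H "$IMM_IP" -U "$IMM_USER" -P "$IMM_PASS" chassis power on
```

The `lanplus` interface selects the authenticated, encrypted RMCP+ session defined by IPMI 2.0, which is why it is preferred over the older `lan` transport for out-of-band management.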
Figure 4-109. IMM management options NGT113.0
Notes:
The IMM GUI provides an easy-to-use interface for performing various management tasks. The
IMM GUI has five main management tabs:
- System Status: Provides overall status of the compute node and status of various components
  of the compute node (CPU, memory)
- Events: Used to manage all the events related to the compute node
- Service and Support: Used for collecting details about the compute node configuration and
  events, which are used by service and support engineers
- Server Management: Provides various server management tasks such as remote control, power
  operations, server and component status, PXE network configuration, and last OS failure screen
- IMM Management: Used to perform IMM configuration tasks, including user management, IMM
  firmware update, and IMM reset and reboot
IMM management options
Figure 4-110. Light path diagnostics (1 of 3) NGT113.0
Notes:
Light path diagnostics allows you to quickly identify the type of system error that occurred by
monitoring and reporting the health of the processors, main memory, hard disk drives, PCI
adapters, fans, power supplies, VRMs, and the internal system temperature. The server is
designed so that any LEDs that are illuminated remain illuminated when the server shuts down as
long as the power source is good. This feature helps you isolate the problem if an error causes the
server to shut down.
The system board also contains LEDs beside specific components, such as DIMM slot 12, that
identify the failed part. Light path diagnostics works even when the server is unplugged. After AC
power has been removed from the server, power remains available to these LEDs for up to 12
hours.
If an error occurs, view the Light Path Diagnostics LEDs in the following order:
1. Check the front panel on the server.
- If the Fault LED is lit, it indicates that there is a fault or condition in the server and that Light
Path Diagnostics might light an additional LED to help diagnose the problem.
2. To view the Light Path Diagnostics panel on the top of the system, power off the compute node,
   slide it out of the chassis, and press the Power button. The power button doubles as the light
   path diagnostics remind button when the server is removed from the chassis. Note any LEDs
   that are lit.
3. Remove the top cover to look inside the server for lit LEDs. To identify the component that is
   causing the error, note the lit LED on or next to the component.
Light path diagnostics (1 of 3)
- Light path diagnostics allows quick diagnosis of any type of server error
  - Introduced in 1998, now included in most System x, BladeCenter blade servers, and IBM Flex
    System compute nodes
- Level 1: Front panel containing fault LED
  - External notification that an error has occurred
- Level 2: Light path diagnostics panel on top of compute node
  - LEDs that correspond to major server components
  - Must power off the compute node, slide it out of the chassis, and press the Power button to view
- Level 3: LED identifying the suspect component
  - LEDs placed throughout the server next to individual server components
  - Even without power to the server, can be used for up to 12 hours
(Figure: compute node fault LED, compute node light path diagnostic LEDs, system board LEDs)
Figure 4-111. Light path diagnostics (2 of 3) NGT113.0
Notes:
This slide shows a close-up view of the location of the light path diagnostics panel on the top of the
compute node and a close-up view of the LEDs on the light path diagnostics panel.
Light path diagnostics (2 of 3)
Figure 4-112. Light path diagnostics (3 of 3) NGT113.0
Notes:
This slide shows a close-up view of the location of the light path diagnostics panel on the side of
the upper and lower servers of the IBM Flex System x222 Compute Node, and a close-up view of the
LEDs on the light path diagnostics panel.
Light path diagnostics (3 of 3)
Figure 4-113. Configuration patterns: Overview NGT113.0
Notes:
IBM Flex System Manager version 1.2 and later includes configuration pattern support for IBM Flex
System chassis and X-Architecture compute nodes.
Configuration patterns allow for quick, step-by-step configuration of compute nodes by configuring
local storage, network adapters, boot order, and Integrated Management Module (IMM) and Unified
Extensible Firmware Interface (UEFI) settings. Once you define a configuration pattern, you can
store it and deploy it to one or many compute nodes. You can create a configuration pattern
from scratch or capture one from an existing compute node.
A server pattern represents a compute node configuration that is deployed before an operating
system is installed. It includes local storage configuration, network adapter configuration, boot
settings, and other IMM and UEFI firmware settings. When you define a server pattern, select the
category patterns and address pools that you need for the desired configuration for a specific group
of compute nodes. For example, a separate server pattern would be needed for an x220 and an x240
node due to their different hardware configurations. Or, if some x240 compute nodes used InfiniBand
adapters and others used the 10 Gb LOM, separate server patterns would be required for those
groups of systems. You can define multiple server patterns to represent different configurations in
your data center.
Configuration patterns: Overview
- Configuration patterns
  - Support for X-Architecture compute nodes
  - Allow quick, step-by-step configuration of UEFI and IMM settings
  - Define, store, and deploy to one or many compute nodes
  - Create configuration patterns from scratch or by capturing parameters from existing
    compute nodes
- Server pattern
  - For compute nodes with a common hardware configuration
  - For example, unique server patterns for x220, x240, and x440 compute nodes
- Server profile
  - Created when a server pattern is deployed to a compute node
  - Unique for each compute node
When a server pattern is deployed to multiple compute nodes, multiple server profiles are
generated automatically (one profile for each compute node). Each profile inherits settings from the
parent server pattern, which enables you to control a common configuration pattern from a single
place. Each server profile represents the specific configuration of a single compute node and
contains system-unique information (for example, assigned IP addresses and MAC addresses).
The server profile is activated as part of the IMM
startup process. Once a server profile is activated for a compute node, any subsequent
configuration changes are done by editing the appropriate server pattern or category pattern
associated with the profile. This enables you to control a common configuration pattern from a
single place.
If a compute node needs to be moved or re-purposed, you can reassign a server profile from one
compute node to another.
You can deploy a server pattern to a compute node or to an empty chassis bay. In either case, the
profile is associated with the chassis bay. If you replace an existing compute node, you must
redeploy the server profile associated with that bay to activate the profile on the new compute node.
If you first deploy a server pattern to an empty bay, then you must redeploy the server profile
associated with that bay after a compute node is installed.
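The pattern-to-profile relationship described above can be sketched in a minimal Python model (the class and field names are illustrative, not the actual FSM API):

```python
from dataclasses import dataclass

@dataclass
class ServerPattern:
    """Common configuration shared by a group of compute nodes."""
    name: str
    uefi_settings: dict
    imm_settings: dict

@dataclass
class ServerProfile:
    """Per-node instance created when a pattern is deployed to a bay."""
    pattern: ServerPattern  # settings are inherited from the parent
    chassis_bay: int
    mac_address: str        # system-unique, drawn from an address pool
    ip_address: str         # system-unique, drawn from an address pool

def deploy(pattern, bays, mac_pool, ip_pool):
    """One server profile is generated per target chassis bay."""
    return [ServerProfile(pattern, bay, next(mac_pool), next(ip_pool))
            for bay in bays]

# Deploy one pattern to three bays; each profile gets unique addresses.
macs = iter("00:1A:64:%02d" % i for i in range(1, 4))
ips = iter("10.0.0.%d" % i for i in range(11, 14))
x240 = ServerPattern("x240-10GbLOM", {"boot": "UEFI"}, {"ntp": "10.0.0.1"})
profiles = deploy(x240, bays=[3, 4, 5], mac_pool=macs, ip_pool=ips)
print(len(profiles))             # 3: one profile per bay
print(profiles[0].pattern.name)  # inherited from the parent pattern
```

Because every profile holds a reference to its parent pattern, a change to the pattern is seen by all profiles, mirroring the single-place control described above.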
Recall that the IBM Flex System Manager, in some configurations, is optional.
Figure 4-114. Deploy compute node images: Overview NGT113.0
Notes:
The IBM Flex System Manager v1.2 and later supports bare metal installation of an operating
system image to a compute node. Through the IBM Flex System Manager user interface, you can
deploy operating system images to one or more X-Architecture compute nodes (up to 56 nodes
concurrently). To deploy an image, select the compute node to which the image will be deployed
and then select the operating system image to be deployed. If you deploy an image to a compute
node that already has an operating system installed, the existing operating system will be
overwritten.
You can use the Deploy Compute Node Image task to install operating systems on X-Architecture
compute nodes only.
The IBM Flex System Manager management node supports a maximum of five operating system
images in local storage. A version of the IBM-customized VMware vSphere Hypervisor is preloaded
on the IBM Flex System Manager management node; therefore, you can import up to four
additional operating system images to the management node and then
deploy those images to X-Architecture compute nodes.
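The five-image limit and the preloaded ESXi image can be illustrated with a short sketch (the class and image names are made up; this is not the FSM interface):

```python
MAX_IMAGES = 5  # FSM local storage holds at most five OS images

class ImageRepository:
    """Toy model of the FSM image store: the IBM-customized ESXi
    image is preloaded, leaving room for four imported images."""
    def __init__(self):
        self.images = ["ESXi-5.1-IBM-customized"]

    def import_image(self, name):
        if len(self.images) >= MAX_IMAGES:
            raise RuntimeError("repository full: delete an image first")
        self.images.append(name)

repo = ImageRepository()
for iso in ["RHEL-6.2", "RHEL-6.3", "RHEL-6.4", "RHEL-6.4-updated"]:
    repo.import_image(iso)   # four imports fill the remaining slots
print(len(repo.images))      # 5; a sixth image would be rejected
```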
Deploy compute node images: Overview
Deploy compute node images
Only X-Architecture compute nodes
supported
Automates operating system
installation
One-to-one or one-to-many
operation
Up to 56 nodes concurrently
FSM supports a maximum of five OS
images in local storage at a time
Supported operating systems
VMware vSphere Hypervisor
(ESXi) 5.1 with IBM customization
Red Hat Enterprise Linux (RHEL)
6.2, 6.3, and 6.4
The following operating systems are supported:
VMware vSphere Hypervisor (ESXi) 5.1 with IBM customization.
Red Hat Enterprise Linux 6.2, 6.3, and 6.4. When you import a Red Hat Enterprise Linux ISO
image, it will generate three different OS image profiles: Minimal, Basic and Virtualization.
Figure 4-115. Glossary NGT113.0
Notes:
This slide presents a glossary of terms used in this topic.
Glossary
IBM Flex System x220 Compute
Node
IBM Flex System x222 Compute
Node
IBM Flex System x240 Compute
Node
IBM Flex System x440 Compute
Node
Intel Xeon E5-2400 series
processor
Intel Xeon E5-2600 series
processor
Intel Xeon E5-4600 series
processor
UDIMMs, RDIMMs, and
LRDIMMs
10Gb Virtual Fabric Adapter
LAN on Motherboard (LOM)
I/O connectors
IBM Flex System Enterprise
Chassis
LSI 2004 storage controller
ServeRAID M5115
Intel QuickPath architecture
Figure 4-116. Checkpoint NGT113.0
Notes:
Write down your answers here:
1.
2.
3.
Checkpoint
1. The IBM Flex System x220 Compute Node supports which of the
following combinations of storage devices?
a. 2.5-inch HDDs
b. 2.5-inch HDDs and 2.5-inch SSDs
c. 2.5-inch HDDs, 2.5-inch SSDs, and 1.8-inch SSDs
d. All of the above
2. The embedded Virtual Fabric Adapter in the x222 requires that both
Ethernet switches installed in bays 1 and 2 have Upgrade 1 enabled.
Which of the following will happen if Upgrade 1 is not enabled?
a. The x222 compute node will not power on
b. The upper server will lose Ethernet connectivity
c. The lower server will lose Ethernet connectivity
d. Both B and C
3. To attach the IBM Flex System PCIe Expansion Node to an IBM Flex
System x240 Compute Node, which of the following must also be
installed?
a. At least 64GB of memory in the x240 Compute Node
b. Two I/O adapter cards in the PCIe expansion node
c. Two processors in the x240 Compute Node
d. Both two I/O adapter cards in the PCIe expansion node and two processors in
the x240 Compute Node
Figure 4-117. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Summarize the features of the IBM Flex System X-Architecture Compute Nodes
Distinguish the major elements of the IBM Flex System X-Architecture Compute Nodes
Recognize the processor subsystem features of the IBM Flex System X-Architecture Compute
Nodes
Recognize the memory subsystem features of the IBM Flex System X-Architecture Compute
Nodes
Recall the management features of the IBM Flex System X-Architecture Compute Nodes
Unit summary
Having completed this unit, you should be able to:
Summarize the features of the IBM Flex System X-Architecture
compute nodes
Distinguish the major elements of the IBM Flex System X-
Architecture compute nodes
Recognize the processor subsystem features of the IBM Flex
System X-Architecture compute nodes
Recognize the memory subsystem features of the IBM Flex
System X-Architecture compute nodes
Recall the management features of the IBM Flex System X-
Architecture compute nodes
Unit 5. IBM Power Systems compute nodes
What this unit is about
This section is an overview of the IBM Flex System Power Systems compute
nodes.
What you should be able to do
After completing this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external traffic flow
Explain PowerVM based virtualization on a Power node
Plan for the management of a Power virtualized environment
How you will check your progress
Checkpoint questions
Lab exercise
References
IBM Information Center - pic.dhe.ibm.com/infocenter/flexsys/information
Figure 5-1. Unit objectives NGT113.0
Notes:
After completing this unit, you will be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external traffic flow
Explain PowerVM based virtualization on a Power node
Plan for the management of a Power virtualized environment
Unit objectives
After completing this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external
traffic flow
Explain PowerVM based virtualization on a Power node
Plan for the management of a Power virtualized environment
Figure 5-2. The Power node essentials NGT113.0
Notes:
The components listed here are involved in an IBM Flex System installation with Power-based
nodes. To use the environment effectively, you will need to understand what these components are,
how they fit together, and how to use them to create the desired configuration.
In this unit, we will cover the following topics:
What is Power?
Power Systems platform overview
Virtualization on Power Systems
The IBM Flex System Power compute node
IBM Flex System Power compute node overview and architecture
IBM Flex System Power node subsystems
- Disk subsystem
- Processor subsystem
- Memory subsystem
The Power node essentials
Flex System Manager (FSM)
Proven technology
A new platform
New compute nodes
New node details
IBM Flex System Power node I/O
IBM Flex System Power node systems management
Power Systems virtualization
Power Systems virtualization overview
Power Systems virtual servers (virtual servers)
Virtual I/O and the VIO Server (VIOS)
Creating Power Systems virtual servers
Figure 5-3. What is Power? NGT113.0
Notes:
The sections we will cover are:
Power Systems platform overview
Virtualization on Power Systems
The first section covers the Power platform overview.
What is Power?
Power Systems platform overview
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-4. Power Systems family (with a new addition) NGT113.0
Notes:
The entire portfolio of IBM Power Systems products is based on POWER7 technology.
The complete Power portfolio of systems, software and solutions is designed to help businesses of
all sizes address the challenges and opportunities of a smarter planet.
Power 770
Power 750
Power 795
PS Blades
Power 710/730
Power 780
Power 720/740
Power 775
Power 755
FSM, SDMC & HMC
IBM Flex Power System node
Major features:
Modular systems with linear scalability
PowerVM virtualization
Physical and virtual management
Roadmap to continuous availability
Binary compatibility
Energy/thermal management
Power Systems family (with a new addition)
Figure 5-5. Power Systems features NGT113.0
Notes:
Power Systems has consistently delivered processors with industry-leading performance, along
with continuous improvements in reliability, availability, and serviceability (RAS). Starting in 2001,
with POWER4, virtualization was introduced. This enabled multiple AIX and Linux (and later IBM i)
operating environments on the same physical hardware. With the addition of virtual adapters and
the VIO server, LAN and Storage resources could now be shared through virtualization, further
increasing the capabilities of the powerful processor and memory infrastructure. Now that
resources on a server could be divided up among different virtual servers, it was a natural
progression for the assignment or removal of a resource from a virtual server to be performed
dynamically. Operating systems didn't have to be shut down to add or remove hardware resources,
including excess capacity on the server that hadn't yet been provisioned. Workloads could
now be run more efficiently by temporarily moving resources from several virtual servers to one
virtual server for a CPU- or memory-intensive task. Once the big job was complete, the resources
could be returned to their original virtual servers. There was no need to buy more hardware and no
need to bring systems down to move resources. The pooling of processor cores was the next
innovation, allowing virtual servers to share access to the processing power in the server, pushing
utilization of the processors to ever more efficient and constant usage. What was once a red flag
that started the capacity planning exercises was now seen as the natural exploitation of one's
investment. Memory
Power Systems features
Industry leading hardware performance and RAS
CPU options from 2.4 GHz to 4.4 GHz, and from one to 256 cores
Memory options 8 GB to 16 TB
Virtualization through PowerVM
Always on Power Hypervisor
Virtual servers
Memory (Active Memory Sharing) and processor virtualization (micro-partitioning)
and pools
Virtualized networking and storage
Dynamic reconfiguration of virtual servers
Relocation of virtual servers (Live Partition Mobility)
Centralized management through IVM, HMC, SDMC, and IBM Systems
Director
Extended and cloud-enabled using VMControl
High availability on AIX and IBM i through PowerHA SystemMirror
Energy management through Active Energy Manager
Security through PowerSC
enhancements came next with the ability to share memory on the server between multiple virtual
servers, in an implementation of memory management and paging one layer away from the virtual
server, on the VIO server. A virtual server can be configured as though it has more memory than is
dedicated to it, using the excess from the pool. Provided memory is not constrained among the
virtual servers that are sharing, each virtual server will periodically be able to use more memory
than it otherwise could in a dedicated configuration.
Other features that make Power systems a feature rich platform include:
PowerHA SystemMirror for high availability and disaster recovery
PowerSC for strict security measures
Lifecycle management and cloud enablement in VMControl
Figure 5-6. Power is operating system choices NGT113.0
Notes:
Every Power Express model can run your choice of AIX, IBM i, or Linux operating systems. In fact,
you could run all three on the same system via partitioning if you chose to. This provides
tremendous flexibility when choosing the applications that will work best for your business. The
minimum OS levels on POWER7-based IBM Flex System compute nodes include:
VIOS 2.2.1.4
AIX 7.1
- TL1 SP3 (with IV14284) or TL1 SP4
- TL0 SP6
AIX 6.1
- TL7 SP3 (with IV14283)
- TL6 SP8
AIX 5.3 TL12 SP6
IBM i 6.1.1
POWER7 support:
AIX 7.1, 6.1, 5.3
Technology levels depend on model
POWER7 support:
SLES 11, 10
RHEL 6.1, 5.7
POWER7 support:
i7.1, 6.1
Technology levels depend on model
Power is operating system choices
POWER7 support:
VIOS 2.2.1.0
POWER7+ support:
VIOS 2.2.2.0
IBM i 7.1 TR3
RHEL 5.7
RHEL 6.2
SLES 11 SP2
Figure 5-7. IBM Flex System Power compute node NGT113.0
Notes:
The topics we will cover are:
IBM Flex System Power compute node overview and architecture
IBM Flex System Power node subsystems
- Disk subsystem
- Processor subsystem
- Memory subsystem
IBM Flex System Power I/O
IBM Flex System Power systems management
This section covers the IBM Flex System Power overview and architecture.
IBM Flex System Power compute node
IBM Flex System Power compute node overview and
architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-8. Summary of features by form factor NGT113.0
Notes:
The half-wide node contains two sockets. The full-wide node contains four sockets. Each socket
contains four, six, or eight cores depending on the model. When virtualizing the node, each core
can be divided into hundredths, with a minimum of one tenth of a core required per virtual server.
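The core-granularity rule just described can be checked with a small helper (an illustrative sketch, not PowerVM tooling, which enforces these limits itself):

```python
def valid_entitlement(cores):
    """PowerVM micro-partitioning rule: at least 0.1 core per
    virtual server, assignable in increments of 0.01 core."""
    hundredths = round(cores * 100)
    return hundredths >= 10 and abs(cores * 100 - hundredths) < 1e-9

print(valid_entitlement(0.10))   # True: the minimum entitlement
print(valid_entitlement(1.25))   # True: a whole number of hundredths
print(valid_entitlement(0.05))   # False: below the 0.1-core minimum
print(valid_entitlement(0.105))  # False: finer than 1/100 of a core
```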
There are 16 DIMM slots on the half-wide, 32 DIMM slots on the full-wide. The DIMMs come in
sizes from 2 GB to 16 GB.
Each node (regardless of size) supports two HDD or SSD drives.
Each node has the option for I/O adapters. Purchasing no I/O adapters is an option, but is not
recommended. The only option for communications outside the node is through an I/O adapter.
There are no on-board communications adapters. The half-wide nodes can support two adapters,
the full-wide nodes can support four adapters. Each adapter communicates through two switches in
the chassis. Understanding that relationship is critical to successfully communicating from the
virtual server that owns the I/O adapters and the outside network.
Architecture
Four sockets: Four or eight cores per
socket
Processor
POWER7 Four cores at 3.3 GHz
Eight cores at 3.2 / 3.5 GHz
DDR3 memory Up to 512 GB
DASD / bays
0 - 2 SAS HDD ( 300 / 600 / 900 GB )
0 - 2 SATA SSD ( 177 GB )
Adapter card
I/O options
Four
8 / 12 / 16 core
16 / 32 core
Summary of features by form factor
Full-wide:
p460
Half-wide:
p260, p24L, p270
Architecture
Two sockets: Four, six, or eight cores
per socket
Processor (p260)
POWER7+
Four cores at 4.0 GHz
Eight cores at 3.6 / 4.1 GHz
Processor (p24L)
POWER7
Six cores at 3.7 GHz
Eight cores at 3.2 / 3.55 GHz
DDR3 memory Up to 512 GB
DASD / bays
0 - 2 SAS HDD ( 300 / 600 / 900 GB )
0 - 2 SATA SSD ( 177 GB )
Adapter card
I/O options
Two
Figure 5-9. IBM PureFlex POWER7+ compute nodes NGT113.0
Notes:
The visual shows the POWER7+ processor based compute nodes for IBM PureFlex Systems.
These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density,
high-performance compute node environment by using advanced processing technology. These
compute nodes support the AIX, Linux, and IBM i operating systems.
The p260 is an entry-level POWER7+ compute node that supports up to eight processor cores.
The IBM Flex System p270 compute node is based on IBM POWER architecture technologies and
uses the new POWER7+ dual-chip module (DCM) processors.
The p460 (7895-43X) is a full-wide compute node that provides up to four
POWER7+ processor sockets.
IBM PureFlex POWER7+ compute nodes
Cores: 8 / 16
Max Memory: 512 GB
Cores: 24
Max Memory: 512 GB
Cores: 16 / 32
Max Memory: 1 TB
Cores: 4
Max Memory: 512 GB
p260
7895-23A
p260
7895-23X
p270
7954-24X
p460
7895-43X
Figure 5-10. Power compute nodes comparison NGT113.0
Notes:
Based on the supported processing power, memory, and I/O options, you can position a suitable
compute node for a workload.
Power compute nodes comparison
                      | p260 Entry         | p260               | p270            | p460
POWER7+ sockets       | 2                  | 2                  | 2               | 4
Cores                 | 4                  | 8 or 16            | 24              | 16 or 32
Frequency (GHz)       | 4.0                | 3.6 / 4.1 / 4.0    | 3.1 / 3.4       | 3.6 / 4.1 / 4.0
Max memory / # DIMMs  | 512 GB / 16        | 512 GB / 16        | 512 GB / 16     | 1 TB / 32
DIMM sizes            | 2, 4, 8, 16, 32 GB | 2, 4, 8, 16, 32 GB | 4, 8, 16, 32 GB | 2, 4, 8, 16, 32 GB
Mezzanine slots       | 2                  | 2                  | 2               | 4
Dual VIOS adapter     | No                 | No                 | Yes             | No
Processor group       | P05                | P10                | P10             | P10
HDD (GB)              | 300 / 600 / 900    | 300 / 600 / 900    | 300 / 600 / 900 | 300 / 600 / 900
SSD                   | Yes                | Yes                | Yes             | Yes
RAID                  | 0, 1, 10           | 0, 1, 10           | 0, 1, 10        | 0, 1, 10
Figure 5-11. IBM Flex System Power compute node NGT113.0
Notes:
This section covers the IBM Flex System Power node subsystems.
IBM Flex System Power compute node
IBM Flex System Power compute node overview and
architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-12. Memory options and form factors NGT113.0
Notes:
One benefit of deploying the Power compute nodes is the ability to use LP (Low Profile) memory
DIMMs. This design allows for more choices to configure the machine to match your needs.
Installed memory DIMMs do not all have to be the same size, but it is strongly recommended that
the DIMMs within each of the following slot groups be kept the same size:
Slots 1-4
Slots 5-8
Slots 9-12
Slots 13-16
Slots 17-20 (p460 Compute Node only)
Slots 21-24 (p460 Compute Node only)
Slots 25-28 (p460 Compute Node only)
Slots 29-32 (p460 Compute Node only)
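The slot-group recommendation above can be expressed as a simple validation (the function and the example configuration are illustrative):

```python
# Slot groups whose DIMMs should be kept the same size (GB).
# Groups covering slots 17-32 exist on the p460 only.
GROUPS = [range(1, 5), range(5, 9), range(9, 13), range(13, 17),
          range(17, 21), range(21, 25), range(25, 29), range(29, 33)]

def check_dimm_groups(installed):
    """installed maps slot number -> DIMM size in GB.
    Returns the slot groups whose populated slots differ in size."""
    bad = []
    for group in GROUPS:
        sizes = {installed[slot] for slot in group if slot in installed}
        if len(sizes) > 1:
            bad.append(list(group))
    return bad

# p260 example: slots 1-4 are uniform; slots 5-8 mix 8 GB and 16 GB DIMMs.
config = {1: 8, 2: 8, 3: 8, 4: 8, 5: 8, 6: 16, 7: 8, 8: 16}
print(check_dimm_groups(config))  # [[5, 6, 7, 8]]
```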
Memory options and form factors
DIMMs installed in pairs of the same size, speed, type, and technology
Pairs of different sized DIMMs can be mixed in a node
SAS HDD only supported with VLP memory type
Very low profile (VLP) DIMMs
DIMM size DIMM height DIMM width Data Rate
4 GB 18 mm 133.4 mm 1066 MHz
8 GB 18 mm 133.4 mm 1066 MHz
Low profile (LP) DIMMs
DIMM size DIMM height DIMM width Data Rate
2 GB 30 mm 133.4 mm 1066 MHz
16 GB 30 mm 133.4 mm 1066 MHz
32 GB 30 mm 133.4 mm 1066 MHz
Figure 5-13. Local storage overview NGT113.0
Notes:
The POWER7-based compute nodes have an onboard SAS controller that can manage up to two
non-hot-pluggable internal drives. The drives attach to the cover of the server. Even though the
p460 compute node is a full-wide server, it has the same storage options as the p260 compute
node.
Local storage overview
As shown, the local disk drives are mounted
under the cover.
The drives are not hot-pluggable.
Ordering no drives is an option.
Two drives maximum (in all models) can be
installed.
HDD: 300, 600, or 900 GB SAS drive
SSD: 177 GB SATA drive
Important: Drive type is dependent on
DIMM type.
HDD: VLP DIMMs only
SSD: VLP or LP DIMMs
Assigned to same virtual server:
RAID-0 or RAID-1 can be implemented
Figure 5-14. Power nodes: IO adapter options NGT113.0
Notes:
Each Power compute node has the option of installing I/O adapters for Ethernet and SAN
connectivity. Various Ethernet, Fibre Channel, and converged network adapters are supported for
IBM PureFlex Power compute nodes, as listed here. Power nodes support up to eight ports with
the CN4058 converged network adapter.
Power nodes: IO adapter options
(#1761) -IBM Flex System IB6132 2-port QDR InfiniBand Adapter
(#1762) -IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
(#1763) -IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
(#1764) -IBM Flex System FC3172 2-port 8Gb Fibre Channel Adapter
(#EC23) -IBM Flex System FC5052 2-port 16Gb Fibre Channel Adapter
(#EC24) -IBM Flex System CN4058 8-port 10Gb Converged Adapter
(#EC26) -IBM Flex System EN4132 2-port 10Gb RoCE Adapter
(#EC2E) -IBM Flex System FC5054 4-port 16Gb Fibre Channel Adapter
Figure 5-15. I/O adapter location code information NGT113.0
Notes:
Identifying the I/O resource in the Flex System Manager configuration menus will be necessary to
assign the correct physical resources to the intended virtual servers. This visual shows the physical
location codes on both the full-wide and half-wide nodes. The location codes as displayed in the
configuration menus contain a prefix of the form Utttt.mmm.ssssss, where tttt is the machine type,
mmm is the model, and ssssss is the 7-digit serial number.
For example, a 4-port 10 Gb Ethernet Adapter located in adapter slot 1 of a p460 Compute Node is
represented as U78AF.001.ssssss-P1-C34.
Furthermore, the first two ports are addressed using C34-L1, and the second two ports are
addressed using C34-L2.
The 2-port 8 Gb FC Adapter located in adapter slot 2 of a p260 Compute Node is represented
as U78AF.001.ssssss-P1-C19.
The storage controller, if disks were ordered, has a location code of P1-T2 on both models.
The USB controller has a location code of P1-T1 on both models.
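A location code of this form can be split into its parts with a short parser (the serial number in the example is made up; the field layout follows the Utttt.mmm.ssssss-P1-Cnn-Lp form described above):

```python
import re

# Utttt.mmm.ssssss-P1-Cnn[-Lp]: machine type, model, serial number,
# planar, adapter slot, and (optionally) the port group on the card.
LOC = re.compile(
    r"U(?P<type>\w{4})\.(?P<model>\w{3})\.(?P<serial>\w+)"
    r"-P(?P<planar>\d+)-C(?P<slot>\d+)(?:-L(?P<port_group>\d+))?$")

def parse_location(code):
    m = LOC.match(code)
    if m is None:
        raise ValueError("unrecognized location code: " + code)
    return m.groupdict()

parts = parse_location("U78AF.001.1234567-P1-C34-L1")
print(parts["slot"])        # '34': adapter slot 1 on a p460
print(parts["port_group"])  # '1': first two ports of the 4-port adapter
```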
I/O adapter location code information
To assign the adapters to a virtual server, you must know the
physical location code.
Un-P1-C37
Un-P1-C36
Un-P1-C35
Un-P1-C34
Un-P1-C19
Un-P1-C18
Figure 5-16. Four-port 10 Gb Ethernet adapter connectivity NGT113.0
Notes:
The visual shows a relationship between adapter slot 1 and switch bay 1. This means that if you
have an Ethernet adapter in slot 1, you need to have an Ethernet switch in bay 1 (and bay 2 if you
plan to use all four ports). In addition, this requires switch Upgrade 1, which enables the additional
server ports on the switch.
The switch bays have uplinks that would take the traffic externally to your existing network.
Historically, adapter assignments on POWER-based systems followed a simple concept: the slot
was assigned, and the adapter and all of its peripheral devices were assigned with it. This
changes with the 4-port Ethernet adapter in the IBM Flex System Power compute node. It has two
ASICs on it, each independently driving two of the ports. This allows the adapter to be presented as
two assignable location codes in the FSM when creating or modifying virtual servers. This
increases the physical connectivity options in the node.
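The slot-to-bay wiring described above (adapter slot 1 to switch bays 1 and 2, slot 2 to bays 3 and 4) can be captured in a small lookup (a sketch of the rule for half-wide nodes, not management software):

```python
def switch_bays_for(adapter_slot):
    """Midplane wiring on a half-wide Power node: adapter slot 1
    connects to switch bays 1 and 2; slot 2 to bays 3 and 4."""
    if adapter_slot not in (1, 2):
        raise ValueError("half-wide nodes have adapter slots 1 and 2")
    first = 2 * adapter_slot - 1
    return (first, first + 1)

print(switch_bays_for(1))  # (1, 2): an Ethernet adapter here needs Ethernet switches in these bays
print(switch_bays_for(2))  # (3, 4): a Fibre Channel adapter's traffic lands in these bays
```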
Four-port 10 Gb Ethernet adapter connectivity
The four ports are split between different ASICs that are on different PCI busses.
Two ports can be assigned to virtual servers independent of the other two ports.
Switch bay 1
Switch bay 2
Switch bay 3
Switch bay 4
Midplane
The C18-L1 and C18-L2 represent
the end of the location codes for the
10 Gb Ethernet adapter in a p260.
The full location code would be in
the form: U78AF.001.ssssss-P1-
C18-L1.
ASIC
C18-L1
ASIC
C18-L2
PCIe
conn
4-port 10Gb Ethernet
slot 1
Figure 5-17. Two-port 8 Gb Fibre Channel adapter connectivity NGT113.0
Notes:
The visual shows the connectivity of the Fibre Channel adapter through the midplane to the switch
bays, but there are a couple of big differences. First, the Fibre Channel adapter has only a single
ASIC, meaning that both ports are assigned to the same virtual server. In addition, since the
adapter is in slot 2, the connectivity through the midplane goes to switch bays 3 and 4.
The switch bays have uplinks that would take the traffic externally to your existing network.
Two-port 8 Gb Fibre Channel adapter
connectivity
The two-port Fibre Channel adapter contains one ASIC. Both
ports must be assigned to the same virtual server.
Switch bay 1
Switch bay 2
Switch bay 3
Switch bay 4
Midplane
PCIe
conn
ASIC
C18-L1
ASIC
C18-L2
4-port 10Gb Ethernet
slot 1
ASIC
C19
PCIe
conn
2-port 8Gb FC
slot 2
Figure 5-18. IBM Flex System Power compute node NGT113.0
Notes:
This section covers management of the IBM Flex System Power node.
IBM Flex System Power compute node
IBM Flex System Power compute node overview and
architecture
IBM Flex System Power node subsystems
IBM Flex System Power node systems management
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-19. Managing Power servers: An evolution NGT113.0
Notes:
When POWER4 was introduced as the first AIX platform supporting virtualization (logical partitions,
or LPARs), the HMC was introduced as well. It is a PC running a customized version of Linux with
added applications for performing not only the virtualization tasks, but also system management,
event management (error coordination), and service and support functions. In an effort to create a
more cost effective solution for virtualization when low-cost, entry servers were being considered,
the IVM was created. It is internal to the server, as a function of the VIO server. There are
restrictions on installation, configuration and redundancy when using the IVM, but most of the same
functions that have been available with the HMC are available in the IVM. Having to use (and
maintain) a different management appliance for each entry server presented management
issues. An advantage of the IVM was that its interface was more refined and user-friendly than the
HMC.
In an effort to solve the one-for-one implementation model and lack of VIO redundancy, as well as
to create one unified management appliance, the SDMC was created. Configured as a virtual
machine on IBM-supplied hardware (a hardware appliance) or customer-supplied hardware (a
software appliance only), the SDMC was based on IBM Systems Director. All Power
rack systems and the PS703 and PS704 blades could now be managed by one management
appliance using an interface that was familiar to anyone using IBM Systems Director.
Managing Power servers: An evolution
Over time, the management appliances used to manage
Power servers have evolved.
IVM: Entry rack systems; all Power blades; internal, VIOS-based; limited redundancy; all-virtual virtual servers.
HMC: All rack systems; no Power blades; external, PC/Linux; full redundancy; virtual and physical virtual servers.
SDMC: All rack systems; Power blades; hardware or software appliance; full redundancy; virtual and physical virtual servers; unified and simplified.
FSM: IBM Flex System components; integrated, independent appliance; full redundancy; virtual and physical virtual servers; specialized to the IBM Flex System; Power Systems Management, VMControl, plus many more applications; functional integration.
IVM: Integrated Virtualization Manager
HMC: Hardware Management Console
SDMC: Systems Director Management Console
FSM: Flex System Manager
To enable the tightest integration and allow for specialized control and management of the IBM Flex
System components, a new appliance called the FSM was created, running on a node within the
chassis. The Power Systems Management capabilities will be familiar to any Power systems
administrator in nearly the same interface as the SDMC. The FSM does far more than just manage
Power nodes.
Figure 5-20. IBM Flex System Manager: Integrated management appliance NGT113.0
Notes:
Fundamentally, the FSM is a locked-down compute node with a specific hardware configuration
designed for optimal performance of the preinstalled software stack. The FSM looks similar to the
X-Architecture based x240. However, there are slight differences between the motherboard designs
making these two hardware nodes not interchangeable.
The FSM can be placed in any slot in the chassis. Initial startup of the CMM and FSM results in
all of the components in the chassis being discovered by the FSM. The FSM has many plug-ins
installed that assist in the management of the components. The Chassis Manager is a means of
doing many of the same tasks from the FSM that you can do from the CMM. The Storage Manager
and Network Manager are available for managing the chassis-based storage and network
components, including switches. Other plug-ins are provided to manage other components or
provide other functions.
The Power System Management plug-in is used to manage Power nodes. The FSM provides the
ability to create, manage, delete, and move virtual servers. It contains the tasks for performing
firmware updates, viewing and acting on system events, and monitoring system performance. If
you have experience working with the IVM, HMC, or SDMC, you will be familiar with the tasks that
IBM Flex System Manager management
appliance
All basic and advanced functions preloaded as an
appliance
Adds easy-to-use multi-chassis management
Quick start wizards with automated discovery
Advanced remote presence console across multiple chassis
Centralized FoD license key management
Integrated X-Architecture and Power servers,
storage, and network management
Includes full Power node functionality (for example,
Live Partition Mobility, redundant VIOS, concurrent
firmware updates)
Network fabric management (port profiles, VM
priority, rate limiting)
Virtualization management including resource pools
Robust security (centralized user management,
security policy, certificates)
Integrated LDAP and NTP servers for private
management network
Upward integration into Systems Director, Tivoli, and
other third party enterprise managers
[Diagram: The IBM Flex System management appliance comprises a base plus extensions: Platform Manager, Chassis Manager, Storage Manager, Network Manager, Active Energy Manager, and POWER software, forming a multi-chassis management appliance.]
IBM Flex System Manager: Integrated
management appliance
can be performed in Power Systems Management. Many of the menus and screens are very similar
to those found in the SDMC.
Figure 5-21. Flex System Manager: Home and Plug-ins NGT113.0
Notes:
The Home tab contains the high-level categories of update tasks, for the FSM, Chassis, compute
nodes, or I/O Modules. To manage the chassis and the different components within the chassis, use
the Plug-ins tab.
Power Systems Management is the plug-in that provides full management access to the Power
compute node. The basic functions that can be performed from the Power Systems Management
plug-in include Discovery/access, Inventory, Hardware power on or off, Virtual server creation,
Creating virtual consoles to virtual servers, Firmware updates, Error collection and reporting, and
Mobility.
Flex System Manager: Home and Plug-ins
Upon logging in, you will reach the Home tab.
Use the Plug-ins tab to get to Power Systems Management.
Figure 5-22. Power Systems virtualization topics NGT113.0
Notes:
This section explains the concept of a virtual server.
Power Systems virtualization topics
Power Systems virtual servers
Creating Power Systems virtual servers
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-23. Virtualizing workloads with PowerVM NGT113.0
Notes:
Creating a virtualized workload in PowerVM is similar to setting up a new server, except that it only
needs to be done once. After that, creating an identical workload is as simple as copying a file,
saving many hours of repetitive admin work and minimizing the risk of errors. Virtualizing workloads
also makes it easier to provision, scale, and recover from system outages. It is therefore no surprise
that most Power Systems clients have standardized on virtual workloads as the default
configuration for deploying enterprise applications.
The POWER Hypervisor is firmware that provides:
Virtual memory management:
- Controls page table and I/O access
- Manages real memory addresses versus offset memory addresses
Virtual console support
Security and isolation between partitions:
- Partitions allowed access only to resources allocated to them (enforced by the POWER
Hypervisor)
Virtualizing workloads with PowerVM
Creating a virtualized workload with PowerVM is simple.
Create a new PowerVM virtual server.
Install the operating system (AIX, IBM i, or Linux) in the
virtual server.
Install the workload applications in the virtual server.
Configure the operating system and applications as required.
Virtualization is enabled through the POWER Hypervisor.
The completed virtualized workload can be stored, copied, archived, or
modified just like any other file.
The benefits of virtualizing workloads with PowerVM in this way:
Rapid provisioning: Deploying the ready-to-run workload is a quick and easy
process.
Scalability: Deploying multiple copies of the same workload type is simplified.
Recoverability: Bringing a workload back online after an outage is fast and
reliable.
Consolidation: Many diverse workloads can be hosted on the same server.
All of these benefits save system administrators time and resources.
In addition, workload consolidation offers significant IT infrastructure cost
reductions.
PowerVM offers Micro-Partitioning with the ability to run up to 10 virtual servers per processor core,
and dynamically move processor, memory, and I/O resources between virtual servers to support
changing workload requirements.
PowerVM Live Partition Mobility enables active virtual servers to be moved between servers,
virtually eliminating planned downtime. Live partition mobility can also be used to upgrade
workloads between POWER6 and POWER7 processor-based servers without an application
outage.
VMControl complements PowerVM by providing automated virtualization management that
minimizes time to provision virtual machine images and enables management of system pools.
With POWER7, PowerVM and VMControl virtualization software will support up to 1,000 virtual
machines on a single system, providing massive consolidation capability for exceptional cost
savings.
As businesses look for ways to maximize their IT infrastructure investment returns, they turn to
PowerVM virtualization to consolidate multiple workloads onto fewer systems, increasing server
utilization and reducing cost. PowerVM provides a secure and scalable virtualization environment
for AIX, IBM i and Linux applications built upon the advanced RAS features and leading
performance of the Power Systems platform.
Unlike other systems, all Power Systems benchmarks are run and measured in a virtualized
environment, ensuring that companies can take advantage of the cost savings of consolidation, with
near-linear scalability and without paying a performance penalty. PowerVM enables up to 32
times the virtual machine size of VMware and supports dynamic addition and removal of virtual machine
system resources.
Figure 5-24. What is a POWER virtual server? NGT113.0
Notes:
Logical partitioning is the ability to make a single system run as if it were two or more systems.
Originally POWER virtual servers were known as logical partitions (LPARs) because of the
underlying logical partitioning. Each POWER virtual server represents a division of resources in
your computer system. The POWER virtual servers are logical because the division of resources is
virtual and not along physical boundaries. There are, however, configuration rules that must be
followed.
The system uses firmware to allocate resources to virtual servers and manage the access to those
resources. Although there are configuration rules, the granularity of the units of resources that can
be allocated to virtual servers is very flexible. You can add just a small amount of memory (if that is
all that is needed) without a dependency on the size of the memory cards and without having to add
more processors or I/O slots that are not needed.
Firmware refers to underlying software running on a system independently from any operating
system. On IBM Power Systems, this includes the software used by the flexible service processor
(FSP) and the POWER Hypervisor.
[Diagram: One physical system divided into four logically separate systems: SYS1 (AIX, 1:00, Japan), SYS2 (Linux, 10:00, USA), SYS3 (AIX, 11:00, Brazil), and SYS4 (i/OS, 12:00, UK).]
What is a POWER virtual server?
A POWER virtual server is the allocation of system resources
to create logically separate systems within the same physical
footprint.
These are also known as logical partitions.
A virtual server exists when the isolation is implemented with
firmware.
Not based on physical system building block.
Provides configuration flexibility.
Figure 5-25. Virtual server resources NGT113.0
Notes:
Resources are the system components that are configured into virtual servers.
The maximum number of virtual servers is related to the total amount of resources on the system.
For example, a system with eight processors can be configured with a total of 80 virtual servers (if
there are sufficient resources).
Each virtual server must be configured with at least 128 MB of memory, one tenth of a physical
processor, and enough I/O devices to provide a boot disk and a connection to a network.
Memory is allocated in units known as the logical memory block (LMB). The default LMB size is
variable, depending on the total amount of physical memory installed, and might be as small as 16
MB. A virtual server can be configured with as little as 128 MB of memory or as much as all of the
available memory.
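The memory rules above can be sketched in a few lines. This is an illustrative helper, not IBM code; the function name and the default 16 MB LMB size are assumptions for the example.

```python
# Hypothetical sketch of the LMB allocation rules described above;
# not an IBM tool. All sizes are in MB.
MIN_MEMORY_MB = 128  # per-virtual-server minimum

def round_to_lmb(requested_mb, lmb_mb=16):
    """Round a memory request up to the next LMB multiple,
    enforcing the 128 MB per-virtual-server minimum."""
    if requested_mb < MIN_MEMORY_MB:
        requested_mb = MIN_MEMORY_MB
    # Round up to a whole number of logical memory blocks.
    blocks = -(-requested_mb // lmb_mb)   # ceiling division
    return blocks * lmb_mb

print(round_to_lmb(100))        # below minimum -> 128
print(round_to_lmb(130))        # rounds up to 144 with a 16 MB LMB
print(round_to_lmb(1000, 256))  # 1024 with a 256 MB LMB
```

The LMB size itself is set by the platform based on installed memory; the default parameter here simply stands in for whatever size the system chose.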
A virtual server is configured with either dedicated whole processors or shared processors.
Shared processors are allocated in processing units. 1.0 processing units is equivalent to the
processing power of one processor. Virtual servers are configured with at least 0.1 processing units
or with as much as the equivalent of all the available physical processors. After the 0.1 minimum is
satisfied, additional processing units can be allocated in quantities of 0.01 processing units.
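The shared-processor rules above (a 0.1 minimum, then 0.01-unit increments, up to the equivalent of all physical processors) can likewise be sketched as a small validity check. This is a hypothetical illustration; the function name is invented for the example.

```python
# Hypothetical check of one virtual server's shared processing-unit
# request, following the rules described above; not an IBM tool.
def valid_allocation(units, available_units):
    """True if 'units' meets the 0.1 minimum, is a 0.01 multiple,
    and does not exceed the available physical capacity."""
    # Work in hundredths to avoid floating-point comparison issues.
    hundredths = round(units * 100)
    if hundredths < 10:                        # below the 0.1 minimum
        return False
    if abs(units * 100 - hundredths) > 1e-6:   # finer than 0.01 units
        return False
    return units <= available_units

print(valid_allocation(0.1, 8.0))    # True: the minimum is allowed
print(valid_allocation(0.05, 8.0))   # False: below 0.1
print(valid_allocation(0.125, 8.0))  # False: not a 0.01 multiple
```

On an eight-processor system, `available_units` would be 8.0 (1.0 processing units per physical processor), which is also why such a system tops out at 80 virtual servers at 0.1 units each.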
Virtual server resources
Resources are allocated to virtual servers.
Memory allocated in units as small as the LMB size
Dedicated whole processors or shared processing units
Individual I/O slots
Including virtual devices
All resources can be managed dynamically
Some resources can be shared.
Virtual devices
Some core system components are inherently shared.
[Diagram: Three virtual servers (one Linux, two AIX), each allocated its own processors (P), memory (M), and I/O slots (S).]
I/O resources are allocated to virtual servers at the slot level. At a minimum, you must configure a
virtual server with enough I/O resources to include the boot disk and a network connection.
With software called the Virtual I/O Server installed in a special virtual server, Ethernet and storage
devices can be configured to be shared between virtual servers.
Highly secure environments can choose not to take full advantage of the cross-virtual server
sharing of devices. Even subtle visibility (for example, different response times from a shared
resource) can be considered a covert channel of communication. For this reason, by design, all
shared or virtual resources must be consciously enabled.
Some devices can be shared because they are core resources to the entire system. For example,
even though you have allocated separate amounts of memory to different virtual servers, that
memory can be on the same memory card. Likewise, processors, I/O drawers, and other core
system components are shared. Because of this, a hardware failure might bring down more than
one virtual server and could potentially bring down the entire system; however, there are many fault
containment, in-line recovery, and redundancy features of the system to minimize unrecoverable
failures.
Figure 5-26. Virtual I/O adapters NGT113.0
Notes:
Each virtual server, by default, is configured to support 10 virtual I/O slots, and each slot can be
populated with a virtual adapter instance, which allows virtual servers to share devices. It also
provides virtual Ethernet connections between virtual servers on the same system. More virtual
slots can be configured.
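As a rough model of the slot behavior described above, the sketch below tracks a virtual server's virtual I/O slots, each holding at most one adapter instance. The class and method names are invented for illustration; this is not FSM or PowerVM code.

```python
# Hypothetical model of a virtual server's virtual I/O slot table;
# illustrative only.
class VirtualSlots:
    def __init__(self, max_slots=10):       # 10 slots by default
        self.max_slots = max_slots
        self.slots = {}                      # slot number -> adapter type

    def add_adapter(self, slot, kind):
        """Populate a free slot with a virtual adapter instance."""
        if kind not in ("ethernet", "scsi", "fibre-channel"):
            raise ValueError("unsupported virtual adapter type")
        if not (0 <= slot < self.max_slots) or slot in self.slots:
            raise ValueError("slot unavailable")
        self.slots[slot] = kind

    def remove_adapter(self, slot):
        """Dynamic remove: the OS must deconfigure the adapter first."""
        self.slots.pop(slot, None)

vs = VirtualSlots()
vs.add_adapter(2, "ethernet")
vs.add_adapter(3, "scsi")
print(sorted(vs.slots.items()))   # [(2, 'ethernet'), (3, 'scsi')]
```

Raising the `max_slots` value mirrors the note that more virtual slots can be configured per virtual server.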
Virtual adapters interact with the operating system like any other adapter card except they are not
physically present. Virtual adapters are recorded in system inventory and management utilities.
As with physical I/O adapters, a virtual I/O adapter must first be deconfigured from the operating
system to perform a DLPAR remove operation.
Virtual Ethernet provides the same function as using an Ethernet adapter and is implemented
through high-speed, inter-virtual server, in-memory communication. There are two options with
virtual Ethernet:
A virtual Ethernet connection can be configured between two virtual servers on the same
managed system. There is no actual physical adapter. This provides a fast network connection
between the virtual servers.
Virtual I/O adapters
Each virtual server has virtual I/O slots.
Configurable for each virtual server
Virtual slots can have a virtual adapter instance.
Ethernet, SCSI, or Fibre Channel
Virtual I/O slots can be dynamically added or removed just like
physical I/O slots.
Cannot be dynamically moved to another virtual server
A virtual Ethernet connection can be configured on one virtual server to connect to a network
using a shared Ethernet adapter (SEA) of another virtual server (called a hosting server or a
Virtual I/O Server) on that managed system.
The virtual SCSI (VSCSI) option provides access to block storage devices in other virtual servers
(that is, device sharing). It uses the client/server model where the server exports disks, logical
volumes, files, or other SCSI-based devices, and the client sees the imported device as a standard
SCSI device.
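The client/server mapping described above can be pictured as a simple table kept on the server side, loosely analogous to what the VIOS `mkvdev` command records when it maps a backing device to a virtual server adapter. Names such as `vhost0` and `hdisk1` follow AIX/VIOS naming conventions, but the code itself is purely illustrative.

```python
# Hypothetical sketch of the VSCSI client/server model: the server
# virtual server exports a backing device, and the client sees it as
# a standard SCSI disk. Not VIOS code.
def map_backing_device(mappings, vhost, backing_device):
    """Record a server-side mapping of a backing device (disk,
    logical volume, or file) to a VSCSI server adapter."""
    if vhost in mappings:
        raise ValueError("adapter already has a backing device")
    mappings[vhost] = backing_device
    return mappings

mappings = {}
map_backing_device(mappings, "vhost0", "hdisk1")
print(mappings["vhost0"])   # hdisk1
```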
A virtual Fibre Channel adapter is a virtual adapter that provides client virtual servers with a Fibre
Channel connection to a storage area network through the Virtual I/O Server. The Virtual I/O Server
provides the connection between the virtual Fibre Channel server adapters and the physical Fibre
Channel adapters assigned to the Virtual I/O Server on the managed system.
Figure 5-27. What is a Virtual I/O Server? NGT113.0
Notes:
The Virtual I/O Server is software that is located in a virtual server. This software facilitates the
sharing of physical I/O resources between client virtual servers within the server.
The Virtual I/O Server provides virtual SCSI (VSCSI) target and shared Ethernet adapter (SEA)
capability to client virtual servers within the system, allowing the client virtual servers to share
storage devices, such as SAS, SCSI, or SAN devices, and Ethernet adapters. The Virtual I/O
Server software requires that the virtual server be dedicated solely for its use. The Virtual I/O
Server is available as part of the PowerVM Editions hardware feature.
For the most recent information about devices that are supported on the Virtual I/O Server, to
download Virtual I/O Server fixes and updates, and to find additional information about the Virtual
I/O Server, see the Virtual I/O Server Web site at:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html.
Virtual I/O devices provide for sharing of physical resources, such as adapters and devices, among
partitions. Multiple partitions can share physical I/O resources and each partition can
simultaneously use virtual and physical I/O devices. When sharing adapters, the client/server
model is used to designate partitions as users or suppliers of adapters. A server must make its
physical adapter available and a client must configure the virtual adapter.
What is a Virtual I/O Server?
A special virtual server hosting physical resources (adapters)
and virtual adapters
Installed and used as an appliance
Physical devices virtualized for virtual I/O client virtual servers
Client virtual server can use both virtual and physical resources
Enables sharing of physical Ethernet adapters
This allows external access to the virtual Ethernet network
The SEA provides a bridge to the client virtual servers' network
Enables sharing of physical storage adapters and devices
Physical disks, logical volumes, or files (backing devices) can be
shared
Mapped to VSCSI server adapter
Appear as VSCSI disks in client
Fibre Channel adapters can be shared using N_Port ID Virtualization
(NPIV)
Enables shared storage pools and Active Memory Sharing
If a server partition providing I/O for a client partition fails, the client partition might continue to
function, or it might fail, depending on the significance of the hardware it is using. For example, if
the server is providing the paging volume for another partition, a failure of the server partition would
be significant to the client.
Figure 5-28. Virtual I/O Server summary NGT113.0
Notes:
Some virtual servers are shown using native attachment to physical resources while others are
shown using shared access through the VIOS. Although not depicted, it is possible for a virtual
server to have native access to some resources and shared access to other resources. Some
virtual servers are shown using the shared pool for their CPU access, while others are shown using
dedicated CPUs. Unlike with I/O resources, the CPU access is shared or dedicated on each virtual
server.
The POWER Hypervisor is key to all of the virtualization. It maintains the security, handles the
passing of packets on the virtual Ethernet network, provides console access to the virtual servers,
and much more.
Virtual I/O Server summary
Hosts physical adapters and devices
[Diagram: A Power server managed by a management appliance. The POWER Hypervisor hosts virtual servers running AIX 5.3, AIX 6.1, AIX 7.1, and Linux, some with dedicated CPUs (for example 2, 4, or 6 CPUs) and some drawing from the shared pool. Some virtual servers own physical storage and network I/O directly; others reach virtual disks, virtual optical devices, and the LAN/WAN over virtual I/O paths through the Virtual I/O Server, which shares its physical Ethernet adapters and hosts SAN, SAS, or SCSI disks, CD/DVD drives, and SAN (NPIV) or SAS tape drives.]
Figure 5-29. Power Systems virtualization topics NGT113.0
Notes:
This section covers creating virtual servers.
Power Systems virtualization topics
Power Systems virtual servers
Creating Power Systems virtual servers
A new platform
What is Power?
New node details
Power Virtualization?
Figure 5-30. Creating a partitioned environment NGT113.0
Notes:
The starting point is the management appliance. For IBM Flex System Power-based nodes, that is
the FSM. For rack-mounted systems, it could be the HMC or the SDMC. For Power-based Blades,
it is the IVM.
For our purposes, we will create the virtual server on a Power-based node that has been
discovered on the FSM. One of the Power-based nodes in the chassis must be selected.
A menu is accessed that shows the option to create a virtual server. This starts a wizard that
walks you through the rest of the configuration options for the virtual server.
The first option that is presented allows you to name your virtual server, give the virtual server an ID
number and choose the operating environment. It is advisable to create the VIOS virtual servers first,
then create the client virtual servers. Choose a unique name on that node, one that fits the virtual
server's function. The wizard will take the next available whole number starting with 1 for the virtual
server ID, which is usually not a problem.
The options for processor, memory and I/O will follow in the wizard. The process is very
straightforward.
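The ID-assignment behavior described above (the wizard takes the next available whole number, starting with 1) can be expressed in a few lines. This is an illustrative sketch, not FSM code.

```python
# Hypothetical illustration of how the wizard picks the next
# available virtual server ID; function name invented for the example.
def next_virtual_server_id(existing_ids):
    """Return the lowest whole number >= 1 not already in use."""
    in_use = set(existing_ids)
    vid = 1
    while vid in in_use:
        vid += 1
    return vid

print(next_virtual_server_id([]))         # 1 on an empty node
print(next_virtual_server_id([1, 2, 4]))  # 3: the first gap is reused
```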
Creating a partitioned environment
Access the platform's management appliance interface.
Select the virtual server environment (AIX/Linux, VIOS, or IBM i).
Select both the processor mode and the memory mode.
A shared memory virtual server is valid only when the processor mode is shared.
Select I/O, both physical and virtual.
[Diagram: From the management appliance, the wizard steps through the virtual server name and ID, the operating environment (AIX/Linux, VIOS, or IBM i), the processor mode (dedicated or shared), the memory mode (dedicated or shared), and I/O (physical or virtual) for the Power node.]
Figure 5-31. Creating virtual servers and profiles NGT113.0
Notes:
A virtual server has a name and an ID number stored in a table. Virtual server profiles are named
resource configurations and attributes. System profiles can be created to provide an easy way to
either validate that virtual servers can run simultaneously with no resource contention issues, or to
easily start a certain mix of virtual servers and their profiles.
Custom groups provide a mechanism for grouping system resources together in a single view, or
a way to organize systems or virtual servers into smaller business or workload entities. A group
can contain objects such as servers and virtual servers.
When you create a virtual server, you must create its first (default) profile at the same time. You can
then create additional profiles with alternate configurations. When the virtual servers and their
profiles are created, you can create system profiles and custom groups.
Creating virtual servers and profiles
Virtual server name and ID
Virtual servers and profiles have names which can be changed easily.
Virtual servers have an ID, which cannot be changed on a virtual
server after it has been created.
Virtual server profiles
These are used when the virtual server is activated (started).
A virtual server can have more than one profile, but only one is in use
at a time.
A wizard is used to step through the configuration tasks:
Virtual server name, type, and ID number
Processor and memory characteristics and quantities
Physical adapters
Virtual adapters
Figure 5-32. Accessing the virtual server creation wizard NGT113.0
Notes:
The visual shows the click-path to get to the menu option for creating a virtual server.
Accessing the virtual server creation wizard
The Power node is powered on and discovered by the FSM.
The process is very similar to that used on the SDMC for
Power rack servers.
Figure 5-33. Create Virtual Server wizard NGT113.0
Notes:
Review the summary, and click Finish.
This is a VIOS virtual server. No virtual storage was created. The virtual storage will be created as
each client virtual server is created. The Physical Volumes are assigned with the Physical Adapter
U78AE.001.WZS017R-P1-T2. This is the SAS controller. Two other Physical Adapters are
assigned. In this case, these are the only two physical adapters, as this is a p260.
Create Virtual Server wizard
Upon completion of all the steps, a summary of the virtual server to be
created is displayed.
Name: vios1
Virtual server ID: One
Environment: VIOS
Memory: 2 GB (dedicated)
Processors: Ten (shared)
Virtual Ethernet: Two adapters
Virtual disk: None (yet)
Physical volume: All internal disks
Physical adapters: Two adapters
Click Finish.
Figure 5-34. Installing an OS in a virtual server NGT113.0
Notes:
Creating a virtual server is fairly straightforward using the wizard, but installing an operating system
has additional considerations. Standard operating system installation methods, such as Network
Installation Management (NIM) or physical optical device, are still applicable in the PureFlex
environment. If you don't have a NIM server or a supported optical device, the easiest way to install
the operating system is to use the virtual optical media.
Installing an OS in a virtual server
Completing the wizard saves the configuration in NVRAM on the node.
The next step is to activate the virtual server, open a console, and install
the operating system.
VIOS Virtual Media Repository
Verify that the required ISO file is in the VIOS media repository.
Configure the client virtual server to use the VIOS media repository.
Verify that the correct virtual adapters and virtual optical drives are configured in
the VIOS (running) and the client virtual server (profile).
NIM
Ensure a NIM server is installed and reachable from the Power node's adapter.
Set up host name resolution to the address for the virtual server.
Define the virtual server as a machine on the NIM server.
Prepare an lpp_source and SPOT for the installation at a supported level.
Start the virtual server, booting to SMS
VIOS virtual media repository: Boot from CD/DVD and the virtual optical drive.
NIM: Boot from a network adapter to generate a bootp exchange with the NIM
server.
Figure 5-35. IBM Power systems management NGT113.0
Notes:
The IBM Hardware Management Console for Power Systems provides a standard user interface for
configuring and operating partitioned and SMP systems. The HMC supports the system with
features that enable a system administrator to manage configuration and operation of partitions in a
system, as well as to monitor the system for hardware problems.
As of September 2013, IBM Flex System Power nodes can be managed by HMC. Each Power
compute node appears as a stand-alone server under HMC management.
With the support of HMC-based management of Power compute nodes, IBM Flex-based
infrastructure also becomes part of the standard management stack, which includes IBM Systems
Director and upward integration with Tivoli products to address the end-to-end management
requirements of a customer.
IBM Power systems management
POWER servers
BladeCenter
Stand alone servers
IVM
IVM
Upward Integration &
Service Management Software
End-to-End
Management
Service
Management
Can manage
multiple HMCs
Advanced virtualization
and Cloud capabilities
will continue to be
added.
IBM Systems Director
VMControl
AEM
Storage
Control
Network
Control
HMC
HMC HMC
Power
Flex Nodes
Figure 5-36. Keywords NGT113.0
Notes:
Recall these terms and you have done well to understand the concepts in this topic.
Keywords
PowerVM
Power Hypervisor
IBM Flex System Power compute nodes
Virtual I/O Server (VIOS)
Shared Ethernet Adapter
HMC, SDMC, IVM
VMControl
Figure 5-37. Checkpoint (1 of 2) NGT113.0
Notes:
Write your answers here:
1.
2.
Checkpoint (1 of 2)
1. Which of the following managers can be used to manage a
p260 or p460 Power node?
a. HMC
b. FSM
c. SDMC
d. IVM
2. The maximum memory on a p460 is (blank), and the
maximum number of cores on a p460 is (blank).
Figure 5-38. Checkpoint (2 of 2) NGT113.0
Notes:
Write your answers here:
3.
4.
5.
Checkpoint (2 of 2)
3. True or False: All virtual servers on a p460 must run the
same operating system from a common data store.
4. Name the three resource types that are assigned to virtual
servers.
5. What is the name of the appliance that enables virtual
servers to share physical resources?
Figure 5-39. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external traffic flow
Explain PowerVM based virtualization on a Power node
Plan for the management of a Power virtualized environment
Unit summary
Having completed this unit, you should be able to:
Recognize the features of the Power Systems family of servers
List the Power compute nodes and features
Plan adapter and I/O module placement to enable external
traffic flow
Explain PowerVM based virtualization on a Power node
Plan for the management of a Power virtualized environment
Unit 6. IBM Flex System storage
What this unit is about
Welcome to the introduction to IBM Flex System V7000 Storage Node. This unit
provides a high-level overview of the IBM Flex System V7000 Storage Node and its
enclosures and capabilities. We will introduce the Flex System V7000
advanced storage functions and the storage management features.
What you should be able to do
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System
Identify the Flex System V7000 Storage Node component features
Summarize the installation planning and configuration steps associated
with the IBM Flex System V7000
Identify the basic usage and functionality of IBM Flex System V7000 GUI
menu
Identify the major elements of the IBM Flex System V7000 storage
management
Recognize the IBM Flex System V7000 advanced storage features
How you will check your progress
Checkpoint questions
Lab exercises
References
1. IBM PureFlex System and IBM Flex System Products and Technology
(SG247984)
2. IBM Flex System V7000 Storage Node Introduction and Implementation
Guide (SG248068)
3. Implementing Systems Management of IBM PureFlex System
(SG248060)
Figure 6-1. Unit objectives NGT113.0
Notes:
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System
Identify the Flex System V7000 Storage Node component features
Summarize the installation planning and configuration steps associated with the IBM Flex
System V7000
Identify the basic usage and functionality of IBM Flex System V7000 GUI menu
Identify the major elements of the IBM Flex System V7000 storage management
Recognize the IBM Flex System V7000 advanced storage features
Unit objectives
After completing this unit, you should be able to:
Summarize the features of the IBM Flex System
Identify the Flex System V7000 Storage Node component
features
Summarize the installation planning and configuration steps
associated with the IBM Flex System V7000
Identify the basic usage and functionality of IBM Flex System
V7000 GUI menu
Identify the major elements of the IBM Flex System V7000
storage management
Recognize the IBM Flex System V7000 advanced storage
features
Figure 6-2. IBM Flex System V7000 Storage Node topics NGT113.0
Notes:
This topic provides an overview of the IBM Flex System platform details and the positioning of the
IBM Flex System V7000 within the Flex System environment.
IBM Flex System V7000 Storage Node topics
IBM Flex System platform details
IBM Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-3. IBM Flex System platform details NGT113.0
Notes:
IBM Flex System V7000 Storage Node is part of the comprehensive integration of components that
make up the IBM PureFlex System and IBM Flex System solution, combining a mix of compute
nodes, storage, networking, virtualization and management capabilities into a single infrastructure
system.
IBM Flex System platform details
IBM Flex System
Infrastructure Components
IBM PureFlex System
Integrated Infrastructure
Compute nodes
Power 2S/4S
x86 2S/4S
Storage node
V7000 Storage Expansion
in/out of chassis
Management
appliance
Optional
Expansion
PCIe
Storage
Networking
10/40GbE, FCoE, IB
8/16Gb FC
Figure 6-4. IBM Flex System storage portfolio positioning NGT113.0
Notes:
A key factor with Flex System is flexibility, as illustrated in this diagram with our various storage
options.
IBM Flex System storage portfolio positioning
Not Share Cluster
High Performance
Internal Shared
Boot
File/Print
File
Share/NAS
Small Database
Internal
Storage
(1-2 drives)
HDD/SSD
Raid 0,1
IO Exp.
(1-8 drives)
HDD/SSD
Flex System
Flash
Tiering
Raid 5
Storage
Expansion
Node
(1-12 drives)
Dist database
Entry level NAS
Caching
Raid 0, 1, 10, 5
and 6
Raided Direct
Attach Storage
JBOD Only
Mode
Flex System V7000
Storage Node
(1-240 drives)
Automatic Clustering,
Zoning, Pooling,
Discovery and
Inventory
Internal
Shared Block Storage
Raid 0, 1, 10, 5 and 6
FCoE
10Gb Ethernet
8Gb Fibre Channel
iSCSI
Easy Tier
Clustering
JBOD support
IBM Storwize V7000
IBM Storwize V7000
Unified
(1-240+ drives)
Traditional Storage
Implementation
Flexibility
Fits in racks without a
chassis
External
Shared Block Storage
Raid 0, 1, 10, 5 and 6
FCoE
10 Gb Ethernet
8 Gb Fibre Channel
iSCSI
Easy Tier
Clustering
JBOD support
Figure 6-5. A complete portfolio of supported IBM storage for PureFlex NGT113.0
Notes:
IBM offers a wide variety of storage solutions that will work with PureFlex.
A complete portfolio of supported IBM storage for
PureFlex
Data Protection
And Retention
TS Family
- Tape Drives
- Tape Libraries
- Tape Automation
Storage
Management
Software
Tivoli Productivity
Center (TPC)
Tivoli Storage
Manager (TSM)
Tivoli Key
Lifecycle Manager
FlashCopy
Manager (FCM)
Entry/Midrange
Storage Systems
Enterprise
Storage Systems
File Storage
Systems
Flex System
V7000
Storwize V7000
Storwize V7000
Unified
DS/DCS Family
- DS3500
- DCS3700
DS8000 Family
- DS8700
- DS8800
XIV Family
- XIV
- XIV Gen3
Scale-Out NAS
(SONAS)
N series
- N3000
- N6000
- N7000
Efficiency Enhancers
Real-time Compression Appliance
SAN Volume Controller (SVC)
ProtecTIER Deduplication
Figure 6-6. Virtualization: The big picture NGT113.0
Notes:
This visual shows a representation of an IBM PureFlex virtual storage environment. IBM PureFlex
combines compute nodes, storage, networking, virtualization and management into a single
infrastructure system providing a redundant, modular and scalable solution.
Designed to be a redundant, modular, and scalable solution.
Storage Area
Network
Managed Disks are a Cluster Resource
Volumes Volumes Volumes Volumes
NC NC NC NC NC NC NC NC
System consists of
one to four I/O
Groups managed as
a single entity
Control
Enclosure
Control
Enclosure
Control
Enclosure
Control
Enclosure
Control enclosure with
2 node canisters (NC)
makes up an I/O Group
and owns given
volumes
Virtualization: The big picture
Figure 6-7. IBM Flex System V7000 Storage Node topics NGT113.0
Notes:
This topic provides an overview of the IBM Flex System V7000 Storage Node.
IBM Flex System V7000 Storage Node topics
Flex System platform details
Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-8. IBM Flex System V7000 storage overview NGT113.0
Notes:
The IBM Flex System V7000 Storage Node plays an important role in this close integration of
resources by providing virtualized storage within the IBM PureFlex System environment, and
shared storage capacity to all the compute nodes by virtualizing internal disk drives and external
Fibre Channel storage system capacity.
The IBM Flex System V7000 consists of a set of drive enclosures, the IBM Flex System V7000
Control Enclosure and the IBM Flex System V7000 Expansion Enclosure. Control enclosures
contain disk drives and two control (node) canisters. A collection of up to four control enclosures
that is managed as a single system is an IBM Flex System V7000 clustered system.
The Flex System V7000 control enclosure also provides an FCoE-optimized offering, in addition to
FC and iSCSI host connectivity, through optional host interface network cards that connect to the
Flex System Enterprise Chassis midplane and its switch modules.
The expansion enclosure also contains drives, and its two expansion canisters are used to attach to
the control enclosure. Up to two expansion enclosures can be connected to a single control
enclosure. Each enclosure accommodates up to twenty-four 2.5-inch hard disk drives or solid-state
drives. Expansion canisters include the serial-attached SCSI (SAS) interface hardware that
enables the control canisters to use the drives of the expansion enclosures.
Shared storage (M/T 4939) control and
expansion enclosures
Dual canister enclosures:
Supports up to 24 hot swap internal SFF
HDDs/SSDs
Scalable up to 240 HDDs (960 HDDs with
4 system cluster) within/external to the
chassis
8 Gb FC, 1/10 Gb iSCSI, and 10 Gb
FCoE host protocol options
Supports IBM Flex System compute
nodes across multiple chassis
Customer installable and maintainable
System infrastructure
Integrated Storage
Simplifies storage administration
Virtualizes for higher storage utilization
Balances high performance and cost for mixed workloads
Protects data and minimizes downtime
IBM Flex System V7000 storage overview
Figure 6-9. Flex System V7000 storage chassis integration NGT113.0
Notes:
The IBM Flex System V7000 Storage Node is designed to fit into the IBM Flex System Enterprise
chassis, occupying four of the 14 bays. IBM Flex System Enterprise Chassis provides flexibility and
tremendous compute capacity by intermixing POWER7, Intel x86, storage, and networking all in a
single 10U package. Both the control and expansion enclosures connect to the Flex System
Enterprise chassis through the midplane interconnect for their power and internal control
connections.
Flex System V7000 storage chassis integration
Integrated in the Flex System
Enterprise Chassis
- Single 10U package with up to 14 bays
Flex System V7000 is a double-wide / double-high node form factor
Occupies 4 bays
Requires no power modules
- Power and internal connections
received through the chassis
midplane
10U
4U
Records the lowest left bay
Integrated into
the chassis
Figure 6-10. Flex System V7000 front view NGT113.0
Notes:
This visual is a front view of the IBM Flex System V7000 Control Enclosure (top) and the IBM Flex
System V7000 Expansion Enclosure (bottom); both are similar in design, and both are capable of
supporting up to twenty-four, 2.5-inch small form-factor (SFF) SAS hard disk drives (HDD) and solid
state drives (SSD). All drives are installed in the front (from left to right) of the enclosure.
The control enclosure and the expansion enclosures both share the same machine type, but the
model numbers are different. Be sure you are using the correct machine type, model
number, and serial number when requesting servicing of a system. This information can be
obtained from a set of blue pull-out tabs on the front of each enclosure.
The IBM Flex System V7000 Storage Node enclosure machine type and model (MTM) is as follows:
Machine type and model (MTM) for the control enclosure is 4939-A49, 4939-H49, or 4939-X49.
Machine type and model (MTM) for the expansion enclosure is 4939-A29, 4939-H29, or
4939-X29.
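The machine type and model strings above follow a simple pattern: machine type 4939, with models ending in 49 for control enclosures and 29 for expansion enclosures. A hypothetical parser, for illustration only:

```python
# Hypothetical MTM classifier for the model strings listed above.
# Machine type 4939; models ending in "49" are control enclosures and
# models ending in "29" are expansion enclosures.
def classify_mtm(mtm):
    machine_type, model = mtm.split("-")
    if machine_type != "4939":
        raise ValueError("not a Flex System V7000 enclosure: " + mtm)
    if model.endswith("49"):
        return "control"
    if model.endswith("29"):
        return "expansion"
    raise ValueError("unknown model: " + model)
```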
Control enclosure
Expansion enclosure
Hardware (MT/Model)
4939-#49 (Dual controller enclosure)
Hardware (MT/Model)
4939-#29 (Expansion enclosure)
Flex System V7000 front view
MTM
location
Figure 6-11. Control enclosure internal components NGT113.0
Notes:
This visual provides a structural diagram of the Flex System V7000 control enclosure.
Control enclosure internal components
Control
Enclosure
Control canisters
Drive Caddy
Full Wide
24, 2.5 drive bays
HIC cards
Figure 6-12. Flex System V7000 node canisters NGT113.0
Notes:
Each Flex System V7000 Storage Node canister contains different features. The control canisters
provide host interfaces, management interfaces, and SAS interfaces to the control enclosure. A
control canister has two DIMMs which make up the cache memory; offering large scalable cache;
16 GB cache memory per control enclosure (8 GB per control canister) as a base feature. It also
contains a battery, internal solid state drive to store software and logs, SAS interface logic to the
expansion canisters in the expansion enclosures and the processing power to run the Flex System
V7000 storage virtualizing and management software.
The expansion canister connects the expansion disks to the control canister using the 6 Gbps SAS
(SAS-2.0) chain interface. This module also enables additional expansions to be daisy-chained
behind it to further expand the capacity of the system's storage. The use of port 1 to connect to the
control enclosure is mandatory, whereas the use of port 2, for connecting further expansion
enclosures, is optional.
Flex System V7000 node canisters
V7000 control canister contains:
Up to two Host Interface Cards (HIC)
First HIC must (always) be:
Two 10 Gbps Ethernet ports (FCoE
and/or iSCSI)
Second HIC can be either:
Four 2/4/8 Gbps Fibre Channel ports
Two 10 Gbps Ethernet Ports (FCoE
and/or iSCSI)
One internal 10/100/1000 Mb/s Ethernet
for management
One external 6 Gbps SAS port
Two external USB Ports (not used for
normal operation)
Large scalable cache: 8 GB/16 GB/64
GB
One battery
V7000 expansion canister contains:
Two 6 Gb/s SAS ports
Port 1 to connect to control enclosure -
required
Port 2 to connect further expansion
enclosures - optional
SAS port SAS ports
1 2
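The HIC placement rules listed above can be sketched as a small validity check. This is an illustration, not an IBM tool; the card names are assumptions made for the sketch.

```python
# Sketch of the host interface card (HIC) rules listed above: the first
# HIC must be the two-port 10 Gbps Ethernet card; the second slot may be
# empty, another two-port 10 Gbps Ethernet card, or the four-port
# 2/4/8 Gbps Fibre Channel card.
VALID_SECOND_HIC = {None, "10GbE-2port", "FC8-4port"}

def valid_hic_config(first_hic, second_hic=None):
    return first_hic == "10GbE-2port" and second_hic in VALID_SECOND_HIC
```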
Figure 6-13. Integrated scalable storage NGT113.0
Notes:
The IBM Flex System V7000 storage node is designed to be a scalable internal storage system to
support the compute nodes of the IBM Flex System environment. With the addition of 9 expansion
enclosures (which can include both Flex System V7000 expansion nodes and Storwize V7000
expansion enclosures), you can scale beyond the internal storage of the Flex System chassis to a
capacity of 240 disk drives. A maximum of four clustered systems brings the total number of drives
to 960.
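The drive-count limits quoted above follow directly from the 24-drive enclosures; a quick arithmetic check:

```python
# Back-of-the-envelope check of the scaling figures above: 24 SFF drives
# per enclosure, one control enclosure plus up to nine expansion
# enclosures per system, and up to four control enclosures per cluster.
DRIVES_PER_ENCLOSURE = 24
MAX_EXPANSIONS_PER_CONTROL = 9
MAX_CONTROL_ENCLOSURES_PER_CLUSTER = 4

drives_per_system = DRIVES_PER_ENCLOSURE * (1 + MAX_EXPANSIONS_PER_CONTROL)
drives_per_cluster = drives_per_system * MAX_CONTROL_ENCLOSURES_PER_CLUSTER
print(drives_per_system, drives_per_cluster)  # 240 drives per system, 960 per cluster
```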
Integrated scalable storage
Up to three enclosures per chassis
Scalable to 240 drives
Clustered systems support up to 960 drives
Supports intermixing of SAS, Nearline SAS
(NL-SAS), and solid state drives (SSDs)
Flex System Chassis
Standard Rack Enclosure
Scalable within
Flex System chassis
Flex System Chassis
Scalable to external rack expansion
Figure 6-14. SAS network cabling NGT113.0
Notes:
This visual shows an example of cabling a Flex System V7000 first to Flex System V7000
expansion enclosures, and then to a Storwize V7000 expansion enclosure.
Expansion cabling
accomplished through
canister ports on front of Flex
System V7000
Storwize V7000 systems attach
via rear canister ports
A Storwize V7000 cannot be the
first device on the chain
All expansion enclosures are
attached through a single
chain
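The cabling rules above can be expressed as a simple check. This is illustrative only; the device labels are assumptions made for the sketch.

```python
# Illustrative check of the SAS chain rule above: all expansion
# enclosures hang off a single chain behind the control enclosure, and a
# Storwize V7000 expansion may not be the first device on that chain.
def valid_sas_chain(chain):
    """chain: ordered list of 'flex-exp' / 'storwize-exp' devices."""
    if not chain:
        return True  # no expansion enclosures attached
    return chain[0] == "flex-exp"
```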
Storwize V7000
expansion unit
SAS network cabling
Figure 6-15. Expand storage beyond the chassis NGT113.0
Notes:
With the Flex System V7000 Storage Node you can start small and pay as you grow for
performance or capacity.
Expand storage beyond the chassis
Internal
chassis
storage
External
storage
Integrated rack
(w/ internal and external storage)
Separate server and
storage racks
Just because you begin with storage internal to the chassis does not
mean you are limited to that space.
Figure 6-16. IBM Flex System V7000 Storage Node topics NGT113.0
Notes:
This topic provides an overview of the IBM Flex System V7000 installation and GUI.
IBM Flex System V7000 Storage Node topics
Flex System platform details
Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-17. Flex System V7000 installation planning NGT113.0
Notes:
While the Flex System V7000 comes preinstalled and configured within a PureFlex solution, you
may find yourself installing a new system at some point. Before installing a Flex System V7000 in
your Flex System environment:
Verify that adequate space is available in the chassis and that the requirements for power and
environmental conditions are met. This documentation should have been included as part of the
physical planning for the environment of your system.
Ensure that the items that are listed in your packing slip match what is in the box, including any
optional items that you ordered.
Installation of the Flex System V7000 requires removal of four front filler panels and two compute
node shelves. Once the fillers and shelves are removed, two chassis rails must be removed
from the chassis.
Depending upon how the chassis is currently configured, it might be necessary to remove shelves
and add or remove shelf supports from the IBM Flex System chassis.
New installation
Verify environment requirements
Verify all items are included against
the packing slip
Removal of components
Fillers
Compute node shelves
Chassis rails (2)
Flex System V7000 installation planning
Figure 6-18. Flex System V7000 initial setup wizard (1 of 3) NGT113.0
Notes:
After installing a new IBM Flex System V7000 Storage Node control enclosure within the IBM Flex
System, and once it is visible within the Chassis Map interface, you are ready to launch the IBM Flex
System V7000 EZSetup task to complete the initial setup of the storage node. There are two methods
that can be used for the initial setup of the IBM Flex System V7000 Storage Node. The method
used depends upon the configuration of the IBM Flex System.
If the IBM Flex System has an installed and configured IBM Flex System Manager, then it
should be used to set up the IBM Flex System V7000 Storage Node.
If the IBM Flex System does not have an IBM Flex System Manager (FSM) installed and
configured, use the Chassis Management Module to set up the IBM Flex System V7000
Storage Node. If the easy setup is launched from an FSM, then the support call home
configuration screens will be suppressed because the FSM will provide the call home function.
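The decision between the two setup methods, and its side effect on the call-home screens, can be sketched as a small helper (an illustration, not an IBM tool):

```python
# Hedged sketch of the setup-path decision described above: use the FSM
# when one is installed and configured, otherwise use the CMM. When the
# wizard is launched from the FSM, the call-home configuration screens
# are suppressed because the FSM provides that function itself.
def setup_path(fsm_installed_and_configured):
    manager = "FSM" if fsm_installed_and_configured else "CMM"
    show_call_home_screens = not fsm_installed_and_configured
    return manager, show_call_home_screens
```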
Flex System V7000 initial setup wizard (1 of 3)
Two methods that can be
used for the initial setup:
IBM Flex System Manager
If the Flex System chassis
has an installed and
configured IBM Flex System
Manager (FSM), it should be
used to set up the Flex
System V7000 Storage Node.
Chassis Management Module
If the Flex System chassis
does not have an FSM
installed and configured,
use the CMM to set up the
Flex System V7000 Storage
Node.
Figure 6-19. Flex System V7000 initial setup wizard (2 of 3) NGT113.0
Notes:
During this process you will need to select whether you are using an IPv4 or IPv6 management IP
address. The subnet mask and gateway will be listed by default, but can be changed if required.
Click Finish to set the management IP address for the system. System initialization begins and
might take several minutes to complete. When system initialization is complete, System Set Up is
launched automatically.
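For a statically assigned address, one sanity check worth doing before clicking Finish is that the address and gateway fall in the same subnet. A small illustration using only the Python standard library; the addresses in the test are placeholders, not defaults of the product:

```python
import ipaddress

# Illustrative pre-check of a static IPv4 management configuration:
# the management address and the default gateway should sit inside the
# same subnet defined by the netmask.
def valid_static_config(address, netmask, gateway):
    network = ipaddress.ip_network(f"{address}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network
```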
Flex System V7000 initial setup wizard (2 of 3)
Set IPv4 or IPv6 IP address:
Specify the desired V7000 storage node cluster management IP
address configuration.
You can use DHCP or statically assign one.
Once IP information is
provided, the system will
initialize
Figure 6-20. Flex System V7000 initial setup wizard (3 of 3) NGT113.0
Notes:
During the initial setup of the Flex System V7000, the installation wizard asks for various
information that you should have created during planning and have available during this installation
process. If you do not have this information ready or choose not to configure some of these settings
during the installation process, you can configure them later through the management GUI.
Flex System V7000 initial setup wizard (3 of 3)
Welcome to System Setup
Figure 6-21. IBM Flex System V7000 Storage Node topics NGT113.0
Notes:
This topic introduces the IBM Flex System V7000 storage management.
IBM Flex System V7000 Storage Node topics
Flex System platform details
Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-22. Flex System V7000 enhancements NGT113.0
Notes:
IBM Flex System V7000 Storage Node provides a storage solution that can be directly managed
from the CMM as well as the FSM.
Physical chassis plug-and-play integration
- Once you install the Flex System V7000 in a chassis, it is recognized by the CMM
Automated deployment and discovery
- Once the Flex System V7000 is identified by the FSM, you can take advantage of these
features
Integrated into Flex System Manager chassis map
- Unlike external storage devices, the Flex System V7000 is a true chassis resource
FCoE optimized offering (plus FC and iSCSI)
- With the enhanced Ethernet module support option, you can utilize FCoE to your hosts
Flex System V7000 enhancements
Physical chassis plug-and-play integration
Automated deployment and discovery
Integrated into Flex System Manager chassis map
FCoE optimized offering (plus FC and iSCSI)
Figure 6-23. Flex System Manager chassis map with Flex System V7000 NGT113.0
Notes:
Flex System Manager (FSM) is a highly integrated management device that offers single-system
management across physical and virtual resources. It provides complete control of IBM Flex
System components and features. It offers:
Virtualization management
Resource allocation and control
Network and storage control
Flex System Manager chassis map
with Flex System V7000
Flex System Flex System V7000
(Control enclosure)
Flex System Flex System V7000
(Expansion enclosure)
Storage Node 01
Type: Storage Node
Model: xyz
Status Warning
Serial Number: 12345678
-----------------------------------------
Status Warning
Sto1023w - Drive in bay 21 has failed
Sto5386w Canister B is over heating
View contextual, aggregate device information,
health/status, issues, warnings, total/available capacity
Figure 6-24. FSM storage management NGT113.0
Notes:
To be a truly compelling solution, PureSystems cannot simply provide a data path to a storage
device that must be managed separately. With the Flex System Manager (FSM) as our starting
point, device management and storage allocation must be dynamic and flexible. The visual above
notes key elements of this strategy.
Integrated with virtual server
management, allowing virtual disks
to be defined and attached
Dynamic storage provisioning as
part of image deployment
Enable policy-driven placement
within storage system pools
Dynamic zoning/masking as part of
virtual server relocation
Integration with Tivoli Storage
Productivity Center
XiV
TPC
SVC
Hypervisor
Virtual I/O Stack
(intrinsic to hypervisor
or external)
File
System
Storage Pools
FSM storage management
Figure 6-25. FSM storage management capabilities NGT113.0
Notes:
IBM Flex System Manager helps you address storage management challenges from device
deployment through the data life cycle. Storage deployment capabilities in the IBM Flex System
Manager include storage device discovery and simple logical and physical device configuration
from a single interface. IBM Flex System Manager can provide physical and logical storage
topology views and can show relationships between storage and server resources, giving you the
ability to track key resources based on their business usage. Provisioning capabilities include
image management for simple virtual machine creation, deployment and cloning. You can also
manage storage system pools for data life cycle management and storage placement based on
business policies.
FSM storage management capabilities
IBM Flex System Enterprise Chassis and
the management software offer many
storage-management capabilities:
Discovery of physical and virtual storage
devices
Support for virtual images on local storage
across multiple chassis
Inventory of physical storage configuration
Health status and alerts
Storage pool configuration
Disk sparing and redundancy management
Virtual volume management
Support for virtual volume discovery,
inventory, creation, modification, and deletion
Figure 6-26. Flex System V7000 GUI (v7.1): Home - Overview NGT113.0
Notes:
IBM Flex System V7000 Storage Node simplifies storage administration with an easy-to-use single
user interface for all your storage, with a management console that is integrated with the
comprehensive management system. The web-based graphical user interface (GUI) provides a
faster and more efficient management tool to help you monitor, manage, and configure your
storage environment. After you have successfully logged in to the Flex System V7000
management GUI (V7.1), the Home Overview panel contains three main sections for navigating
through the management tool.
1. On the far left of the window are eight function icons.
2. In the middle of the window is a diagram illustrating the existing configuration; you can hover
over the icons or click an icon to display extended help references.
3. The status indicators located at the bottom of the window provide information about
capacity usage, compression ratio, running tasks, and the health status of the system. The
status indicators are visible from all panels in the IBM Flex System V7000 Storage Node GUI.
Flex System V7000 GUI (v7.1): Home - Overview
Status indicators
Function
icons
Extended help
Hover over
icons
Action menu
Quick navigation
Status details
Figure 6-27. Home: Overview - Functions icons NGT113.0
Notes:
The function icons are grouped to show all of the available options. You can hover the cursor
over any one of the eight function icons to display the associated menu options.
Home: Overview - Functions icons
Figure 6-28. IBM Flex System V7000 Storage Node topics NGT113.0
Notes:
This topic introduces the IBM Flex System V7000 advanced management features.
IBM Flex System V7000 Storage Node topics
Flex System platform details
Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-29. Integration of Storwize V7000 technology NGT113.0
Notes:
The IBM Flex System V7000 is built on the industry-leading storage virtualization and efficiency
capabilities of the IBM Storwize V7000. It provides a completely integrated solution with
state-of-the-art storage resources and offers customers a high-capacity solution that can also
grow beyond the physical boundaries of the chassis.
Advanced storage efficiency capabilities
Thin provisioning, FlashCopy, Easy Tier, Real-time Compression,
non-disruptive migration
External virtualization for rapid data center integration
Metro and Global Mirror for multi-site recovery
Integration of Storwize V7000 technology
Figure 6-30. Advanced storage functions NGT113.0
Notes:
IBM Flex System V7000 Storage Node delivers extraordinary levels of storage efficiency through a
variety of IBM technologies that are included and optional.
Advanced storage functions
Thin Provisioning
Real-time Compression
External Virtualization
Remote Mirroring
Easy Tier
FlashCopy
Base software (Included) Optional software
Figure 6-31. Storage virtualization NGT113.0
Notes:
Flex System V7000 supports unmatched performance and flexibility through internal virtualization
and built-in SSD optimization technologies.
Storage virtualization is designed to improve utilization by enabling rapid, flexible provisioning,
and it reduces management complexity with simple configuration changes for greater scalability and
performance. It also enables non-disruptive movement of data among tiers of storage, and
improves flexibility in disaster recovery to help reduce the risk of system failure.
External storage virtualization provides the ability for the IBM Flex System V7000 Storage Node to
manage capacity in other disk systems. When the V7000 Storage Node virtualizes a disk system, its
capacity becomes part of the storage node system and is managed in the same way as capacity
on internal drives. Capacity in external disk systems inherits all the functional richness and
ease of use of the V7000 Storage Node, including advanced replication, thin provisioning, Real-time
Compression, and Easy Tier. Virtualizing external storage helps improve administrator productivity
and boost storage utilization while also enhancing and extending the value of an existing storage
asset.
Storage virtualization
Internal storage virtualized
Enables rapid, flexible provisioning and simple configuration changes
Enables non-disruptive movement of data among tiers of storage, including Easy
Tier
Enables data placement optimization to improve performance
External storage virtualized
Capacity from existing storage systems becomes part of the IBM storage system
Single user interface to manage all storage, regardless of vendor
Designed to significantly improve productivity
Virtualized storage inherits all the rich base system functions
including Real Time Compression, FlashCopy, Easy Tier, Thin Provisioning
Move data transparently between external storage and the IBM storage system
Extends life and enhances value of existing storage assets
Figure 6-32. Storage scalability NGT113.0
Notes:
The Flex System V7000 provides the ability to scale dynamically for high-capacity applications such
as archive: you can add capacity simply by adding disk enclosures. Many features are
supported, and some are installed by default for easy setup. These include thin provisioning,
volume mirroring, FlashCopy (full and incremental copy, multi-target, cascaded, and reverse
FlashCopy, and FlashCopy nocopy with thin provisioning), Metro Mirror, Global Mirror, data
migration, performance management, virtualization, automated failover/failback, and Easy Tier.
Configurations that include more than one control enclosure in the same system require native
Fibre Channel SAN connectivity for communication between the control enclosures. Fibre Channel
over Ethernet (FCoE) is not currently supported for creating a clustered system.
Storage scalability
Capacity scaling
For high capacity applications such as archive, dynamically add
capacity by adding disk enclosures
Scale up to 240 small form-factor (SFF) drives
Scale with large form-factor (LFF) drives optional with Storwize V7000
Clustering
For IO performance intensive applications achieve up to 4x the
performance
Extreme capacity scaling up to 960 SFF drives
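The scaling figures above reduce to simple arithmetic. A quick sketch (assuming the 960-drive clustered maximum is exactly four single-system drive limits):

```python
# Capacity figures quoted above: 240 SFF drives per single Flex System V7000
# system, and up to 4x when clustered. The CLUSTER_FACTOR constant encodes
# the assumption that the clustered figure is four systems' worth of drives.
SFF_PER_SYSTEM = 240
CLUSTER_FACTOR = 4

print(SFF_PER_SYSTEM * CLUSTER_FACTOR)   # maximum SFF drives in a clustered system
```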
Figure 6-33. Storage efficiency: Thin provisioning NGT113.0
Notes:
Greater efficiency can make a difference to the IT department. For example, without thin
provisioning, pre-allocated space is reserved whether the application uses it or not. With thin
provisioning, applications can grow dynamically but consume only the space they are actually
using. Without performance optimization, hot spots may appear due to poor data layout.
Performance optimization can be used to transparently rearrange the data to eliminate hot spots
and balance utilization of all components, and to analyze system performance and throughput.
Storage efficiency: Thin provisioning
Thin provisioning
More productive use of available storage
Across all supported host platforms
Improve storage utilization
Performance optimization
Transparently rearrange the data to eliminate hot-spots and balance
utilization of all components
Without thin provisioning, pre-allocated space is
reserved whether the application uses it or not.
With thin provisioning, applications can grow
dynamically, but only consume space they are
actually using.
(Diagram: dynamic growth; transparent reorganization turns hot spots due to poor data layout into optimized performance and throughput.)
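The contrast described above can be sketched in a few lines of Python. The `ThinVolume` class below is purely illustrative; it is not part of any IBM interface:

```python
# Illustrative model: a thin-provisioned volume reserves no physical space up
# front; pool capacity is consumed only as data is actually written.
class ThinVolume:
    def __init__(self, virtual_size_gb):
        self.virtual_size_gb = virtual_size_gb   # size the host sees
        self.used_gb = 0                         # physical space consumed

    def write(self, gb):
        if self.used_gb + gb > self.virtual_size_gb:
            raise ValueError("write exceeds virtual capacity")
        self.used_gb += gb                       # grows dynamically with writes

# A 500 GB thin volume with only 50 GB written consumes 50 GB of the pool,
# whereas a fully provisioned volume would reserve all 500 GB immediately.
vol = ThinVolume(500)
vol.write(50)
print(vol.used_gb)   # physical space consumed: 50
```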
Figure 6-34. Storage efficiency: Easy Tier management NGT113.0
Notes:
IBM System Storage Easy Tier software provides automated storage tiering for SSD optimization.
SSDs increase throughput for critical activities, such as random reads of large analytics databases,
compared to traditional hard disks. According to IBM estimates, automatic SSD optimization with
Easy Tier provides up to a 200 percent performance increase for I/O-bound applications by migrating
the most active data to SSDs. The Easy Tier technology moves small data extents rather than entire
volumes, so it makes more efficient use of expensive SSDs.
Storage efficiency: Easy Tier management
Busiest data extents are identified and automatically relocated to
highest performing Solid-state Disks
Remaining data extents can take advantage of higher capacity, price
optimized disks
Automatically analyzes data
Uses 24 hour rolling window
Improve performance up to
3x with as little as 10% of
data on SSD
No administrator involvement
(Diagram: Easy Tier learning, then Easy Tier in action. Hot spots due to poor data layout become optimized performance and throughput as busy extents are automatically relocated from HDDs to SSDs; a brokerage transaction workload reaches 240% of its original performance.)
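The relocation idea is easy to model: rank extents by recent I/O activity and promote the hottest ones to the SSD tier. The function and data below are hypothetical, not the actual Easy Tier algorithm:

```python
# Hypothetical sketch of the Easy Tier idea: over a rolling window, count I/O
# per data extent, then relocate the busiest extents to SSD. Extent IDs,
# counts, and the selection policy here are illustrative only.
def plan_relocation(extent_io_counts, ssd_capacity_extents):
    """Return the IDs of the extents with the highest recent I/O counts,
    up to the number of extents the SSD tier can hold."""
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    return set(ranked[:ssd_capacity_extents])

# With room for just one extent on SSD, the single busiest extent is promoted.
io = {"e1": 900, "e2": 15, "e3": 4800, "e4": 60, "e5": 2}
print(plan_relocation(io, 1))   # {'e3'}
```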
Figure 6-35. Storage efficiency: Real-time Compression NGT113.0
Notes:
Real-time Compression processes the data before it is written to the storage device. The key
advantage of this approach is that it reduces the storage resources required for a data set. If done
correctly, the capacity-reduction application preserves the inherent performance of the storage
environment. Already optimized data is written to storage, which mitigates the capacity explosion
challenge at the point of origin. It accomplishes this mitigation by eliminating the need to allocate
the additional storage capacity required by post-processing solutions. Because the primary storage
is used, any compression technique must be run in real time and maintain the high availability
features of the existing storage system.
Compression savings for a given data set can be estimated with the Comprestimator tool.
Storage efficiency: Real-time Compression
Innovative compression with high performance implementation supports
active primary workloads
Compression can help freeze storage growth or delay need for
additional purchases
Uncompressed data                Reduction in disk capacity required
DB2 and Oracle databases         Up to 80%
Virtual servers (VMware):
  Linux virtual OSs              Up to 70%
  Windows virtual OSs            Up to 50%
Office:
  2003                           Up to 60%
  2007 or later                  Up to 20%
CAD/CAM                          Up to 70%
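A back-of-the-envelope calculation using the table's figures (treating the "up to" percentages as achieved savings, which real workloads may not reach; actual results should be estimated with a tool):

```python
# Savings fractions taken from the workload table above; the dictionary keys
# are illustrative shorthand, not product terminology.
savings = {
    "db2_oracle": 0.80,
    "linux_vm": 0.70,
    "windows_vm": 0.50,
    "office_2003": 0.60,
    "office_2007_plus": 0.20,
    "cad_cam": 0.70,
}

def disk_required(uncompressed_gb, workload):
    """Disk capacity needed after compression, assuming best-case savings."""
    return uncompressed_gb * (1 - savings[workload])

print(round(disk_required(1000, "db2_oracle")))   # 200
```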
Figure 6-36. Storage availability NGT113.0
Notes:
You can also use FlashCopy Manager for integrated, instant copies of critical applications: virtually
eliminate backup windows, rapidly create clones for application testing, view an inventory of
application copies, and instantly restore.
Storage availability
FlashCopy
Create instant application copies for
backup or application testing
Make better use of space with incremental
(only changed blocks) or space-efficient
(thin provisioned) snapshots
Reduce space required for copies by 75%
or more
FlashCopy Manager
Integrated, instant copy for critical
applications
Virtually eliminate backup windows
Rapidly create clones for application
testing
View inventory of application copies and
instantly restore
(Diagram: up to 256 FlashCopy copies per source volume.)
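The incremental idea — recopy only the blocks that changed since the last copy — can be modeled in a few lines. This toy model is illustrative only; real FlashCopy tracks changed grains in bitmaps rather than comparing data:

```python
# Toy model of an incremental FlashCopy refresh: after the initial full copy,
# a refresh recopies only the blocks that differ between source and target.
def refresh_copy(source, target):
    changed = [i for i, (s, t) in enumerate(zip(source, target)) if s != t]
    for i in changed:
        target[i] = source[i]
    return len(changed)          # number of blocks actually copied

src = ["a", "b", "c", "d"]
tgt = list(src)                  # initial full copy (4 blocks)
src[2] = "x"                     # application overwrites one block
print(refresh_copy(src, tgt))    # refresh copies just 1 block
```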
Figure 6-37. Storage high availability NGT113.0
Notes:
For high availability, use FlashCopy to create instant application copies for backup or application
testing. This allows you to make better use of space with incremental (only changed blocks) or
space-efficient (thin-provisioned) snapshots, and reduces the space required for copies by 75%
or more.
Transparent data movement lets you efficiently manage technology upgrades and lease
terminations by moving application data from legacy disk arrays to the new system, reducing
migration elapsed time from weeks or months to days. Use Local Mirror for ultra-high-availability
applications to synchronously mirror application data between two separate disk enclosures
attached to the same system.
Storage high availability
Transparent data movement
Efficiently manage technology upgrades
and lease terminations by transparently
moving application data from legacy disk
arrays to the new system.
Reduce migration elapsed time from
weeks or months to days.
Local Mirror
For ultra-high availability applications,
synchronously mirror application data
between two separate disk enclosures
attached to the same system.
Convert volumes from thick to thin.
(Diagram labels: legacy storage, network, application server.)
Figure 6-38. Storage business continuance NGT113.0
Notes:
You can mirror data off site synchronously over metro distances or asynchronously over global
distances. To practice recovery procedures for critical application consistency groups, freeze the
mirror, take a consistent FlashCopy, and practice application recovery procedures from the
FlashCopy.
Site-switching automation can be used to detect mirroring failure and automate failover to the
recovery volume, or to execute practiced application recovery procedures, and you can automate
fail-back after any repair.
Storage business continuance
Mirror data off-site
Synchronously over metro distances
Asynchronously over global distances
Application-level consistency groups
Practice recovery procedures
For critical application consistency groups, freeze
the Mirror and take a consistent FlashCopy
Practice application recovery procedures from the
FlashCopy
Site-switching automation
Detect mirroring failure and automate failover to
Recovery volume
Execute practiced application recovery
procedures
Automate fail-back after repair
(Diagram labels: recovery volume, recovery practice volume, networks between sites.)
Figure 6-39. Packaging options NGT113.0
Notes:
This topic introduces the Flex System V7000 packaging options.
Packaging options
Flex System platform details
Flex System V7000 Storage Node
overview
Flex System V7000 Installation and
GUI interface
Flex System V7000 storage
management
Flex System V7000 advanced
management features
Flex System V7000 packing options
Figure 6-40. IBM PureFlex System configuration options NGT113.0
Notes:
The table shows various components of an IBM PureFlex System configuration. As highlighted, the
Flex System V7000 is a required component.
IBM PureFlex System delivers the following enhancements:
IBM PureFlex System Express: The Express configuration is designed for small and medium
businesses and is the most affordable entry point into a PureFlex System. Businesses want
systems that deliver outstanding capabilities and are tomorrow-ready today, with infrastructure
that allows the business to master big data, social, mobile, analytics, and the flow of critical
information. PureFlex Express delivers an affordable starting point to build a customized
infrastructure that can deliver business advantages and higher client satisfaction.
IBM PureFlex System Enterprise: The Enterprise configuration is optimized for scalable cloud
deployments and has built-in redundancy for highly reliable and resilient operation to support
your critical applications and cloud services. Intended for your most demanding workloads and
environments, PureFlex Enterprise can be scaled as needed with the flexibility and versatility you
demand. It is designed for business-critical workloads and delivers performance, availability,
efficiency, and virtualization in a way that is unique in the industry.
IBM PureFlex System configuration options
IBM PureFlex System configurations        Express                  Enterprise
PureFlex Enterprise Chassis               Required                 Required
Flex System Manager (HW)                  Required                 Required
Flex System Manager (SW) edition          Flex System Manager      Flex System Manager
  pre-installed                                                      Advanced
Integrated 10 Gb IBM switch               Required                 Required (redundant)
Integrated 8 Gb Fibre Channel switch      Required                 Required (redundant)
Emulex 4-port 10 GbE network adapter      Required                 Required (redundant)
8 Gb Fibre Channel expansion card         Required                 Required (redundant)
Flex System V7000 or Flex System          Required                 Required
  V7000 Storage Node
Flex System V7000 or Flex System          Required                 Required
  V7000 Storage Node pre-installed
Railhawk 19-inch rack                     Required                 Required
IBM SmartCloud Entry                      Optional,                Default on
                                            not pre-installed        pre-installed
IBM PureFlex System expansion             Selectable               Selectable
  components: compute nodes, chassis,
  FSM, switches, I/O, disks, and so forth
Figure 6-41. Flex System V7000 licensing requirements NGT113.0
Notes:
The visual lists the software licenses for the IBM Flex System V7000 Storage Node.
Software:
5639-NZ7 / 5766-NX7 (Base Software): required. At a high level, the IBM Flex System V7000
Storage Node base license includes all features except Remote Copy, External Virtualization,
and Real-time Compression.
5639-RE7 / 5766-RX7 (Remote Mirroring): optional
5639-EX7 / 5766-EV7 (External Virtualization): optional
5639-CM7 / 5766-CX1 (Real-time Compression): optional
Additional licenses might be required, and temporary virtualization licenses are available at no
cost for migration purposes and for testing compression.
Flex System V7000 licensing requirements
License type             Unit                        License name                       License required?
Enclosure                Physical enclosure          IBM Flex System V7000              Yes, software license
  (base + expansion)       number                      Base Software*                     per enclosure
External                 Physical enclosure          IBM Flex System V7000 External     Yes, software license
  Virtualization           number of external          Virtualization Software            per external storage
                           storage                     (optional add-on feature)          enclosure
Remote Copy              Physical enclosure          IBM Flex System V7000 Remote       Yes, software license
  (physical)               number                      Mirroring Software                 per enclosure
                                                       (optional add-on feature)
Real-time Compression    Physical enclosure          IBM Flex System V7000 Real-time    Yes, software license
  (RTC)                    number                      Compression Software               per enclosure
                                                       (optional add-on feature)
FlashCopy*               N/A                         N/A                                No
Volume Mirroring         N/A                         N/A                                No
Thin Provisioning*       N/A                         N/A                                No
Volume Migration         N/A                         N/A                                No
Easy Tier*               N/A                         N/A                                No
Figure 6-42. Real-time Compression licensing enhancements NGT113.0
Notes:
As part of the new IBM Storwize Family Software Version 7.2 announcement (effective October 8,
2013), Real-time Compression licensing is now capped at three licenses per control enclosure.
This makes the feature much more attractive for systems with four or more enclosures.
Real-time Compression licensing enhancements
Current licensing
5639-CM1 licensed per enclosure; all-or-nothing licensing
Enhanced licensing
5639-CM7 licensed per enclosure
All-or-nothing licensing but capped at three licenses per
control enclosure
Capped license covers both internal storage and externally
virtualized storage
Same price as 5639-CM1
Figure 6-43. Real-time Compression licensing scenarios NGT113.0
Notes:
Rather than purchasing a license for Storwize Family Software for Flex System V7000 Real-time
Compression (5639-CM1) for every licensed enclosure (internal and external) managed by the IBM
Flex System V7000 disk system, as was required when first released, you can now benefit from the
following modifications to the 5639-CM7 licensing rules:
For any Flex System V7000 system made up of one Flex System V7000 control enclosure plus
more than two other enclosures (whether Flex System V7000 expansion enclosures, Storwize
V7000 expansion enclosures, or third-party external storage enclosures), the maximum number
of 5639-CM7 licenses required is three (refer to scenario 2).
For any Flex System V7000 system made up of multiple Flex System V7000 control enclosures
(in a clustered configuration, for example), the maximum number of 5639-CM7 licenses
required is three per control enclosure (refer to scenarios 3 and 4).
Under this Storwize Family Software V7.2 revision, to authorize use of Real-time Compression
capabilities of the IBM Flex System V7000, you must purchase a license for Storwize Family
Software for Flex System V7000 Real-time Compression (5639-CM7) for each licensed enclosure
managed by the IBM Flex System V7000 Disk System. This includes each internal enclosure
licensed with Storwize Family Software for Flex System V7000 base software (5639-NZ7) and each
Real-time Compression licensing scenarios
Scenario 1: You have a Flex System with 1 control enclosure and 1 expansion enclosure. If you
choose to use RTC, you should purchase 2 CM7 licenses. (Current offering: 2 x CM1; new
offering: 2 x CM7.)
Scenario 2: You have a Flex System V7000 with 1 control enclosure and 2 expansion enclosures.
If you choose to use RTC, you should purchase 3 CM7 licenses. (Current offering: 3 x CM1; new
offering: 3 x CM7.)
Scenario 3: You have a Flex System V7000 with 1 control enclosure and 2 expansion enclosures.
In addition, you have 3 externally virtualized enclosures. If you choose to use RTC on the system,
including the externally virtualized enclosures, you should purchase 3 CM7 licenses. (Current
offering: 6 x CM1; new offering: 3 x CM7.)
Scenario 4: You have a Flex System V7000 with 1 control enclosure and 5 expansion enclosures.
If you choose to use RTC, you should purchase 3 CM7 licenses. (Current offering: 6 x CM1; new
offering: 3 x CM7.)
external enclosure licensed with Storwize Family Software for Flex System V7000 External
Virtualization (5639-EX7), up to a maximum number of three enclosure licenses for each Flex
System V7000 Control enclosure.
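The capping rule described here can be written down as a small calculation. The helper below is a sketch of the rule as stated (a cap of three 5639-CM7 licenses per control enclosure, applied to the total of internal and externally virtualized enclosures), not an official pricing tool:

```python
# Hypothetical helper: number of 5639-CM7 licenses needed under the V7.2 rule.
def cm7_licenses(control, expansion=0, external=0):
    total_enclosures = control + expansion + external
    return min(total_enclosures, 3 * control)   # capped at 3 per control enclosure

# The four scenarios from the visual:
print(cm7_licenses(1, expansion=1))               # scenario 1: 2 licenses
print(cm7_licenses(1, expansion=2))               # scenario 2: 3 licenses
print(cm7_licenses(1, expansion=2, external=3))   # scenario 3: 3 licenses
print(cm7_licenses(1, expansion=5))               # scenario 4: 3 licenses
```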
Figure 6-44. Base storage configuration for PureFlex NGT113.0
Notes:
Listed are the base storage configuration options for the IBM PureFlex System.
Base storage configurations
IBM PureFlex
System
Express
Flex System/Storwize V7000
Default Configuration:
2 x 200GB SSD
8 x 600GB HDD
IBM PureFlex
System
Enterprise
Flex System/Storwize V7000
Default Configuration:
4 x 200GB SSD
16 x 600GB HDD
Base storage configuration for PureFlex
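As a quick sanity check, the raw (pre-RAID, pre-sparing) capacity of the default drive mixes works out as follows; the helper below is illustrative arithmetic only:

```python
# Raw capacity of the default PureFlex drive mixes listed above, in GB.
def raw_capacity_gb(ssd_count, hdd_count, ssd_gb=200, hdd_gb=600):
    return ssd_count * ssd_gb + hdd_count * hdd_gb

print(raw_capacity_gb(2, 8))    # Express default (2 x 200GB SSD + 8 x 600GB HDD)
print(raw_capacity_gb(4, 16))   # Enterprise default (4 x 200GB SSD + 16 x 600GB HDD)
```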
Figure 6-45. Storage configuration NGT113.0
Notes:
The LUN names reflect the serial number of the IBM Storwize V7000, followed by the LUN's
purpose.
VIOS1 is for the preinstalled primary PowerVM Virtual I/O Server.
VIOS2 is for the preinstalled secondary PowerVM Virtual I/O Server when redundant VIOS is
configured.
TEMPLATE_OS is for the preinstalled client operating system image used by the Virtual I/O
Server to create and install additional client virtual servers.
MEDIA is for the Virtual I/O Server to store various CD or DVD installation media and then
share it with client virtual servers as a virtual optical drive.
SCE is for the preinstalled SmartCloud Entry virtual server, if the SmartCloud Entry software
was ordered.
The creation of these LUNs and preinstalled software reflect some of the integration performed by
IBM manufacturing for the IBM PureFlex System.
Storage configuration
Pre-configured storage makes deployment simple.
When the primary compute node is a Power Systems compute
node, the following LUNs are created*:
* http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.pureflex.doc/storage_standard_v1.0.pdf
Type               Storage capacity   Name
VIOS-1             40 GB              SN10XXXXX_VIOS1
VIOS-2             40 GB              SN10XXXXX_VIOS2
OS                 100 GB             SN10XXXXX_TEMPLATE_OS
Media repository   400 GB             SN10XXXX_MEDIA
SCE                50 GB              SN10XXXXX_SCE
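The naming convention can be illustrated with a tiny helper. The serial value below is made up for illustration; real names use the actual Storwize V7000 serial number:

```python
# Illustrative sketch of the LUN naming convention: serial number, then purpose.
def lun_name(serial, purpose):
    return f"{serial}_{purpose}"

serial = "SN1012345"   # hypothetical serial, not a real system
for purpose, size_gb in [("VIOS1", 40), ("VIOS2", 40),
                         ("TEMPLATE_OS", 100), ("MEDIA", 400), ("SCE", 50)]:
    print(f"{lun_name(serial, purpose):25s} {size_gb} GB")
```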
Figure 6-46. Installed devices ready to go NGT113.0
Notes:
The Flex System V7000 GUI provides a visual representation of what is installed.
Installed devices ready to go
With storage as an appliance, you are ready to go once power
is provided to your chassis.
Figure 6-47. Glossary NGT113.0
Notes:
This slide presents a glossary of terms used in this topic.
Glossary
IBM Flex System
Expert integrated systems
Chassis Management Module
Clustering
Compute node
Control canister
Control enclosure
Detailed level
Easy tier
Expansion canister
Expansion enclosure
FlashCopy
IBM Flex System Enterprise
Chassis
IBM Flex System Manager
IBM Flex System V7000 Storage
Node
Platform management
Real-time Compression
Replication
Starting Level
Storage virtualization
Thin provisioning
Upper level
Figure 6-48. Checkpoint (1 of 2) NGT113.0
Notes:
Write your answers here:
1.
2.
3.
Checkpoint (1 of 2)
1. What is the maximum number of IBM Flex System V7000 Storage
Nodes that can be installed in a single IBM Flex System
configuration?
a. Two
b. Three
c. Four
d. Five
2. How many bays does the IBM Flex System V7000 Storage node
occupy in the IBM Flex System Enterprise Chassis?
a. One
b. Two
c. Four
3. True or False: During the Flex System V7000 Initial Setup, there are
three methods that can be used to set up the system.
Figure 6-49. Checkpoint (2 of 2) NGT113.0
Notes:
Write your answers here:
4.
5.
Checkpoint (2 of 2)
4. True or False: The IBM Flex System V7000 Storage Node is
based on two enclosures.
5. Identify the advanced features and functions that are
included at no charge with the Flex System V7000.
a. Thin provisioning
b. Real-time Compression
c. Easy Tier
d. Remote Mirroring
e. External virtualization
Figure 6-50. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Summarize the features of the IBM Flex System
Identify the Flex System V7000 Storage Node component features
Summarize the installation planning and configuration steps associated with the IBM Flex
System V7000
Identify the basic usage and functionality of IBM Flex System V7000 GUI menu
Identify the major elements of the IBM Flex System V7000 storage management
Recognize the IBM Flex System V7000 advanced storage features
Unit summary
Having completed this unit, you should be able to:
Summarize the features of the IBM Flex System
Identify the Flex System V7000 Storage Node component
features
Summarize the installation planning and configuration steps
associated with the IBM Flex System V7000
Identify the basic usage and functionality of IBM Flex System
V7000 GUI menu
Identify the major elements of the IBM Flex System V7000
storage management
Recognize the IBM Flex System V7000 advanced storage
features
Unit 7. IBM Flex System networking
What this unit is about
This section is an overview of IBM Flex System networking.
What you should be able to do
After completing this unit, you should be able to:
Summarize the I/O architecture of IBM Flex System
Recognize the IBM Flex System I/O adapters
Recognize the IBM Flex System I/O modules
Identify some of the key features supported by IBM Flex System
networking components
Identify the different methods to perform switch administration
How you will check your progress
Checkpoint questions
Lab exercises
References
IBM Flex System Information Center:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
Figure 7-1. Unit objectives NGT113.0
Notes:
These are the objectives for this unit. In this overview unit, we will touch on each of these objectives briefly.
Unit objectives
After completing this unit, you should be able to:
Summarize the I/O architecture of IBM Flex System
Recognize the IBM Flex System I/O adapters
Recognize the IBM Flex System I/O modules
Identify some of the key features supported by IBM Flex
System networking components
Identify the different methods to perform switch administration
Figure 7-2. IBM Flex System networking topics NGT113.0
Notes:
This section covers:
I/O architecture
- Midplane
- CMM
- FSM
- Compute nodes (x compute node and Power compute node)
- ScSE
Port mapping of elements in IBM Flex System
Benefits of ScSE
IP address scheme for the IBM Flex System management network
IBM Flex System I/O architecture
IBM Flex System I/O adapters
IBM Flex System Scalable Switch Elements (ScSE)
IBM Flex System networking features
Basic switch administration
IBM Flex System networking topics
Figure 7-3. IBM Flex System I/O architecture NGT113.0
Notes:
In the IBM Flex System chassis network environment, you have the option to configure separate
management and data networks. In the diagram above, both networks are shown.
IBM Flex System I/O architecture
Enterprise Chassis
Flex System Manager System x
compute node
iMMv2
LOM
L2 Switch
eth0
eth1
iMMv2
CMM
1Gb
LOM
I/O bay 1 I/O bay 2
I/O bay 1 I/O bay 2 ScSE bay 1 ScSE bay 2
Power Systems
compute node
FSP
I/O
adapter
CMM CMM CMM
Management
network
Data network
Figure 7-4. Midplane: Front NGT113.0
Notes:
The IBM Flex System chassis midplane is passive; it contains no active components.
The midplane has one power domain and supports N+N and N+1 power redundancy.
Mezzanine connectors: These are the connectors through which the node I/O adapters connect to
the midplane. There are four such connectors in a row on the midplane: a half-wide node
occupies two mezzanine connectors in a row, and a full-wide node occupies four.
Management connector: Connects node management elements, including the Integrated
Management Module (IMM) on System x nodes and the FSP on Power Systems nodes, to the
midplane.
Mezzanine and Management connectors are isolated from each other so that data traffic and
management traffic are segregated.
Passive midplane: No
active components
Symmetrical
Mezzanine connectors
Power connector
Management connector
Mezzanine connectors
Midplane: Front
Figure 7-5. Midplane: Rear NGT113.0
Notes:
The diagram shows the rear of the midplane.
ScSE signal connectors: Scalable Switch Element connectors are placed vertically in four
columns, where four ScSE module switches connect to the midplane.
CMM connectors: Primary and standby CMMs connect to these slots.
Power supply connectors are for attaching the 6 power supplies.
ScSE power connector is a single connector from which a ScSE derives its power.
The IBM Flex System chassis provides support for up to three traditional fabrics using networking
switches, SAN switches, or pass-through devices. The chassis also supports up to four switches
and protocols such as Ethernet, Fibre Channel, FCoE, iSCSI, and InfiniBand.
ScSE signal connectors
CMM connectors
ScSE power connector
Power supply connectors
Midplane: Rear
Figure 7-6. I/O options: 2S X-Architecture compute node NGT113.0
Notes:
The IBM Flex System half wide X-Architecture compute node supports the following connectivity
options:
Two mezzanine cards
Dual port 10Gb Virtual Fabric LOM (selected models)
ETE connector for expansion connection options (for example, SEN and PEN)
IMMv2 via the management port
Either a mezzanine card in I/O connector 1 or a LOM periscope connector is installed, but not both.
Mezzanine slot 1
Mezzanine slot 2
LOM periscope connector
IMMv2
Management network connector
ETE connector
Management network connector
I/O options: 2S X-Architecture compute node
Figure 7-7. I/O options: 2S Power Systems compute node NGT113.0
Notes:
The IBM Flex System 2S Power Systems compute node supports the following connectivity
options:
Two Mezzanine cards
ETE connector for additional connectivity (for example, SAS adapter for dual VIOS support)
FSP by way of Management port
The Power Systems compute node does not have a LOM option.
Mezzanine slot 1
Mezzanine slot 2
ETE connector
Management
network connector
Management
network connector
I/O options: 2S Power Systems compute node
Figure 7-8. I/O options: FSM NGT113.0
Notes:
IBM FSM node supports the following I/O connectivity options:
Dual port 10Gb Virtual Fabric LOM
ETE connector with the management adapter card
IMMv2 by way of the management port. However, it shares the external management port with
the management adapter card
The two mezzanine I/O connectors are not used.
With the above options, the FSM can communicate on both the data and the management network.
ETE
LOM connector
Management adapter
card
Management connector
IMM v2
Flex System Manager
iMMv2
LOM
L2 Switch
eth0
eth1
I/O options: FSM
Figure 7-9. Chassis Management Module NGT113.0
Notes:
For the CMM external port, IPv4 and IPv6 are enabled by default. IPv6 can be disabled on the
CMM. IPv4 cannot be disabled on the CMM.
Primary and Standby CMM have a 1Gb network port as the external management port. IPv4
address modes of Static, DHCP and DHCP then static are available. The default settings for the
primary CMM IPv4 address is 192.168.70.100 and for the standby CMM IPv4 address is
192.168.70.99.
IPv6 address modes of Link Local Addressing (LLA), static, DHCP and Stateless Auto
Configuration (SLAC) are available. By default, LLA, DHCP, SLAC are enabled.
A unique IPv4 and IPv6 static address can be configured for the standby CMM.
For failover and high availability, the user can also select to swap or not to swap the primary and
standby IPv4 / IPv6 static addresses, or to define a second floating IPv4 / IPv6 static address which
is always active for only the Primary CMM.
With compute node management, IPv4 and IPv6 are enabled by default. IPv4 can be disabled (not
configured), but IPv6 cannot be disabled (used for internal management network communication). If
you configure an externally routable IP address for node management, this will enable direct
IBM Flex System CMM includes a 1 Gb network port.
CMM provides IPv4 and IPv6 configuration for all
chassis components except FSM software.
Default IPv4 static addresses:
Primary CMM: 192.168.70.100
Standby CMM: 192.168.70.99
Compute node: 192.168.70.1XX
x222 upper node: 192.168.70.Y (Y=130 + XX)
I/O modules: 192.168.70.12X
Default credentials of CMM and IMMv2:
User name: USERID
Password: PASSW0RD (0 is zero)
1 Gb network port
Chassis Management Module
access to the compute node service processor from an external console. If you do not configure an
externally routable IP address for node management, direct access to the compute node service
processor from an external console is disabled. In that scenario, the external console must connect
to the CMM first and then navigate to the service processor.
Compute node management default IPv4 address is 192.168.70.1xx (xx is based on bay number).
For the x222, the upper node default IPv4 address is 192.168.70.Y (Y = 130 + xx).
The Scalable Switch Elements have IPv4 and IPv6 enabled by default. Neither can be disabled.
The ScSE default IPv4 address is 192.168.70.12x (x is based on bay number).
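As a quick sanity check, the default addressing scheme described above can be expressed as a small function. This is an illustrative sketch only: the component keywords and function name are our own, not an IBM API, and the bay-number arithmetic assumes `xx` simply equals the bay number.

```python
def default_ipv4(component: str, bay: int = 0) -> str:
    """Return the factory-default IPv4 address for a chassis element.

    Scheme (all on 192.168.70.0/24, as stated in the notes above):
    primary CMM -> .100, standby CMM -> .99,
    compute node bay XX -> .1XX, x222 upper node -> .(130 + XX),
    I/O module (ScSE) bay X -> .12X.
    """
    if component == "cmm-primary":
        return "192.168.70.100"
    if component == "cmm-standby":
        return "192.168.70.99"
    if component == "node":            # compute node bays 1-14
        return f"192.168.70.{100 + bay}"
    if component == "x222-upper":      # upper node of a double-dense x222
        return f"192.168.70.{130 + bay}"
    if component == "io-module":       # ScSE bays 1-4
        return f"192.168.70.{120 + bay}"
    raise ValueError(f"unknown component: {component}")
```

For example, `default_ipv4("node", 5)` gives `192.168.70.105` and `default_ipv4("io-module", 2)` gives `192.168.70.122`, matching the defaults listed on the slide.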
Figure 7-10. Management IP network in CMM NGT113.0
Notes:
For one fully loaded IBM Flex System Chassis, users may need to plan for a minimum of 20 IP
addresses to individually manage the IBM Flex System components, including two CMMs, fourteen
compute nodes (service processors), and four I/O modules.
Ensure that all management IP addresses in the chassis belong to the same subnet. Administrators
can choose DHCP or static addressing, IPv4 or IPv6. With multiple chassis, static addressing
raises the chance of IP conflicts; in such cases, DHCP addressing is recommended.
A ScSE can also be managed out-of-band via switch external ports.
The CMM must be connected to the network. When connecting to one of the IBM Flex System
chassis components, packets are routed through the 1Gb switch inside the CMM.
Minimum of 20 IP addresses might be required to directly manage
chassis components.
Two x CMM (primary and standby)
Fourteen x node (IMM on System x and FSP on Power Systems)
Four x I/O modules (ScSE elements)
Administrators can choose DHCP or static, IPv4, or IPv6 addressing.
All the above components can be accessed and managed
independently.
Management network
CMM1
CMM2
Switch user
Node 10 user
Admin
network
Data
network
Management IP network in CMM
Figure 7-11. ScSE bay order in IBM Flex System Chassis NGT113.0
Notes:
The visual shows the physical order of the ScSE modules installed in an IBM Flex System chassis.
Figure labels: ScSE Bay 1, ScSE Bay 3, ScSE Bay 2, ScSE Bay 4.
ScSE bay order in IBM Flex System Chassis
Figure 7-12. ScSE I/O connectivity NGT113.0
Notes:
This graphic illustrates the high speed interconnect between the compute nodes and the scalable
switch elements.
Each mezzanine card connects to two ScSEs.
Connection from the mezzanine card to the ScSE provides four lanes of signaling.
The routing of the LOM or mezzanine card 1 signaling is to the scalable switch elements in bays
1 and 2.
The routing of the mezzanine card 2 signaling is to the scalable switch elements in bays 3 and
4.
Ensure that, whichever I/O connector you use for a given adapter card, a compatible ScSE is
installed in the associated switch bay.
Figure labels: ScSE 1, ScSE 2, ScSE 3, ScSE 4.
ScSE I/O connectivity
The I/O module in a bay
determines the fabric
connection out of that bay
The adapter slot I/O connector
is prewired to certain I/O
module bays
Each adapter to I/O module
connection has 4 x1 lanes or 1
x4 lanes
The adapter determines the
number and type of lanes
consumed
Figure 7-13. I/O modules connector architecture NGT113.0
Notes:
The slide illustrates the LOM connector and mezzanine card connector.
Every LOM or mezzanine card connector has eight signaling lanes
The signaling lanes are divided into two groups of four lanes that are connected to a particular
ScSE
Each lane can carry a bandwidth of 16 Gb and is denoted by x1
Some mezzanine cards will group the four lanes to work as one lane in order to support higher
bandwidths
A lane can be loosely associated with a port that is presented to the compute node
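The lane arithmetic above can be checked with a short sketch (the constant names are ours, chosen for illustration):

```python
LANE_GBPS = 16            # bandwidth per x1 lane, as stated above
LANES_PER_GROUP = 4       # lanes A, B, C, D in one group
GROUPS_PER_CONNECTOR = 2  # one four-lane group per attached ScSE

# Aggregate bandwidth toward a single ScSE, and for the whole connector.
per_scse_gbps = LANE_GBPS * LANES_PER_GROUP                 # 64 Gb
per_connector_gbps = per_scse_gbps * GROUPS_PER_CONNECTOR   # 128 Gb
```

So each LOM or mezzanine connector offers 64 Gb of raw signaling toward each of its two ScSEs, or 128 Gb in total across its eight lanes.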
I/O modules connector architecture
Figure labels: LOM Connector and Mezzanine Card Connector, each with two groups of x1 lanes
A, B, C, D; the LOM connects to ScSE1 and ScSE2, the mezzanine card to ScSE3 and ScSE4.
Figure 7-14. ScSE I/O architecture: ScSE scalability NGT113.0
Notes:
IBM Flex System switch modules are designed to scale with customer needs. IBM Flex System
I/O connectors transmit signals over four lanes per ScSE connection. The four lanes from the I/O
connector in a node connect to four different port zones in a ScSE.
A LOM or mezzanine card 1 connects to ScSE1 and ScSE2. Mezzanine card 2 connects to ScSE3
and ScSE4.
Consider an ScSE module implementation that accepts four x1 (single-lane) signals. The base
feature of an ScSE is designed to connect to a maximum of 14 internal ports, one lane from each of
14 nodes.
At a maximum, a future ScSE may support an I/O adapter connecting all four x1 signals (lanes)
from each of its ports, providing up to 56 internal ports. Uplink ports on the ScSE may
also be enabled in various ways based on the ScSE.
For additional features like FCoE, iSCSI, additional ports, and so on, IBM offers licensing to activate
these features via IBM Flex System Manager (FSM). This is referred to as Features on Demand
(FoD).
The IBM Flex System Scalable Switch Element concept thus enables customers to save on their
initial costs (CAPEX) and also save on running costs (OPEX).
Figure labels: ScSE tiers Base feature, Upgrade 1, Upgrade 2, and Future, each providing
14 internal ports from 14 nodes; LOM / mezzanine card connector with two groups of four x1
lanes (A, B, C, D); set of uplink ports.
ScSE I/O architecture: ScSE scalability
Figure 7-15. ScSE I/O architecture: Adapter to ScSE mapping NGT113.0
Notes:
This slide illustrates the inter-connectivity between the compute nodes and a scalable switch
element.
In review:
Every port maps to only one ScSE.
Each adapter has eight lanes connecting four lanes to each of the two ScSEs.
Four lanes are mapped one to one with internal ports on the ScSE.
On LOM/Mezzanine card 1: Four lanes connect to ScSE1 and four lanes connect to ScSE2.
On Mezzanine card 2: Four lanes connect to ScSE3 and four lanes connect to ScSE4.
On a half wide compute
node:
LOM/Mezz slot1
Four lanes connect to ScSE 1
A connects to Base
B connects to Update 1
C connects to Update 2
D connects to Future
Four lanes connect to ScSE 2
A connects to Base
B connects to Update 1
C connects to Update 2
D connects to Future
Mezz slot2
Four lanes connect to ScSE 3
Four lanes connect to ScSE 4
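The mapping above is regular enough to express as a small lookup. This is an illustrative sketch under the assumptions stated on the slide (group 1 of each connector goes to the lower-numbered ScSE of the pair, and lanes A-D correspond to the Base, Upgrade 1, Upgrade 2, and Future tiers); the function name is our own.

```python
TIERS = {"A": "Base", "B": "Upgrade 1", "C": "Upgrade 2", "D": "Future"}

def scse_for_lane(slot: int, group: int, lane: str) -> tuple[int, str]:
    """Map an adapter lane to its ScSE bay and internal-port tier.

    Slot 1 (LOM / mezzanine 1) wires to ScSE 1 and 2; slot 2 to ScSE 3 and 4.
    Group 1 is the first four-lane group, group 2 the second.
    """
    if slot not in (1, 2) or group not in (1, 2) or lane not in TIERS:
        raise ValueError("slot and group must be 1 or 2; lane must be A-D")
    bay = (slot - 1) * 2 + group   # 1-1 -> bay 1, 1-2 -> bay 2, 2-1 -> bay 3, 2-2 -> bay 4
    return bay, TIERS[lane]
```

For example, lane A of group 1 on slot 1 lands on a Base port of ScSE 1, while lane C of group 2 on slot 2 lands on an Upgrade 2 port of ScSE 4.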
Figure labels: half-wide compute node with a LOM / Mezz card and a second Mezz card, each
with two groups of four x1 lanes (A, B, C, D), connecting to ScSE1, ScSE2, ScSE3, and ScSE4.
ScSE I/O architecture: Adapter to ScSE mapping
Figure 7-16. ScSE I/O architecture: Two- and four-port adapters NGT113.0
Notes:
This slide illustrates the mapping between the half-wide compute node and the four ScSEs. The
node has a total of six I/O ports (two on the LOM in slot 1 plus four on the adapter in slot 2).
The dual-port card connects through one lane to each ScSE; the quad-port card uses two lanes
to each ScSE.
Because two lanes per node are used on I/O connector 2, Base and Upgrade 1 on ScSE3 and ScSE4
must be activated and used. If 14 half-wide nodes are installed, 28 internal ports (14 in Base and 14
in Upgrade 1) of ScSE3 and ScSE4 will be used.
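The port-and-licence counting above can be sketched as a short calculation. This is an illustrative model under the stated assumptions (an adapter splits its ports evenly across a ScSE pair, and each licence tier unlocks 14 internal ports); the names are ours, not IBM terminology.

```python
import math

LICENSE_TIERS = ["Base", "Upgrade 1", "Upgrade 2", "Future"]
PORTS_PER_TIER = 14  # each tier unlocks one lane from each of 14 nodes

def tiers_needed(adapter_ports: int, nodes: int = 14):
    """Internal ports consumed on each ScSE of a pair, and the tiers to enable.

    A two-port card uses one lane per ScSE; a quad-port card uses two.
    """
    lanes_per_scse = adapter_ports // 2
    internal_ports = lanes_per_scse * nodes
    tier_count = math.ceil(internal_ports / PORTS_PER_TIER)
    return internal_ports, LICENSE_TIERS[:tier_count]
```

With 14 nodes, a quad-port adapter consumes 28 internal ports per ScSE and needs Base plus Upgrade 1, matching the scenario described above; a dual-port adapter needs only Base.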
ScSE licensing:
ScSE 1: Base, 14 internal ports
ScSE 2: Base, 14 internal ports
ScSE 3: Update 1, 28 internal ports
ScSE 4: Update 1, 28 internal ports
Figure labels: ScSE1, ScSE3, ScSE2, ScSE4; dual-port and quad-port adapter connectors, each
with groups of x1 lanes A, B, C, D; shaded markers represent Base connections on ScSE modules
and Update 1 connections to ScSE modules.
ScSE I/O architecture: Two- and four-port adapters
Figure 7-17. IBM Flex System Manager appliance networking NGT113.0
Notes:
In the example above with four chassis, connect all four CMMs (primary and standby) to a
top-of-rack switch.
The FSM talks to the local CMM via the midplane, since it is located in the same chassis. When the
FSM manages additional Enterprise Chassis, it reaches the other CMMs through the local CMM and
the top-of-rack switch.
When a customer wants to keep data and management traffic separate, they must configure the
FSM data network interface (eth1), which routes through the I/O modules, and the FSM
management network interface (eth0), which routes through the CMM.
When a customer wants a single network for both data and management traffic, they must connect
the CMMs to a merged network that includes access to the I/O switch uplinks.
FSM can manage
multiple IBM Flex
System Chassis
Connect CMMs to a top
of rack switch (primary
and standby)
FSM communicates
with the CMMs to
manage them
FSM
IBM Flex
System4
IBM Flex
System3
IBM Flex
System2
IBM Flex
System1
CMM 1
CMM2
CMM3
CMM4
Admin
console
Standby CMM
FSM**
IBM Flex System Manager appliance networking
Figure 7-18. IBM Flex System networking topics NGT113.0
Notes:
This section covers the IBM Flex System I/O adapters. The topics we will cover are:
Various types of I/O adapters for IBM Flex System.
Compatibility of adapters.
Naming scheme of adapters.
Selecting the right adapter based on real life requirements
IBM Flex System I/O architecture
IBM Flex System I/O adapters
IBM Flex System Scalable Switch Elements (ScSE)
IBM Flex System networking features
Basic switch administration
IBM Flex System networking topics
Figure 7-19. I/O adapters: X-Architecture compute nodes NGT113.0
Notes:
These adapter cards are supported on the X-Architecture compute nodes.
The list includes Ethernet adapters, a converged network adapter, FC cards, and an InfiniBand
card. Any supported I/O adapter can be installed in either I/O connector. However, you must have a
consistent fabric type not only across chassis but across all compute nodes.
The X-Architecture compute nodes have selected models that include the onboard 10Gb Virtual
Fabric adapter (LOM) which provides a two port Ethernet adapter using the I/O connector 1.
*Supported on the x222 node. Other adapters in the table are not supported on x222.
Part number Description
49Y7900 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
90Y3482 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
90Y3554 IBM Flex System CN4054 10Gb Virtual Fabric Adapter
69Y1938 IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375 IBM Flex System FC3052 2-port 8Gb FC Adapter
88Y6370 IBM Flex System FC5022 2-port 16Gb FC Adapter
95Y2386 IBM Flex System FC5052 2-port 16Gb FC Adapter
95Y2391 IBM Flex System FC5054 4-port 16Gb FC Adapter
69Y1942 IBM Flex System FC5172 2-port 16Gb FC Adapter
95Y2379 IBM Flex System FC5024D 4-port 16Gb FC Adapter*
90Y3486 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter*
90Y3454 IBM Flex System IB6132 2-port FDR InfiniBand Adapter
I/O adapters: X-Architecture compute nodes
Figure 7-20. I/O adapters: Power Systems compute nodes NGT113.0
Notes:
All the I/O adapter slots on these nodes are identical in shape (form factor). Unlike on the
X-Architecture nodes, the I/O adapters for the Power Systems compute nodes have their own
connector that plugs into the Enterprise Chassis midplane. The list includes Ethernet adapters, a converged network adapter, FC
cards, and an InfiniBand card. Any supported I/O adapter can be installed in either I/O connector.
However, you must have a consistent fabric type not only across chassis but across all compute
nodes.
There is no onboard NIC option on the Power Systems compute node. You must have I/O adapters
installed to have external network connectivity.
* Currently, all p260, p24L, and p460 configurations must include a 10Gb (#1762 or #EC24) or 1Gb
(#1763) Ethernet adapter in slot 1 of the compute node.
Feature code Description
1762* IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
1763* IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
EC24* IBM Flex System CN4058 8-port 10Gb Converged Adapter
EC26 IBM Flex System EN4132 2-port 10Gb RoCE Adapter
1764 IBM Flex System FC3172 2-port 8Gb FC Adapter
A45R IBM Flex System FC5052 2-port 16Gb FC Adapter
A45S IBM Flex System FC5054 4-port 16Gb FC Adapter
1761 IBM Flex System IB6132 2-port QDR InfiniBand Adapter
I/O adapters: Power Systems compute nodes
Figure 7-21. Naming scheme for I/O adapters NGT113.0
Notes:
This slide identifies the naming scheme of the adapters.
The first two characters stand for the type of the fabric that it supports.
The next digit represents the speed it is capable of handling.
The next two digits represent the vendor name. Vendor codes are assigned alphabetically, starting
with A = 01; IBM begins with I, the ninth letter of the alphabet, so its two-digit vendor code is 09, and so on.
The final digit represents the maximum number of supported ports on the card.
IBM Flex System EN2024 4-port 1Gb Ethernet adapter
EN2024
Fabric type
EN: Ethernet
FC: Fibre Channel
CN: Converged network
IB: InfiniBand
Series:
2 for 1Gb
3 for 8Gb
4 for 10Gb
5 for 16Gb
6 for 40Gb/56Gb
Vendor name where A:01
02: Brocade/Broadcom
05: Emulex
09: IBM
13: Mellanox
17: QLogic
Maximum ports
4: Four ports
Naming scheme for I/O adapters
Figure 7-22. IBM Flex System EN2024 4-port 1Gb Ethernet Adapter NGT113.0
Notes:
The EN2024 4-port 1Gb Ethernet Adapter is a quad-port network adapter from Broadcom that
provides 1Gb per second, full duplex, Ethernet links between a compute node and Ethernet switch
modules installed in the IBM Flex System. The adapter interfaces to the compute node using the
Peripheral Component Interconnect Express (PCIe) bus.
The IBM Flex System EN2024 4-port 1Gb Ethernet Adapter has the following features:
Connection to 1000BASE-X environments using IBM Flex System Ethernet switches.
Compliance with U.S. and international safety and emissions standards.
Supports Full-duplex (FDX) capability, enabling simultaneous transmission and reception of
data on the Ethernet local area network (LAN).
Virtual LANs (VLANs): IEEE 802.1Q VLAN tagging
Jumbo frames (9 KB)
IEEE 802.3x flow control
Compatible with X-Architecture and Power Systems compute
nodes
Two PCI Express 2.0 x1 host interfaces
Full-duplex (FDX) capability
IBM Flex System
EN2024 4-port 1Gb Ethernet Adapter
Figure 7-23. IBM Flex System CN4054 10Gb Virtual Fabric Adapter NGT113.0
Notes:
The CN4054 10Gb Virtual Fabric Adapter from Emulex is a four-port 10Gb converged network
adapter (CNA) that can scale to up to 16 virtual ports and support multiple protocols like Ethernet,
iSCSI and FCoE.
Two versions:
Base: 90Y3554 - IBM Flex System CN4054 10Gb Virtual Fabric Adapter
Upgrade: 90Y3558 - IBM Flex System CN4054 Virtual Fabric Adapter Upgrade
Features:
On-board flash memory: 16 MB for FC controller program storage
On-board configuration EEPROM to set FCoE or iSCSI modes of operation (not changeable in
the field)
Interoperates with existing FC SAN infrastructures: switches, arrays, SRM tools (including
Emulex utilities), SAN practices, and so forth
Unified Ethernet-to-FC SAN connectivity provided by an FCoE switch
Compatible only with the X-Architecture compute nodes
Operates either as a four-port 1/10 Gb Ethernet adapter or supports up
to 16 vNICs
Supports vNIC modes: IBM Virtual Fabric Mode, Unified Fabric Port
and Switch Independent Mode
With the CN4054 Virtual Fabric Adapter Upgrade (90Y3558), the
adapter adds FCoE and iSCSI hardware initiator support
Supports virtual fabric
IBM Flex System
CN4054 10Gb Virtual Fabric Adapter
Provides 10Gb MAC features such as MSI-X support, jumbo frames (9 KB), VLAN tagging
(802.1Q), per-priority pause / priority flow control, and advanced packet filtering
The adapter can connect at 1Gb or 10Gb bandwidth
FCoE offload option through FoD
iSCSI offload through FoD
Figure 7-24. IBM Flex System FC3172 2-port 8Gb FC Adapter NGT113.0
Notes:
The FC3172 2-port 8Gb FC Adapter from QLogic enables high-speed access for Enterprise Chassis
compute nodes to a Fibre Channel storage area network (SAN). This adapter is based
on the QLogic 2532 8Gb ASIC design and works with any of the 8Gb or 16Gb IBM Flex System
Enterprise Chassis Fibre Channel switch modules.
The FC3172 2-port 8Gb FC Adapter has the following features:
Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol
(FCP-IP)
Support for point-to-point fabric connection (F-port fabric login)
Support for Fibre Channel service (classes 2 and 3)
Compatible with both X-Architecture and Power Systems
nodes
PCI Express 2.0 x4 host interface
Bandwidth: 8 Gb per second maximum at half-duplex and
16 Gb per second maximum at full-duplex per port
IBM Flex System
FC3172 2-port 8Gb FC Adapter
Figure 7-25. IBM Flex System networking topics NGT113.0
Notes:
This section covers the IBM Flex System Scalable Switch Elements (ScSE). The topics we will
cover are:
Naming scheme and classification of Scalable Switches
List of Scalable Switches
The EN4093 and EN4093R Virtual Fabric Scalable Switch and its scalability
Cables and connectivity options for EN4093 and EN4093R
Network management features
IBM Flex System I/O architecture
IBM Flex System I/O adapters
IBM Flex System Scalable Switch Elements (ScSE)
IBM Flex System networking features
Basic switch administration
IBM Flex System networking topics
Figure 7-26. Naming scheme for ScSE NGT113.0
Notes:
IBM Flex System Scalable Switch elements have code names with the following format:
The first two characters stand for the type of the fabric that it supports.
The next digit represents the series of the switches identifying the speed.
The next two digits represent the vendor name. Vendor codes are assigned alphabetically, starting
with A = 01; IBM begins with I, the ninth letter of the alphabet, so its two-digit vendor code is 09, and so on.
The final digit represents the maximum number of internal port groups on the ScSE.
IBM Flex System EN2092 1Gb Ethernet scalable switch
EN2092
Fabric type: EN = Ethernet, FC = Fibre Channel, CN = Converged network, IB = InfiniBand, SI = System Interconnect
Series: 2000 for 1Gb, 3000 for 8Gb, 4000 for 10Gb, 5000 for 16Gb, 6000 for 40Gb/56Gb
Vendor name (where A = 01): 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
Max # internal port groups: 2 = two
Naming scheme for ScSE
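The naming scheme above can be sketched as a small parser. This is an illustrative teaching helper, not part of any IBM tool; the field layout (two-letter fabric type, one-digit series, two-digit vendor, one-digit internal port groups) follows the slide.

```python
# Illustrative decoder for the ScSE naming scheme described above.
FABRICS = {"EN": "Ethernet", "FC": "Fibre Channel", "CN": "Converged network",
           "IB": "InfiniBand", "SI": "System Interconnect"}
SERIES = {"2": "1Gb", "3": "8Gb", "4": "10Gb", "5": "16Gb", "6": "40Gb/56Gb"}
VENDORS = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def parse_scse_name(name):
    """Decode a six-character ScSE code name such as 'EN2092'."""
    return {
        "fabric": FABRICS[name[:2]],              # first two characters
        "speed": SERIES[name[2]],                 # series digit
        "vendor": VENDORS[name[3:5]],             # alphabetic vendor code
        "internal_port_groups": int(name[5]),     # final digit
    }

print(parse_scse_name("EN2092"))
# {'fabric': 'Ethernet', 'speed': '1Gb', 'vendor': 'IBM', 'internal_port_groups': 2}
```

Applying the same decoder to FC5022 correctly identifies a 16 Gb Fibre Channel switch from Brocade, matching the options list that follows.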
Figure 7-27. IBM Flex System Scalable Switch options NGT113.0
Notes:
These are the scalable switch element options that are available for IBM Flex System.
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
IBM Flex System EN4091 10Gb Ethernet Pass-thru
IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
IBM Flex System EN6131 40Gb Ethernet Switch
IBM Flex System Fabric SI4093 System Interconnect Module
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
IBM Flex System FC3171 8Gb SAN Switch
IBM Flex System FC3171 8Gb SAN Pass-thru
IBM Flex System IB6131 InfiniBand Switch
IBM Flex System Scalable Switch options
Figure 7-28. IBM Flex System EN4093 / EN4093R 10Gb Scalable Switch NGT113.0
Notes:
The IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches are 10 Gb, 64-port, upgradeable midrange to high-end switch modules, offering Layer 2/3 switching, designed to install within the I/O module bays of the Enterprise Chassis. Each switch has up to 42 internal 10 Gb ports, up to 14 external 10 Gb uplink ports (SFP+ connectors), and up to two external 40 Gb uplink ports (QSFP+ connectors).
The difference between the EN4093 and its refreshed version, the EN4093R, is that the EN4093R supports switch stacking. Stacking is a feature where multiple EN4093R switches can be bundled together to work as one logical switch.
Each 40 Gb uplink port can also be converted to four 10 Gb ports using a QSFP+ to SFP+ breakout cable.
IBM Flex System EN4093 / EN4093R 10Gb Scalable Switch
Physical features
42 x 10Gb internal ports
14 x 10Gb SFP+ uplinks
2 x 40Gb QSFP+ uplinks
1 x RJ45 management port
2 x internal management ports
1 x mini-USB RS-232 serial port
FoD Scalability
Base
14 x 10Gb internal ports
10 x 10Gb SFP+ uplink ports
Upgrade 1
Additional 14 x 10Gb internal ports
2 x 40Gb QSFP+ uplink ports
Upgrade 2 (upgrade 1 is prerequisite)
Additional 14 x 10Gb internal ports
Additional 4 x 10Gb SFP+ uplink ports
Features
IBM Networking OS
Layer 2/3 Ethernet functionality
Easy Connect Mode
Virtual Fabric / Unified Fabric Port
VMready
FCoE transit switch operations (also in stacking mode)
OpenFlow (Software Defined Network)
Figure 7-29. Connector and cable options for EN4093 and EN4093R NGT113.0
Notes:
The connectors and cabling suitable for the EN4093 and EN4093R are shown in this table. The transceiver options are listed first, followed by cables.
Part number Description
44W4408 10GBase-SR SFP+ transceiver
46C3447 IBM SFP+ SR Transceiver
90Y9412 IBM SFP+ LR Transceiver
81Y1622 IBM SFP SX Transceiver
81Y1618 IBM SFP RJ-45 Transceiver
90Y9424 IBM SFP LX Transceiver
49Y7884 IBM QSFP+ 40Gbase-SR transceiver (requires either cable 90Y3519 or cable 90Y3521)
90Y9427 1m IBM Passive DAC SFP+
90Y9430 3m IBM Passive DAC SFP+
90Y9433 5m IBM Passive DAC SFP+
49Y7886 1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable
49Y7887 3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable
49Y7888 5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable
90Y3519 10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)
90Y3521 30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)
49Y7890 1m QSFP+ to QSFP+ DAC
49Y7891 3m QSFP+ to QSFP+ DAC
Connector and cable options for EN4093 and EN4093R
Figure 7-30. IBM Flex System FC3171 8Gb SAN switch and pass-thru NGT113.0
Notes:
The IBM Flex System FC3171 8Gb SAN switch is from QLogic. The FC3171 8Gb SAN switch can operate as a pass-thru module when it is configured in transparent mode via the CLI or GUI. There are no port licensing requirements in either switch or pass-thru mode.
IBM Flex System FC3171 8Gb SAN switch and pass-thru
Physical features
14 x FC internal ports
6 x FC SFP+/SFP external ports
2 x 1 Gb internal management port
1 x RJ45 100Mb management port
1 x mini-USB RS-232 serial port
FoD scalability
N/A
Features
Web interface through QuickTool
Full fabric mode (p/n 69Y1930)
Transparent mode (p/n 69Y1934)
No port licensing for switch or pass-thru
4 Gb / 8 Gb support
Figure 7-31. FC3171 supported SFP modules NGT113.0
Notes:
The first feature code listed is for configurations ordered through System x sales channels. The
second feature code is for configurations ordered through the IBM Power Systems channel.
There are no SFP transceivers supplied as standard. The SFP modules and cables listed in the
table are supported.
There are no SFP modules supplied with the switch. These supported
options can be purchased.
Part number Feature codes (x / Power) Description
44X1964 5075 / 3286 IBM 8Gb SFP+ SW Optical Transceiver
39R6475 4804 / 3238 4 Gb SFP Transceiver Option
FC3171 supported SFP modules
Figure 7-32. Cable options for FC3171 NGT113.0
Notes:
The connectors and cabling suitable for FC3171 are shown.
Part number Description
39M5696 IBM 1m LC-LC Fibre Channel Cable
39M5697 IBM 5m LC-LC Fibre Channel Cable
39M5698 IBM 25m LC-LC Fibre Channel Cable
Cable options for FC3171
Figure 7-33. IBM Flex System networking topics NGT113.0
Notes:
This section covers some important features supported by the IBM Flex System networking
components.
IBM Flex System I/O architecture
IBM Flex System I/O adapters
IBM Flex System Scalable Switch Elements (ScSE)
IBM Flex System networking features
Basic switch administration
IBM Flex System networking topics
Figure 7-34. VLAN NGT113.0
Notes:
In order for a VLAN to be supported over multiple switches, there must be a way for each frame that
passes between switches to be associated with a particular VLAN. But there is no place in the
standard LAN frame to carry a VLAN identifier. So VLAN trunking protocols were created to carry
VLAN identifiers in LAN frames.
Cisco supports the use of two trunking protocols: ISL (Cisco's proprietary Inter-Switch Link) and 802.1Q, the IEEE standard. Cisco recommends the use of 802.1Q wherever possible. However, some older Cisco switches only support ISL.
The 802.1Q standard is based on a frame-tagging mechanism that works over IEEE LANs
(Ethernet, token ring, and so on) and FDDI. The standard defines a means of tagging frames with
VLAN information. This tag is on frames sent between switches (and sometimes routers) and
permits vendor interoperability for VLANs.
All switch modules for the Enterprise Chassis support the 802.1Q protocol for VLAN tagging.
Three departments, three VLANs
VLANs are identified by a unique number
An access port is on only one VLAN
Trunks carry traffic for all VLANs
Use the 802.1Q trunking protocol to mark frames
VLAN
Figure 7-35. VLAN tagging NGT113.0
Notes:
IBM Networking OS software supports 802.1Q VLAN tagging, providing a standards-based VLAN
support for Ethernet systems.
Tagging places the VLAN identifier in the frame header of a packet, allowing each port to belong to
multiple VLANs. When you add a port to multiple VLANs, you must then enable tagging on that
port.
Since tagging fundamentally changes the format of the frames transmitted on a tagged port, you
must carefully plan network designs to prevent tagged frames from being transmitted to devices
that do not support 802.1Q VLAN tags, or devices where tagging is not enabled.
The VLAN tagging information is a 32-bit field (VLAN tag) in the frame header that identifies the
frame as belonging to a specific VLAN.
The switches perform switching using the tag information.
In addition to this VLAN tag on the frames, the ports can also be tagged or untagged members of a
particular VLAN. This configuration is called PVID.
Each port in the switch has a configurable default VLAN number, known as its PVID. By default, the
PVID for all non-management ports is set to 1, which correlates to the default VLAN ID.
IBM switches support 802.1Q VLAN tagging.
VLAN tagging means a VLAN ID is inserted into a frame header
identifying to which VLAN the packet belongs.
VLAN tagging allows each port to belong to multiple VLANs.
Each port in the switch has a configurable default VLAN number, known
as its PVID.
When a port is added to multiple VLANs, tagging on the port must be
enabled.
Frame header layout: DA, SA, Tag, Data, CRC. The tag consists of the TPID 8100 (16 bits), Priority (3 bits), CFI (1 bit), and the VID (12 bits); in this example, VID = 2.
VLAN tagging
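The 32-bit tag layout described above can be sketched in a few lines of Python. The field widths (16-bit TPID 0x8100, then a 3-bit priority, 1-bit CFI, and 12-bit VLAN ID) follow the frame header shown; this is an illustration only, since real switches insert and strip the tag in hardware on tagged ports.

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier, as shown in the frame header

def pack_vlan_tag(vid, priority=0, cfi=0):
    """Build the 32-bit 802.1Q tag for a VLAN ID in the valid 1-4094 range."""
    if not 1 <= vid <= 4094:
        raise ValueError("VID must be between 1 and 4094")
    tci = (priority << 13) | (cfi << 12) | vid  # 3 + 1 + 12 bits
    return struct.pack("!HH", TPID, tci)

def unpack_vlan_tag(tag):
    """Recover (priority, cfi, vid) from a packed 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF

tag = pack_vlan_tag(vid=2)   # the VID = 2 example from the figure
print(tag.hex())             # 81000002
print(unpack_vlan_tag(tag))  # (0, 0, 2)
```

Note that the VID range check matches the PVID range discussed below: VLAN numbers run from 1 to 4094.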
The PVID for each port can be configured to any VLAN number between 1 and 4094.
Each port on the switch can belong to one or more VLANs, and each VLAN can have any number
of switch ports in its membership. Any port that belongs to multiple VLANs, however, must have
VLAN tagging enabled.
Figure 7-36. Stacking NGT113.0
Notes:
As of IBM Networking OS v7.5, the EN4093R supports switch stacking. A stack is a group of up to eight switches with IBM Networking OS that work together as a unified system and are managed as a single entity.
The EN4093R switch offers additional capabilities over the EN4093 when in stacking mode, including vNIC, Edge Virtual Bridging (EVB), and CEE/FCoE. It is ideal for clients looking to implement a converged infrastructure with NAS, iSCSI, or FCoE.
As of IBM Networking OS v7.7, support includes the ability to enable a hybrid stack of two CN4093
10Gb Converged Scalable Switches with up to six EN4093R 10Gb Scalable Switches in order to
add FCF capability into the stack.
A stack has the following properties, regardless of the number of switches included:
The network views the stack as a single entity.
The stack can be accessed and managed as a whole using standard switch IP interfaces.
After the stacking links have been established, the number of ports available in a stack equals
the total number of remaining ports of all the switches that are part of the stack.
A stack is a group of common switches that work together as a
unified system and are managed as a single entity.
Features:
The network views the stack as a single entity.
The stack can be accessed and managed as a whole using standard
switch IP interfaces.
After stacking, the number of ports available in a stack equals the total
number of available ports of all the switches that are part of the stack.
Stacking
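The stacking limits quoted above (up to eight switches in total; as of v7.7, a hybrid stack of up to two CN4093s with up to six EN4093Rs) can be sketched as a simple check. This interprets the quoted limits strictly and is an illustration only, not a configuration tool.

```python
def valid_stack(en4093r=0, cn4093=0):
    """Check a proposed stack against the limits described above:
    one to eight switches in total, and in a hybrid stack at most
    two CN4093s combined with at most six EN4093Rs."""
    total = en4093r + cn4093
    if not 1 <= total <= 8:
        return False  # a stack holds between one and eight switches
    if cn4093 > 0 and (cn4093 > 2 or en4093r > 6):
        return False  # hybrid stack limits from IBM Networking OS v7.7
    return True

print(valid_stack(en4093r=8))            # True: eight EN4093Rs
print(valid_stack(en4093r=6, cn4093=2))  # True: maximal hybrid stack
print(valid_stack(en4093r=7, cn4093=2))  # False: nine switches in total
```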
The number of available IP interfaces, VLANs, trunks, trunk links, and other switch attributes
are not aggregated among the switches in a stack. The totals for the stack as a whole are the
same as for any single switch configured in stand-alone mode.
Figure 7-37. VLAG versus STP NGT113.0
Notes:
In many data center environments, downstream servers or switches connect to upstream devices
which consolidate traffic.
A switch in the access layer may be connected to more than one switch in the aggregation layer in
order to provide for network redundancy. Typically, Spanning Tree Protocol (STP/PVST+, RSTP,
PVRST, or MSTP) is used to prevent broadcast loops by blocking redundant uplink paths. This has
the unwanted consequence of reducing the available bandwidth between the layers by as much as
50%. In addition, STP may be slow to resolve topology changes that occur during a link failure and
can result in considerable MAC address flooding.
The default STP mode in IBM Flex System switches is PVRST.
With Virtual Link Aggregation Groups (VLAGs) the redundant uplinks remain active, thereby
utilizing all available bandwidth.
STP topology:
Blocked ports reduce network bandwidth
Slow convergence on link failure with STP
Topology changes result in MAC flooding due to MAC flush
MSTP solves the problem partially: some links are still blocked, and it is not suitable for deployments with few VLANs
vLAG topology:
No blocked ports
Faster and predictable convergence during a link failure
No unnecessary MAC flooding
Standards based: uses LACP or static trunking
STP can be used with vLAG, if required
VLAG versus STP
Figure 7-38. Virtual Router Redundancy Protocol NGT113.0
Notes:
Virtual Router Redundancy Protocol (VRRP) enables redundant router configurations within a LAN, providing alternate router paths for hosts and eliminating the single point of failure within a network. Each participating routing device using the VRRP function is configured with the same virtual router IPv4 address and ID number. One of the routing devices is elected as the master router and controls the shared virtual router IPv4 address. If the master fails, one of the backup routing devices takes control of the virtual router IPv4 address and actively processes traffic sent to it. All IBM Flex System Ethernet-capable networking switches support VRRP.
All IBM Flex System Ethernet switches support VRRP.
Key features of VRRP:
Network availability in case the default gateway fails
Master-backup architecture
If the master fails, one of the backup switches takes control of the virtual router IPv4 address and actively processes traffic sent to it
Example: Router 1 is master for virtual router 1 and backup for virtual router 2, while Router 2 is backup for virtual router 1 and master for virtual router 2. Client 1 uses Router 1's IP as its default gateway; Client 2 uses Router 2's IP.
Virtual Router Redundancy Protocol
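The failover behavior described above can be sketched as a small simulation: routers sharing a virtual router ID elect a master, and a backup takes over the virtual IPv4 address when the master fails. The names and priority values here are illustrative; a real VRRP implementation elects the master through advertisements and a configured priority.

```python
# Minimal sketch of VRRP master election and failover, as described above.
def elect_master(routers):
    """Return the name of the highest-priority router that is still up."""
    alive = [r for r in routers if r["up"]]
    return max(alive, key=lambda r: r["priority"])["name"] if alive else None

virtual_router_1 = [
    {"name": "Router 1", "priority": 120, "up": True},  # intended master
    {"name": "Router 2", "priority": 100, "up": True},  # backup
]

print(elect_master(virtual_router_1))  # Router 1 holds the virtual IP

virtual_router_1[0]["up"] = False      # the master fails...
print(elect_master(virtual_router_1))  # Router 2 takes over the virtual IP
```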
Figure 7-39. VMready NGT113.0
Notes:
VMready enables the network to be virtual machine aware. The network can be configured and
managed for virtual ports (v-ports) rather than just for physical ports. VMready allows for a
define-once-use-many configuration. That means the network attributes are bundled with a v-port.
The v-port belongs to a VM, and is movable. Wherever the VM migrates, even to a different
physical host, the network attributes of the v-port remain the same.
The hypervisor manages the various virtual entities (VEs) on the host server. A VE can consist of
virtual machines (VMs) and virtual switches.
VMready works with all major virtualization products, including VMware, Hyper-V, Xen, and KVM
and Oracle VM, without modification of virtualization hypervisors or guest operating systems. A
VMready switch can also connect to a virtualization management server to collect configuration
information about associated VEs. It can automatically push VM group configuration profiles to the
virtualization management server. This process in turn configures the hypervisors and VEs,
providing enhanced VE mobility.
VMready 4.0 supports the Edge Virtual Bridging (IEEE 802.1Qbg) standards. An IBM Ethernet
switch can be configured to support VMready Edge Virtual Bridging (IEEE 802.1Qbg) or the original
VMready 3.0, but not both.
VMready
All IBM Flex System Ethernet switches support VMready.
Figure 7-40. Virtual NICs NGT113.0
Notes:
In a virtual fabric, physical I/O bandwidth is partitioned into multiple units called vNICs, each a virtualized instance of the physical interface. Each vNIC is seen by a compute node host operating system or virtual machine as a network interface. Rather than eight physical connections between the adapter and the switch, there are two 10 Gb interfaces, and each interface can be presented to the operating system as four physical interfaces.
If a virtual fabric aware switch is utilized, the switch can configure and manage the vNICs presented to it, and vNICs can have a portion of their allocated bandwidth dynamically adjusted to another vNIC. In an environment not utilizing a virtual fabric aware switch, the creation and allocation of vNICs can be done via UEFI.
Virtualizing the NIC helps to resolve issues caused by limited NIC slot availability. By virtualizing a 10 Gbps NIC, its resources can be divided into multiple logical instances known as vNICs. Each vNIC appears as a regular, independent NIC to the server operating system or a hypervisor, with each vNIC using a portion of the physical NIC's overall bandwidth. Each vNIC can be allocated a bandwidth between 100 Mb and 10 Gb in increments of 100 Mb, such that the total of all four vNICs does not exceed 10 Gb.
Virtualizes Layer 1 of the OSI model by virtualizing the physical interface
Each 10 Gb physical port can offer up to four virtual ports (vNICs)
Each vNIC appears as an individual adapter to the operating system
Each vNIC allocates bandwidth in increments of 100 Mb
Customers can run protocols like iSCSI or FCoE on the vNICs
Example: a dual-port 10Gb virtual fabric mezzanine card (mezzanine card connector ports P1-P8) with port 1 divided into vNICs of 5 Gb, 600 Mb, 2.2 Gb, and 2.2 Gb, and port 2 divided into vNICs of 1 Gb, 3 Gb, 2.5 Gb, and 2.5 Gb
Virtual NICs
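The allocation rules above (up to four vNICs per port, 100 Mb minimum in 100 Mb increments, total per 10 Gb port not exceeding 10 Gb) can be checked with a short sketch. The per-port splits are the example values from the figure; the helper itself is illustrative, not an IBM tool.

```python
def valid_vnic_allocation(vnics_mb, port_capacity_mb=10_000):
    """Check the vNIC rules described above: at most four vNICs per port,
    each at least 100 Mb in 100 Mb increments, and the total not exceeding
    the port's 10 Gb capacity."""
    return (len(vnics_mb) <= 4
            and all(bw >= 100 and bw % 100 == 0 for bw in vnics_mb)
            and sum(vnics_mb) <= port_capacity_mb)

# The per-port splits shown in the figure, in Mb:
print(valid_vnic_allocation([5000, 600, 2200, 2200]))   # True: exactly 10 Gb
print(valid_vnic_allocation([1000, 3000, 2500, 2500]))  # True: 9 Gb total
print(valid_vnic_allocation([5000, 5000, 2000]))        # False: over 10 Gb
print(valid_vnic_allocation([250]))                     # False: not a 100 Mb step
```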
EN4093/R and CN4093 support the Virtual Fabric mode. Among the adapters, IBM Flex System
CN4054 4-port 10Gb Ethernet Adapter and the 10Gb LOM support Virtual Fabric.
Figure 7-41. Unified Fabric Port NGT113.0
Notes:
Unified Fabric Port (UFP) is another approach to NIC virtualization, similar to vNIC but with enhanced flexibility, and should be considered the direction for future development in the virtual NIC area for IBM switching solutions. UFP is supported today on the EN4093 and EN4093R 10Gb Scalable Switches and the CN4093 10Gb Converged Scalable Switch. UFP and vNIC are quite similar features, but both cannot be active at the same time on the same switch.
By default, each vPort is guaranteed 2.5 Gb and can burst up to the full 10 Gb if other vPorts do not need the bandwidth. The guaranteed minimum bandwidth and maximum bandwidth for each vPort are configurable.
Unified Fabric Port
UFP: next generation of vNIC
Supported on IBM Flex System Fabric EN4093, EN4093R, and CN4093 switches
Used to divide a 10 Gb connection to a server into separately controllable virtual connections
Provides up to four virtual ports per 10 Gb NIC connection
Utilizes LLDP behind the scenes to communicate between the switch and NIC
Figure 7-42. Network convergence NGT113.0
Notes:
At the server level, with Converged Network Adapters (CNAs), Fibre Channel traffic is encapsulated into Fibre Channel over Ethernet (FCoE) frames, and these FCoE frames are converged with networking or clustering traffic. FCoE is the transport, or mapping, of encapsulated FC frames over Ethernet. Within the fabric, FCoE-capable switches pass Fibre Channel traffic to the attached SANs and Ethernet traffic to the attached Ethernet network. These switches support Converged Enhanced Ethernet.
The CN4093 is a full-fledged converged network switch providing OmniPorts, which can be used for either Ethernet or FC fabric connections on the same ports.
Network convergence
Figure 7-43. IBM Flex System networking topics NGT113.0
Notes:
This section covers the methods for basic switch administration in IBM Flex System I/O modules. You will identify the different ways to access the switch and also learn about the various images that you can have.
IBM Flex System I/O architecture
IBM Flex System I/O adapters
IBM Flex System Scalable Switch Elements (ScSE)
IBM Flex System networking features
Basic switch administration
IBM Flex System networking topics
Figure 7-44. Accessing the switch NGT113.0
Notes:
The switch has a variety of user interfaces for administration, including:
A built-in, text-based command-line interface and menu system for access through a serial-port connection or an optional Telnet or SSH session
The built-in Browser-Based Interface (BBI), available using a standard web browser
SNMP support for access through network management software, such as Flex System Manager in the case of IBM PureFlex
The IBMNOS-CLI provides a simple, direct method for switch administration. Using a basic terminal, you are presented with an organized hierarchy of menus, each with logically related sub-menus and commands. These allow you to view detailed information and statistics about the switch, and to perform any necessary configuration and switch software maintenance.
Access methods: Web, Telnet, SSH, SNMP, Serial
Command-line interface: serial port connection, Telnet, SSH
Browser-based interface: HTTP, HTTPS
SNMP: FSM, or other network management software like SNEM
Accessing the switch
Figure 7-45. Configure internal management port from the CMM NGT113.0
Notes:
These are the steps to assign an IP address to each of the chassis switches from the CMM web interface. You can also use the CMM command-line interface to assign addresses.
1. Log in to the CMM with the USERID account.
2. Go to the Chassis Management menu and select Component IP Configuration.
3. Click the device name of the switch you want to assign an IP address to, and go to the IPv4 tab.
4. Enter the IP address information and click Apply.
5. Go to the Chassis Management menu and select I/O Modules.
6. Select the desired switch, then select Power and Restart from the drop-down list to restart it.
Configure internal management port from the
CMM
Figure 7-46. I/O module management console access NGT113.0
Notes:
Once the switch is configured with an IP address and gateway, you can use SSH or HTTPS to perform switch administration from any workstation connected to the management network.
The Secure Shell (SSH) protocol enables you to securely log into another device over a network to execute commands remotely. As a secure alternative to using Telnet to manage switch configuration, SSH ensures that all data sent over the network is encrypted and secure.
Switch management is available by way of a browser and by way of the CLI.
By default, SSH and HTTPS access is enabled.
Default security policy is set to Secure.
HTTPS
SSH
I/O module management console access
Figure 7-47. Launch browser console from CMM NGT113.0
Notes:
From the CMM, you can establish a browser-based UI session with the I/O module switches. By default, the switch GUI is launched using HTTPS.
The interface IP can be IPv4 or IPv6 (IPv4 is the default).
Launch browser console from CMM
Figure 7-48. Launch browser console from FSM NGT113.0
Notes:
From the FSM, you can establish a browser-based UI session with the I/O module switches.
Navigate to the FSM's Chassis Manager tab, select the switch from the chassis map, and click Launch Web Browser from the Common Actions menu.
Launch browser console from FSM
Figure 7-49. Browser-based interface NGT113.0
Notes:
The network administrator can access switch configuration and monitoring functions through the
BBI, a web-based switch management interface. The BBI has the following features:
Many of the same configuration and monitoring functions as the command-line interface
Intuitive and easy-to-use interface structure
Password protection
Nothing to install; the BBI is part of the IBM Networking OS switch software
Automatically upgraded with each new software release
The toolbar is used for selecting the context for your actions in the other windows. The navigation
window is used for selecting particular items or features to act upon. The forms window is used for
viewing or altering switch information. The message window is used for displaying the most recent
switch syslog messages and events.
There are four main
regions on the IBM
Networking OS BBI
screen:
The toolbar
The navigation window
The forms window
The message window
Browser-based interface
Figure 7-50. Industry standard command-line interface NGT113.0
Notes:
The ISCLI provides a direct method for collecting switch information and performing switch
configuration. Using a basic terminal, the ISCLI allows you to view information and statistics about
the switch, and to perform any necessary configuration. The ISCLI is the default CLI on standalone switches, but is not currently the default on the embedded switches.
The ISCLI has three major command modes listed in order of increasing privileges.
User EXEC mode is the initial mode of access. By default, password checking is disabled for this
mode, on console.
Privileged EXEC mode is accessed from User EXEC mode. This mode can be accessed using the
command: enable.
Global Configuration mode allows you to make changes to the running configuration. If you save
the configuration, the settings survive a reload. Several sub-modes can be accessed from the
Global Configuration mode.
Each mode provides a specific set of commands. The command set of a higher-privilege mode is a superset of a lower-privilege mode: all lower-privilege mode commands are accessible when using a higher-privilege mode. The ISCLI is similar in look and feel to Cisco IOS commands, but some commands are different to varying degrees.
Three major command modes, listed in order of increasing privileges:
User EXEC mode
Privileged EXEC mode
Global Configuration mode
Command-line help is similar to Cisco IOS:
A question mark (?) shows you the commands available.
"command ?" displays the command's options.
"command?" (no space) lists commands that begin with "command".
Pressing Tab before fully keying in a command will complete the command.
Industry standard command-line interface
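The superset relationship between the modes can be illustrated with plain set operations. The command names below are a small illustrative sample, not the complete ISCLI command set.

```python
# Illustrative model of the ISCLI mode hierarchy described above: each
# higher-privilege mode's command set is a superset of the mode below it.
user_exec = {"show version", "ping", "enable", "exit"}
privileged_exec = user_exec | {"configure terminal", "reload",
                               "copy running-config startup-config"}
global_config = privileged_exec | {"hostname", "vlan", "interface port"}

# Lower-privilege commands remain available at every higher privilege:
print(user_exec <= privileged_exec <= global_config)  # True
print(sorted(privileged_exec - user_exec))            # commands gained by 'enable'
```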
Figure 7-51. IBM Networking OS command-line interface NGT113.0
Notes:
The IBM Networking OS command-line interface (IBMNOS-CLI) is used for viewing switch
information, statistics and for performing all levels of switch configuration for administration.
To make the CLI easy to use, the various commands have been logically grouped into a series of
menus and sub-menus. Each menu displays a list of commands and/or sub-menus that are
available, along with a summary of what each command will do. Below each menu is a prompt
where you can enter any command appropriate to the current menu.
Enter password: admin
------------------------------------------------------------
[Main Menu]
info - Information Menu
stats - Statistics Menu
cfg - Configuration Menu
oper - Operations Command Menu
boot - Boot Options Menu
maint - Maintenance Menu
diff - Show pending config changes [global command]
apply - Apply pending config changes [global command]
save - Save updated config to FLASH [global command]
revert - Revert pending or applied changes [global command]
exit - Exit [global command, always available]
>> Main#
IBM Networking OS command-line interface
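As a sketch of the menu navigation (the prompts shown are illustrative), a change made under the Configuration Menu remains pending until the global apply command makes it active, and is only written to FLASH by save:

```
>> Main# cfg                 (enter the Configuration Menu)
>> Configuration# ...        (make changes in the appropriate sub-menu)
>> Configuration# diff       (show pending config changes)
>> Configuration# apply      (apply pending config changes)
>> Configuration# save       (save updated config to FLASH)
```

The diff, apply, save, and revert commands are global, so they can be entered from any menu.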
Figure 7-52. Flex System I/O module login levels NGT113.0
Notes:
Users may display information that has no security or privacy implications, such as switch statistics
and current operational state information.
Operators can only effect temporary changes. These changes will be lost when the switch is
rebooted/reset. Operators have access to the switch management features used for daily switch
operations.
Administrators are the only ones that may make permanent changes to the switch configuration.
They can effect changes that are persistent across a reboot/reset of the switch. Administrators can
access switch functions to configure and troubleshoot problems.
With the exception of the admin user, access to each user level can be disabled by setting the
password to an empty value.
By default, the oper user is disabled. The USERID user has admin privileges.
The basic users of an I/O
module are listed.
It is recommended that the
default switch passwords are
changed after initial
configuration and as regularly
as required under local network
security policies.
User account   Password
admin          admin
USERID         Passw0rd
oper           oper
user           user
Commands to set each password:
access user user-password
access user operator-password
access user administrator-password
Flex System I/O module login levels
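As an illustrative sketch (the prompts and dialog text are hypothetical; the access user command names are those listed above), an administrator could change the default admin password from Global Configuration mode and make the change persistent:

```
(config)# access user administrator-password
Enter current administrator password:
Enter new administrator password:
Re-enter new administrator password:
(config)# exit
# copy running-config startup-config
```

Changing the default passwords in this way, and repeating it as local security policy requires, follows the recommendation above.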
Figure 7-53. Glossary NGT113.0
Notes:
Check your understanding of the terms used in this unit.
ScSE: Scalable Switch Elements
CMM: Chassis Management Module
FSM: Flex System Manager
Node: Server
FC: Fibre Channel
EN: Ethernet
SPAR: Switch partitions
SFP: Small form-factor pluggable
QSFP: Quad small form-factor pluggable
VLAN: Virtual local area network
vLAG: Virtual link aggregation group
STP: Spanning Tree Protocol
NIC: Network interface card
VRRP: Virtual Router Redundancy Protocol
Glossary
Figure 7-54. Checkpoint NGT113.0
Notes:
Write your answers here:
1.
2.
3.
1. Identify the component shown here.
a. Compute node
b. ScSE
c. CMM
d. ETE expansion unit
2. What I/O console access protocol is available by default on IBM
PureFlex scalable switches?
a. HTTP and SSH
b. HTTP and Telnet
c. HTTPS and Telnet
d. HTTPS and SSH
3. True or False: SFP transceivers are usually preferred to direct
attach cables for long distance connections.
Checkpoint
Figure 7-55. Unit summary NGT113.0
Notes:
Having completed this unit, you should be able to:
Summarize the I/O architecture of IBM Flex System
Recognize the IBM Flex System I/O adapters
Recognize the IBM Flex System I/O modules
Identify some of the key features supported by IBM Flex
System networking components
Identify the different methods to perform switch administration
Unit summary
Appendix A. Checkpoint solutions
Unit 1, "IBM PureSystems and IBM Flex System"
Solutions for Figure 1-50, "Checkpoint (1 of 2)," on page 1-58
Checkpoint solutions (1 of 2)
1. Which of the following IBM PureApplication System X-Architecture
models offers 96 - 608 cores options?
a. IBM PureApplication W1500 (25U)
b. IBM PureApplication W1500 (42U)
c. IBM PureApplication W1700 (42U)
d. Both the IBM PureApplication W1500 (42U) and
IBM PureApplication W1700 (42U)
The answer is IBM PureApplication W1500 (42U), which is an Intel X-
Architecture compute node model that offers distinct configurations of
96 to 608 cores.
2. Which of the following IBM PureFlex offerings is shipped standard
with the IBM Flex System Manager Advanced?
a. IBM Flex System Express
b. IBM Flex System Enterprise
c. IBM Flex System Chassis
The answer is IBM Flex System Express.
Solutions for Figure 1-51, "Checkpoint (2 of 2)," on page 1-59
Checkpoint solutions (2 of 2)
3. Which of the following is an example of an IBM
PureApplication platform pattern of expertise?
a. Operating system provisioning to compute nodes
b. SAP application deployment
c. Web application deployment
d. Automated configuration of compute nodes
The answer is web application deployment.
4. Which of the following storage options provides direct-attach
local storage to a compute node?
a. IBM Flex System Storage Expansion Node
b. IBM Flex System Storwize V7000 Unified
c. IBM Flex System V7000 Storage Node
The answer is the IBM Flex System Storage Expansion Node, which
attaches to the X-Architecture x220 and x240 compute nodes.
Unit 2, "IBM Flex System Enterprise Chassis"
Solutions for Figure 2-63, "Checkpoint," on page 2-73
Checkpoint solutions
1. The IBM Flex System Enterprise Chassis supports which of the following
combinations of compute nodes?
a. Four full-width compute nodes
b. Fourteen half-width compute nodes
c. Three full-width compute nodes and five half-width compute nodes
d. All of the above
The answer is all of the above.
2. In an N+N power configuration using 2100 W power supplies, the minimum
number of power supplies required to support four x240 compute nodes is:
a. Two
b. Three
c. Four
d. Six
The answer is two.
3. In a base configuration of the IBM Flex System Enterprise Chassis, how many 80 mm
fans are installed to support up to four half-width compute nodes?
a. Two
b. Four
c. Six
d. Eight
The answer is four.
Unit 3, "IBM Flex System Manager"
Solutions for Figure 3-25, "Checkpoint (1 of 3)," on page 3-27
Checkpoint solutions (1 of 3)
1. Management of multiple IBM Flex System chassis is done by
which component?
a. Chassis Management Module
b. Flex System Manager
c. Integrated Management Module
d. Flexible Service Processor
The answer is Flex System Manager
2. True or False: The IBM Flex System Manager is cabled
directly to a top of rack switch for connection to the
management network.
The answer is false
Solutions for Figure 3-26, "Checkpoint (2 of 3)," on page 3-28
Checkpoint solutions (2 of 3)
3. Virtual resource management is a function of which IBM Flex System
management component?
a. Integrated Management Module
b. Flexible Service Processor
c. Chassis Management Module
d. Flex System Manager
The answer is Flex System Manager
4. Which of the following statements is correct about the hardware
components of the IBM Flex System Manager node?
a. The node is based on the p260 Compute Node
b. The node includes two 200 GB SAS drives
c. The node includes 32 GB of RAM memory
d. The node includes a Fibre Channel mezzanine card
The answer is the node includes 32 GB of RAM memory
Solutions for Figure 3-27, "Checkpoint (3 of 3)," on page 3-29
Checkpoint solutions (3 of 3)
5. True or False: Optimization of virtual resources with pool management
for a Power compute node in the IBM Flex System is only available
with the IBM Flex System Manager Advanced feature.
The answer is true
6. If an IBM Flex System Manager has an ETH1 network
adapter configured, which of the following is correct?
a. The FSM management and data networks are intended to be separated.
b. The FSM management and data networks are intended to be merged.
c. There is FSM access only to the management network.
d. There is FSM access only to the data network.
The answer is the FSM management and data networks are intended
to be separated.
Unit 4, "IBM Flex System X-Architecture compute nodes"
Solutions for Figure 4-116, "Checkpoint," on page 4-138
Checkpoint solutions
1. The IBM Flex System x220 Compute Node supports which of the following
combinations of storage devices?
a. 2.5-inch HDDs
b. 2.5-inch HDDs and 2.5-inch SSDs
c. 2.5-inch HDDs, 2.5-inch SSDs, and 1.8-inch SSDs
d. All of the above
The answer is all of the above.
2. The embedded Virtual Fabric Adapter in the x222 requires that both Ethernet
switches installed in bay 1 and 2 have Upgrade 1 enabled. Which of the following
will happen if Upgrade 1 is not enabled?
a. The x222 compute node will not power on
b. The upper server will lose Ethernet connectivity
c. The lower server will lose Ethernet connectivity
d. Both B and C
The answer is the upper server will lose Ethernet connectivity.
3. To attach the IBM Flex System PCIe Expansion Node to an IBM Flex System
x240 Compute Node, which of the following must also be installed?
a. At least 64 GB of memory in the x240 Compute Node
b. Two I/O adapter cards in the PCIe expansion node
c. Two processors in the x240 Compute Node
d. Both two I/O adapter cards in the PCIe expansion node and two processors in the x240
Compute Node
The answer is two processors in the x240 Compute Node.
Unit 5, "IBM Power Systems compute nodes"
Solutions for Figure 5-37, "Checkpoint (1 of 2)," on page 5-47
Checkpoint solutions (1 of 2)
1. Which of the following managers can be used to manage a
p260 or p460 Power node?
a. HMC
b. FSM
c. SDMC
d. IVM
The answers are HMC, FSM, and IVM.
2. The maximum memory on a p460 is 1024 GB, and the
maximum number of cores on a p460 is 32.
The answers are 1024 GB and 32.
Solutions for Figure 5-38, "Checkpoint (2 of 2)," on page 5-48
Checkpoint solutions (2 of 2)
3. True or False: All virtual servers on a p460 must run the
same operating system from a common data store.
The answer is false.
4. Name the three resource types that are assigned to virtual
servers:
The answers are processor, memory, and I/O slots.
5. What is the name of the appliance that enables virtual
servers to share physical resources?
The answer is Virtual I/O Server (VIOS).
Unit 6, "IBM Flex System storage"
Solutions for Figure 6-48, "Checkpoint (1 of 2)," on page 6-50
1. What is the maximum number of IBM Flex System V7000 Storage Nodes that can
be installed in a single IBM Flex System configuration?
a. Two
b. Three
c. Four
d. Five
The answer is three.
2. How many bays does the IBM Flex System V7000 Storage Node occupy in the
IBM Flex System Enterprise Chassis?
a. One
b. Two
c. Four
The answer is four.
3. True or False: During the Flex System V7000 Initial Setup, there are three
methods that can be used to set up the system.
The answer is false. There are two methods that can be used to set up the
system.
Checkpoint solutions (1 of 2)
Solutions for Figure 6-49, "Checkpoint (2 of 2)," on page 6-51
4. True or False: The IBM Flex System V7000 Storage Node is
based on two enclosures.
The answer is true. The IBM Flex System V7000 Storage
Node is based on the expansion and control enclosures.
5. Identify the advanced features and functions that are
included at no charge with the Flex System V7000.
a. Thin provisioning
b. Real-time Compression
c. Easy Tier
d. Remote Mirroring
e. External virtualization
The answers are thin provisioning and Easy Tier.
Checkpoint solutions (2 of 2)
Unit 7, "IBM Flex System networking"
Solutions for Figure 7-54, "Checkpoint," on page 7-61
Checkpoint solutions
1. Identify the component shown here.
a. Compute node
b. ScSE
c. CMM
d. ETE expansion unit
The answer is CMM.
2. What I/O console access protocol is available by default on IBM
PureFlex scalable switches?
a. HTTP and SSH
b. HTTP and Telnet
c. HTTPS and Telnet
d. HTTPS and SSH
The answer is HTTPS and SSH.
3. True or False: SFP transceivers are usually preferred to direct attach
cables for longer distance connections.
The answer is true.