Network Design Considerations in a Blade Server Environment - LAN & Storage



Session Number: BRKDCT-2869

Session Abstract

This session covers design considerations for blade-server-centric deployments: interconnect considerations and topology options for both LAN and storage technologies. Key blade-server-specific features are covered along with installation instructions. Various deployment scenarios are described with pros and cons for each design, and tools for simplifying deployment are described and demonstrated.


Session Objectives

• After completing this session you will be able to:
 - Understand the architecture and deployment of the blade switches
 - Understand topology options
 - Identify key edge features and IOS commands
 - Manage and monitor Ethernet blade switches via Device Manager and CNA
 - Understand key Fibre Channel edge features and architecture

Outline
• Network Design for Blade Servers
 – DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
 – L2 vs. L3 inside the Blade Enclosure
 – Single- vs. Two-tier Access
 – Services in a Blade Server deployment
 – NIC teaming designs
 – Virtual Server Deployment on blade servers
• Architectural Considerations for VBS Technology
 – Overview & Benefits of Virtualization Technology
 – Deployment scenarios – Pros & Cons
• SAN
 – Key issues faced by customers
 – Solutions: NPV, FlexAttach

View of Data Center Networking
The BIG Picture

[Figure: the end-to-end enterprise data center. LAN/MAN/WAN switching feeds integrated network services (server load balancing, VPN termination, SSL termination, firewall services, intrusion detection), now also found inside the BladeSystem. Integrated storage services include a multiprotocol gateway, virtual fabrics (VSANs), storage virtualization, data replication services, SAN switching, and fabric routing services. Integrated virtualization services cover server virtualization, virtual I/O, and grid/utility computing. The Topspin enterprise server fabric family provides low-latency switching, RDMA services, and clustering. Storage spans high-end disk, tape, and mid-tier arrays; servers span Tiers 1 through 3.]

Design with Pass-Thru Module and Modular Access Switch

• Cable density
• Rack example:
 - Four enclosures per rack
 - Up to 16 servers per enclosure
 - 32 LOMs + 16 service NICs per enclosure
 - 192 available access ports
 - Requires structured cabling to support 192 connections per blade rack
 - A single pair of access switches supports 12 blade server enclosures (three racks)

[Figure: modular access switches connected to a blade server rack over Gigabit Ethernet]


Design with Pass-Thru Module and Modular Access Switch

Does this look manageable? How do I find and replace a bad cable?

Design with Pass-Thru Module and Top-of-Rack (ToR) Switches

• High cable density within the rack
• High-capacity uplinks provide aggregation-layer connectivity
• Rack example:
 - Up to four blade enclosures per rack
 - Up to 128 cables for server traffic
 - Up to 64 cables for server management
 - Up to four rack switches support the local blade servers
 - Up to two switches for server management ports
 - Requires up to 192 cables within the rack

[Figure: rack switches with 10 GigE uplinks to the aggregation layer]


Design with Blade Switches

• Reduces cables within the rack
• High-capacity uplinks provide aggregation-layer connectivity
• Rack example:
 - Up to four blade enclosures per rack
 - Two switches per enclosure
 - Either 8 GE or one 10GE uplink per switch
 - Between 8 and 64 cables/fibers per rack
 - Reduces the number of cables within the rack but increases the number of uplinks compared to the ToR solution
 - Based on cable cost, 10GE from the blade switch is the better option

[Figure: 10 GigE or GE uplinks to the aggregation layer]

Design with Virtual Blade Switches

• Removes cables from the rack
• High-capacity uplinks provide aggregation-layer connectivity
• Rack example:
 - Up to four blade enclosures per rack
 - Up to 64 servers per rack
 - Two switches per enclosure
 - One Virtual Blade Switch per rack
 - Two or four 10GE uplinks per rack
 - Reduces the number of access-layer switches by a factor of 8
 - Allows local rack traffic to stay within the rack

[Figure: 10 GigE or GE uplinks to the aggregation layer]


Networking in a Data Center
Each Layer Has a Distinct Role in the Network

Core:
• Layer 3 core
• High speed / bandwidth
• High availability

Aggregation:
• Layer 2/3 edge
• Data center service integration: SSL termination, load balancing, firewall, IPS/IDS, DoS mitigation, caching, traffic analysis

Access:
• High-performance server connectivity
• Port density
• Partitioning with VLANs
• Layer 2 resiliency with STP

Layer 2 Access and Layer 3 Access Compared

[Figure: aggregation-to-access links running Rapid PVST+ or PVST+ over trunks (Layer 2) versus OSPF or EIGRP over L3 links (Layer 3)]

• The choice of one design versus the other has to do with:
 - Layer 2 loops being more difficult to manage than Layer 3 loops
 - Convergence time, link utilization, and specific application requirements
 - Requirements of NIC teaming and clustering
A minimal configuration sketch of both styles follows.
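
To make the comparison concrete, here is a minimal IOS sketch of the two uplink styles; the interface numbers, VLANs, and addressing are illustrative rather than taken from the session.

! Layer 2 access: 802.1Q trunk toward the aggregation switch
spanning-tree mode rapid-pvst
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20

! Layer 3 access: routed uplink plus an IGP (EIGRP shown)
interface TenGigabitEthernet1/0/2
 no switchport
 ip address 10.1.1.1 255.255.255.252
router eigrp 100
 network 10.1.1.0 0.0.0.3
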

When Is Layer 2 Adjacency Required?
Meeting Server Farm Application Requirements

• Clustering: applications often execute on multiple servers clustered to appear as a single device. Common for HA, load balancing, and high-performance computing (HPC) requirements (MS Windows 2003 Advanced Server clustering, Linux Beowulf or proprietary clustering).
• NIC teaming software typically requires Layer 2 adjacency (AFT, SFT, ALB).

Blade NIC Teaming Configurations

• Network Fault Tolerance (NFT)
 - Typically referred to as Active/Standby
 - Used when the server sees two or more upstream switches
 - NIC connectivity is predefined with built-in switches and may limit NIC configuration options
• Transmit Load Balancing (TLB)
 - The primary adapter transmits and receives; secondary adapters transmit only
 - Rarely used
• Switch-Assisted Load Balancing (SLB)
 - Often referred to as Active/Active
 - Server must see the same switch on all member NICs
 - GEC/802.3ad
 - Increased throughput
 - Available with VBS switches (switch-side sketch below)

[Figure: primary/secondary NIC pairs per mode, with active and standby links]
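
On the switch side, the SLB (GEC/802.3ad) case is simply an EtherChannel on the server-facing ports. A minimal sketch, with illustrative port, VLAN, and channel-group numbers:

interface range GigabitEthernet1/0/1 - 2
 description teamed NICs of one blade server
 switchport access vlan 10
 channel-group 5 mode active    ! LACP toward the 802.3ad NIC team
interface Port-channel5
 switchport access vlan 10
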

Datacenter Access Evolution
Customers Have Two Options with Blade Servers

Create a two-level access layer, or maintain a single access layer.

[Figure: two topologies side by side, each with core, aggregation, and access layers; the blade switches either add a second access tier or serve as the single access layer]

Decision based on:
• Existing investment/capacity in the aggregation/access layer
• Size of the spanning-tree domain
• Latency

Blade Server Access Topologies
Different Alternatives:

• V-Topology: very popular topology; some bandwidth not available
• U-Topology: not as popular
• Trunk-Failover Topology: maximum bandwidth available; needs NIC teaming


Reducing STP Complexity with Integrated Switching
Higher Resiliency with "Layer 2 Trunk Failover"

[Figure: typical blade network topologies; L3 switches above Cisco blade switches in the blade server chassis, with uplinks and downlinks bound into Link State Group 1]

Feature:
• Map an uplink EtherChannel to downlink ports (link state group; configuration sketch below)
• If all uplinks fail, instantly shut down the downlink ports
• The server gets notified and starts using its backup NIC/switch

Customer benefit:
• Higher resiliency / availability
• Reduced STP complexity
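
Trunk failover is configured with link-state tracking on the blade switch. A minimal sketch; the group number and port ranges are illustrative:

link state track 1
interface Port-channel1
 description uplink EtherChannel
 link state group 1 upstream
interface range GigabitEthernet1/0/1 - 16
 description server downlinks
 link state group 1 downstream
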
Flexlink Overview

• Achieve Layer 2 resiliency without using STP
• Access switches have backup links to aggregation switches
• Target of sub-100 ms convergence upon forwarding-link failover
• Convergence time is independent of the number of VLANs and MAC addresses
• Interrupt-based link detection for Flexlink ports; link-down detected at a 24 ms poll
• No STP instance for Flexlink ports
• Forwarding on all VLANs on the up Flexlink port occurs with a single update operation (low cost)


Data Center / 3-Tier Network

[Figure: three tiers; Cat6K pairs at the core and aggregation layers, with the access layer below]

MMN (MAC Address Move Notification) Overview

• Achieve near sub-100 ms downtime for the downstream traffic too, upon Flexlink switchover
• Lightweight protocol: send an MMN packet listing [(VLAN1: MAC1, MAC2, ...), (VLAN2: MAC1, MAC2, ...)] to the distribution network
• The receiver parses the MMN packet and learns or moves the contained MAC addresses; alternatively, it can flush the MAC address table for those VLANs
• The receiver forwards the packet to other switches (IOS configuration sketch below)


Flexlink MMN Performance – Timings

[Figure: Flexlink/MMN failover timing results]

Flexlink Preemption

Flexlink is enhanced to provide flexibility in choosing the forwarding (FWD) link, optimizing available bandwidth utilization.

The user can configure what a Flexlink pair does when the previous FWD link comes back up:
• The current FWD link continues → preemption mode Off
• The previous FWD link preempts the current one and begins forwarding instead → preemption mode Forced
• The higher-bandwidth interface preempts the other and goes FWD → preemption mode Bandwidth
Note: by default, Flexlink preemption mode is Off.

When configuring preemption delay (see the sketch below):
• The user can specify a preemption delay time (0 to 300 sec); the default is 35 sec
• Once the switch identifies a Flexlink preemption case, it waits <preemption delay> seconds before preempting the currently forwarding Flexlink interface
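
Preemption is set per Flexlink pair under the active interface. A sketch using the Po1/Po2 pair from the configuration example on the next slide:

interface Port-channel1
 switchport backup interface Po2 preemption mode bandwidth
 switchport backup interface Po2 preemption delay 60
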


Flexlink Configuration Commands


CBS3120-VBS-TOP#config t
Enter configuration commands, one per line. End with CNTL/Z.
CBS3120-VBS-TOP(config)#int po1
CBS3120-VBS-TOP(config-if)#switchport backup int po 2
CBS3120-VBS-TOP(config-if)#
CBS3120-VBS-TOP#show interface switchport backup detail

Switch Backup Interface Pairs:

Active Interface Backup Interface State


------------------------------------------------------------------------
Port-channel1 Port-channel2 Active Up/Backup Down
Preemption Mode : off
Bandwidth : 20000000 Kbit (Po1), 10000000 Kbit (Po2)
Mac Address Move Update Vlan : auto

CBS3120-VBS-TOP#

Outline
• Network Design for Blade Servers
 – DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
 – L2 vs. L3 inside the Blade Enclosure
 – Single- vs. Two-tier Access
 – Services in a Blade Server deployment
 – NIC teaming designs
 – Virtual Server Deployment on blade servers
• Architectural Considerations for VBS Technology
 – Overview & Benefits of Virtualization Technology
 – Deployment scenarios – Pros & Cons
• SAN
 – Key issues faced by customers
 – Solutions: NPV, FlexAttach

Cisco Catalyst Virtual Blade Switch
Topology Highlighting Key Benefits

[Figure: access layer (Virtual Blade Switch) rings connected to the distribution layer]

Key benefits called out:
• Mix-n-match GE & 10GE switches
• Local traffic doesn't go to the distribution switch
• Higher resiliency with EtherChannel; with VSS on the Cat6K, all links are utilized
• Single switch/node (for spanning tree, Layer 3, or management)
• Greater server bandwidth via active/active server connectivity

“Multiple Deployment Options for Customers”
Caters to Different Customer Needs

Benefits
ƒ Common Scenario
ƒ Cost Effective
ƒ Single Virtual Blade switch per rack
ƒ Entire rack can be deployed with as
little as two 10 GE uplinks or two GE
Etherchannels
ƒ Allows for Active/Active NIC teams
ƒ Creates a single router for entire rack
if deploying L3 on the edge
ƒ Keeps Rack traffic in the Rack
Design Considerations
ƒ Ring is limited to 64 Gbps
ƒ May cause Oversubscription

“Multiple Deployment Options for Customers”


Caters to Different Customer Needs

Benefits
ƒ Separate VBS divide Left/Right switches
ƒ More resilient
ƒ Provides more Ring capacity since two
rings per Rack
Design Considerations
ƒ Requires more Uplinks per Rack
ƒ Servers can not form A/A NIC teams

“Multiple Deployment Options for Customers”
Caters to Different Customer Needs

Benefits
ƒ Allows for 4 NICs per server
ƒ Can Active/Active Team all 4 NICs
ƒ More Server Bandwidth
Design Considerations
ƒ Creates smaller Rings
ƒ Requires more Uplinks
ƒ May Increase Traffic on each Ring


Additional Options

• By combining the three scenarios above, the user can:
 - Deploy up to 8 switches per enclosure
 - Build smaller rings with fewer switches
 - Split a VBS between LAN-on-Motherboard (LOM) and daughter-card Ethernet NICs
 - Split a VBS across racks (see next slide)
 - Connect unused uplinks to other devices, such as additional rack servers or appliances such as storage

Proper VBS Ring Configuration

Each option offers a full ring, could be built with 1-meter cables, and looks similar, but certain designs could lead to a split ring if an entire enclosure is powered down.

For the "No" example with four enclosures: if enclosure 3 had power removed, you would end up with two rings, one made up of the switches in enclosures 1 and 2, and one made up of the switches in enclosure 4. At a minimum, this would leave each VBS contending for the same IP address, and remote switch management would become difficult.

The "Yes" examples also have a better chance of maintaining connectivity for the servers in the event a ring does get completely split due to multiple faults.

Cable lengths are 0.5, 1.0 and 3.0 meters; the 1.0-meter cable ships standard.

[Figure: "Yes" and "No" ring-cabling patterns across enclosures ENC 1 through ENC 4]

Virtual Blade Switch Across Racks

• VBS cables are limited to a maximum of 3 meters
• Ensure that switches are not isolated in case of a switch or enclosure failure
• May require cutting holes through the side walls of cabinets/racks

[Figure: VBS ring spanning two racks roughly 2 ft apart]

Most Common Deployment Scenario

• Straightforward configuration
• Ensure uplinks are spread across switches and enclosures
• If using EtherChannel (EC), make sure members are not in the same enclosure
• By using RSTP and EC, recovery time on failure is minimized
• Make sure the master switch (and its alternates) are not uplink switches
• Use FlexLinks if STP is not desired

[Figure: VBS rack uplinked to the aggregation layer, which connects to the core layer]


Deployment Example

• Switch numbering: 1 to 8, left to right, top to bottom
• Master switch is member 1
• Alternate masters will be 3, 5, 7
• Uplink switches will be members 2, 4, 6, 8
• 10 GE ECs from members 2,4 and 6,8 will be used
• RSTP will be used
• User data VLANs will be interleaved

[Figure: four enclosures; switches numbered 1-2, 3-4, 5-6, 7-8]

Configuration Commands

switch 1 priority 15             ! sets switch 1 as primary master
switch 3 priority 14             ! sets switch 3 as secondary master
switch 5 priority 13             ! sets switch 5 as third master
switch 7 priority 12             ! sets switch 7 as fourth master
spanning-tree mode rapid-pvst    ! enables Rapid PVST+
vlan 1-10                        ! configures the VLANs
 state active
interface range gig1/0/1 - 16
 switchport access vlan xx       ! assign ports to VLANs

Configuration Commands

interface range ten2/0/1, ten4/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 1 mode active
interface range ten6/0/1, ten8/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 2 mode active
interface po1
 spanning-tree vlan 1,3,5,7,9 port-priority 0
 spanning-tree vlan 2,4,6,8,10 port-priority 16
interface po2
 spanning-tree vlan 1,3,5,7,9 port-priority 16
 spanning-tree vlan 2,4,6,8,10 port-priority 0
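
A few verification commands for this design; exact output fields vary by IOS release:

show switch                   ! VBS member numbers, roles, and priorities
show etherchannel summary     ! Po1 and Po2 should show (SU) with members bundled
show spanning-tree vlan 1     ! confirm odd VLANs prefer Po1 and even VLANs prefer Po2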

7 Rules to Live by for EtherChanneling

1. Split links across line cards on the 6K side: protects against a line-card outage
2. Split across 6Ks if using VSS: protects against a 6K outage
3. Split links across members on the blade side if using VBS: protects against a blade-switch outage
4. Split links across blade enclosures if possible: protects against an enclosure outage
5. Split VLANs across ECs for load balancing: prevents idle ECs
6. Choose an appropriate EC load-balancing algorithm; for example, blade servers generally have even-numbered MAC addresses (example below)
7. Last but not least, monitor your ECs: the only way to know if you need more bandwidth or a better EC load balance
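
As an example for rules 6 and 7: because consecutively numbered blade MAC addresses can hash unevenly, an IP-based hash is often the safer default. The keyword set varies by platform:

port-channel load-balance src-dst-ip    ! hash on IP pairs rather than MAC addresses
show etherchannel load-balance          ! confirm the active algorithm
show etherchannel 1 port-channel        ! monitor the EC (rule 7)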


Device Manager Screen Shot


• Embedded HTML server
• Built into each Ethernet Blade Switch
• Provides Initial Configuration
• Simple Monitoring Tool

CNA Screenshot – Topology View


CNA Screenshot – Front Panel View

Catalyst 6500 – Virtual Switching
What are the benefits?

VSS benefits:
• Single point of configuration and management
• Multichassis EtherChannel (MEC) for active/active uplinks (no STP loops)
• The virtual switch presents itself as a single device consistently upstream and downstream
• All PFC-based features handled in hardware (multicast and unicast)
• Fully functional at either L2 or L3
• 50% reduction in routing-protocol neighbors = better scalability!
• VBS allows point-to-point with a single EC

[Figure: logical network view; the L3 distribution appears as one virtual switch, with MEC down to the blade switch access layer]

Catalyst 6500 – Virtual Switch Link (VSL)
Hardware / Software Requirements

Minimum requirements (physical view):
• L3 distribution: a pair of Sup720-10GE supervisors joined by the VSL interconnect
• WS-X6708-10GE with DFC3C/CXL
• Software: 12.2 RLS6 or later
• Access layer equipment (user/server access): any device with EtherChannel support; a VSS-side configuration sketch follows
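
For orientation, a minimal sketch of the VSS side; the domain, port-channel, and interface numbers are illustrative, and the actual conversion involves further steps (such as switch convert mode virtual) not shown here:

switch virtual domain 100
 switch 1
interface Port-channel10
 switch virtual link 1         ! the VSL between the two chassis
! MEC toward one access/blade switch: members on both chassis, one port-channel
interface range TenGigabitEthernet1/2/1, TenGigabitEthernet2/2/1
 channel-group 20 mode active
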
Outline
• Network Design for Blade Servers
 – DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
 – L2 vs. L3 inside the Blade Enclosure
 – Single- vs. Two-tier Access
 – Services in a Blade Server deployment
 – NIC teaming designs
 – Virtual Server Deployment on blade servers
• Architectural Considerations for VBS Technology
 – Overview & Benefits of Virtualization Technology
 – Deployment scenarios – Pros & Cons
• SAN
 – Key issues faced by customers
 – Solutions: NPV, FlexAttach

SAN Storage Topology

[Figure: blade servers with integrated blade switches, uplinked to director-class SAN switches and FC storage]

Key Issues Faced by Customers

• Not enough Domain IDs
• World Wide Name (WWN) virtualization

What Is FlexAttach?
Flexibility for Adds, Moves, and Changes

• FlexAttach (based on WWN NAT)
 - Each blade switch F-port is assigned a virtual WWN
 - The blade switch performs NAT operations on the real WWN of the attached server
• Benefits
 - No SAN reconfiguration required when a new blade server attaches to a blade switch port: no blade switch config change, no SAN zoning change, no array configuration change
 - Provides flexibility for the server administrator by eliminating the need to coordinate change management with the networking team
 - Reduces downtime when replacing failed blade servers

[Figure: blade server chassis (Blade 1..N plus a new blade) attached to an NPV blade switch running FlexAttach, connected to the SAN and storage]

What Is FlexAttach?

• Creation of a virtual pWWN for host initiators
• Allows server administrative tasks with minimal involvement of storage administrators (CLI sketch below):
 - Pre-configure a server for SAN access
 - Replace server HBA(s) on the same port
 - Move a blade server around the fabric: to another slot in the same chassis, or to another slot in another chassis (has to be in the same physical SAN fabric)
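
FlexAttach is normally driven from Fabric Manager; the CLI below is a sketch of the NX-OS-style syntax and should be treated as an assumption to verify against your release:

flex-attach virtual-pwwn auto        ! auto-assign virtual pWWNs on the NPV edge switch
show flex-attach virtual-pwwn        ! list real-to-virtual pWWN mappings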


NPV Brief Overview

• Login process for the NP-port
 - The NP-port first does FLOGI and PLOGI into the core to register with the FC name server (Cisco pWWN)
 - The core switch must be NPIV capable
 - Any subsequent login from the NPV switch will do FDISC to the FC name server
• VSAN membership
 - The core switch interface and the NP switch interface must have matching VSANs
 - Servers on the NPV switch must reside in a VSAN that matches one or more NP uplinks' VSAN
• Login process for end devices (CLI sketch below)
 - The server logs in with FLOGI
 - The NPV switch converts the FLOGI to FDISC
 - The pWWN of the server gets registered in the FC name server

[Figure: core switch F-ports down to the NPV edge switch NP-ports; servers pWWN1 and pWWN2 attached to the edge switch F-ports]
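
Enabling the feature pair is brief. A sketch using SAN-OS 3.x-style commands; note that npv enable is disruptive, since the edge switch reboots and its configuration is erased:

npiv enable                 ! on the NPIV core switch (MDS)
npv enable                  ! on the blade/edge switch
show npv flogi-table        ! on the edge: server logins forwarded as FDISC
show npv status
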

FlexAttach – How Does It Work?

1. Interface fc1/1 is FlexAttach-enabled and assigned vpwwn1.
2. Server S1 (port WWN pwwn1) does FLOGI to interface fc1/1.
3. The FLOGI from pwwn1 is rewritten to use vpwwn1.
4. The vpwwn1 FLOGI is converted to an FDISC and entered into the FC name server; server S1 is known by vpwwn1 in the SAN.

[Figure: core switch (MDS or 3rd-party switch with NPIV support) above the NPV FlexAttach edge switch, which holds the pwwn rewrite rule fc1/1 → vpwwn1; server S1 below]


A Typical Virtualized Data Center
Network Is Even More Important

• Virtualization enables SOA and requires dynamic provisioning of servers and the network
• Mobility of VMs (VMotion) is possible ONLY through the network
• ESX servers have multiple network/storage connections (Ethernet network and SAN)
• Centralized storage for OS & data (SAN or NAS), plus a VirtualCenter management server
• Hence, the network needs to be highly resilient, secure, agile, and manageable

Cisco VFrame DC Enables Dynamic Provisioning for Virtualized Environments

• Orchestrates across infrastructure resources
• Platform for service abstraction
• Integrates with other management systems

[Figure: Cisco VFrame Data Center (network-driven service orchestration) as the control layer between business service management/monitoring tools (IBM Tivoli, HP OpenView, Mercury, Tideway, BMC Patrol, CA Unicenter), virtualization managers (VMware VirtualCenter), and element managers (Cisco Fabric Manager, VMS, CiscoWorks, ANM), over the server, network, and storage pools (SAN, NAS) of the data center networked infrastructure]



VFrame Screenshot – Server Provisioning

VFrame Screenshot – Network Provisioning


VFrame Screenshot – VMWare Example

In Summary

• Blade server and switch architectures simplify your data center design
• Blade switches provide the same rich feature set as the aggregation and core switches
• Many options are available for blade server integration into the data center, for both storage and IP connectivity
• Virtualization is a key part of the next-generation DC


Q and A

Other Data Center Sessions
BRKDCT-2823 The Server Switch, Demultiplexing Networks with a Unified Fabric
BRKDCT-2825 Nexus Architecture
BRKDCT-2840 Data Center Networking – Taking risk out of Layer 2 Interconnects
BRKDCT-2863 DC Migration & Consolidation Discovery Methodology
BRKDCT-2866 Data Center Architecture Strategy and Planning
BRKDCT-2867 Data Center Facilities Consideration in Designing and Building Networks
BRKDCT-2868 Network Integration of Server Virtualization - LAN & Storage
BRKDCT-2870 Data Center Virtualization Overview/Concepts
BRKDCT-3831 Advanced Data Center Virtualization
BRKRST-3470 Cisco Nexus 7000 Switch Architecture
BRKRST-3471 Cisco NXOS Software - Architecture and Deployment
LABDCT-2870 Cisco Nexus 7000 Series Lab
LABDCT-2871 I/O Consolidation using Fibre Channel over Ethernet
TECDCT-2887 Architecting Distributed, Resilient Data Centers
TECDCT-3873 Data Center Design Power Session
TECRST-2003 Cisco Nexus 7000 Series Technical Deep Dive


Product Pages
• For more information on Cisco Ethernet or FC blades:
www.cisco.com/go/bladeswitch
• For more information on VFrame Datacenter:
http://www.cisco.com/en/US/products/ps8463/index.html
• For more information on blade server partners:
DELL: http://www.dell.com/content/products/compare.aspx/blade?c=us&cs=555&l=en&s=biz&~ck=mn
HP: http://h18004.www1.hp.com/products/blades/components/c-class-components.html
IBM: http://www-03.ibm.com/systems/bladecenter/

Recommended Reading

• Continue your Cisco Live learning experience with further reading from Cisco Press
• Check the Recommended Reading flyer for other suggested books

http://www.ciscopress.com/bookstore/browse.asp?st=60113
Available onsite at the Cisco Company Store

Complete Your Online Session Evaluation

• Cisco values your input
• Give us your feedback: we read and carefully consider your scores and comments, and incorporate them into the content program year after year
• Go to the Internet stations located throughout the Convention Center to complete your session evaluations
• Thank you!
