Dell M1000e Install Admin Troubleshooting
Blade Portfolio
M-Series blades combine powerful computing capabilities with optimized
connectivity that works with virtually any network, storage, or management
infrastructure. The image shows the latest blade servers that are supported by
the M1000e chassis.
In the N+1 power supply configuration, only power supply failures are protected against, not
grid failures. The likelihood of multiple power supplies failing at the same time is remote.
When a blade server is installed and its power button is pressed, the button
starts to slowly blink green.
During this time the iDRAC interrogates the blade server’s components to calculate the
power requirements, and these requirements are fed back to the CMC.
If enough power is available, the blade server then powers up, and the blade server's
power button displays solid green.
The iDRAC provides the CMC with its power envelope requirements before
powering up the blade server. The power envelope consists of the maximum and
minimum power requirements needed to keep the server operating. The iDRAC's initial
estimate is based on a worst-case model in which all components in the blade server
draw maximum power, so it is often higher than the blade server's actual requirements.
The M1000e allows you to specify a System Input power cap to ensure that the overall
chassis AC power draw stays under a given threshold.
The CMC first ensures enough power is available to run the
Fans
I/O Modules
iKVM (if present)
CMC
This power allocation is called the Input Power Allocated to Chassis infrastructure.
The power cap for the blade servers is set at the chassis level.
Servers are allocated power based on their Server Priority setting: priority 1 servers
receive their maximum power first, priority 2 servers are allocated power after priority 1
servers, and so on. Lower-priority servers may therefore receive less power than priority 1
servers. The allocation is based on the System Input Max Power Capacity and the
user-configured System Input Power Cap.
If keeping the total power budget below the System Input Power Cap requires it, the CMC
allocates some servers less than their maximum requested power.
The system administrator can also set priorities for each server module. The priority
setting works in conjunction with CMC power budgeting and iDRAC power monitoring to
ensure that the lowest-priority blades are the first to enter any power optimization mode.
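The power budget and cap can also be inspected and adjusted from the CMC command line. The following RACADM sketch is illustrative only; the cap value (in watts) is an example, and the configuration group and object names shown are assumptions that should be verified against the RACADM reference for your CMC firmware version:

    racadm getpbinfo
        (displays power budget status, including the System Input Power Cap and per-server allocations and priorities)
    racadm config -g cfgChassisPower -o cfgChassisPowerCap 6000
        (example: set the System Input Power Cap; assumed object name)
    racadm config -g cfgServerInfo -o cfgServerPriority -i 1 1
        (example: set the blade in slot 1 to priority 1; assumed object name)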
Fan Redundancy
The M1000e chassis has nine standard hot-pluggable, redundant fan modules.
The 9 fans are distributed evenly across the enclosure. The speed of each fan is
individually managed by the CMC. Fans are individually balanced to ensure smooth
operation and maximum throughput at lower speeds.
Fans are N+1 redundant. This means that failure of any single fan will not impact
system uptime or reliability.
o Any fan failure will immediately be visible through the chassis LCD or the
CMC GUI.
o Failure of more than one fan will not automatically result in the blade servers
shutting down.
This is because the blade servers have their own self-protection mechanisms to prevent
them from running too hot. The impact of multiple fan failures depends on the chassis
configuration, the ambient temperature, and the workload.
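Fan module presence and health can also be checked from the CMC command line; for example (a sketch, with output details varying by CMC firmware version):

    racadm getmodinfo
        (lists presence, power state, and health for the fans, power supplies, IOMs, iKVM, CMCs, and blades)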
The CMC controls every element of the cooling subsystem, and its fan control behavior
is implemented in firmware. However, the CMC depends on the server iDRACs to feed
back temperature information from within the blades.
Dynamic fan control results in approximately 25% less airflow than 16 x 1U rack servers.
This results in less noise and longer life cycles for the cooling components.
The Server Modules are cooled with traditional front‐to‐back cooling. The front of the
system is dominated by an inlet area for the individual server modules.
The cooling process is as follows:
Air passes through the server modules and then through vent holes in the mid-
plane.
Air is drawn into the fans, which exhaust the air from the chassis.
The I/O Modules use a bypass duct to draw ambient air from the front of the system to the I/O
Module inlet. This duct is located above the server modules. This cool air is then drawn down
through the I/O Modules in a top to bottom flow path and into the plenum between the midplane
and fans, from where it is exhausted from the system.
The power supplies, located in the rear of the system, use basic front-to-back cooling,
but they draw their inlet air from a duct located beneath the server modules.
This ensures that the power supplies receive ambient-temperature air.
iKVM Module
The local access KVM module for M1000e server chassis is called the Avocent®
Integrated KVM Switch Module (iKVM).
Some of the key features of iKVM are:
It acts as an analog keyboard, video, and mouse switch that plugs into the
chassis and provides access to the blade servers and the active CMC's CLI.
Uses the On-Screen Configuration and Reporting (OSCAR) user interface to
select one of the servers or the Dell CMC command line you want to access.
It assigns an order of precedence to each type of connection, so that when there
are multiple connections, only one connection is available while others are disabled.
The order of precedence for iKVM connections is as follows:
Front panel
Analog Console Interface
Rear panel
Important: The ports on the control panel on the front of the chassis are
designed specifically for the iKVM, which is an optional module. If you do not
have the iKVM, you cannot use the front control panel ports.
Midplane
The Dell PowerEdge M1000e mid-plane is a passive board (no electrical components
other than connectors) that serves as the conduit for power, fabric connectivity, and
system management infrastructure. Also, it enables airflow paths for the front to back
cooling system through ventilation holes.
All M1000e mid-plane routing is fully isolated, supporting all chassis power, fabric,
system management, and fault-tolerance requirements.
The version 1.1 midplane allows for 10Gb connectivity on Fabric A, while the original
version 1.0 midplane supports only 1Gb connectivity.
Introduction to Fabrics
A Fabric is defined as a method of encoding, transporting, and synchronizing data
between multiple devices. Examples of Fabrics are GbE, Fibre Channel, or InfiniBand.
Fabrics are carried inside the PowerEdge M1000e system between server modules and
IOMs through the mid-plane. They are also carried to the outside world through the
physical copper or optical interfaces on the IOMs.
The fabrics are independent of each other.
Fabric A is a redundant Gigabit Ethernet (GbE) fabric, supporting I/O module slots A1 and A2.
The integrated Ethernet controllers in each blade dictate Fabric A as a 1-to-10Gbps Ethernet-
only fabric.
Modules that are designed specifically for Fabric B or Fabric C cannot be installed in slots A1 or
A2, as indicated by the color-coded labeling on the faceplate of each module.
Fabric B is a 1-to-40Gbps redundant fabric, supporting I/O module slots B1 and B2. Fabric B
currently supports 1GbE or 10GbE, DDR/QDR/FDR InfiniBand, and 4Gbps, 8Gbps, or 16Gbps
FC modules.
To communicate with an I/O module in the Fabric B slots, a blade must have a matching
mezzanine card installed in a Fabric B mezzanine card location. Modules designed for Fabric A
may also be installed in the Fabric B slots.
Fabric C is a 1-to-40Gbps redundant fabric, supporting I/O module slots C1 and C2.
Fabric C currently supports 1GbE or 10GbE, DDR/QDR/FDR InfiniBand, and 4Gbps,
8Gbps, or 16Gbps FC modules.
To communicate with an I/O module in the Fabric C slots, a blade must have a
matching mezzanine card installed in a Fabric C mezzanine card location. Modules
designed for Fabric A may also be installed in the Fabric C slots.
Fabric Mapping
The M1000e I/O is fully scalable to current generations of server modules and I/O
Modules. There are three redundant multi‐lane fabrics in the system, as illustrated in the
image.
Fabric A is dedicated to Gigabit Ethernet.
Fabrics B and C are identical, fully customizable fabrics, which are routed as two sets of
four lanes from mezzanine cards on the server modules to the IOMs in the chassis rear.
Supported bandwidth ranges from 1 Gbps to 10 Gbps per lane depending on the fabric
type used.
As shown in the adjoining figure, in switch stacking:
All switches in a stack share a single management console and forward
packets between their ports as a single switch.
A stacked set of switches appears to all external connections as a single
logical switch.
Modular switch stacking can be done for:
Higher Server Throughput
The prime benefit of this cabling scheme is that it enables increased throughput. The
uplink connection from one switch in the stack forwards network traffic from all the other
switches in the stack.
Configure using stacking cables and connect:
Port xg2 of the stacking module of each switch to port xg1 of the stacking module
of the neighboring switch
The first and last switch in the stack to complete the loop
Modular stacking configuration reduces the number of uplinks that need to be
connected to the stack
Increased Server Availability
When a stack spans across two or more chassis, the number of required uplinks per
server can be greatly reduced, minimizing total cost of ownership (TCO). Gain network
and application performance by allowing peer-to-peer network traffic between servers
connected to the same switch stack to transit across the stack without passing through
the distribution switches.
This figure shows two M6220 switch stacks deployed across two chassis. Each switch
stack should have its own set of uplinks to the same distribution switches as the other
switch stack. Each server should have a teamed pair of Ethernet ports configured for
switch fault tolerance, with one port connected to each stack.
The pair of stacks should be configured with the same connectivity to the servers and to
the distribution switches upstream.
CMC Overview
The Dell Chassis Management Controller (CMC) is a systems management hardware
and software solution for managing the M1000e chassis and installed components. It is
a hot-pluggable module that is installed in the rear of the Dell PowerEdge M1000e
chassis.
The M1000e must have at least one CMC and can support an optional redundant
module. Each CMC occupies a slot accessible at the rear of the chassis. Redundancy is
provided in an Active–Standby pairing of the modules, and failover occurs when the
active module has failed or degraded. The CMC interface ports are the stacking port,
10/100/1000 Ethernet ports, and one serial port. The CMC serial port interface provides
common management of up to six I/O modules through a single connection.
CMC Features
Some of the important features of CMC are:
FlexAddress
FlexAddress is an optional feature on M1000e which overrides server-assigned
addresses with chassis-assigned addresses.
Optional SD card storage is required for holding update images larger than 48
MB (iDRAC firmware and driver packs).
SD card storage is required to store the BIOS configurations.
If the configuration is a redundant CMC, then both CMCs need the storage
option.
The SD card storage requirement can be satisfied with either a FlexAddress card or a
CMC Extended Storage card.
Daisy-Chaining CMCs
CMC has a second Ethernet port for connection to other CMCs in the rack. CMC connects to
the management network to manage all blade servers. This saves port consumption on external
switches. If you have multiple chassis in a rack, you can reduce the number of connections to
the management network by daisy-chaining chassis together. This reduces the connections that
are required to one.
When daisy-chaining chassis together, Gb1 is the uplink port and STK/Gb2 is the stacking
(cable consolidation) port. Connect the Gb1 ports to the management network or to the
STK/Gb2 port of the CMC in a chassis that is closer to the network. You must connect the
STK/Gb2 port only to the Gb1 port of a chassis that is farther from the network in the chain.
Caution: Connecting the STK/Gb2 port to the management network without first
configuring for redundancy in the CMC can have unpredictable results. Cabling
Gb1 and STK/Gb2 to the same network (broadcast domain) can cause a
broadcast storm.
CMC Ethernet port Gb1 is the uplink port. It is the uplink to the management
network, or used to connect to the STK in the adjacent chassis.
The CMC Ethernet port that is labeled STK is the daisy chain port. It connects
only to CMC port Gb1 on the adjacent chassis. Do not connect this cable directly to the
management network.
Up to four chassis units can be daisy chained.
Chassis units can be daisy chained in both redundant and non-redundant
deployments:
In a redundant CMC deployment, cable together all CMC modules in the
CMC primary slots. Cable together all CMC modules in the CMC secondary slots. Do
not connect the primary daisy chain with the secondary daisy chain (do not “cross-
cable” the two sets of CMCs).
In a non-redundant CMC, cable together all CMC modules in the CMC
primary slots.
Multi-chassis Management
Multi-chassis management is the capability to select chassis configuration properties
on a Lead Chassis and push those properties to a group of chassis, or Members.
It simplifies administration: the user configures the lead chassis once, and those
settings are then copied and propagated to the member chassis.
In Chassis Properties Propagation, the admin can select the categories of lead
configuration properties to be propagated to member chassis. In the setting categories
you can choose what you want identically configured, across all members of the chassis
group. For example, if you select Logging and Alerting Properties category, this enables
all chassis in the group to share the logging and alerting configuration settings of the
lead chassis.
RACADM Commands
The Dell Remote Access Controller Admin (RACADM) utility is a command line tool that
enables remote or local management of Dell servers using the iDRAC or DRAC.
RACADM provides similar functionality to the iDRAC/DRAC Graphical User Interface
(GUI). The Dell Chassis Management Controller (CMC) can also be managed remotely
with RACADM.
RACADM commands can be run remotely from a management station and/or locally on
the managed system.
RACADM commands enable you to view managed system information, perform power
operations on the managed system, perform firmware updates, configure settings and
more. Because RACADM is run from a command line interface (CLI), system
administrators can create scripts that control and update Dell systems in a one-to-many
fashion.
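For example, the same subcommand can run locally on a managed system or remotely from a management station against a CMC or iDRAC IP address (a sketch; the IP address and credentials below are placeholders):

    racadm getsysinfo
        (local RACADM on the managed system)
    racadm -r 192.168.0.120 -u root -p <password> getsysinfo
        (remote RACADM directed at the CMC or iDRAC IP address)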
Updating Firmware
A vital part of maintaining a healthy chassis is the ability to update the CMC version.
Notice on this screen that you can update either the active or standby CMC individually,
or both together. If you update both together, you browse to the location of the firmware
image, which is then transferred to both CMCs and validated. Once the validation is
complete, the update begins on both and its progress is displayed on this screen.
During this time, the CMCs maintain control of the chassis. However, at the end, both
CMCs have to reset, at which time the fans ramp up to 100 percent until one of the
CMCs comes back online and takes control. It now becomes the active CMC and looks
after the chassis, bringing up the standby CMC gradually.
Therefore, it is normal to get an error stating that CMC redundancy is lost for a few
minutes after you hear the fans ramp down from 100 percent. Also notice that you can
update the iKVM firmware and initiate updating the iDRAC firmware from this page.
The following software components are included with CMC firmware package:
Compiled CMC firmware code and data
Web interface, JPEG, and other user interface data files
Default configuration files
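The same CMC update can be initiated from RACADM instead of the web interface. The following is a sketch of a TFTP-based update; the exact flags and module keywords should be confirmed in the RACADM reference for your CMC release, and the TFTP server address and image name are placeholders:

    racadm fwupdate -g -u -a 192.168.0.100 -d firmimg.cmc -m cmc-active -m cmc-standby
        (update both CMCs from a TFTP server)
    racadm fwupdate -s
        (check the status of the update)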
Power on System
Powering on the system involves the following steps.
Powering On Chassis
Pressing the chassis power button turns on all components that are related to the
chassis and applies power to the main power bus within the chassis. Components such as the
iDRACs, IOMs, and iKVM will begin to power up. The blade server will not power on
immediately.
To manually turn off the system, push and hold the power button for 10 seconds to
forcefully shut down the chassis and all systems.
Powering On Blades
Press the power button on the chassis. The power indicator should display a green
LED.
Blades can be powered on/off manually with the power button that is located on the
front of each blade server. Pressing the button will power on the blade and begin POST.
Pressing and holding the power button for approximately 10 seconds forcefully powers
off the blade.
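Chassis and blade power operations can also be issued through RACADM on the CMC, for example (a sketch; the action keywords and option order may vary slightly by CMC firmware version):

    racadm chassisaction powerup
        (power on the chassis)
    racadm serveraction -m server-1 powerup
        (power on the blade in slot 1)
    racadm serveraction -m server-1 powerstatus
        (check the power state of the blade in slot 1)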
Initial Configuration
The CMC’s default IP address is 192.168.0.120.
Initial Configuration involves assigning an IP to the CMC. There are five ways to
perform initial configuration.
Dynamic Host Configuration Protocol (DHCP) - The CMC retrieves IP
configuration (IP Address, mask, and gateway) automatically from a DHCP server on
your network. The CMC is always assigned a unique IP address on your network.
LCD Configuration Wizard - You can configure the CMC using the LCD
Configuration Wizard only until the CMC is deployed or the default password is
changed. If the password is not changed, the LCD can continue to be used to
reconfigure the CMC, which poses a possible security risk.
RACADM CLI using a null modem cable - Serial connection using a null
modem cable.
o Cable: Null Modem
o Bits per second: 115200
o Data bits: 8
o Parity: None
o Stop bits: 1
o Flow control: None
RACADM CLI using iKVM - If the chassis has the iKVM, press <Print Screen>
and select blade number 17. Blade number 17 is a direct local connection to the CMC.
Web GUI - Web GUI provides remote access to CMC using a graphical user
interface. The Web interface is built into the CMC firmware and is accessed through the
NIC interface from a supported web browser on the management station.
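As an illustration, the CMC network settings can be applied over the serial or iKVM RACADM session described above; the addresses shown are examples only:

    racadm getniccfg
        (view the current CMC network settings)
    racadm setniccfg -d
        (enable DHCP)
    racadm setniccfg -s 192.168.0.120 255.255.255.0 192.168.0.1
        (set a static IP address, netmask, and gateway)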
BOSS
In the 14th generation of PowerEdge server, Dell EMC has introduced Boot Optimized Storage
Solution (BOSS) as a new boot device. The BOSS device provides a hardware mirror option so
that the user can create a RAID 1 volume, allowing the operating system to be installed on a
redundant LUN. Besides the RAID 1 volume, this controller can also operate in pass-through
(non-RAID) mode.
NVDIMMs
NVDIMM is nonvolatile memory with added flash storage and a battery backup system
for all data persistence. NVDIMM Persistent Memory is a disruptive Storage Class
Memory technology that enables unprecedented performance improvement over legacy
storage technologies.
In the event of a power outage, the data is backed up to flash, thus retaining the memory
contents during the system power loss. NVDIMMs integrate nonvolatile NAND flash
memory with dynamic random access memory (DRAM) and dedicated backup power on
a single memory subsystem.
Fabrics
A fabric is a method of encoding, transporting, and synchronizing data between multiple
devices. Fabrics are carried inside the PowerEdge M1000e system between server
modules and IOMs through the mid-plane. They are also carried to the outside world
through the physical copper or optical interfaces on the IOMs.
Fabrics are the main components in the PowerEdge M1000e architecture. The features
of fabrics are:
They enable the M1000e chassis to pass data between the blade servers and the
IOMs.
The M1000e chassis can house six I/O modules, enabling a greater diversity of
roles for all the enclosed blade servers.
The six I/O slots are connected to three fabrics, and each fabric connects
to two slots.
The fabrics are independent of each other.
IOMs are used as pairs, with two modules servicing each server module fabric
and providing a fully redundant solution.
The fabrics in a M1000e enclosure are classified into three types. The image shows the
fabric mapping in M1000e enclosure.
The PowerEdge blade server systems support Network Daughter Cards (NDCs) instead of the
traditional LAN on Motherboard (LOM) design. Some blade server models can house up
to two NDCs.
The feature is marketed as a Dell Select NIC Adapter.
The NDC has the typical features and behavior of a traditional LOM subsystem. It
adds the benefit of flexibility by enabling customers to choose their favored network
types, speed, and vendors.
Blade Mezzanine Cards
The M-series blades have two or four mezzanine card slots enabling the user to install
mezzanine cards. Some of the functionalities of mezzanine cards are:
Quarter-Height Blades
o It requires an IOM with 32 internal ports (M6348 or Dell Force10 MXL) to
connect all LOM ports on all blades.
o It requires 2 x 32-port IOMs to connect the two LOM ports on each blade.
o It consists of one Fabric B or Fabric C mezzanine card.
Half-Height Blades
o It has one Select Network Adapter or LOM.
o It consists of one Fabric B mezzanine card and one Fabric C mezzanine
card.
Full-Height Blades
o It has two Select Network Adapters or LOMs.
o It consists of two Fabric B mezzanine cards and two Fabric C mezzanine
cards.
Platform Restore
The Platform Restore option is used to restore individual hardware parts or the system
board configurations.
This option enables you to decide what happens if a hardware part fails and needs to be
replaced. Instead of manually reconfiguring the new part, you can set the Lifecycle
Controller to automatically push the configuration back out to the new part. This
includes reflashing the firmware with the code that was on the old part.
If the system board containing the Lifecycle Controller fails, you can make
use of the Server Profile, which backs up all the Lifecycle Controller
configuration information to a USB stick, hard drive, or network share. Now, when the
new system board is installed, you can access the new Lifecycle Controller and import
the Server Profile.
Port-Mapping Half-Height Blades
In the M1000e, port mapping describes how each blade server's network ports
(LOM/NDC ports and mezzanine card ports) connect through the midplane to the
internal ports of the I/O modules in the corresponding fabric slots.
Each LAN on Motherboard (LOM) on a blade has two port connections. For a half-height blade
with dual-port adapters, only the first LOM port connection (NIC 1 and NIC 2) is active. The
second port connection (NIC 3 and NIC 4) is disabled during system boot. All the IOMs have
the same port mapping for half-height blades.
An IOM with 32 internal ports will only use 16 of its internal ports when dual-port
adapters are used.
Important: The image on the slide displays the port mapping for fabric A. If a
mezzanine card is installed, then you can determine which internal ports on the
switch will connect to the ports on the mezzanine adapters.
For a half-height blade with Quad Port Adapters, both the LOM port connections (LOM1
and LOM2) are active.
For example, if LOM1 connection 1 connects to IOM A1 on port n = 5, then LOM2
connection 1 connects to IOM A1 on port 21 (n+16).
Port-Mapping Full-Height Blades
For a full-height blade with Quad Port NDC Adapters, both the LOM port connections
(LOM1 and LOM2) are active. All six IOMs have the same port mapping for full-height
blades.
An IOM with 32 internal ports will connect to all 16 internal ports when quad-port
adapters are used.
Internal Dual SD Module/vFlash SD
The Internal Dual SD Module (IDSDM) card provides two SD card slots and a USB interface
that is dedicated for the embedded hypervisor.
The vFlash is a storage device that is part of the iDRAC subsystem. The iDRAC can use the
vFlash card for storage purposes, backing up files, and other functions. iDRAC also uses the
vFlash to store system configurations and as a buffer when updating the firmware.
System Setup
To access the System Setup, reboot your server and access the System Setup utility
using the <F2> hot key, when requested.
The System Setup utility consists of three options, which include:
Virtualization environments
SAN applications (Exchange, database)
High-performance cluster and grid environments
Front-end applications (web applications/Citrix/terminal services)
File sharing access
Web page serving and caching
SSL encrypting of web communication
Audio and video streaming
PS-M4110 Drawer
You can open the array inner drawer while the member is operating to gain access to its
hot-swap components. Steps to open:
Push the array's front panel and release it quickly to unlatch the array’s inner drawer.
When the drawer is unlatched, a Caution label will be visible.
The front panel is not designed as a handle. It can break if treated roughly. When opening the
array inner drawer, do not pull on the front panel. Grip and pull the drawer by its top, bottom,
or sides.
Push the release button located on the side of the PS-M4110 array, which releases the latch
that secures the array's drawer to its outer housing.
The latch prevents the array's drawer from opening accidentally during handling when it is
outside the M1000e enclosure.
Drive Identification
Drives are numbered from 0 to 13 from the front of the M4110 to the back.
Type 13 Control Modules
The PS-M4110 uses a Type 13 control module.
Ethernet port
A 10Gb/s iSCSI Ethernet port (Ethernet 0) is used for communication on
one of the two redundant fabrics.
Management port
Ethernet port 1 can optionally be set up as a management port.
Status and Power LEDs
Indicate status of the control module. ACT LED for activity and PWR LED
for power.
Serial port
It enables you to connect a computer directly to the array, without network
access.
Micro SD card
A field-replaceable micro SD card containing the PS Series firmware.
Release button and latch
It releases the control module from the array for replacement. The release
lever has a switch that detects activation and prompts the array to save data to non-
volatile storage (data residing in cache memory).
When the user configures the blade array to use Fabric B, the user cannot specify whether it
uses the B1 or B2 fabric.
The PS-M4110 automatically chooses either the B1 or B2 fabric. Therefore, when multiple
blade arrays in the chassis are configured for Fabric B, the B1 and B2 IOMs must be stacked;
otherwise, the blade arrays might not be able to communicate with one another.
Switch Configurations
When using a PS-M4110 inside an M1000e enclosure, the IO modules must be
interconnected (stacked or use a link aggregation).
Replace Drive
Replacement steps:
If you are replacing a failed control module, remove the micro SD card from the
failed control module and install it in the replacement control module. This will make
sure that the new control module is running the correct firmware.
If two control modules are installed in the array, but only one is shown in the GUI
(or CLI), make sure you have allowed enough time (two to five minutes) for the two
control modules to boot and synchronize.
When synchronization completes, a message is displayed on the serial console
(if connected), and the ACT LED on the secondary module will be orange.
Fabric Configuration
By selecting the M4110 in the system tree, you can view and change certain settings.
As you can see below, you can also change the fabric that the M4110 connects to.
If the member is already installed, you can change the fabric it communicates on.
PS-M4110 Management Network
The M4110 management network path is via the CMC network port.
In-Band Deployment/Management
In-Band Management allows configuration and management over the local network.
The M4110 can be:
Fabric is transported within the converged system between server modules and
I/O Modules through the midplane.
Fabric is carried to the outside world through the physical copper or optical
interfaces on the I/O Modules or PCIe cards.
Examples of fabrics are GbE, Fibre Channel, and InfiniBand.
One example of GbE network fabrics is the 10GbE Pass-Through Modules for Dell
M1000e Blade Enclosures as shown in the picture below.
Lane
Defined as a single fabric data transport path between I/O end devices.
In modern high-speed serial interfaces, each lane comprises one transmit and one
receive differential pair.
In reality, a single lane consists of four wires in a cable, or four copper traces on a printed
circuit board.
Link
Some Fabrics such as Fibre Channel do not define links, as they simply run
multiple lanes as individual transports for increased bandwidth.
A link as defined here provides synchronization across the multiple lanes, so they
effectively act together as a single transport.
Examples are 4-lane (x4), 8-lane (x8), and 16-lane (x16) PCIe, or 4-lane
10GBase-KX4.
PCIe, InfiniBand, and Ethernet group multiple fabric lanes into a link.
Port
Defined as the physical I/O end interface from a device to a link, a port can have single or
multiple lanes of Fabric I/O connected to it.
Fibre Channel
Brocade M6505
Brocade M5424
8/4 Gb Fibre Channel Pass-Through Module
InfiniBand
Mellanox M4001F
Mellanox M4001T
MXL: 10/40-GbE Blade
MXL 10/40-GbE Blade as a converged network deployment helps improve data center
flexibility with configurable I/O choice.
56-port design
Two Flex I/O bays enable choice (modules can be different)
PVST+ protocol for easy integration into Cisco environments
Two FCoE options:
Native Fibre Channel uplinks with FC FlexIO module (FCoE only on
internal ports to the servers)
FCoE transit to top of rack switch with IOM acting as a FIP Snooping
Bridge
M8024-k
The M8024-k is a fully modular and managed Layer 2/3 Ethernet switch:
Cables
CAT 5
48-port switch.
Stackable with rack-mount PowerConnect 7000 Series.
Supports Dell Simple Switch Mode.
M6348 Configuration
Important: M6348 works with all 1Gb Mezzanine cards and LOMs. For optimized use (full
internal-port utilization), pair with: Quad-port GbE Mezzanine cards or Quad-port Fabric A
adapters.
Option Modules: M6220
M6220 is a key component of the Flex I/O architecture of the M1000e. Key features:
Fibre Channel
Fibre Channel (FC) networks:
Transport SCSI commands and disk data between servers and storage arrays
on the network.
Can be formed on either a Fibre Channel or standard Ethernet TCP/IP network
infrastructure.
Brocade M6505
Brocade M5424
8/4 Gbps FC Pass-Through
InfiniBand
InfiniBand is a protocol and switched network infrastructure that is a high-speed
replacement for the internal PCI bus that is used in servers. It is configured as a
direct memory access interconnection between two or more servers. The
InfiniBand network becomes the high-performance server clustering interconnect
for clustered systems running high-speed computer applications.
InfiniBand, Fibre Channel over IP (FCIP), and Fibre Channel over Ethernet (FCoE) are other
examples of protocols and network infrastructures available to augment and enhance
existing end-to-end connectivity.
Mellanox Blades
Mellanox blades in an M1000e use the InfiniBand protocol:
CMC Indicators
Troubleshooting Non-Responsive Chassis Management Controllers
If you cannot log in to the CMC using any of the interfaces (the Web interface, Telnet,
SSH, remote RACADM, or serial), you can verify functionality by observing the CMC
LEDs. Facing the front of CMC as it is installed in the chassis, there are two LEDs on
the left side of the card.
The top green LED (item 10 in illustration) indicates power. If it is not on:
Video Capture
The iDRAC Video Capture feature automatically records the boot process
and stores the last three boot process recordings.
A boot cycle video logs the sequence of events for a boot cycle.
A crash video logs the sequence of events leading to the failure.
Boot Capture Video Settings
Disable - Boot capture is disabled.
Capture Until Buffer Full - The boot sequence is captured until the buffer is full.
Capture Until End of POST - Boot sequence is captured until end of
POST.
SupportAssist Collection
This page accesses SupportAssist and creates SupportAssist collections.
This integration enables you to use other SupportAssist features. iDRAC
provides application interfaces for gathering platform information that enables
support services to resolve platform and system problems.
iDRAC helps you to generate a SupportAssist collection of the server. You can
then export the collection to a location on the management station (local) or
to a shared network location.
The collection is generated in the standard .ZIP format. You can send this
collection to technical support for troubleshooting or inventory collection.
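A collection can also be generated and exported from the iDRAC RACADM interface. The following is a sketch; the supported data set types and export options should be checked against the iDRAC RACADM reference for your firmware version, and the share path and credentials are placeholders:

    racadm techsupreport collect -t SysInfo,TTYLog
        (gather the selected data sets into a SupportAssist collection)
    racadm techsupreport export -l //192.168.0.200/share -u <user> -p <password>
        (export the collection to a network share)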
iDRAC Diagnostics
This page is used to diagnose issues that are related to the iDRAC hardware using network
diagnostic tools. The two sections of iDRAC Diagnostics are described below.
Reset iDRAC
Resets the iDRAC.
A normal reboot operation is performed on the iDRAC.
After reboot, refresh the browser to reconnect and log in to iDRAC.
Reset iDRAC to Default Settings
This action resets the iDRAC to the factory defaults.
Choose any of the following options:
o Preserve user and network settings.
o Discard all settings and reset users to shipping value.
o Discard all settings and reset user name and password.
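The equivalent RACADM subcommands are shown below as a sketch; on recent iDRAC releases, racresetcfg accepts additional options that correspond to the preserve or discard choices listed above:

    racadm racreset
        (perform a normal iDRAC reboot)
    racadm racresetcfg
        (reset the iDRAC to its factory default settings)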
iDRAC Logs
Lifecycle Log
This page enables you to view and export the Lifecycle Controller log entries.
By default, the latest 100 log entries are displayed.
You can filter the log entries based on category, severity, keyword, or a date
range.
System event log
This page enables you to view, clear, or save the log events that occur on the
blade server.
You can configure the iDRAC to send emails or SNMP traps when specified events
occur.
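Both logs can also be read from RACADM, for example (a sketch; the supported filter options vary by iDRAC version):

    racadm lclog view -n 100
        (view the latest Lifecycle Controller log entries)
    racadm getsel
        (view the system event log)
    racadm clrsel
        (clear the system event log)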
Easy Restore
The Easy Restore function automatically backs up vital information to the restore Serial
Peripheral Interface (rSPI) card.
If the system board is ever replaced, a screen is presented giving you the option of
automatically restoring certain types of information:
Server Profile
The Server Profile backup on a vFlash SD card contains the server component
configuration and the firmware that is installed on various components of the server. The
backup image file does not contain any operating system or hard-disk drive data. It is
used when the system board is replaced to restore all the configuration and firmware
revisions.
The Server Profile backs up the Lifecycle Controller configuration information, the BIOS,
and all the firmware code on to a local drive or network share.
When the new system board is installed, you access the new Lifecycle Controller and
import the Server Profile.
The backed-up configuration information is now restored to the new Lifecycle Controller,
which in turn pushes all of that information out to the hardware parts. That saves you
time and effort from manually reconfiguring devices like the iDRAC and the BIOS.
The Lifecycle Controller also pushes out the firmware and BIOS code. However, you
must back up the Server Profile before the system board fails.
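For reference, the backup and restore operations described above can also be scripted with RACADM. This is only a sketch; the subcommand options differ between iDRAC releases, and the share path, file name, and credentials are placeholders:

    racadm systemconfig backup -f profile.img -l //192.168.0.200/share -u <user> -p <password>
        (back up the Server Profile to a network share)
    racadm systemconfig restore -f profile.img -l //192.168.0.200/share -u <user> -p <password>
        (restore the Server Profile after the system board is replaced)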
Use the Part Replacement feature to automatically update a new part to the firmware
version or the configuration of the replaced part, or both. The update occurs
automatically when you reboot your system after replacing the part.
NOTE: Part Replacement does not support RAID operations such as resetting
configuration or recreating Virtual Disks.
One CMC
Three Power Supply Units
Nine fan modules