Dell Storage Center Owner's Manual
Copyright 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and
intellectual property laws. Dell and the Dell logo are trademarks of Dell Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
2015 - 06
Rev. F
Contents
About this Guide
    Revision History
    Audience
    Contacting Dell
    Related Publications
Preface
Revision History
Document Number: 680-100-001
Table 1. Document Revision History

Date            Description
May 2014        Initial release
June 2014
August 2014
October 2014
November 2014
June 2015
Audience
The information provided in this Owner's Manual is intended for use by Dell end users.
Contacting Dell
Dell provides several online and telephone-based support and service options. Availability varies by
country and product, and some services may not be available in your area.
To contact Dell for sales, technical support, or customer service issues, go to www.dell.com/support.
For customized support, enter your system Service Tag on the support page and click Submit.
For general support, browse the product list on the support page and select your product.
Related Publications
The following documentation is available for the SC4020 Storage System:
• Contains information about new features and known and resolved issues for the Storage Center software.
• Dell TechCenter: Provides technical white papers, best practice guides, and frequently asked questions about Dell Storage products. Go to http://en.community.dell.com/techcenter/storage/.
The SC4020 storage system provides the central processing capabilities for the Storage Center Operating System (OS), application software (Storage Center System Manager), and management of RAID storage.
Switches
Dell offers enterprise-class switches as part of the total Storage Center solution.
The SC4020 supports Fibre Channel (FC) and Ethernet switches, which provide robust connectivity to
servers and allow for the use of redundant transport paths. Fibre Channel (FC) or Ethernet switches can
provide connectivity to a remote Storage Center to allow for replication of data. In addition, Ethernet
switches provide connectivity to a management network to allow configuration, administration, and
management of the Storage Center.
Expansion Enclosures
Expansion enclosures allow the data storage capabilities of the SC4020 storage system to be expanded
beyond the 24 internal disks in the storage system chassis.
The number of disks that an SC4020 storage system supports depends on the version of the Storage Center operating system:
• An SC4020 running Storage Center 6.6.4 or later supports a total of 192 disks per Storage Center system: any combination of SC200/SC220 expansion enclosures, as long as the total disk count of the system does not exceed 192, or up to two SC280 expansion enclosures.
• An SC4020 running Storage Center 6.6.3 or earlier supports a total of 120 disks per Storage Center system: any combination of SC200/SC220 expansion enclosures, as long as the total disk count of the system does not exceed 120.
This total includes the disks in the storage system chassis and the disks in the SC200/SC220 expansion enclosures or SC280 expansion enclosures. The SC4020 supports the following deployments:
• An SC4020 storage system deployed with one or more SC200/SC220 expansion enclosures.
• An SC4020 storage system, running Storage Center 6.6.4 or later, deployed with up to two SC280 expansion enclosures.
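As a quick illustration of these limits, a configuration check might look like the following sketch. This is a hypothetical helper, not part of any Dell tooling; the version thresholds and disk limits come directly from the rules above.

```python
def max_disks(storage_center_version):
    """Return the per-system disk limit for an SC4020.

    Storage Center 6.6.4 or later supports 192 disks;
    6.6.3 or earlier supports 120 disks.
    """
    # Compare dotted version strings numerically, e.g. "6.6.4" -> (6, 6, 4)
    version = tuple(int(part) for part in storage_center_version.split("."))
    return 192 if version >= (6, 6, 4) else 120


def within_limit(storage_center_version, internal_disks, enclosure_disks):
    """Check that chassis disks plus expansion-enclosure disks stay in bounds."""
    total = internal_disks + sum(enclosure_disks)
    return total <= max_disks(storage_center_version)


# 24 internal disks plus two 24-drive SC220 enclosures on Storage Center 6.6.4
print(within_limit("6.6.4", 24, [24, 24]))  # True: 72 <= 192
# The same chassis with large enclosures exceeds the 6.6.3 limit of 120
print(within_limit("6.6.3", 24, [84, 84]))  # False: 192 > 120
```

The same check applies regardless of enclosure model, since the limits are expressed as a total disk count per Storage Center system.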
The SC4020 supports the following communication connections:
• Fibre Channel front-end connections at 8 Gbps.
• iSCSI front-end connections at 10 Gbps, or at 1 Gbps or 10 Gbps through the embedded Ethernet ports (management/replication), connected through an Ethernet switch.
• Back-end connections to expansion enclosures.
• A serial port for system administration at 115,200 bps (service and installation only).
• An Ethernet management port for system administration at up to 1 Gbps.
Front-End Connectivity
Front-end connectivity provides IO paths from servers to a storage system and replication paths from
one Storage Center to another Storage Center. The SC4020 storage system provides the following types
of front-end connectivity:
Fibre Channel: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by
connecting to the storage system Fibre Channel ports through one or more Fibre Channel switches.
Connecting host servers directly to the storage system, without using Fibre Channel switches, is not
supported.
When replication is licensed, the SC4020 can use the front-end Fibre Channel ports to replicate data
to another Storage Center.
iSCSI: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to
the storage system iSCSI ports through one or more Ethernet switches. Connecting host servers
directly to the storage system, without using Ethernet switches, is not supported.
When replication is licensed, the SC4020 can use the front-end iSCSI ports to replicate data to
another Storage Center.
NOTE: When replication is licensed on an SC4020 running Storage Center 6.6.4 or later, the
SC4020 can use the embedded MGMT and REPL ports to perform iSCSI replication to another
Storage Center. In addition, the SC4020 can use the embedded MGMT and REPL ports as front-end
iSCSI ports for connectivity to host servers.
Back-End Connectivity
Back-end connectivity is strictly between the storage system and expansion enclosures, which hold the
physical drives that provide back-end expansion storage.
An SC4020 storage system supports back-end connectivity to multiple expansion enclosures.
System Administration
To perform system administration, communicate with the Storage Center using the Ethernet ports and
serial ports on the storage controllers.
Ethernet port: Used for configuration, administration, and management of Storage Center.
NOTE: The baseboard management controller (BMC) does not have a separate physical port on
the SC4020. The BMC is accessed through the same Ethernet port that is used for Storage
Center configuration, administration, and management.
Serial port: Used for initial configuration of the storage controllers. In addition, it is used to perform support-only functions when instructed by Technical Support Services.
The SC4020 front panel includes the following features: a power indicator, a status indicator, an identification button, a unit ID display, and hard drives. The power indicator lights when the storage system power is on:
• Off: No power
• On steady green: At least one power supply is providing power to the storage system
The SC4020 back panel includes two power supply/storage controller modules. Each power supply has an AC power fault indicator, an AC power status indicator, a DC power fault indicator, and a power switch that controls power for the storage system. Each PSU has one switch.
The storage controller includes the following indicators, along with an identification LED and a USB port.

Battery status indicator:
• Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat
• Fast blinking green (on 0.5 sec. / off 0.5 sec.): Battery is charging
• Steady green: Battery is ready

Battery fault indicator:
• Off: No faults
• Blinking amber: Correctable fault detected
• Steady amber: Uncorrectable fault detected; replace battery

Diagnostic indicator:
• Off: No faults
• Steady amber: Firmware has detected an error
• Blinking amber: Storage controller is performing POST
Figure 9. SC4020 Storage Controller with Two 10 GbE iSCSI Front-End Ports
The indicator codes for this storage controller match those of the storage controller described above, including the battery status, battery fault, and diagnostic indicators, an identification LED, and a USB port. Ethernet port indicators:
• Off: No power
• Steady amber: Link
• Blinking green: Activity
SC4020 Drives
Dell Enterprise Plus hard disk drives (HDDs) and Enterprise solid-state drives (eSSDs) are the only drives
that can be installed in an SC4020 storage system. If a non-Dell Enterprise Plus drive is installed, Storage
Center prevents the drive from being managed.
Each drive has a drive activity indicator and a drive status indicator.
Front-end cabling refers to the connections between the storage system and external devices such as
host servers or another Storage Center.
Front-end connections can be made using Fibre Channel or iSCSI interfaces. Dell recommends connecting the storage system to host servers using the most redundant option available.
• Increased connectivity: Because all ports are active, additional front-end bandwidth is available without sacrificing redundancy.
• Improved redundancy:
  – Fibre Channel: A Fibre Channel port can fail over to another Fibre Channel port in the same fault domain on the storage controller.
  – iSCSI: In a single fault domain configuration, an iSCSI port can fail over to the other iSCSI port on the storage controller. In a two fault domain configuration, an iSCSI port cannot fail over to the other iSCSI port on the storage controller.
• Simplified iSCSI configuration: Each fault domain has an iSCSI control port that coordinates discovery of the iSCSI ports in the domain. When a server targets the iSCSI port IP address, it automatically discovers all ports in the fault domain.
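Conceptually, the control port acts as a single discovery target for its whole fault domain. The sketch below models that behavior; all domain names and IP addresses are hypothetical, and a real server would perform this discovery through its iSCSI initiator software against the control port IP rather than through code like this.

```python
# Hypothetical model of fault-domain discovery in virtual port mode:
# the server contacts only the control port, which reports every
# iSCSI port in its fault domain (on both storage controllers).
fault_domains = {
    "domain-1": {
        "control_port": "10.10.1.10",
        "ports": ["10.10.1.11", "10.10.1.12"],
    },
    "domain-2": {
        "control_port": "10.10.2.10",
        "ports": ["10.10.2.11", "10.10.2.12"],
    },
}


def discover(control_port_ip):
    """Return all iSCSI target ports in the fault domain that owns this control port."""
    for domain in fault_domains.values():
        if domain["control_port"] == control_port_ip:
            return domain["ports"]
    raise LookupError("No fault domain uses control port %s" % control_port_ip)


# Targeting a single IP address yields every port in that fault domain.
print(discover("10.10.1.10"))  # ['10.10.1.11', '10.10.1.12']
```

The point of the model is that the server only needs to know one address per fault domain; the remaining ports are learned automatically.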
A separate fault domain must be created for each front-end Fibre Channel fabric or Ethernet network.
A fault domain must contain a single type of transport media (FC or iSCSI, but not both).
Dell recommends configuring at least two connections from each storage controller to each Fibre
Channel fabric (fault domain) or Ethernet network (fault domain).
Requirements for this mode include a license, switches, multipathing, and iSCSI networks:
• Multipathing: If multiple active paths are available to a server, the server must be configured for MPIO to use more than one path simultaneously.
1. Server 1
2. Server 2
3. FC switch 1
4. FC switch 2
5. Storage system
6. Storage controller 1
7. Storage controller 2
NOTE: To use multiple primary paths simultaneously, the server must be configured to use MPIO.
Legacy Mode
Legacy mode provides storage controller redundancy for a Storage Center by connecting multiple
primary and reserved ports to each Fibre Channel or Ethernet switch.
In legacy mode, each primary port on a storage controller is paired with a corresponding reserved port on the other storage controller. During normal conditions, the primary ports process IO and the reserved ports are in standby mode. If a storage controller fails, the primary ports fail over to the corresponding reserved ports on the other storage controller. This approach ensures that servers connected to the switch do not lose connectivity if one of the storage controllers fails. For optimal performance, the primary ports should be evenly distributed across both storage controllers.
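The primary/reserved pairing can be sketched as a simple mapping, using the P1/R1 port naming that appears in the fault-domain examples in this section. This is an illustrative model of the failover rule, not Storage Center code; the controller names are hypothetical.

```python
# Legacy mode: each primary port is paired with a reserved port
# on the other storage controller.
# primary -> (primary's controller, reserved port, reserved port's controller)
pairs = {
    "P1": ("controller-1", "R1", "controller-2"),
    "P2": ("controller-2", "R2", "controller-1"),
    "P3": ("controller-1", "R3", "controller-2"),
    "P4": ("controller-2", "R4", "controller-1"),
}


def active_ports(failed_controller=None):
    """Return the port serving IO for each fault domain.

    Under normal conditions the primary port is active; if its
    controller fails, IO moves to the reserved port on the peer.
    """
    result = {}
    for primary, (primary_ctrl, reserved, reserved_ctrl) in pairs.items():
        if primary_ctrl == failed_controller:
            result[primary] = (reserved, reserved_ctrl)
        else:
            result[primary] = (primary, primary_ctrl)
    return result


print(active_ports())                # all primary ports active
print(active_ports("controller-1"))  # P1/P3 fail over to R1/R3 on controller-2
```

Note how the mapping also reflects the even distribution of primary ports across both storage controllers: if either controller fails, exactly half of the fault domains fail over.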
A fault domain must contain one type of transport media (FC or iSCSI, but not both).
A fault domain must contain one primary port and one reserved port.
The reserved port must be on a different storage controller than the primary port.
• Storage controller front-end ports: On an SC4020 with FC front-end ports, each storage controller must have two FC front-end ports to connect two paths to each Fibre Channel switch. On an SC4020 with iSCSI front-end ports, each storage controller must have two iSCSI front-end ports to connect two paths to each Ethernet switch.
• Multipathing: If multiple active paths are available to a server, the server must be configured for MPIO to use more than one path simultaneously.
• Zoning: Fibre Channel switches must be zoned to meet the legacy mode zoning requirements.
• Fault domain 1 (shown in orange) consists of primary port P1 on storage controller 1 and reserved port R1 on storage controller 2.
• Fault domain 2 (shown in blue) consists of primary port P2 on storage controller 2 and reserved port R2 on storage controller 1.
• Fault domain 3 (shown in gray) consists of primary port P3 on storage controller 1 and reserved port R3 on storage controller 2.
• Fault domain 4 (shown in red) consists of primary port P4 on storage controller 2 and reserved port R4 on storage controller 1.
1. Server 1
2. Server 2
3. Switch 1
4. Switch 2
5. Storage system
6. Storage controller 1
7. Storage controller 2
NOTE: To use multiple paths simultaneously, the server must be configured to use MPIO.
• Path redundancy: When multiple paths are available from a server to a storage system, a server configured for multipath IO (MPIO) can use multiple paths for IO. If a path becomes unavailable, the server continues to use the remaining active paths.
• Storage controller redundancy: If a storage controller becomes unavailable, the ports on the offline storage controller can move to the available storage controller. Both front-end connectivity modes (legacy mode and virtual port mode) provide storage controller redundancy.
• Port redundancy: If a port becomes unavailable, the port can move to another available port in the same fault domain. Port redundancy is available only in virtual port mode.
Failover Behavior
Table 4 (Failover Behavior Scenarios) describes how the Storage Center behaves under normal operating conditions, when a single port becomes unavailable, and when a storage controller becomes unavailable.
Multipath IO
MPIO allows a server to use multiple paths for IO if they are available.
MPIO software offers redundancy at the path level. MPIO typically operates in a round-robin manner by
sending packets first down one path and then the other. If a path becomes unavailable, MPIO software
continues to send packets down the functioning path.
NOTE: MPIO is operating-system specific; it either loads as a driver on the server or is part of the server operating system.
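The round-robin behavior described above can be sketched as follows. This is a simplified model rather than actual MPIO driver code, and real MPIO implementations also offer other policies (for example, failover-only or least-queue-depth); the path names are hypothetical.

```python
import itertools


class RoundRobinMpio:
    """Minimal model of round-robin MPIO path selection with failover."""

    def __init__(self, paths):
        self.paths = list(paths)
        self._cycle = itertools.cycle(self.paths)
        self.failed = set()

    def fail_path(self, path):
        """Mark a path unavailable; IO continues on the remaining paths."""
        self.failed.add(path)

    def next_path(self):
        """Pick the next available path, skipping failed ones."""
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise RuntimeError("No paths available")


mpio = RoundRobinMpio(["path-A", "path-B"])
print([mpio.next_path() for _ in range(4)])  # alternates: A, B, A, B
mpio.fail_path("path-A")
print([mpio.next_path() for _ in range(2)])  # only path-B is used
```

The model captures the two properties the text describes: IO alternates across paths while both are healthy, and it continues uninterrupted on the surviving path when one becomes unavailable.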
MPIO Behavior
The server must have at least two FC or iSCSI ports to use MPIO.
When MPIO is configured, a server can send IO to multiple ports on the same storage controller.
Best practices documents are available for the following operating systems:
• IBM AIX
• Linux: Dell Storage Center with Red Hat Enterprise Linux (RHEL) 6.x Best Practices; Dell Storage Center with Red Hat Enterprise Linux (RHEL) 7.x Best Practices; Dell Compellent Best Practices: Storage Center with SUSE Linux Enterprise Server 11
• VMware: Dell Storage Center Best Practices with VMware vSphere 5.x
Guidelines for replication zoning by connectivity type:
• Legacy mode: Include all Storage Center physical WWNs from Storage Center system A and Storage Center system B in a single zone.
• Virtual port mode: Include all Storage Center physical WWNs of Storage Center system A and the virtual WWNs of Storage Center system B on the particular fabric. Include all Storage Center physical WWNs of Storage Center system B and the virtual WWNs of Storage Center system A on the particular fabric.

NOTE: Some ports may not be used or dedicated for replication; however, ports that are used must be in these zones.
• For each host server port, create a zone that includes that single server HBA port and all Storage Center front-end ports.
• For Fibre Channel replication, include all Storage Center front-end ports from Storage Center system A and Storage Center system B in a single zone.
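Following the single-initiator guideline above, server zone membership can be sketched programmatically. The WWNs and zone names below are hypothetical examples for illustration only; actual zoning is performed in the Fibre Channel switch's management interface.

```python
def build_server_zones(server_ports, storage_ports):
    """Create one zone per server HBA port.

    Each zone contains that single server port plus all Storage Center
    front-end ports, per the single-initiator zoning guideline.
    """
    return {
        "zone_%s" % name: [wwn] + list(storage_ports)
        for name, wwn in server_ports.items()
    }


# Hypothetical WWNs for illustration only.
server_ports = {"server1_hba1": "21:00:00:aa:bb:cc:dd:01"}
storage_ports = ["50:00:d3:10:00:00:00:01", "50:00:d3:10:00:00:00:02"]

zones = build_server_zones(server_ports, storage_ports)
print(zones["zone_server1_hba1"])
# the single server port followed by every Storage Center front-end port
```

Keeping one initiator per zone limits the scope of fabric state changes: an event on one server HBA is not propagated to zones belonging to other servers.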
A storage system with Fibre Channel front-end ports connects to one or more FC switches, which
connect to one or more host servers.
A storage system with iSCSI front-end ports connects to one or more Ethernet switches, which
connect to one or more host servers.
Virtual Port Mode: Two Fibre Channel Fabrics with Dual 8 Gb 4-Port Storage Controllers
Use two Fibre Channel (FC) fabrics with virtual port mode to prevent an unavailable port, switch, or
storage controller from causing a loss of connectivity between host servers and a storage system with
dual 8 Gb 4-port storage controllers.
About this task
In this configuration, there are two fault domains, two FC fabrics, and two FC switches. The storage
controllers connect to each FC switch using two FC connections.
If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
If an FC switch becomes unavailable, the storage system is accessed from the switch in the other fault
domain.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both FC fabrics.
2.
3.
Example
Figure 14. Storage System in Virtual Port Mode with Dual 8 Gb Storage Controllers and Two FC Switches
1. Server 1
2. Server 2
3.
4.
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Virtual Port Mode: One Fibre Channel Fabric with Dual 8 Gb 4-Port Storage Controllers
Use one Fibre Channel (FC) fabric with virtual port mode to prevent an unavailable port or storage controller from causing a loss of connectivity between the host servers and a storage system with dual 8 Gb 4-port storage controllers.
About this task
In this configuration, there are two fault domains, one fabric, and one FC switch. Each storage controller
connects to the FC switch using four FC connections.
If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the FC fabric.
2.
3.
Example
Figure 15. Storage System in Virtual Port Mode with Dual 8 Gb Storage Controllers and One FC Switch
1. Server 1
2. Server 2
3.
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Legacy Mode: Two Fibre Channel Fabrics with Dual 8 Gb 4-Port Storage Controllers
Use two Fibre Channel (FC) fabrics with legacy mode to prevent an unavailable port, switch, or storage
controller from causing a loss of connectivity between host servers and a storage system with dual 8 Gb
4-port storage controllers.
About this task
In this configuration, there are four fault domains, two FC fabrics, and two FC switches.
Each fault domain contains a set of primary and reserve paths (P1-R1, P2-R2, P3-R3, and P4-R4).
To provide redundancy, the primary port and corresponding reserve port in a fault domain must
connect to the same fabric.
When MPIO is configured on the servers, the primary paths provide redundancy for an unavailable
server or storage controller port. The reserved paths provide redundancy for an unavailable storage
controller.
Steps
1. Connect each server to both FC fabrics.
2.
3.
4.
5.
Example
Figure 16. Storage System in Legacy Mode with Dual 8 Gb Storage Controllers and Two Fibre Channel
Switches
1. Server 1
2. Server 2
3.
4.
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Legacy Mode: One Fibre Channel Fabric with Dual 8 Gb 4-Port Storage Controllers
Use one Fibre Channel (FC) fabric with legacy mode to prevent an unavailable storage controller from causing a loss of connectivity between host servers and a storage system with dual 8 Gb 4-port storage controllers.
About this task
In this configuration, there are two fault domains, one FC fabric, and one FC switch.
Each fault domain contains a set of primary and reserve paths (P1-R1, P2-R2, P3-R3, and P4-R4).
To provide redundancy, the primary port and corresponding reserve port in a fault domain must
connect to the same fabric.
When MPIO is configured on the servers, the primary paths provide redundancy for an unavailable
server or storage controller port. The reserved paths provide redundancy for an unavailable storage
controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the FC fabric.
2.
3.
4.
5.
Example
Figure 17. Storage System in Legacy Mode with Dual 8 Gb Storage Controllers and One Fibre Channel Switch
1. Server 1
2. Server 2
3.
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.
3.
Virtual Port Mode: Two iSCSI Networks with Dual 10 GbE 2-Port Storage Controllers
Use two iSCSI networks with virtual port mode to prevent an unavailable port, switch, or storage controller from causing a loss of connectivity between the host servers and a storage system with dual 10 GbE 2-port storage controllers.
About this task
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The
storage controllers connect to each Ethernet switch using one iSCSI connection.
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the
switch in the other fault domain.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both iSCSI networks.
2.
3.
Example
Figure 20. Storage System in Virtual Port Mode with Dual 10 GbE Storage Controllers and Two Ethernet
Switches
1. Server 1
2. Server 2
3.
4.
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Virtual Port Mode: One iSCSI Network with Dual 10 GbE 2-Port Storage Controllers
Use one iSCSI network with virtual port mode to prevent an unavailable port or storage controller from causing a loss of connectivity between the host servers and a storage system with dual 10 GbE 2-port storage controllers.
About this task
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each
storage controller connects to the Ethernet switch using two iSCSI connections.
If a physical port becomes unavailable, the storage system is accessed from another port on the
Ethernet switch.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the iSCSI network.
2.
3.
Example
Figure 21. Storage System in Virtual Port Mode with Dual 10 GbE Storage Controllers and One Ethernet
Switch
1. Server 1
2. Server 2
3.
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Legacy Mode: Two iSCSI Networks with Dual 10 GbE 2-Port Storage Controllers
Use two iSCSI networks with legacy mode to prevent an unavailable switch or storage controller from causing a loss of connectivity between host servers and a storage system with dual 10 GbE 2-port storage controllers.
About this task
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches.
Each fault domain contains sets of primary and reserve paths (P1-R1 and P2-R2).
To provide redundancy, the primary port and the corresponding reserve port in a fault domain must
connect to the same network.
When MPIO is configured on the iSCSI servers, the primary paths provide redundancy for an
unavailable server port. The reserved paths provide redundancy for an unavailable storage controller.
Steps
1.
2.
3.
Example
Figure 22. Storage System in Legacy Mode with Dual 10 GbE Storage Controllers and Two Ethernet Switches
1. Server 1
2. Server 2
3.
4.
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
Legacy Mode: One iSCSI Network with Dual 10 GbE 2-Port Storage Controllers
Use one iSCSI network with legacy mode to prevent an unavailable storage controller from causing a loss
of connectivity between host servers and a storage system with dual 10 GbE 2-port storage controllers.
About this task
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch.
Each fault domain contains sets of primary and reserve paths (P1-R1 and P2-R2).
To provide redundancy, the primary port and corresponding reserve port in a fault domain must connect to the same network.
When MPIO is configured on the servers, the primary paths provide redundancy for an unavailable
server port. The reserved paths provide redundancy for an unavailable storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the iSCSI network.
2.
3.
Example
Figure 23. Storage System in Legacy Mode with Dual 10 GbE Storage Controllers and One Ethernet Switch
1. Server 1
2. Server 2
3.
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Storage Center Best Practices document located on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
The SFP+ transceiver modules are installed into the front-end ports of a storage controller. Fiber-optic
cables are connected from the SFP+ transceiver modules in a storage controller to SFP+ transceiver
modules in Ethernet switches.
Use only Dell supported SFP+ transceiver modules with the SC4020. Other generic SFP+ transceiver
modules are not supported and may not work with the SC4020.
The SFP+ transceiver module housing has an integral guide key that is designed to prevent you from
inserting the transceiver module incorrectly.
Use minimal pressure when inserting an SFP+ transceiver module into an FC port. Forcing the SFP+
transceiver module into a port may damage the transceiver module or the port.
The SFP+ transceiver module must be installed into a port before you connect the fiber-optic cable.
The fiber-optic cable must be removed from the SFP+ transceiver module before you remove the
transceiver module from the port.
NOTE: When you are not using a transceiver module or fiber-optic cable, always install protective
covers to prevent contamination.
Cleaning SFP+ Transceiver Modules
Dell recommends using a can of compressed air to clean the fiber-optic ports of SFP+ transceiver
modules.
Prerequisites
Handle the SFP+ transceiver module in an ESD safe environment using the proper safety precautions.
Make sure that the can of compressed air is approved for cleaning fiber optics.
Make sure that the can of compressed air has a straw inserted into the nozzle.
Steps
1. Spray the can of compressed air for 3 to 5 seconds to make sure that any liquid propellant is expelled from the straw.
2. Align a fiber-optic port of the transceiver module with the straw on the can of compressed air. Hold the transceiver module near the end of the straw, but do not touch the inner surfaces of the module.
3. Hold the can of compressed air upright and level with the transceiver module.
CAUTION: Tipping the can of compressed air may release liquids in the air stream.
4. Use the can of compressed air to blow out particles from the inside of the transceiver module.
5. Examine the optical surface of the connector using high-intensity light and a magnifying tool. If contaminants still exist, repeat the cleaning process.
6. Immediately install the protective dust cover into the transceiver module to avoid recontamination. Keep the protective cover in the transceiver module until you are ready to connect it to a fiber-optic cable.
Cleaning Fiber-Optic Cables
Prerequisites
• Do not allow the end of the fiber-optic cable to contact any surface, including your fingers.
• Make sure that the can of compressed air is approved for cleaning fiber optics.
• Make sure that the can of compressed air has a straw inserted into the nozzle.
• Only use fresh (dry) spectroscopic-grade methanol or isopropyl alcohol as a cleaning solvent.
• Only use lens tissues with long fibers and a low ash content that have no chemical additives.
Steps
1. Spray the can of compressed air for 3 to 5 seconds to make sure that any liquid propellant is expelled from the straw.
2. Hold the can of compressed air upright and level with the fiber-optic cable connector.
CAUTION: Tipping the can of compressed air may release liquids in the air stream.
3. Use the can of compressed air to blow particles off the surface of the fiber-optic cable connector.
4.
5. Place the wet portion of the lens tissue on the optical surface of the fiber-optic cable connector and slowly drag it across.
6. Examine the optical surface of the fiber-optic cable connector using high-intensity light and a magnifying tool. If streaks or contaminants still exist, repeat the cleaning process using a fresh lens tissue.
7. Immediately install the protective dust cover over the end of the cable to avoid recontamination. Keep the protective cover on the end of the cable until you are ready to connect it.
Do not open any panels, operate controls, make adjustments, or perform procedures on a laser device other than those specified herein.
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD damage to the transceiver module, take the following precautions:
• Place transceiver modules in antistatic packing material when transporting or storing them.
Steps
1. Position the transceiver module so that the key is oriented correctly to the port in the storage
controller.
2. Insert the transceiver module into the port until it is firmly seated and the latching mechanism clicks.
The transceiver modules are keyed so that they can only be inserted with the correct orientation. If a
transceiver module does not slide in easily, ensure that it is correctly oriented.
CAUTION: To reduce the risk of damage to the equipment, do not use excessive force when
inserting the transceiver module.
3. Position the fiber-optic cable so that the key (the ridge on one side of the cable connector) is aligned with the slot in the transceiver module.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-optic cable is not connected, replace the protective covers on the ends of the cable.
4. Insert the fiber-optic cable into the transceiver module until the latching mechanism clicks.
5. Insert the other end of the fiber-optic cable into the SFP+ transceiver module of an Ethernet switch.
Do not open any panels, operate controls, make adjustments, or perform procedures on a laser device other than those specified herein.
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD damage to the transceiver module, take the following precautions:
Place transceiver modules in antistatic packing material when transporting or storing them.
Steps
1. Remove the fiber-optic cable that is inserted into the transceiver.
a. Make certain the fiber-optic cable is labeled before removing it.
b. Press the release clip on the bottom of the cable connector to remove the fiber-optic cable from
the transceiver.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiberoptic cable is not connected, replace the protective covers on the ends of the cables.
2. Grasp the bail clasp latch on the transceiver module and pull the latch out and down to eject the transceiver module from the socket.
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.
3. Connect the Ethernet management port on storage controller 1 to the Ethernet switch.
4. Connect the Ethernet management port on storage controller 2 to the Ethernet switch.
1. Corporate/management network
2. Ethernet switch
3. Storage system
4. Storage controller 1
5. Storage controller 2
NOTE: To use the management port as an iSCSI port, cable the management port to a network
switch dedicated to iSCSI traffic. Special considerations must be taken into account when
sharing the management port. For environments where the Storage Center system
management ports are mixed with network traffic from other devices (such as voice, backups,
or other computing devices), separate the iSCSI traffic from management traffic using VLANs.
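One way to implement the VLAN separation described in the note is to place iSCSI traffic on a tagged VLAN subinterface on the Linux host side. The interface name (em1), VLAN ID (100), and IP address below are hypothetical examples, not values from this manual; this is a sketch only, and the actual configuration depends on your host operating system and on a matching tagged-port configuration on the Ethernet switch.

```
# Hypothetical example: tagged VLAN subinterface for iSCSI traffic on a
# Linux host using iproute2. Interface name, VLAN ID, and address are
# illustrative assumptions, not values from this manual.
ip link add link em1 name em1.100 type vlan id 100
ip addr add 192.168.100.10/24 dev em1.100
ip link set dev em1.100 up
```

The switch port carrying this link must be configured to tag VLAN 100 for iSCSI while leaving management traffic on its own VLAN.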
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.
1. Corporate/management network
2. Ethernet switch 1
3. Ethernet switch 2
4. Storage system
5. Storage controller 1
6. Storage controller 2
3. To configure the fault domains and ports, log in to the Storage Center System Manager and select Storage Management → System Setup → Configure Local Ports.
Cabling the Management Port and Replication Port for iSCSI Replication
If replication is licensed, the management (MGMT) port and replication (REPL) port can be used to
replicate data to another Storage Center.
About this task
Connect the management port and replication port on each storage controller to an Ethernet switch
through which the Storage Center can perform replication.
NOTE: In this configuration, Storage Center system management traffic and iSCSI traffic use the
same physical network ports. For environments where the Storage Center system management
ports are mixed with network traffic from other devices (such as voice, backups, or other computing
devices), separate iSCSI traffic from management traffic using VLANs.
Steps
1. Connect embedded fault domain 1 (shown in orange) to the iSCSI network.
a. Connect Ethernet switch 1 to the corporate/management network (shown in green).
b. Connect the management port on storage controller 1 to Ethernet switch 1.
c. Connect the management port on storage controller 2 to Ethernet switch 1.
1. Corporate/management network
2. Ethernet switch 1
3. Ethernet switch 2
4. Storage system
5. Storage controller 1
6. Storage controller 2
3. To configure the fault domains and ports, log in to the Storage Center System Manager and select Storage Management → System Setup → Configure Local Ports.
Two iSCSI Networks using the Embedded Ethernet Ports on a Storage System
with Fibre Channel Storage Controllers
Use two iSCSI networks to prevent an unavailable port, switch, or storage controller from causing a loss
of connectivity between the host servers and a storage system with dual Fibre Channel (FC) storage
controllers.
About this task
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches.
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the
switch in the other fault domain.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: In this configuration, Storage Center system management traffic and iSCSI traffic use the
same physical network ports. For environments where the Storage Center system management
ports are mixed with network traffic from other devices (such as voice, backups, or other computing
devices), separate iSCSI traffic from management traffic using VLANs.
Steps
1. Connect each server and Ethernet switch 1 to the corporate/management network (shown in green).
2. Connect the servers that support iSCSI connections to both iSCSI networks.
Figure 34. Two iSCSI Networks using the Embedded Ethernet Ports on Dual Fibre Channel Storage Controllers
1. Corporate/management network
2. Server 1 (FC)
3. Server 2 (iSCSI)
8. Storage system
9. Storage controller 1
To configure the fault domains and ports, log in to the Storage Center System Manager and select Storage Management → System Setup → Configure Local Ports.
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Dell Storage Center Best Practices documents on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
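As a sketch of the "install or enable MPIO" step, assuming Linux hosts using the device-mapper multipath package (Windows hosts would instead enable the MPIO feature, for example with the mpclaim tool), a minimal /etc/multipath.conf might look like this. The settings shown are generic defaults, not Dell-validated values; consult the Best Practices documents for the recommended configuration.

```
# Minimal /etc/multipath.conf sketch (generic defaults, not
# Dell-validated settings).
defaults {
    user_friendly_names yes
}
```

On RHEL-family distributions, multipathing is then typically enabled with `mpathconf --enable` and started with `systemctl start multipathd`.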
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the
switch in the other fault domain.
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: In this configuration, Storage Center system management traffic and iSCSI traffic use the
same physical network ports. For environments where the Storage Center system management
ports are mixed with network traffic from other devices (such as voice, backups, or other computing
devices), separate iSCSI traffic from management traffic using VLANs.
Steps
1. Connect each server and Ethernet switch 1 to the corporate/management network (shown in green).
Figure 35. Two iSCSI Networks using the Embedded Ethernet Ports on Dual iSCSI Storage Controllers
1. Corporate/management network
2. Server 1
3. Server 2
4. Ethernet switch 1
5. Ethernet switch 2
6. Storage system
7. Storage controller 1
8. Storage controller 2
To configure the fault domains and ports, log in to the Storage Center System Manager and select Storage Management → System Setup → Configure Local Ports.
Next steps
Install or enable MPIO on the host servers.
NOTE: For the latest best practices, see the Dell Storage Center Best Practices documents on the
Dell TechCenter (http://en.community.dell.com/techcenter/storage/).
This section contains technical specifications for the SC4020 storage systems.
Technical Specifications
The technical specifications of the SC4020 storage system are displayed in the following tables.
Table 7. Hard Drives
Drives
SAS hard drives
Storage Controllers
Configurations
Storage Connectivity
Configurations
Management
Ethernet connectors
6 Gbps SAS connectors for SAS port redundancy and additional expansion
enclosures
NOTE: SAS connectors are SFF-8086/SFF-8088 compliant
USB Connector
Serial connector
LED Indicators
Storage controller module
Two single-color LEDs per Ethernet port indicating activity and link speed
Four dual-color LEDs per SAS connector indicating port activity and status
One single-color LED indicating status
One single-color LED indicating fault
One single-color LED for identification
Eight single-color LEDs for diagnostics
Power supply/cooling fan
Four LED status indicators for Power Supply Status, AC Fail status, DC Fail
status, and Fan Fail status
Front panel
Power Supplies
AC power supply (per power supply)
Wattage
Voltage
Heat dissipation
Maximum inrush current: Under typical line conditions and over the entire system ambient operating range, the inrush current may reach 45 A per power supply for 40 ms or less.
Up to 1.2 A at +5 V
Up to 0.5 A at +12 V
Physical
Height
Width
Depth
Weight (maximum configuration)
24 kg (53 lb)
19 kg (41 lb)
Environmental
For additional information about environmental measurements for specific storage system
configurations, see dell.com/environmental_datasheets.
Temperature
Operating
Storage
Relative humidity
Operating
Storage
Maximum vibration
Operating
Storage
Maximum shock
Operating
Storage
Half-sine shock 30 G ± 5% with a pulse duration of 10 ms ± 10% (all sides)
Altitude
Operating
For altitudes above 915 m (3,000 ft), the maximum operating temperature is derated 1 °C per 300 m (1 °F per 547 ft).
Storage
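The altitude derating rule above can be expressed as a short calculation. The base temperature used here is an assumed placeholder, since the rated maximum operating temperature did not survive in this table; substitute the actual limit for your configuration.

```python
# Hypothetical helper illustrating the altitude derating rule: above
# 915 m (3,000 ft), the maximum operating temperature drops by 1 degree C
# for every 300 m of additional altitude.
# BASE_MAX_C is an assumed placeholder, not a value from this manual.

BASE_MAX_C = 35.0        # assumed sea-level maximum operating temperature
DERATE_START_M = 915     # altitude at which derating begins
DERATE_C_PER_M = 1.0 / 300.0

def max_operating_temp_c(altitude_m: float) -> float:
    """Return the derated maximum operating temperature in degrees C."""
    excess = max(0.0, altitude_m - DERATE_START_M)
    return BASE_MAX_C - excess * DERATE_C_PER_M

print(round(max_operating_temp_c(2000), 1))  # prints 31.4 for a 2,000 m site
```

For a site at 2,000 m, the excess altitude is 1,085 m, giving roughly 3.6 °C of derating below the assumed base limit.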