1. Direct Attached Storage


1. Introduction to DAS
Direct-attached storage, or DAS, is the most basic level of storage, in which storage devices are part of the host
computer, as with drives, or directly connected to a single server. Network workstations must therefore access the
server in order to connect to the storage device. This is in contrast to networked storage such as NAS and SAN,
which are connected to workstations and servers over a network. As the first widely popular storage model, DAS
products still comprise a large majority of the installed base of storage systems in today's IT infrastructures.
Although the implementation of networked storage is growing at a faster rate than that of direct-attached storage,
it is still a viable option by virtue of being simple to deploy and having a lower initial cost when compared to
networked storage. When considering DAS, it is important to know what your data availability requirements are. In
order for clients on the network to access the storage device in the DAS model, they must be able to access the
server it is connected to. If the server is down or experiencing problems, it will have a direct impact on users'
ability to store and access data. In addition to storing and retrieving files, the server also bears the load of
processing applications such as e-mail and databases. Network bottlenecks and slowdowns in data availability
may occur as server bandwidth is consumed by applications, especially if there is a lot of data being shared from
workstation to workstation.
DAS is ideal for localized file sharing in environments with a single server or a few servers - for example, small
businesses or departments and workgroups that do not need to share information over long distances or across
an enterprise. Small companies traditionally utilize DAS for file serving and e-mail, while larger enterprises may
leverage DAS in a mixed storage environment that likely includes NAS and SAN. DAS also offers ease of
management and administration in this scenario, since it can be managed using the network operating system of
the attached server. However, management complexity can escalate quickly with the addition of new servers,
since storage for each server must be administered separately.
From an economical perspective, the initial investment in direct-attached storage is cheaper. DAS can also serve
as an interim solution for those planning to migrate to networked storage in the future. For organizations that
anticipate rapid data growth, it is important to keep in mind that DAS is limited in its scalability.

1.1. Advantages of DAS


 Expand Storage Without Adding Servers
When the internal storage capacity of your server is maximized, simply adding an INLINE storage array
allows you to increase network storage capacity without adding the management headaches associated
with putting another server on the network.
 Protect Your Data and Your Operations
INLINE arrays feature redundant hot swappable components. Should a component on the system fail, its
work is picked up by a redundant component. The failed component can be replaced or repaired without
taking the system off-line.
 Scalable Storage
Direct attached storage allows you to easily scale as your organization grows. Start out with a few hundred
gigabytes and increase to 17.2 terabytes* without adding servers.
*Based on current maximum drive size of 300 GB

1.2. Direct Attached Storage (DAS) Model


The Direct Attached Storage (DAS) model can be thought of as the way computer systems worked before
networks. The DAS model contains three basic software layers: Application Software, File System Software
(which is part of the Unix or NT operating system software) and Disk Controller Software. The elements are
usually located close together physically and operate as a single entity. In the DAS model, the UNIX or NT
application software makes an I/O request to the file system, which organizes files and directories on each
individual disk partition into a single hierarchy. In UNIX, the file system also manages the buffer cache.
When database applications are installed, the database software sometimes bypasses the UNIX buffer cache and
provides its own cache as with Oracle’s System Global Area (SGA). The system or database software
determines the location of the I/O requested by the application and manages all caching activity. If the data is not in cache, the file system then makes a request to the disk controller software, which retrieves the data from its disk or RAID array and returns the data to the file system to complete the I/O process.

Fig.1.3 - Direct Attached Storage Model
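The minimal sketch below (illustrative only, not taken from this material; all class and variable names are invented) walks the same three layers in code: application software issues a read, the file system checks its buffer cache, and only on a miss does the disk controller software touch the disk or RAID array.

    # Illustrative model of the DAS I/O path described above.
    class DiskControllerSoftware:
        def __init__(self, blocks):
            self.blocks = blocks                      # simulated disk / RAID array contents

        def read(self, block_id):
            return self.blocks[block_id]              # physical read from the disk or RAID array

    class FileSystem:
        def __init__(self, controller):
            self.controller = controller
            self.buffer_cache = {}                    # UNIX-style buffer cache

        def read(self, block_id):
            if block_id in self.buffer_cache:         # cache hit: no disk access needed
                return self.buffer_cache[block_id]
            data = self.controller.read(block_id)     # cache miss: ask the controller software
            self.buffer_cache[block_id] = data
            return data

    # "Application software" making an I/O request through the stack:
    fs = FileSystem(DiskControllerSoftware({0: b"payroll.dat", 1: b"mail.db"}))
    print(fs.read(0))   # first read is satisfied from disk
    print(fs.read(0))   # repeat read is satisfied from the buffer cache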

1.3. Ideal Situations for DAS


 When fast access to storage is required, but the newest SAN technology prices remain out of reach or
are not necessary.
 For very cost-conscious customers, DAS will remain the least expensive storage mechanism for a long
time. Of course, this is only in terms of hard physical media costs. When a full comparison with other
technologies is completed that takes into consideration administrative overhead and storage efficiencies,
you might find that DAS is not at the top of the chart anymore.
 For very small environments that just don't need anything more.

1.4. Adaptec Direct Attached Storage – SANbloc 2GB JBOD


The SANbloc 2GB JBOD is end-to-end 2Gb Fiber Channel. 2Gb Fiber Channel, the latest generation of Fiber
Channel network interface technology, offers greatly accelerated performance in network storage environments,
providing simultaneous transmit and receive operations in full-duplex mode as high as 400 MB/s. Each subsystem
boasts simplified cabling and supports up to 14 disk drives in a dense 3U form factor. The SANbloc 2GB JBOD
can be scaled in multiple dimensions, enabling flexible configuration of capacity, performance and functionality, to
match and grow with virtually any application or IT environment.

Highlights
 Upgradeable to RAID with the swap of a module
 Redundant data paths with dual-ported Fiber drives and dual Fiber Channel loops
 Quad Loop feature provides over 700 MB/s from a single subsystem
 Enhanced enclosure services (SES) monitoring and reporting
 Intuitive, comprehensive management with Adaptec Storage Examiner

1.5. Connectivity
Direct-attached storage refers to a storage device, such as a hard drive or tape drive, that is directly connected to
a single computer. These connections are usually made by one of the following methods:
 Enhanced Integrated Drive Electronics (EIDE)
 Small Computer Systems Interface (SCSI)
 Fiber Channel

EIDE connects internal Advanced Technology Attachment (ATA) storage to a computer, SCSI provides a means
to connect both internal and external storage to a computer, and Fiber Channel connects external storage to a
computer. Fiber Channel is most often used with external storage in a SAN. Although Fiber Channel can be used
for direct-attached storage, less expensive SCSI storage can offer similar performance, but works only over
limited distances due to the physical limitations of the SCSI bus. When external direct-attached storage devices
are located more than twelve meters away from a server, Fiber Channel must be used.
Direct-attached storage retains its high popularity because of its low entry cost and ease of deployment. The
simple learning curve associated with direct-attached storage technologies is also a factor many organizations
consider. Direct-attached storage also makes it easy to logically and physically isolate data, because the data can
only be directly accessed through a single server.
Although it is simple to deploy, there are other management considerations to take into account with direct-
attached storage:
 Direct-attached storage can be more expensive to manage because you cannot redeploy unused
capacity, which results in underutilization.
 Having storage distributed throughout the organization makes it difficult to get a consolidated view of
storage across the organization.
 Disaster recovery scenarios are limited because a disaster will cause both server and storage outages.
 For data backup and recovery, you need to choose whether to attach local backup devices to each
server, install dual network adapters in each server and back up the data over a separate LAN, or back
up the server over the corporate LAN. Large organizations have found that placing stand-alone tape
drives in individual servers can quickly become expensive and difficult to manage, especially when the
number of servers in the organization grows into the hundreds. In this situation, it is often best to back up
servers over a network to a storage library, which offers backup consolidation and eases management.

1.5.1. Enhanced IDE


Enhanced Integrated Drive Electronics (EIDE), sometimes called "Super IDE" or Fast ATA, is an improved version of the original IDE interface used to connect hard drives and CD-ROM drives to a PC. As with IDE, the controller is built into the drive itself and does not require an accessory card on the bus. EIDE incorporates Western Digital's ATA-2 and ATAPI extensions to the IDE standard, uses Logical Block Addressing (LBA) to raise the maximum disk size from 504 MB to 8.4 GB, more than doubles the maximum data transfer rate (offering rates between 4 and 16.6 MB/sec with Direct Memory Access support), and can address up to four devices per PC instead of two. Now that hard disks with capacities of 1 GB or more are commonplace in PCs, EIDE is an extremely popular interface, and new hard drives should be EIDE. Its primary competitor is SCSI-2, which also supports large hard disks and high transfer rates. On the PC platform, EIDE gives SCSI-2 a run for its money; while most people agree that SCSI-2 is technically superior, EIDE is cheaper to implement, which gained it widespread acceptance. (See also IDE and SCSI.)
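As an aside (not in the original text), the familiar 504 MB / 528-million-byte IDE ceiling falls straight out of the classic CHS addressing limit of 1,024 cylinders, 16 heads, and 63 sectors of 512 bytes:

    cylinders, heads, sectors, sector_size = 1024, 16, 63, 512
    limit_bytes = cylinders * heads * sectors * sector_size
    print(limit_bytes)            # 528482304 bytes
    print(limit_bytes / 10**6)    # ~528 (decimal megabytes)
    print(limit_bytes / 2**20)    # ~504 (binary megabytes), the figure quoted above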
Suggesting a preference for a serial connection over a parallel one would once have drawn laughter: serial COM ports have long been among the slowest connections in modern computers. However, the newest version of Advanced Technology Attachment (ATA), Serial ATA, is set to sweep parallel ATA off its feet.
 PATA vs. SATA

 Hardware, Configurations & Pictures

1.5.1.1 PATA
Parallel ATA is the primary internal storage interconnect for the desktop, connecting the host system to peripherals such as hard drives, optical drives, and removable magnetic media devices. Parallel ATA is an extension of the original ATA interface introduced in the mid-1980s and maintains backward compatibility with all previous versions of this technology. The latest revision of the Parallel ATA specification accepted by the ANSI-accredited INCITS T13 committee, the governing body for ATA specifications, is ATA/ATAPI-6, which supports data transfers of up to 100 MB/sec. Development of the ATA/ATAPI-7 specification, an update of the parallel bus architecture that provides up to 133 MB/sec, was recently finalized.

1.5.1.2 SATA
SATA is the next-generation internal storage interconnect, designed to replace parallel ATA technology. SATA is the proactive evolution of the ATA interface from a parallel bus to a serial bus architecture. This architecture overcomes the electrical constraints that are increasing the difficulty of continued speed enhancements for the classic parallel ATA bus. SATA will be introduced at 150 MB/sec, with a roadmap already planned to 600 MB/sec, supporting up to 10 years of storage evolution based on historical trends. Though SATA will not be able to directly interface with legacy Ultra ATA hardware, it is fully compliant with the ATA protocol and thus is software compatible.

1.5.1.3 Advantages of SATA over PATA


Parallel ATA                         | Serial ATA                              | SATA Advantages
Up to 133 Mbytes/sec                 | Up to 150 Mbytes/sec (1.5 Gbits/sec)    | Faster, and room for expansion
Tiny jumpers                         | No master/slave, point to point         | Ease of use
Eighteen-inch cable                  | Up to 39-inch (1-meter) cable           | Ease of integration
Two-inch-wide ribbon cable           | Thin cable (1/4-inch)                   | Improved system airflow
80 conductor                         | 7-wire differential (noise canceling)   | Eliminates data integrity problems
40 pin and socket                    | Blade and beam connector (snap in)      | Ease of use
Two-inch-wide data connector         | ½-inch-wide data connector              | Ease of integration
Onboard DMA controller               | First-party DMA support                 | Performance enhancement
High 5V tolerance for legacy drives  | Low voltage (.25V) tolerance            | Design improvement
Limited (legacy command queuing)     | Intelligent data handling               | Performance enhancement
----                                 | Hot swap                                | Ease of integration/use
CRC on data only                     | CRC on data, command, status            | Enhanced data protection

1.5.1.4. PATA vs. SATA


Parallel ATA (PATA) has been the industry standard for connecting hard drives and other devices in computers
for well over a decade. However, due to a few major limitations, PATA could be a quickly dying breed with the
introduction of Serial ATA (SATA). To compare, PATA cables are limited to only 18 inches in length, while SATA cables can be up to 1 meter (about 39 inches) long. It is possible to have longer cables but, due to attenuation, such cables are generally more trouble than they are worth.
PATA cables are large and bulky and can easily restrict airflow. With the onslaught of better and faster devices,
computers continue to generate more heat and this can cause many problems including complete computer
failure. PATA cables are 40 wires wide and they block precious space, which can restrict airflow greatly. SATA
cables are only 7 pins wide and, with their longer maximum length, can be easily routed to not restrict any airflow
at all. The change to serial transfer is what allows the cable to be so thin: only two data channels are required, one for sending and one for receiving data. Parallel cables use multiple wires for both sending and receiving; in total, 26 of their wires are used for data transfer.

Another comparison is that SATA devices require much less power than PATA. Chip core voltages continue to
decline and, because of this, PATA's 5-volt requirement is increasingly difficult to meet. In contrast, SATA only
requires 250 mV to effectively operate. SATA is also hot-swappable meaning that devices can be added or
removed while the computer is on.
The last, and most important, difference is the maximum bandwidth between the two technologies. The true
maximum transfer rate of PATA is 100 MB/sec with bursts up to 133 MB/sec. With the first introduction of SATA,
the maximum transfer rate is 150 MB/sec. This is supposed to increase every 3 years with a maximum transfer of
300 MB/sec in 2005 and 600 MB/sec in 2008. Finally, SATA doesn't require any changes to existing operating
systems for implementation. SATA is 100% software compatible and, with SATA adapters, some hardware
doesn't have to be immediately replaced.

                          | Parallel ATA                          | Serial ATA
Maximum Speed             | 100 MB/s with bursts up to 133 MB/s   | 150 MB/s currently; 300 MB/s by 2005 and 600 MB/s by 2008
Cable Length              | 18 inches                             | 1 meter (about 40 inches)
Cable Pins                | 40                                    | 7
Power Connector Pins      | 4                                     | 15
Data Transfer Wires Used  | 26                                    | 2
Power Consumption         | 5 V                                   | 250 mV
Hot Swappable?            | No                                    | Yes

1.5.1.5. Hardware, Configurations & Pictures


Between the last quarter of 2002 and the first quarter of 2003, motherboards with onboard SATA adapters were
released to the public market. For users that are not ready to purchase new motherboards, SATA RAID
controllers are available as well. Most hard drive manufacturers released their first SATA hard drives for sale in the first or second quarter of 2003. For those that would like to take advantage of SATA's longer and thinner cabling without having to purchase new hard drives, SATA adapters can be purchased to convert current drives to accept SATA cables. To fully implement the SATA standard, a new motherboard, a new hard drive or other storage device, and a new power supply or power adapter must be purchased. It is unknown how soon power supplies with new SATA power connectors will be available for sale but, for the time being, power adapters can be used with existing power supplies.
When looking at the hardware for serial connections, one can easily see the differences between it and parallel
ATA. A side-by-side comparison of the two connectors on a motherboard is shown in figure 1.5.1.5.1. As shown,
the SATA connector is much smaller than its parallel counterpart. This effectively means that motherboard manufacturers will have more room to include additional onboard options and to offer better board layouts, as these will no longer be so restricted by the ATA connectors.

Fig. 1.5.1.5.1 - These pictures show the difference in size of PATA and SATA connectors.
Furthermore, a look at figure 1.5.1.5.2 shows a PATA cable on the left and an SATA cable on the right. As is
easily apparent, the SATA cable is much more builder friendly and can be easily routed out of the way in a case
due to its length and flexibility.

Fig. 1.5.1.5.2 - SATA is the undisputed champion in terms of size and flexibility of cables.

Figure 1.5.1.5.3 shows an SATA power adapter with a 15-pin connector as opposed to the customary 4 pin
connectors in parallel ATA. The new 15-pin connector may sound as though it would be a hindrance in
comparison to the older ones but the two connectors measure almost the same width. The reason for the 15-pin
connector is so that different voltages are supplied to the appropriate places. In addition to the customary 5 V and 12 V wires, new 3.3 V wires are included for the new devices. Nine of the pins provide the positive, negative, and ground contacts for each voltage. The remaining six pins support SATA's hot-swap feature, designating an additional two contacts per voltage for this purpose.

Fig. 1.5.1.5.3 - As seen in the picture above, SATA power connectors are still the same size as current power
connectors even though they have a total of 15 contacts.

As discussed earlier in this article, SATA to PATA adapters are currently available to allow existing hard drives to
be used with new motherboards or controller cards and one is shown below in figure 1.5.1.5.4.

Fig. 1.5.1.5.4 – SATA to PATA Adapter


The package, made by Soyo, includes the SATA to PATA adapter, 1 SATA cable and a short instructional
manual. To connect this to a hard drive, simply connect the 40-pin PATA adapter to the connector on the drive as
shown in figure 1.5.1.5.6. Also, 7 jumpers will have to be set according to the instructions shown in the manual.

Fig. 1.5.1.5.5 – Converting PATA hard disk to SATA Technology

Then, connect one end of the serial cable to the adapter and the other end to a motherboard or controller card.
Finally, connect a power connector to both the hard drive and the SATA adapter. This device can be used to
connect a PATA drive to a SATA connector on a motherboard or controller card, connect a SATA drive to a PATA
connector on a motherboard or, with the use of two adapter kits, connect a PATA drive to a PATA connector on a
motherboard using an SATA cable. Figure 1.5.1.5.7 below shows a comparison of the inside of a computer case
with a PATA cable connected to a hard drive and a SATA cable connected to a hard drive.

Fig. 1.5.1.5.6 - Standard PATA cable connection.

Fig. 1.5.1.5.7 - It’s quite easy to distinguish the winner here: SATA takes the gold without a doubt.

1.5.2. SCSI
1.5.2.1. Introduction
SCSI stands for (S)mall (C)omputer (S)ystems (I)nterface.
The official name of the SCSI standard is: ANSI X3.131 - 1986.
The SCSI interface is a local bus type interface for connecting multiple devices (up to eight), designated as either
initiators (drivers) or targets (receivers).

There are two electrical alternatives for this standard:


1. Single-ended type and
2. Differential type.
Single-ended and Differential devices are different and MAY NOT be mixed on the same bus; however, the
newest type of SCSI, called 'LVD' or 'Low Voltage Differential', can be used on the same SCSI bus with 'SE' type
devices, if:
a. Your SCSI card supports this; and
b. You are using a multi-mode type terminator, sometimes designated as 'LVD/SE' type.
The card is imprinted 'LVD/SE' on the internal 68-pin connector, and it does work with both types of devices on
the same bus.
In a SCSI environment, devices are daisy-chained together using a common cable. Both ends of the cable must
be terminated.
All signals are common between all SCSI devices.

SCSI-1
SCSI-1, the original SCSI standard, was approved in 1986. It supports transfer rates of up to 5 MBps and up to 7
devices on an 8-bit bus (not including the controller card). The most common type of connector for SCSI-1 is the
Centronics 50, also called Telco 50 or Amphenol 50 (for external use). Internally, SCSI-1 is always run on Dual-
Row Socket (F) connectors on a 50-conductor ribbon cable.

SCSI-2
Approved in 1994, SCSI-2 introduced optional 16- and 32-bit buses called "Wide SCSI". The transfer rate, normally 10 MBps, can be pushed up to 40 MBps when combined with Fast and Wide SCSI. SCSI-2 usually uses
a Micro-D 50 pin connector with side clips, also known as the Mini-50, Micro-50, and Micro DB50 for external
cables. Internally it is run on the same 50-pin ribbon cables as is SCSI-1.

SCSI-3
Found mainly in high-end systems, SCSI-3 commonly uses a 68-pin ribbon cable for in-cabinet connections, and
a 68-pin shielded twisted-pair for external connections. Unlike SCSI-1 and SCSI-2, the internal and external 68-
pin connectors can be interconnected.
The most common bus width for SCSI-3 is 16-bit with transfer rates of 20 MBps.

LVD SCSI
The newest SCSI innovation is currently LVD SCSI. The transfer rates can be up to 160 MB/sec with LVD SCSI,
under optimal conditions (good quality SCSI card, LVD-compliant cabling, and proper termination). The cabling
must be the proper type of twisted-pair cabling to support high-speed LVD signals.

1.5.2.2. Advantages of SCSI


 It's fast -- up to 160 megabytes per second (MBps).
 It's reliable.
 It allows you to put multiple devices on one bus.
 It works on most computer systems

1.5.2.3. Comparison of SCSI Technologies


Name               | Specification  | # of Devices | Bus Width | Bus Speed | MBps
Asynchronous SCSI  | SCSI-1         | 8            | 8 bits    | 5 MHz     | 4 MBps
Synchronous SCSI   | SCSI-1         | 8            | 8 bits    | 5 MHz     | 5 MBps
Wide SCSI          | SCSI-2         | 16           | 16 bits   | 5 MHz     | 10 MBps
Fast SCSI          | SCSI-2         | 8            | 8 bits    | 10 MHz    | 10 MBps
Fast/Wide SCSI     | SCSI-2         | 16           | 16 bits   | 10 MHz    | 20 MBps
Ultra SCSI         | SCSI-3 SPI     | 8            | 8 bits    | 20 MHz    | 20 MBps
Ultra/Wide SCSI    | SCSI-3 SPI     | 8            | 16 bits   | 20 MHz    | 40 MBps
Ultra2 SCSI        | SCSI-3 SPI-2   | 8            | 8 bits    | 40 MHz    | 40 MBps
Ultra2/Wide SCSI   | SCSI-3 SPI-2   | 16           | 16 bits   | 40 MHz    | 80 MBps
Ultra3 SCSI        | SCSI-3 SPI-3   | 16           | 16 bits   | 40 MHz    | 160 MBps
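As a rough cross-check of the MBps column (an illustrative calculation, not part of the source tables): for the single-transition-clocked variants, throughput is simply bus speed times bus width, while Ultra3 (Ultra160) doubles that by transferring data on both clock edges.

    def scsi_throughput_mbps(bus_speed_mhz, bus_width_bits, double_transition=False):
        rate = bus_speed_mhz * bus_width_bits / 8       # one transfer per clock cycle
        return rate * 2 if double_transition else rate  # Ultra3 clocks data on both edges

    print(scsi_throughput_mbps(10, 16))                          # Fast/Wide SCSI -> 20.0 MBps
    print(scsi_throughput_mbps(40, 16))                          # Ultra2/Wide    -> 80.0 MBps
    print(scsi_throughput_mbps(40, 16, double_transition=True))  # Ultra3         -> 160.0 MBps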

1.5.2.4. Single-Ended vs. Differential


The set of signals for Single-Ended devices is very different from the signals for Differential devices. For Differential devices all signals consist of two lines denoted +SIGNAL and -SIGNAL, while for Single-Ended devices all signals consist of one line (SIGNAL).
Single-Ended (SE) and Differential devices CANNOT be mixed on the same SCSI cable - except for LVD/SE, mentioned above. The designation 'Differential' generally refers to High-Voltage Differential, which is mainly used on older, larger systems such as IBM mainframes. This type of SCSI is not common with personal computers.

Single-Ended Cables:


Single-Ended cables connect up to eight drivers and receivers.
A 50-conductor flat cable or 25-signal twisted-pair cable should be used. The maximum cable length shall be 6 meters (primarily for connection within a cabinet). A stub length of no more than 0.1 meters is allowed.

Differential Cables:
Differential cables connect up to eight Differential drivers and receivers. A 50-conductor cable or 25-signal twisted-pair cable shall be used. The maximum cable length shall be 25 meters (primarily for connection outside of a cabinet).
A stub length of no more than 0.2 meters is allowed.

Low-Voltage (LVD) Differential Cables:


LVD SCSI cables are similar to high-end SE cables; however, they must meet more stringent requirements than SE cables. An internal LVD ribbon cable will be a twisted-pair type cable, rather than the flat ribbon cable that is used for SE type SCSI. If you want to combine SE and LVD devices on one SCSI cable, you must use the higher-quality LVD cable in order for the LVD devices to work.

Differential:
 Allows up to 10 MB per sec., and cable lengths up to 25 meters (about 82.5 feet).
 Requires more powerful drivers than single-ended SCSI.
 Ideal impedance match is 122 Ohms.

Impedance Requirements
IDEAL characteristic impedance:

- Single-ended cables: 132 ohms
- Differential cables: 122 ohms
However, cables with such high characteristic impedance are not usually available. As a result, the SCSI standard requires the following:
* For unshielded flat or twisted-pair cables: 100 ohms +/- 10%
* For shielded cables: 90 ohms or greater
Somewhat lower characteristic impedance is acceptable since few cable types meet the above requirements. Trade-offs in shielding effectiveness, characteristic impedance, cable length, number of loads, transfer rates, and cost can be made to some limited degree.
Note: To minimize discontinuities and signal reflections, cables of different impedances should not be used in the
same bus.

1.5.2.5. SCSI Devices that do not work together


The SCSI standard contains various alternatives, which are mutually exclusive within a system (or a bus).

These mutually exclusive alternatives are:


1. Single-ended or differential
2. Termination power supplied by the cable or not
3. Parity implemented or not
4. "Hard" RESET or "Soft" RESET
5. Reservation queuing implemented or not
This means that the two devices you connect with a SCSI cable must be compatible, or of the same type, in terms
of each of the five features listed above. For instance, both devices should be differential or both should be single-ended; likewise, both should use a "hard" RESET or both a "soft" RESET, and so on for each of the five features above.

1.5.2.6. SCSI Termination


Termination simply means that each end of the SCSI bus is closed, using a resistor circuit. If the bus were left
open, electrical signals sent down the bus could reflect back and interfere with communication between SCSI
devices and the SCSI controller. Only two terminators are used, one for each end of the SCSI bus. If there is only
one series of devices (internal or external), then the SCSI controller is one point of termination and the last device
in the series is the other one. If there are both internal and external devices, then the last device on each series
must be terminated.
Types of SCSI termination can be grouped into two main categories: passive and active. Passive termination is
typically used for SCSI systems that run at the standard bus clock speed and have a short distance, less than 3
feet (1 m), between the devices and the SCSI controller. Active termination is used for Fast SCSI systems or
systems with devices that are more than 3 ft (1 m) from the SCSI controller.
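That rule of thumb can be written down as a simple decision; the sketch below is purely illustrative (the function name and inputs are ours, not part of any SCSI utility).

    def recommended_termination(fast_scsi, max_device_distance_m):
        # Passive termination only for standard-speed buses with every device
        # within about 1 m (3 ft) of the controller; otherwise use active termination.
        if not fast_scsi and max_device_distance_m <= 1.0:
            return "passive"
        return "active"

    print(recommended_termination(fast_scsi=False, max_device_distance_m=0.5))  # passive
    print(recommended_termination(fast_scsi=True, max_device_distance_m=0.5))   # active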
Another factor in the type of termination is the bus type itself. SCSI employs three distinct types of bus signaling.
Signaling is the way that the electrical impulses are sent across the wires.
 Single-ended (SE) - The most common form of signaling for PCs, single-ended signaling means that the
controller generates the signal and pushes it out to all devices on the bus over a single data line. Each
device acts as a ground. Consequently, the signal quickly begins to degrade, which limits SE SCSI to a
maximum of about 10 ft (3 m).
 High-voltage differential (HVD) - The preferred method of bus signaling for servers, HVD uses a
tandem approach to signaling, with a data high line and a data low line. Each device on the SCSI bus
has a signal transceiver. When the controller communicates with the device, devices along the bus
receive the signal and retransmit it until it reaches the target device. This allows for much greater
distances between the controller and the device, up to 80 ft (25 m).
 Low-voltage differential (LVD) - A variation on the HVD signaling method, LVD works in much the
same way. The big difference is that the transceivers are smaller and built into the SCSI adapter of each
device. This makes LVD SCSI devices more affordable and allows LVD to use less electricity to
communicate. The downside to LVD is that the maximum distance is half of HVD -- 40 ft (12 m).

Both HVD and LVD normally use passive terminators, even though the distance between devices and the
controller can be much greater than 3 ft (1 m). This is because the transceivers ensure that the signal is strong
from one end of the bus to the other.

1.5.2.7. Adaptec Ultra320 SCSI


Want lightning-fast data processing speed for your server or PC?
Consider Ultra320 SCSI from Adaptec
Ultra320 SCSI is a PCI-X to SCSI solution, and twice as fast as Ultra160, featuring a sustained data processing
rate of 320 MB/s in a single-channel configuration, or up to 640 MB/s, dual channel.
Adaptec, the driving force behind SCSI and an industry leader in direct attached storage, has the technology you need today to migrate to Ultra320 SCSI when the technology becomes commercially available.

A Smooth, “No Risk” Transition


Adaptec has adopted the industry's current SPI-4 standard guidelines and is partnering with the industry's best server and disk drive manufacturers, like ServerWorks and Seagate, to ensure your smooth transition to Ultra320.
Adaptec also offers a new chip that is pin-for-pin compatible with Ultra320 and provides outstanding performance
and flexibility for server and high-end workstations. We can help you design and prepare for Ultra320 today, so
that when the technology becomes available, you can plug in and get your data humming at Ultra-fast speeds in
no time.

“True” Ultra320 SCSI


Others claim to have Ultra320 technology today, but it's really nothing more than Ultra160 with packetization.

1.5.2.8. SCSI Controllers


QLOGIC SCSI Controllers
QLogic's Parallel SCSI controllers deliver the ultimate in SCSI performance and reliability. Since 1987, when
QLogic introduced the industry's very first SCSI processor, we have led the way in SCSI innovation. Our complete product line covers the range from Ultra through the industry's first 66 MHz Ultra3 products.

Bus Type | Bus Speed (MHz) | I/O Data Rate | Ports | Model
PCI      | 66/33           | 160 MB/s      | 2     | ISP12160A
PCI      | 33              | 160 MB/s      | 2     | ISP12160
PCI      | 33              | 160 MB/s      | 1     | ISP10160
PCI      | 33              | 40 MB/s       | 1     | ISP1040C
PCI      | 33              | 40 MB/s       | 1     | ISP1040B

1.5.3. Fiber Channel

1.5.3.1. Introduction
There are two popular methods for connecting storage arrays to servers for block-level access to storage – Direct
Attached Storage (DAS) and Storage Area Networking (SAN). Both use the SCSI protocol and appear as local
storage to servers. These two methods present contrasting storage architectures.
The most common architecture or method remains DAS, which uses a direct connection between a server and its
dedicated SCSI storage system. These connections typically use parallel SCSI technology, which is used
internally for disks as well. DAS is simple to deploy yet becomes increasingly difficult to manage as the numbers
of DAS systems grow.
A newer method places fiber channel (FC) technology and FC switches between servers and storage to create a
Storage Area Network (SAN). The connectivity the switches provide allows the connection of more than one server to a storage system. This reduces the number of storage systems required but substantially increases
complexity and cost due to the switches.
Not surprisingly, both methods provide an almost mutually exclusive set of benefits, but an intermediate solution –
DAS supporting multiple servers using FC without switches – becomes a sensible and desired alternative.
Fortunately, innovative FC-based DAS solutions are now available to fill the void between traditional SCSI-based
DAS and FC-based SAN.
This section explores how FC DAS solutions apply the benefits of fiber channel to reduce SCSI storage costs without requiring SAN switches.

1.5.3.2. Advantages of Fiber Channel


Fiber Channel is an open-standard technology that provides reliable and fast communications at high speeds. It is
most commonly used to network servers and storage using specialized switches into SANs. FC DAS solutions
use fiber channel and therefore share many of its benefits, such as:

Reduced Costs
SCSI DAS storage systems are available in a broad range of configurations and prices. Even so, there are two
general types based on where their controllers reside. Internal RAID types are DAS systems that require RAID
controllers to be installed inside their server. DAS systems with RAID controllers outside the server are external
RAID types. In any event, SCSI DAS storage systems can cost up to $10,000 each or more depending on their
configuration.
Storage costs are reduced significantly by consolidating the purchase of multiple SCSI DAS storage systems into
a FC DAS storage solution. Four external RAID systems can cost the same as, or more than, an FC DAS solution, without the added benefits that fiber channel provides. Moreover, the Total Cost of Ownership (TCO) for the FC DAS solution will be far less than for external RAID – and internal RAID in some cases – due to the far greater
management and maintenance costs of supporting multiple storage systems instead of a consolidated one.

Faster Performance
Fiber Channel is a newer and faster technology than SCSI. As such, storage systems utilizing FC technology are
generally more advanced and feature rich than those utilizing SCSI. This can result in a FC DAS storage solution
providing much faster performance. FC DAS storage solutions often provide performance similar to several SCSI-
based storage systems combined. This results in greatly improved performance for every server with FC DAS storage solutions.

Better Scalability
Consolidating the storage requirements of several servers will surely increase the storage capacity requirements
of the storage system in use. Fortunately, this is another area in which FC DAS storage solutions are far superior
to internal RAID and external RAID alternatives. Each fiber channel disk connection supports a far greater
number of disks than SCSI can, so FC DAS storage solutions often scale to extremely large storage capacities. It
is unlikely servers connected to a FC DAS storage solution will out-grow the supported storage capacity, unless
their requirements are highly unusual.

Improved Utilization
The storage consolidation provided by a FC DAS storage solution provides far superior storage utilization. FC
DAS storage solutions allow adding capacity one disk at a time and allocating the new storage capacity to one or
multiple servers. Increasing storage capacities when using internal RAID or external RAID requires adding one or
more disks per system.
For example, adding storage capacity to four servers would require a minimum of four disks when using internal
RAID or external RAID – one for each storage system. FC DAS storage solutions can provision storage to servers
as needed, so it could require as little as one disk to increase the storage available to four servers.
An even more basic aspect of storage utilization involves the unusable disk capacity required for RAID protection
and spare disks. With internal RAID and external RAID, each includes an independent set of disks configured for
RAID protection. Using RAID 5 protection results in one disk lost to parity overhead and potentially one additional
disk for use as a spare. If there are four such systems in use, each is ‘wasting’ two disks, for a total of eight disks.

A FC DAS storage solution would provide storage to all four servers using one set of disks configured for similar
RAID 5 protection and one spare disk. The number of disks made unusable for user storage is reduced by 75% in
this example. Moreover, storage capacity is more precisely allocated using FC DAS storage solutions since any
portion of the added capacity can be allocated to any server. The alternative is to add storage to internal RAID or
external RAID in exact increments of one disk per server. The efficiency and advantages of an FC DAS storage solution grow as the number of servers increases.
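The arithmetic behind the example can be spelled out directly (this simply restates the four-server scenario above):

    servers = 4
    overhead_per_system = 1 + 1                    # one RAID 5 parity disk + one spare disk

    das_overhead = servers * overhead_per_system   # four separate internal/external RAID systems
    fc_das_overhead = overhead_per_system          # one consolidated FC DAS system

    print(das_overhead)                            # 8 disks unusable for user storage
    print(fc_das_overhead)                         # 2 disks unusable for user storage
    print(1 - fc_das_overhead / das_overhead)      # 0.75 -> the 75% reduction cited above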

More Dependability
The common measure of dependability for storage systems is Reliability, Availability and Serviceability (RAS).
Reliability reflects how infrequently the storage system will experience a component failure, regardless of the
effects of that failure. Availability describes the likelihood that the storage system will remain usable over time.
Serviceability describes the ability to perform maintenance on the storage system without removing it from
service. Together with uptime and downtime ratings, they provide common factors for comparing products.

1.5.3.3. Comparing FC DAS Storage Solutions


There are many factors that are important to consider when comparing FC DAS storage solutions, though there is no perfect list of criteria. The following questions and suggestions are a basic set of guidelines, so be
sure to add criteria important to each environment. The comparison checklist at the end of this document can help
with summarizing comparison ratings of FC DAS storage solutions.

Number of Host Ports


Identify FC storage systems that provide multiple FC host ports without requiring external FC switches or other
hardware options. FC storage systems with at least eight built-in FC host ports enable a transition from SCSI
storage to FC storage without increasing costs.

Supported Platforms
Confirm that the FC storage system under consideration can support multiple operating systems simultaneously
and can do so without requiring expensive software options. Also, ensure all features are available for every
supported server platform and operating system.

Sufficient Performance
Sharing an FC storage system among servers will result in sharing its performance as well. Fortunately, FC
storage systems are now available whose performance is greater than that of several SCSI storage systems combined. Look for these for best results.

Dependability
Reliability, availability and serviceability (RAS) become critical with FC storage systems since any disruption can
affect multiple servers at once. Ask for documentation to support any RAS claims and avoid products without
proof of five 9’s (99.999 %) uptime or better.
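For reference, the small calculation below (illustrative, assuming a 365-day year) shows what five 9's of uptime allows in downtime per year:

    availability = 0.99999
    minutes_per_year = 365 * 24 * 60
    downtime_minutes = (1 - availability) * minutes_per_year
    print(round(downtime_minutes, 2))   # about 5.26 minutes of downtime per year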

Management Software
A comprehensive storage management suite greatly simplifies storage set-up, configuration and monitoring. Ideal
FC storage systems offer software that supports all popular server operating systems at no cost or low cost. The
availability of multi-pathing and load balancing software is a plus.

Total System Price


Watch out for FC storage solutions requiring expensive service and maintenance agreements. Such contracts can
increase the total system price of an FC storage system to levels that make these solutions impractical.

Scalability
Exactly what is required to scale the FC storage system? Many require substantial hardware and software
upgrades as they scale, which creates costly barriers in the future. This is rather common for product families that
have many outwardly similar models.

2. Network Attached Storage


2.1 Introduction to NAS
Network attached storage (NAS) has come a long way since the early 1990s, having found its way into the
enterprise as companies look for alternatives to costly and challenging management of direct-attached storage
(DAS). The NAS market includes products ranging from low-end workgroup servers with limited features and few
expansion capabilities to enterprise solutions that are scalable, manageable, and expandable. NAS products have
also become the perfect complement to highly capable storage area networks (SAN). The integration of NAS and
SAN has been a huge development in the networked storage world, offering customers flexible solutions that
allow them to stretch their storage dollars.
Networked storage was developed to address the challenges inherent in a server based infrastructure such as
direct attached storage. Network attached storage is a special purpose device, comprised of both hard disks and
management software, which is 100% dedicated to serving files over a network. As discussed earlier, a server
has the dual functions of file sharing and application serving in the DAS model, potentially causing network
slowdowns.
NAS is an ideal choice for organizations looking for a simple and cost-effective way to achieve fast data access
for multiple clients at the file level. Implementers of NAS benefit from performance and productivity gains. First
popularized as an entry-level or midrange solution, NAS still has its largest install base in the small to medium
sized business sector. Yet the hallmarks of NAS - simplicity and value - are equally applicable for the enterprise
market. Smaller companies find NAS to be a plug and play solution that is easy to install, deploy and manage,
with or without IT staff at hand. Thanks to advances in disk drive technology, they also benefit from a lower cost of
entry.
In recent years, NAS has developed more sophisticated functionality, leading to its growing adoption in enterprise
departments and workgroups. It is not uncommon for NAS to go head to head with storage area networks in the
purchasing decision, or become part of a NAS/SAN convergence scheme. High reliability features such as RAID
and hot swappable drives and components are standard even in lower end NAS systems, while midrange
offerings provide enterprise data protection features such as replication and mirroring for business continuance.
NAS also makes sense for enterprises looking to consolidate their direct-attached storage resources for better
utilization. Since resources cannot be shared beyond a single server in DAS, systems may be using as little as
half of their full capacity. With NAS, the utilization rate is high since storage is shared across multiple servers.
The perception of value in enterprise IT infrastructures has also shifted over the years. A business and ROI case
must be made to justify technology investments. Considering the downsizing of IT budgets in recent years, this is
no easy task. NAS is an attractive investment that provides tremendous value, considering that the main
alternatives are adding new servers, which is an expensive proposition, or expanding the capacity of existing
servers, a long and arduous process that is usually more trouble than it's worth. NAS systems can provide many
terabytes of storage in high density form factors, making efficient use of data center space. As the volume of
digital information continues to grow, organizations with high scalability requirements will find it much more cost-
effective to expand upon NAS than DAS. Multiple NAS systems can also be centrally managed, conserving time
and resources.
Another important consideration for a medium sized business or large enterprise is heterogeneous data sharing.
With DAS, each server is running its own operating platform, so there is no common storage in an environment
that may include a mix of Windows, Mac and Linux workstations. NAS systems can integrate into any
environment and serve files across all operating platforms. On the network, a NAS system appears like a native
file server to each of its different clients. That means that files are saved on the NAS system, as well as retrieved
from the NAS system, in their native file formats. NAS is also based on industry standard network protocols such
as TCP/IP, NFS and CIFS.

2.2. Advantages of NAS:


 Plug-and-play functionality: NAS servers are designed to attach to a LAN, obtain their IP addresses,
and then appear on the network as an additional drive.
 It is a very fast, reliable solution that costs less: The unnecessary operating system overhead is
removed from NAS appliances, reducing the CPU power necessary to store and retrieve data.
 Protocol support: NAS boxes support multiple network file system protocols, such as NFS for UNIX,
Common Internet File System (CIFS) for Windows and Hypertext Transfer Protocol (HTTP) for the Web.
 Lower maintenance costs

 Faster data response times and application speeds


 Higher Availability and Reliability
 Y2K compliance
 Enhanced Migration of existing data
 Scalability

2.3. What is a Filer?


NAS devices, known as filers, focus all of their processing power solely on file services and file storage. As
integrated storage devices, filers are optimized for use as dedicated file servers. They are attached directly to a
network, usually to a LAN, to provide file-level access to data. Filers help you keep administrative costs down
because they are easy to set up and manage, and they are platform-independent.
NAS filers can be located anywhere on a network, so you have the freedom to place them close to where their
storage services are needed. One of the chief benefits of filers is that they relieve your expensive general-
purpose servers of many file management operations. General-purpose servers often get bogged down with
CPU-intensive activities, and thus can’t handle file management tasks as effectively as filers. NAS filers not only
improve file-serving performance but also leave your general-purpose servers with more bandwidth to handle
critical business operations.

Fig. 2.2 - A basic Configuration for NAS Filer on LAN

Analysts at International Data Corporation (IDC) recommend NAS to help IT managers handle storage capacity
demand, which the analysts expect will increase more than 10 times by 2003. Says IDC, “Network-attached storage (NAS) is the preferred implementation for serving files for any organization currently using or planning on deploying general-purpose file servers. Users report that better performance, significantly lower operational costs, and improved client/user satisfaction typically result from installing and using specialized NAS appliance
platforms.”

2.4. Strong standards for Network Attached Storage (NAS)


In contrast to direct-attached storage, network data access is governed by strong standards that are driven by system considerations. There are two true
network standards for accessing remote data that have been broadly implemented by virtually all UNIX and
Windows NT system vendors.
 Developed and put into the public domain by Sun Microsystems, Network File System (NFS) is the de-
facto standard for UNIX.
 Developed by IBM and Microsoft, Common Internet File System (CIFS) is the standard for all flavors of
the Windows operating system.
As a result of these broadly accepted standards for network data access, storage devices that serve data directly
over a network (called Network Attached Storage or NAS devices) are far easier to connect and manage than
DAS devices. Also, NAS devices support true file sharing between NFS and CIFS computers, which together
account for the vast majority of all computers sold.

Fig. 2.3 – Configuration of NAS on UNIX and Win

2.5. Network Attached Storage versus Storage Area Network


A SAN comprises an entire network, while a NAS device is essentially just a file server. The NAS solution
transports files over the LAN, while the SAN moves entire blocks of data over a separate, dedicated network. The
key difference is that in a SAN the file system is on the server side of the network, while in NAS architecture the
network is located between the server and the file system.

Storage Type: Network Attached Storage (NAS) vs. Storage Area Network (SAN)

Connection
 NAS: Storage devices connected to the Ethernet LAN and shared by all other devices on the LAN.
 SAN: Storage devices on a separate Fiber Channel loop, connected via a switch or hub to server devices on the LAN.

Pros
 NAS:
  Easy to install
  In the short run, comparatively cheaper than using servers for the task or than setting up a SAN
  Takes the file-sharing burden off general-purpose servers' CPUs
 SAN:
  Alleviates both CPU overhead and LAN bottleneck
  Offers opportunities for storage consolidation and improved utilization
  Storage management solutions can allow for less management headcount and a more comprehensive view
  Scalability is virtually infinite

Cons
 NAS:
  Doesn't alleviate LAN bandwidth issues
  Not appropriate for block-level data storage applications
 SAN:
  Expensive to implement
  Complicated to configure
  Interoperability can be a concern

Major Storage System Competitors
 NAS: Network Appliance, EMC, Auspex, HP-Compaq, IBM, Procom Technology, Quantum-DSS, Maxtor, Spinnaker, Connex, Tricord Systems, BlueArc, Panasas
 SAN: EMC, IBM, Sun, Hitachi Data Systems (HDS), HP-Compaq, Dell, TrueSAN, XIOTech Corporation

Emerging players in the NAS/SAN landscape can be broadly categorized as developing systems that can
potentially scale to thousands of Terabits, far exceeding anything available today. This scalability, however,
comes at the expense of available features. Network Appliance and EMC, on the other hand, have the feature-
rich software capabilities for their platforms but do not yet have the scalability that some of the emerging
companies claim to offer.

2.6. NAS Plus Tape Based Data Protection


In environments where the amount of data held locally is increasing rapidly and the fastest possible recovery from data loss or catastrophic failure is needed, a combined NAS and tape library solution looks increasingly attractive.
The NAS device acts effectively as a cache for the most recent data and provides almost immediate access to that data. This may prove invaluable to a local retail store or branch office that aims to achieve high levels of data availability.
As the NAS technology is made up of proven components, and is plug and play, the ease of installation and ease
of use are big plus points for the deployment and ongoing use of such a solution. The ability to automate the
backup process effectively removes the issue of human error and concerns over the non-technical nature of local
personnel.
The provision of a complete data protection solution at a local level is very cost effective compared to the more
centralized options. A NAS / tape automation solution also provides the benefits of using hard disks without the
cost and complexity of a fully disk-based solution.
The scalable nature of the NAS / tape automation offerings means that the initial investment is largely protected –
larger drives can be added, for example.
NAS / tape automation is a good solution for those who value not only the data itself but also the time that can be lost when recovering from data loss.

2.7. Streamlined Architecture


NAS servers address the overhead of general-purpose file servers by streamlining file server configurations and operations, stripping away
everything that is not needed to store and distribute data. Typically, a general file server would involve a reduced
instruction-set computing (RISC) chip-based server for Unix systems or an Intel Corp. PC-based server for
Windows NT networks, with a disk drive storage array attached to the server.
Much of the computing power of general servers is wasted in file server operations, which makes it a very poor
investment. According to a study conducted by Carnegie Mellon University, most servers require 25% of available
CPU cycles for file I/O. Being a file server has everything to do with the [input/output] data path, not computing
power. The general-purpose flexibility of such servers extends all the way down to the operating system. A
modern multitasking operating system can have 6 million lines of code, and it provides many functions that are
not needed for file services. A stripped-down file-serving-specific program is a fraction of the size and runs much
faster.

Networked file systems originally gained popularity after Sun Microsystems' Network File System (NFS) was
placed in the public domain and most UNIX-based systems adopted the protocol for networked file access.
Today, in some circles, NAS systems may still be referred to as NFS servers, even though products like E-Disk® NAS support multiple protocols including Microsoft's Windows SMB/CIFS (Common Internet File System), HTTP, and FTP.
Keeping files on a server in a format that is accessible by different users on different types of computers lets
users share data and integrate various types of computers on a network. This is a key benefit for NAS systems.
Because NAS systems use open, industry standard protocols, dissimilar clients running various operating
systems can access the same data. So it does not matter if there are Windows users or UNIX users on the
network. Both can utilize the NAS device safely and securely.
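
Because the same data is exposed over several open protocols at once, one simple way to picture this multi-protocol behaviour is to probe the standard service ports of a NAS device. The Python sketch below is illustrative only; the host name nas01 and the port list are assumptions, and a given NAS product may expose a different set of services.

import socket

SERVICES = {
    "FTP": 21,         # file transfer
    "HTTP": 80,        # web access
    "CIFS/SMB": 445,   # Windows file sharing
    "NFS": 2049,       # UNIX network file system
}

def probe_nas(host: str, timeout: float = 2.0) -> dict:
    """Return a protocol-name -> reachable mapping for the given NAS host."""
    results = {}
    for name, port in SERVICES.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            results[name] = s.connect_ex((host, port)) == 0   # 0 means the port answered
    return results

if __name__ == "__main__":
    print(probe_nas("nas01"))   # hypothetical NAS host name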

2.8. NAS Characteristics


 Simplicity – The installation and usage of a NAS device is, quite literally, a plug-and-play operation.
NAS is a well-understood and very mature implementation, so the addition of NAS to a network is
considered a lower data-risk proposition (vs. a tape-only solution). This ease of use results in fewer
operator errors and less misconfiguration and mismanagement.
 High-Availability – Many NAS systems have built-in, fault tolerant capabilities or clustering functions to
provide highly available solutions. Some of the solutions utilize fault tolerant RAID storage systems in
addition to the fault tolerance in the NAS control function.
 Scalability – NAS systems scale easily in both capacity and performance.
 Connectivity – NAS typically allows for multiple network connections. This enables concurrent support
of multiple networks and allows more users to be connected to a common storage element.
 Data Sharing – One of the basic functions of NAS is to allow data sharing via its implementation of a
remote file system. Users on different client systems can have access to the same file on NAS, with
access serialization. However, serialization may add risk because of cross-platform locks and the
variability in application responsiveness. Some NAS systems provide a translation mechanism to allow
heterogeneous sharing between NFS (UNIX® operating system) and CIFS (Windows® NT®/2000
operating system) implementations.
 Storage Management – With NAS, storage management is centralized for that particular NAS device,
which may be supporting a large number of application systems. Administration of the NAS system is
simple and covers all of the storage devices connected to it. This provides a cost advantage
by allowing more capacity to be managed per administrator.
 Integrated Backup – The backup of a NAS device is a common feature of popular backup software.
With the automated procedures and available software, backup risks are greatly reduced with NAS
solutions. Some backup solutions may consume bandwidth on the NAS device or on the local area
network. As with any storage, understanding and planning is important.
 Infrastructures – NAS leverages existing networks and network administration skills.

2.9. NAS Applications and Benefits


NAS applications tend to be most efficient for file-sharing tasks—such as NFS in UNIX and CIFS in Windows NT
environments—where network-based locking at the file level provides a high level of concurrent access
protection. NAS facilities can also be optimized to deliver file information to many clients with file-level protection.
Two common applications that utilize the effectiveness of NAS include hosting home directories and providing a
data store for static Web pages that are accessed by many different Web servers.
In certain situations, organizations can deploy NAS solutions in a limited manner for database applications. These
situations are usually limited to applications where the majority of data access is typically read-only, the
databases are small, access volume is low, and predictable performance is not mandatory. In this type of
situation, NAS solutions can help reduce overall storage costs.

2.10. Business Benefits of NAS Gateways


There are direct technological benefits provided by decoupled NAS solutions; however, significant business
benefits can also be realized through their use. Some of the most significant include:
 Enhanced flexibility in how and where storage resources are deployed within the enterprise.


 Higher overall storage resource utilization, which leads to decreased costs, since enterprise storage
needs can be met with fewer, and sometimes less expensive, storage assets.
 Improved storage management capabilities and processes as all of an organization’s storage assets can
be placed under centralized, automated control.
 The ability to more effectively and flexibly scale storage resources to meet the demands of business
processes and related applications.
 Potentially lower ongoing costs through enhanced resource utilization and reduction in the number of
discrete storage management activities.
 Decoupled NAS offers an effective model and methodology for storage consolidation initiatives across the
enterprise.

2.11. Drawback
 Bottlenecks: The major issue that NAS does not address is the LAN bandwidth requirements. The fact
that NAS appliances are connected directly to the messaging network can contribute to its congestion
and create bottlenecks.

2.12. File Storm NAS:


 Easy To Add
File Storm Series devices can be added to the network in a matter of minutes, without the need to bring
the network down.
 Increases Efficiency of Network Servers
Because the File Storm Series is designed specifically to serve files you can offload high-bandwidth, file-
serving tasks and allow the Network Server to focus on critical business tasks such as application
handling and email.
 Facilitates Sharing of Data
The File Storm Series allows you to connect to multiple operating systems and share data among
disparate clients and servers. The File Storm Series supports both the UNIX Network File System (NFS)
protocol and the Microsoft Common Internet File System (CIFS) in order to facilitate cross-platform data
sharing
 Minimizes Staffing Costs
Because of the File Storm Series simplified file serving nature the management and maintenance
required is negligible compared with network server Direct Attached Storage.
 Easy Management:
Browser-based Java, Unix Shell, Telnet, Windows GUI and Command Line Interface.
 Secure:
128-bit Kerberos Encryption; Bi-Directional ACL
 Built-In Tools
Snapshot: 250 snapshots per volume.
Backup: Native utility for use over SAN, LAN, WAN and Internet.
Replication: Native utility for use over SAN, LAN, WAN and Internet.
Unix Management: Over 300 CLI tools, Unix Shell.
 Manage 3rd Party Storage:
Turn your legacy storage into a NAS regardless of the manufacturer
 File Storm NAS appliances are Cluster Capable!

2.13. Benefits of Low-End and Workgroup NAS Storage


Administrator
 Extreme ease of Installation
 No client software
 Uses existing security features within NetWare and NT.
 Remote Management


 Low cost of acquisition & virtually no maintenance

User
 User experience is exactly the same as accessing a standard file server
 Supports multiple network operating environments and various protocols at the same time

Security
 NAS can include RAID functionality for security of data
 Existing client backup software will work with a NAS device

2.14. NAS: Think Network Users


NAS is network-centric. Typically used for client storage consolidation on a LAN, NAS is a preferred storage
capacity solution for enabling clients to access files quickly and directly. This eliminates the bottlenecks users
often encounter when accessing files from a general-purpose server. NAS provides security and performs all file
and storage services through standard network protocols, using TCP/IP for data transfer, Ethernet and Gigabit
Ethernet for media access, and CIFS, HTTP, and NFS for remote file service. In addition, NAS can serve both UNIX
and Microsoft Windows users seamlessly, sharing the same data between the different architectures. For client
users, NAS is the technology of choice for providing storage with unencumbered access to files. Although NAS
trades some performance for manageability and simplicity, it is by no means a lazy technology. Gigabit Ethernet
allows NAS to scale to high performance and low latency, making it possible to support a myriad of clients
through a single interface. Many NAS devices support multiple interfaces and can support multiple networks at
the same time. As networks evolve, gain speed, and achieve a latency (connection speed between nodes) that
approaches locally attached latency, NAS will become a real option for applications that demand high
performance.

2.15. SAN: Think Back-End/Computer Room Storage Needs


A SAN is data-centric -- a network dedicated to storage of data. Unlike NAS, a SAN is separate from the
traditional LAN or messaging network. Therefore, a SAN is able to avoid standard network traffic, which often
inhibits performance. Fiber Channel-based SANs further enhance performance and decrease latency by
combining the advantages of I/O channels with a distinct, dedicated network. SANs employ gateways, switches,
and routers to facilitate data movement between heterogeneous server and storage environments. This allows
you to bring both network connectivity and the potential for semi-remote storage (up to 10 km distances are
feasible) to your storage management efforts. SAN architecture is optimal for transferring storage blocks. Inside
the computer room, a SAN is often the preferred choice for addressing issues of bandwidth and data accessibility
as well as for handling consolidations.
Due to their fundamentally different technologies and purposes, you need not choose between NAS and SAN.
Either or both can be used to address your storage needs. In fact, in the future, the lines between the two may
blur a bit according to some analysts. For example, down the road you may choose to back up your NAS devices
with your SAN, or attach your NAS devices directly to your SAN to allow immediate, non-bottlenecked access to
storage.

2.16 NAS Solutions


2.16.1 NetApp NAS Solution
Solution
Network-attached storage (NAS) is the most mature networked storage solution, and the only type of networked
storage that allows data sharing by connected host systems. Originally deployed in data sharing environments,
NetApp NAS solutions have become a preferred solution for enterprise applications and database environments
where automated performance tuning and sophisticated data management capabilities can reduce costs, improve
data availability, and simplify operations.

Benefits


Network Appliance NAS appliances deliver the lowest total cost of ownership of any storage approach, together
with enterprise-level performance, scalability, and availability.

Problems Solved
Direct-attached storage works well in environments with an individual server or a limited number of servers, but
the situation rapidly becomes unmanageable if there are dozens of servers or significant data growth. Storage for
each server must be managed separately and cannot be shared. Performance and scalability are often limited,
and storage resources cannot be efficiently allocated. The data management needs of today's enterprise IT
environments are typically much better served by a networked storage approach.
NAS has considerable advantages over direct-attached storage, including improved scalability, reliability,
availability, and performance. In addition, NetApp NAS solutions provide true heterogeneous data sharing and
deliver unparalleled ease of use, enabling IT organizations to automate and greatly simplify their data
management operations.

Product Description
NAS was initially designed for data sharing in a LAN environment and incorporates file system capabilities into the
storage device. In a NAS environment, servers are connected to a storage system by a standard Ethernet
network and use standard file access protocols such as NFS and CIFS to make storage requests. Local file
system calls from the clients are redirected to the NAS device, which provides shared file storage for all clients. If
the clients are desktop systems, the NAS device provides "serverless" file serving. If the clients are server
systems, the NAS device offloads the data management overhead from the servers.
The first NAS devices were general-purpose file servers. However, NetApp redefined NAS storage with its "filer"
appliances—special-purpose storage systems that combine high performance, exceptional reliability, and
unsurpassed ease of use. The key to these capabilities is the combination of modular hardware architecture with
the NetApp Data ONTAP™ storage operating system and Write Anywhere File Layout (WAFL®) software, which
enable the industry's most powerful data management features.

2.16.1.1 NetApp Filers Product Comparison


NetApp FAS250 - Filer for Remote Office

The FAS250 is particularly suitable for storage consolidation in Windows and


Unix environments. As a result of the combination of the NFS (Unix networks)
and CIFS (Windows networks) protocols, FAS250 is a universal, affordable
storage platform. The supplied snapshot feature set supports efficient backup
and user-initiated restore options.
Storage capacity is online scalable from 360 GB to 2 TB and therefore provides
solid investment protection. Without the need for data migration the FAS250
can be upgraded to the next larger NAS storage systems of the F800 and FAS
series (up to 64 TB). The filer head of the FAS250 is integrated in the disk shelf
and all components can be mounted in 19-inch racks.
The new FAS Filer series features effective storage management functions
such as Snapshot, SnapMirror, SnapManager and DFM (Data Fabric Manager)
and also supports the NFS, CIFS, FTP, HTTP and iSCSI protocols.


NetApp FAS270 and FAS270c – Midrange Filers for NAS or SAN

This midrange Network Attached Storage system offers an entry-level SAN


solution while providing strong price/performance for NAS and iSCSI
infrastructures. Optional clustering capability provides high availability using an
active/active cluster over the backplane, for a no-single-point-of-failure solution.
The storage capacity is online scalable up to 6 TB and therefore provides solid
investment protection. Without the need for data migration the FAS270 and
FAS270c can be upgraded to the next larger NAS storage systems of the F800
and FAS series (up to 64 TB). The filer head of the FAS270/270c is integrated
in the disk shelf and all components can be mounted in 19-inch racks.
The new FAS Filer series features effective storage management functions
such as Snapshot, SnapMirror, SnapManager and DFM (Data Fabric Manager)
and also supports the NFS, CIFS, FTP, HTTP and iSCSI protocols.

NetApp FAS920 and FAS920c – Midsize Enterprise Structure

The FAS920/920c enterprise filer series is designed for medium sized


enterprises. Their storage capacity of up to 6 TB provides data consolidation in
heterogeneous IT environments at all application levels. The model also offers
the option of running the systems in a cluster configuration with up to 12 TB.
This consolidation at all application levels goes hand in hand with noticeably increased fail-safety.

NetApp FAS940 and FAS940c – Unify Enterprise Data Storage

The flexibility and performance of the FAS940 Filer brings features to a broad
range of enterprise applications, including CRM (Customer Relationship Management), ERP (Enterprise
Resource Planning), DSS (Decision Support Systems), massive home directory consolidation, and Web serving.
The FAS940 can manage 12 TB of data in one system, in a cluster
configuration up to 24 TB. The Filer is configured with the Data ONTAP
Software, a highly optimized and scalable operating system that enables these filers
to interoperate easily and facilitates administration.


NetApp FAS960 and FAS960c – Enterprise Class Filer

The FAS960 is a high-performance filer in the NAS market. It is designed to


accommodate thousands of independent users and large, high bandwidth
applications. With the capability of managing 24 TB of data in one system and
16 TB in one file system, the FAS960 meets the storage demands of nearly any
enterprise. In a cluster configuration it is possible to use up to 48 TB in one
system without a single point of failure. The FAS960 delivers one of the lowest
total costs of ownership (TCO) and highest returns on investment (ROI) in the
industry.

NetApp FAS980 and FAS980c – Powerful Enterprise Filer

The FAS980 is our highest-performance Filer in the NAS market. It is designed


to accommodate thousands of independent users and large, high bandwidth
applications. With the capability of managing 32 TB of data in one system and
16 TB in one file system, the FAS980 can meet the storage demands of nearly
any enterprise. In a cluster configuration it is possible to use up to 64 TB in one
system without a single point of failure. The FAS980 delivers one of the lowest
total costs of ownership (TCO) and highest returns on investment (ROI) in the
industry.

NetApp Nearstore R200

NearStore perfectly complements and improves existing tape backup processes


by inserting economical and easy-to-use disk based storage between
application storage and tape libraries, in a two-stage backup configuration.
Software that utilizes incremental block transfers to back up data to NearStore,
such as SnapVault, provides additional benefits. NearStore R200 is offered in
configurations with 8 TB and scalability in 8 TB steps up to 96 TB.

Network Appliance has dramatically enhanced how storage networks are deployed, enabling customers with large-
capacity environments to simplify, share, and scale their critical storage networking infrastructures.

2.16.2. NAS by AUSPEX


The Auspex NS3000XR Series
The NS3000XR enhances the data protection capabilities of the entry-level NS3010 by providing two independent
data paths to every disk array through redundant RAID controllers and dual Fiber Channel host bus adapters. If
one path is blocked, the XR automatically redirects a service request to the open path, ensuring that all data on all
disks remains available to end-users. This Fiber-based strategy adds vital fault tolerance for any kind of mission-
critical enterprise information.


Fig. 2.16.2 – NAS Box of Auspex NS3000XR Series

Host Processor
 Standard Solaris Platform
 UltraSPARC-IIi, 300MHz
 512MB ECC System Memory
 PCI expansion
 Dual, redundant root drives
 No Direct Involvement in Data Delivery
 Full-featured Mgmt Environment
 Coordinates and Monitors I/O Nodes
 Compute power can be leveraged for Enterprise System and Network Mgmt

File System And Network Processors


 Dual 933 MHz Pentium III
 3 GB ECC Memory for System Cache

5 Concurrent High Speed Busses


 1GB/sec Pentium Pro processor bus
 2GB/sec Memory Bus
 Dual 512MB/sec Front side bus
 4 slot 33MHz 64bit Primary PCI Bus
 2 slot 66MHz 64bit Secondary PCI Bus

Network Interfaces
 10/100 Ethernet (4 ports per device) (up to 2)
 Gigabit Ethernet (up to 2)
 ATM OC-12 (up to 1 - XR systems Only)

Network Software
 EtherBand (High Speed Trunking for Fast Ethernet)


 NetGuard (NIC Failover for GB and Fast Ethernet)


 Implementation of Virtual IP Support

Raid Cluster Controllers


 Up to 7.5 TB Disk (with 73GB Drives)
 Full Environmental Monitoring
 No Single Point of Failure within the RCC
 Dual Active-Active Redundant RAID Controllers
 Dual Hubs with Dual Fiber Loops
 Dual Redundant Power
 Support for rolling firmware upgrades
 Rack Mountable
 RAID 0,1,5 and 0+1
 266MHz Pentium II RAID Processor
 256MB Battery-backed DRAM
 800 MB/sec PC-133 compatible memory interface
 4 Ultra-160 Channels to LPDAs shared by each Redundant RAID Controller
 Connected to I/O Node II with 100MB/sec FC HBAs
 Dual JNI Fiber Channel Host Bus Adapters with 1 Gb/s throughput (each) with load balancing and
failover

Low Profile Disk Array (LPDA)


 Hot swap drives
 Full environmental monitoring
 Split bus (6 or 7 drives per channel) or open bus (13 drives per channel) operation

Power Distribution Unit


 30 AMP
 2U Rack Mountable
 Included with Auspex Cabinet
 Support for up to 12 Dual AC Components
 Support for Dual Power Cords

2.16.3. NAS by EMC


EMC Celerra NSX NAS Gateway

Bring data center NAS to your SAN with the industry’s most powerful, scalable NAS gateway
With Celerra NSX, you can add advanced, data center-class NAS capabilities to your new or existing EMC SAN
environment. Now you can consolidate hundreds of file servers and NAS applications, all on one consolidated
platform that's easy to configure, manage, and use. Celerra NSX is the right choice when you need to lower your
costs, simplify your operation, and manage growth.
Celerra NSX supports both EMC Symmetrix DMX and EMC CLARiiON CX networked storage systems, letting
you create an extremely efficient, scalable NAS gateway-to-SAN solution.

Scale performance, availability, connectivity, and capacity


Celerra NSX provides the scalable performance, availability, connectivity, and capacity you need to bring NAS to
the data center and take on even the largest consolidation challenges. An innovative architecture based on X-Blade 60
server technology (an EMC-designed and built Intel-based blade server platform) lets you scale from


approximately 135,000 NFS operations per second to a blistering 300,000 operations per second. Scale from 4 to
8 clustered X-Blades, increasing your usable capacity from 48 to 112 terabytes.
Unmatched availability ensures non-stop access to vital data and applications. Celerra NSX features advanced
N+1 clustering to keep availability at its highest. Dual power and dual control stations with redundant Ethernet
connections further enhance availability, reliability, and serviceability, as does a dual-managed UPS
(uninterruptible power supply). Expand on the fly by adding additional X-Blades, without operational delays or
disruption. It's just another way that Celerra NSX protects your investment.

Scalable management: Simplify installation, management, and placement


Celerra NSX makes it easy to leverage the powerful capabilities of data center-class NAS. Automated SAN
network configuration for EMC CLARiiON CX-based systems simplifies gateway installation. Centralized control
lets you manage, monitor, and analyze trends for multiple systems. And policy-based management automates file
movement and archiving across storage tiers.

Virtual Filesystem Technology: Automate filesystem provisioning and simplify access


With EMC Celerra Automated Volume Management (AVM) software, you can automatically optimize your NAS
solution for specific workloads. Now you can perform single-click volume and filesystem configurations to optimize
performance and capabilities. With Celerra NSX you also benefit from enhanced features that present one logical
view of multiple, independent filesystems for a clear picture of your entire environment, simplifying consolidation
and management of a large number of filesystems.

The right solution for enterprise-level challenges


Celerra NSX brings you the high capacity and performance you need for high-end NAS consolidations and
capacity-intensive applications. Enhanced archiving capabilities, through integration with the EMC Centera
platform, provide transparent file archiving and retrieval. New security features, including anti-virus
protection and notification, LDAP authentication, and file-level retention capabilities, keep your enterprise safe and
secure at all times.

Scalable management capabilities


Celerra NSX includes powerful Celerra Manager Software that simplifies and speeds management, with predictive
monitoring, iSCSI wizards, and more. And with Celerra NSX, you can scale your management capabilities
seamlessly to confidently meet future challenges, without adding costs.

2.16.4. EMC NS Series/Gateway NAS Solution


Flexible, easy-to-use gateway NAS solutions
EMC NS Series/Gateway systems enable you to consolidate SAN and NAS storage with a single, highly
available, easily managed networked storage solution. Add enterprise-level file sharing capabilities to a SAN-
connected EMC CLARiiON CX or EMC Symmetrix DMX networked storage system to consolidate and optimize
your storage, and get the most from your investment. The EMC NS Series/Gateway offers single, dual, and quad
Data Mover models, as well as simple upgrades. All models provide extensive iSCSI support and iSCSI wizards,
providing a simple, cost-effective way to consolidate your file servers and application storage.
A choice of gateways
 EMC NS Series/Gateway solutions let you choose the price, performance, and capabilities you need:
 The EMC NS500G entry-level gateway delivers high performance, high availability, and simple
management, at an exceptional price.
 The EMC NS700G offers increased performance and capacity along with the ultimate in flexibility.
 The EMC NS704G NAS gateway offers advanced clustering, scalability of up to four Data Movers, and
exceptional performance (up to three times that of the NS700G), a level of performance formerly available
only on high-end NAS systems.

Flexible NAS solutions


The main benefit of NAS gateways is clear: you can leverage your current investment in storage while adding new
capabilities and improving consolidation. NS Series/Gateway solutions are compatible with CLARiiON CX or
Symmetrix DMX storage, ensuring a seamless, integrated solution.

Simple web-based management


EMC NS Series/Gateway solutions include intuitive software that puts advanced capabilities at your fingertips
while making them extremely easy to use. Manage your NS Series/Gateway as a stand-alone system or as part
of your entire infrastructure, all from one simple, web-based console. Use wizards, intuitive GUIs, and other
enhancements to monitor, manage, and simplify your NAS environment.

Virtual File System Technology


Take the complexity out of provisioning with this innovative capability, which simplifies consolidation and
management of multiple file systems. With EMC Celerra Automated Volume Management software, one-click
volume and file system configuration lets you optimize your environment for specific workloads.

2.16.5. NAS by SUN


To protect your data and keep your business running smoothly, the Sun StorEdge 5000 Family of NAS
Appliances combines file system journaling, checkpointing (file copy), remote mirroring (file replication), remote
monitoring, clustering*, and fault-tolerant backend RAID arrays to deliver very high levels of availability and
performance in almost any open, file-based environment. Each of these filers is a great value and includes CIFS
and NFS licenses and checkpointing (file copy) software, is easy to operate and effortless to manage, and installs
in less than 15 minutes, thanks to a highly intuitive wizard.
The Sun StorEdge 5210 NAS Appliance scales easily to 6TB of RAID 5-protected SCSI-based storage capacity
with hot sparing. The Sun StorEdge 5310 NAS Appliance easily scales to 65 TB of raw FC or 179 TB of raw
SATA* RAID-protected storage.

2.16.6. Sun StorEdge N8400 and N8600 filers


Sun Microsystems has introduced the Sun StorEdge N8400 and N8600 filers. The Sun StorEdge N8000 filer
series has grown from 200GB to over 10TB of usable capacity in a standalone configuration. Sun customers can
now add robust NAS storage to their end-to-end Sun infrastructure and achieve a low cost of ownership with
single-vendor support across the enterprise.
Sun also announced that the Sun StorEdge N8400 and N8600 filers have successfully completed the Oracle
Storage Compatibility Program (OSCP) NAS test suite for Oracle database environments. With OSCP
qualification awarded in February to the Sun StorEdge N8200 filer, the entire filer series has now been qualified
for seamless integration with datacenters running Oracle databases.
Sun's highly reliable filers are optimized for peak network file system performance. Hardware RAID 5 and built-in
component redundancy help provide data protection and high uptime. In addition, support for NFS and CIFS
protocols helps enable customers to consolidate storage and provides file sharing for Solaris and Microsoft
Windows clients, as well as other UNIX clients.
"The Sun StorEdge N8400 and N8600 filers present a good opportunity for channel providers to provide a full line
of high-performance NAS products for UNIX and Windows NT environments."

High Storage Densities and Scalability


The StorEdge N8600 Filer is made for storage service providers and other customers with limited floor space and
rapidly expanding storage needs. The N8600, the highest-capacity appliance in the StorEdge N8000 Series,
scales to 10 TB of storage capacity. Simple to set up and manage, the filer comes with the operating environment
and management software preinstalled and tuned for optimization. Hardware RAID 5 and built-in component
redundancy help provide data protection and high uptime. Support for NFS and CIFS protocols enables storage
consolidation, and provides file sharing for Solaris and Windows clients, as well as other Unix clients. The series
also includes the N8400, a midrange NAS appliance that scales to 4 TB.

2.16.7. Sun StorEdge 5310 NAS Appliance


The Sun StorEdge 5310 NAS Appliance is the newest addition to the Sun StorEdge 5000 NAS Appliance family.
Designed for multiprotocol IT environments seeking to consolidate storage, the Sun StorEdge 5310 NAS


Appliance supports UNIX and Microsoft Windows, simplifying file sharing between disparate platforms. To protect
your data and keep your business running smoothly, the Sun StorEdge 5310 NAS Appliance combines advanced
business-continuity functions such as file system journaling, checkpointing, remote mirroring, clustering*, and full
system redundancy with a full 2-gigabit Fiber Channel (FC) RAID array to deliver very high levels of availability
and performance in almost any open environment. Available in single and dual clustered NAS server
configurations, the Sun StorEdge 5310 NAS Appliance provides quick deployment, simple manageability,
seamless integration, and flexible policy-based services. The Sun StorEdge 5310 NAS Appliance is easy to
operate and effortless to manage, and installs in less than 15 minutes, thanks to its highly intuitive wizard.
Designed to grow along with your business, this powerful NAS appliance can easily be scaled to 65 terabytes of
raw FC or 179 terabytes of raw SATA* RAID-protected storage.

Key Features
 Easy Storage Platform to Deploy and Manage.
 Cross-Protocol Client Support and Management.
 Journaled File System with Checkpoint Capability.
 Remote Replication/Data Mirroring and Remote Monitoring.
 Clustering Capability*.
 Sun StorEdge Compliance Archiving Software
 Investment Protection with Common Storage Modules.

Specification
 Processor : CPU: One 3.06-GHz Intel Xeon processor with 512 KB of Level 2 cache
 Number of Slots : 4 GB in 6 DIMM slots, registered DDR-266 ECC SDRAM
 Fiber channel : 1 or 2 dual-port 2-Gb Fiber Channel (FC) HBAs
 Capacity : Scales to 65 TB of FC or 179 TB of SATA RAID-protected storage*
 Mass Storage: Max. Exp. Units: Up to 28 (7 per RAID Expansion Unit)

Simplicity
The affordable, plug-and-play Sun StorEdge 5310 NAS Appliance provides simple manageability, quick
deployment, seamless integration of UNIX and Windows, effortless configuration, and flexible policy-based data
services to match your unique IT requirements. The Sun StorEdge 5310 NAS Appliance is easy to operate,
effortless to manage, and installs in less than 15 minutes, thanks to its highly intuitive wizard.

Multi-Protocol
The highly flexible Sun StorEdge 5310 NAS Appliance supports the Common Internet File System (CIFS), NFS,
and FTP protocols, cross-protocol file sharing and cross-protocol file locking.

Availability and Reliability


The rock-solid Sun StorEdge 5310 NAS Appliance provides such advanced business-continuity functions as file
system journaling, checkpointing, remote mirroring, clustering, and full system redundancy.

Security and Compliance


The safe and secure Sun StorEdge 5310 NAS Appliance provides access rights and protection by individual user,
department, workgroup, or group. Sun StorEdge Compliance Archiving Software helps businesses address the
most stringent requirements for electronic storage media retention and protection.

High Performance
The Sun StorEdge 5310 NAS Appliance is a powerful NAS filer with fully optimized NAS heads and a full
2-gigabit FC RAID back-end array for the fast response times critical to computational or content-creation
applications, including technical computing and oil and gas exploration.

Scalability


Designed to grow along with your business, the powerful Sun StorEdge 5310 NAS Appliance is easily expandable
and highly scalable. Non-disruptively add storage capacity as you grow this appliance to as much as 65 terabytes
of RAID-protected storage.

2.16.8. NAS by ADIC


ADIC's data management software provides enterprise-scale access and protection for critical digital assets
throughout their lifetime. It also provides unified data access across a wide range of server/OS platforms. Used in 1,000
organizations worldwide, the StorNext solution enables organizations to access, protect, and retain multiple
classes of data on various storage media based on their value to the business.
AMASS for UNIX software products manage some of the largest and most challenging open systems data
storage environments.
As data increases in both volume and value, effective organizations need advanced storage technologies. The
ADIC-Network Appliance Certified Backup Solution provides reliable and adaptable backup. ADIC storage
networking libraries provide certified data protection solutions for all models of Network Appliance dedicated file
servers, also known as filers, using SAN technology. Using SAN technology behind a NAS network provides
greater efficiency, cost savings, and reliability. With the ADIC-Network Appliance jointly certified solution, these
benefits are made possible.

2.16.8.1. Benefits of Using a SAN Behind a NAS Storage Network


The Network Attached Storage (NAS) market has grown in recent years, increasing the demand for creating
comprehensive data protection solutions. Until recently, the primary demand was met by connecting tape devices
into NAS devices through the standard SCSI interface. This has worked well for situations where there is a limited
amount of data stored on a single NAS device.
However, NAS systems have expanded within corporate environments both in number and in size. This growth
can be attributed to the limitations IT directors have encountered when purchasing small, stand-alone
autochanger tape devices. Today’s high-data growth environment demands a more comprehensive and
consolidated storage system for data and resources. IT directors are demanding larger centralized storage. This
trend has led to a rise in the number of large NAS devices that need to be protected.
In this environment, it is critical that automated data protection services are provided. To meet these needs, a
point-to-point SCSI solution is no longer appropriate. The capabilities of an SCSI solution are exceeded by the
demands of NAS in the enterprise data center.
Over time, these trends have shaped the practice of deploying Storage Area Networks, or SANs. This is done by
leveraging Fiber Channel technologies. Thanks to NAS vendors, SAN connectivity technology has been
recognized as providing a standardized mechanism for Fiber Channel connectivity on the “back” ends of NAS
systems. Backup software vendors also utilize these solutions with NDMP protocols and SAN resource sharing –
practices that have become commonplace in non-NAS data storage environments.
SAN technology allows IT directors to consolidate and automate data protection operations from multiple NAS
systems. The result is reduced management expense through automation, reduced equipment expense through
consolidation, and increased reliability through simplification.

2.16.8.2. ADIC / Network Appliance Solution Overview


ADIC has extended the benefits of SAN consolidation of enterprise backup by launching a joint certification
initiative with Network Appliance. The result is a fully qualified, jointly certified solution. This solution allows
customers to consolidate backup operations from multiple Network Appliance filers into one tape library.
Customers no longer have to attach a tape library directly to each filer because all filers are visible to all tape
devices in the SAN. Additionally, this solution takes advantage of ADIC’s clear technical advantages in storage
networking support. ADIC provides a streamlined backup solution that is easier to manage, more cost effective,
and more flexible than traditional, direct attached configurations. ADIC and Network Appliance are also offering
their customers a solution that takes advantage of both the NAS and SAN environments.


Fig. 2.16.8.2 – NAS Configuration behind SAN

2.16.8.3. Benefits of the ADIC-Network Appliance NAS backup solution to an enterprise:


 Cost-efficient tape sharing: Save time and money through dynamic tape sharing and optimal utilization
of tape resources with Data ONTAP ™ and NDMP software applications.
 Increased flexibility: The ADIC-Network Appliance NAS Certified Backup Solution provides backup
configuration combinations of up to 64 tape devices in as many as 16 different media changers and up to
15 Network Appliance filers on a 16-port fabric switch. FC attached tape devices may be dynamically
added, removed, or replaced on the filers, whereas SCSI attached drives require rebooting of the filer.
 Greater adaptability and efficiency: When backing up a NAS system, the ADIC-Network Appliance
NAS Certified Backup Solution supports distances of up to 500 meters between elements when
connecting data sources and centralized tape libraries.
 Decrease in network bandwidth requirements: By taking advantage of the inherent efficiencies of a
SAN, bandwidth requirements become smaller and performance remains high.
 Backup reliability: The ADIC-Network Appliance NAS Certified Backup Solution ensures dependable
backup through hot-swappable tape drives.
 Outstanding availability: By implementing backup tape library configuration changes behind the NAS,
the ADIC-Network Appliance NAS Certified Backup Solution increases Network Appliance filer
availability.
 Increased backup efficiency: Time and money are saved by having up to 8 concurrent backups to
shared tape devices.

2.17. StorNext Storage Manager


With user-defined, automated policies, StorNext Storage Manager (StorNext SM) enables organizations to
access, protect, and retain multiple classes of data on storage media appropriate to their relative business value.
These policies determine where data will be stored (on RAID, ATA disk, or tape) over time, and whether
additional protection steps, such as file replication or vaulting, are needed. Plus, StorNext SM unifies data and
storage technologies to enable centralized management and improve storage utilization over the lifetime of the
data. The result is a reliable, automated system that:
 Frees up staff time
 Optimizes storage resource utilization
 Protects data integrity
 Increases data safety
IT management time is valuable and limited. Qualified IT professionals are difficult to find and retain. With
limited staff sizes and budgets, IT managers need to free up time currently spent on manual and complex storage


tasks. By taking advantage of the simple, centralized and automated control that StorNext SM provides, IT
managers can fully support enterprise data access and protection needs.
All data is "critical" when it's needed. Enterprises want to know that their critical data is always accessible with
reliable data integrity-despite any resource constraints. Through user-defined policies, StorNext SM balances
access needs with available capacity by storing critical data on high-performance media and lower priority data on
slower media. For data integrity, StorNext SM provides vital data protection options, such as versioning, file
replication, and media copy.
Data growth continues to soar. As data volumes grow, the pressure on enterprises to better utilize storage
resources is increasing. By using StorNext SM's policies to manage data movement between disk and tape
systems, based on Quality of Service (QOS) levels needed over time, enterprises can plan out the life cycles of
different data classes. The result is a system that scales easily, and allows you to handle growing volumes of data
with maximum flexibility and minimal disruption.
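
As a rough sketch of what such a user-defined policy can look like in practice, the example below applies a simple age-based rule that relocates files from a fast disk tier to a slower archive tier. This is not StorNext code; the tier paths and the 90-day threshold are assumptions chosen purely for illustration.

import shutil
import time
from pathlib import Path

FAST_TIER = Path("/mnt/raid")        # hypothetical high-performance disk tier
ARCHIVE_TIER = Path("/mnt/archive")  # hypothetical ATA-disk or tape-backed archive tier
MAX_AGE_DAYS = 90                    # files untouched longer than this are migrated

def migrate_cold_files() -> None:
    """Move files not accessed within MAX_AGE_DAYS to the archive tier."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for path in FAST_TIER.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = ARCHIVE_TIER / path.relative_to(FAST_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))   # relocate to the slower tier

if __name__ == "__main__":
    migrate_cold_files()   # a real policy engine would apply rules like this continuously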

Fig. 2.17 – Policies of StorNext Storage Manager

2.17.1. Benefits of StorNext Storage Manager


 Allows Solaris, IRIX, Windows, and Linux clients to access SAN files and devices simultaneously.
 Improves data movement by choosing the optimal resource across shared libraries/drives
 Maximizes the available disk space by relocating files from disk to tape
 Improves system performance on large file reads
 Ensures extremely fast file system recovery and failover
 Ensures high availability
 Allows users to manage their data according to their own policies
 Provides instant recovery of deleted files
 Ensures data integrity during the life cycle of the data
 Allows rollback to previous versions and maintains a change history
 Duplicates data without disk staging, protects enterprise storage investments by enabling seamless data
transition to new tape drive formats
 Ensures long-term accessibility to data through a published format
 Manages data stored outside of library or off-site

2.17.2. Features of StorNext Storage Manager


High-Performance Data Access


 Support for heterogeneous SAN clients and data
 Control of media, drive and library functions
 File truncation
 Partial file retrieval
 Robust file system journaling
 Failover

Long-Term Data Management and Protection


 Automated real-time policy engine
 Trashcan (undelete)
 Multiple copy file replication
 Versioning
 Media-to-media direct copy
 Self-describing tape format
 Vaulting


3. Storage Area Network


Introduction of SAN
A Storage Area Network (SAN) is an independent network for storage subsystems, free from the rest of the
computer network. In effect, a SAN removes the storage from the servers; thus liberating the storage devices
from the ownership of the servers. In such a setup where no server has ownership of storage subsystems, any
server can gain access to any storage device. In other words, any user can gain access to any storage device of
the SAN, regardless of the physical location of the storage or the user.
In addition to offering any-to-any connections, a SAN creates a scaleable environment. Since storage and servers
are independent from each other, storage devices or servers can be added or removed from their respective
networks without affecting each other. Storage devices can be added to the SAN without any worry about a
server's configurations. Isolating the potential disruptions of the servers from those of the storage reduces
potential for interruptions.
SAN (Storage Area Networking) is designed around an encapsulated SCSI protocol. The most popular physical
connections are based on high-speed Optical and Copper Fiber interconnects and can be shared via a hub or
switched, much like the more common networking protocols. In this type of system data is transferred over a
storage loop to the various peripheral devices on the SAN. This is essentially a private network whose bandwidth
is 100MB/sec and that can support up to 128 devices. There is also a switching technology available that will
allow over 15 million devices to be addressed and configured within a single switched fabric network, but the full
specifications for this standard have not yet been formalized. The storage medium for SAN is based on SCSI disk
and tape drives or on the newer Fiber Channel interface drives now entering the market.

Fig. 3 – Basic SAN configuration


The creation of an independent SAN further enhances the workflow of information among storage devices and
other systems on the network. Additionally, moving storage-related functions and storage-to-storage data traffic to
the SAN relieves the front end of the network, the Local Area Network (LAN), of time consuming burdens such as
restore and backup.
SANs are often contrasted with NAS, but NAS actually falls under the "storage network" umbrella. The major
difference is that the SAN is channel attached, and the NAS is network attached.
NAS - Primarily designed to provide access at the file level. Organizations working on LANs consider NAS the
most economical addition to storage.
DAS or SAN - Optimized for high-volume, block-oriented data transfers.

Storage Management Solutions


Traditionally, computers are directly connected to storage devices. Only the computer with the physical
connection to those storage devices can retrieve data stored on those devices. A SAN allows any computer to
access any storage device as long as both are connected to the SAN.
A SAN is usually built using Fiber Channel technology. This technology allows devices to be connected to each
other over distances of up to 10 kilometers. Devices connected using Fiber Channel can be set up in point-to-
point, loop, or switched topologies.

3.1. Advantages of Storage Area Networks (SANs)


 Fault-tolerance - in a SAN environment, if a server were to fail, access to the data can still be
accomplished because other servers will have access to that data through the SAN.
 Disaster Recovery - provides a higher data transfer rate than conventional LAN/WAN technology, and
over greater distances, allowing backups or recovery to/from remote locations to be done during a
relatively short window of time.
 Network Performance Enhancement - data traffic no longer travels over the LAN twice, as it traditionally
would, reducing network traffic by half.
 Scalability - SAN technology breaks the physical distance limitation by allowing you to locate your
storage miles away as opposed to only a few feet.
 Manageability - Storage Management in a SAN environment allows off-loading the responsibility of
maintaining the storage devices to dedicated groups.
 Data Transfer Performance - SANs allow data transfer rates of up to 100 MB/s full duplex and initiatives
are underway to further increase this throughput.
 Cost Effectiveness - SAN technology allows the total capacity of storage to be allocated where it is
needed.
 Storage Pool - a storage pool can be accessed via a SAN, which reduces the total extra storage needed
for projected growth.
 Higher bandwidth and greater performance
 Modular scalability allowing for pay-as-you-grow expenditures
 Maximized hardware utilization
 High availability and fault tolerance for expanded business continuance
 Manageability
 Ease of integration into existing infrastructure
 Better access to information by sharing data across the enterprise
 Freedom from vendor dependence through the use of heterogeneous hardware and software

3.2. Advantages of SAN over DAS


The most effective SANs provide a wide range of benefits and advantages over DAS, including:
 More effective utilization of storage resources through centralized access
 Simplified, centralized management of storage, reducing administrative workload to save time and
money
 Increased flexibility and scalability through any-to-any storage and server connectivity
 Improved throughput performance to shorten data backup and recovery time
 Reduced LAN congestion due to removal of backups from production network
 Higher data availability for business continuance through a resilient network design
 Excellent scalability and investment protection allowing you to easily add more storage as your business
needs demand
 Superior security for storage environments
 Non-disruptive business operations when you add or re-deploy storage resources
 Proven short- and long-term return on investment (ROI)


3.3. Today’s SAN Topologies


Many of today’s SAN topologies are fairly simple. However, SAN topologies are providing increased performance,
distance, and connectivity while creating a first generation SAN platform. Existing storage management
applications can be ported onto these SAN configurations since Fiber Channel networks encapsulate the legacy
SCSI protocol. As a result, SAN-attached devices appear to be SCSI devices. Most early SAN configurations fit
into one of the following topologies:


The most complex topologies use one or more Fiber Channel switches with multiple storage management
applications. These configurations are made possible by using a technique commonly called "zoning", in which the
Fiber Channel network is partitioned to create multiple, smaller virtual SAN topologies. By doing this, the Fiber
Channel network looks like a simple SAN configuration to the host storage management application. Zoning
techniques and tools vary widely at this point, but are available from virtually every Fiber Channel vendor.
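
To make the idea concrete, the small Python sketch below models zoning as named sets of port identifiers: an initiator can reach a target only when both appear in the same zone. The zone names and WWN-style identifiers are invented for illustration; actual zoning is configured on the Fiber Channel switches themselves, not in application code.

# Illustrative model of zoning; zone names and identifiers are invented.
ZONES = {
    "backup_zone": {"wwn:server_a", "wwn:tape_library"},
    "database_zone": {"wwn:server_b", "wwn:raid_array_1"},
}

def can_access(initiator: str, target: str) -> bool:
    """True if at least one zone contains both the initiator and the target."""
    return any(initiator in members and target in members
               for members in ZONES.values())

print(can_access("wwn:server_a", "wwn:tape_library"))   # True: same zone
print(can_access("wwn:server_a", "wwn:raid_array_1"))   # False: different zones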

3.4. Difference between SAN and LAN


 Storage versus Network Protocol: A LAN uses network protocols that send smaller "chunks" of data
with increased communication overhead, reducing bandwidth. A SAN uses storage protocols (SCSI),
giving it the ability to transmit larger "chunks" of data with reduced overhead and higher bandwidth.
 Server Captive Storage: LAN-based systems connect servers to clients, with each server owning and
controlling access to its own storage resources, limiting accessibility. Any added storage is attached
directly to a server, and not shared over the LAN. A SAN allows storage resources to be added directly
to the network, without being tied to a specific server, allowing any server to access storage resources
anywhere on the SAN.

3.5. Difference between SAN and NAS


Storage Area Networks (SAN) and Network-attached storage (NAS) are both storage technologies attached to a
network, and represent the convergence of storage and networking technologies.
A SAN is a dedicated network of storage devices and host systems, separate and distinct from a company's
LAN/WAN. SANs are designed to handle large amounts of data traffic between servers and storage devices, and
keep the bandwidth-intensive backup traffic separate from the normal LAN/WAN traffic. Other benefits of a SAN
include improved connectivity from servers to storage devices, and centralized data management.
A NAS is a specialized file server, connected to the network. It uses traditional LAN protocols such as Ethernet
and TCP/IP, preventing the device from being confined by the limitations of SCSI technology. NAS products, such
as Network Appliance Filers and Auspex servers are storage devices, and are attached directly to the messaging
or public network. NAS products tend to be optimized for file serving purposes only.
Each approach has its merits, but the general consensus is that SANs represent the future of storage
connectivity. NAS devices will continue to perform their specific functions, but trends indicate that data-centric
organizations are migrating towards the SAN model.

3.6. How do I manage a SAN?


There are two basic methods for SAN management:
SNMP (Simple Network Management Protocol): SNMP is based on TCP/IP and offers basic alert
management, allowing a node to alert the management system of failures of any system component. However,
SNMP does not offer proactive management and lacks security.
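
For illustration, the sketch below polls a device's sysDescr object over SNMP, the kind of basic query an alert-oriented management station performs. It assumes the third-party pysnmp package is installed and that the device (a hypothetical switch at 192.0.2.10) accepts SNMPv2c with the community string "public"; it is not tied to any particular vendor's management software.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def query_sysdescr(host: str, community: str = "public") -> str:
    """Fetch sysDescr (OID 1.3.6.1.2.1.1.1.0) from an SNMP-capable SAN device."""
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),          # SNMPv2c
               UdpTransportTarget((host, 161), timeout=2.0),
               ContextData(),
               ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))
    if error_indication:
        raise RuntimeError(str(error_indication))
    if error_status:
        raise RuntimeError(error_status.prettyPrint())
    return str(var_binds[0][1])

# print(query_sysdescr("192.0.2.10"))   # hypothetical switch management address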
Proprietary Management Protocol: There are a number of manufacturers that provide SAN management
software (see SAN Management). Running management software typically requires a separate terminal (typically
an NT server) connected to the SAN. Connecting this terminal to a SAN enables additional capabilities, such as
zoning (security), mapping, and masking, as well as backup and restore functions and fault management.

3.7. What is a SAN Manager?


A SAN Manager is proprietary Storage Area Network management software, allowing central management of
Fiber Channel hosts and storage devices. A SAN Manager enables systems to utilize a common pool of storage
devices on a SAN, enabling SAN administrators to take full advantage of storage assets and reduce costs by
leveraging this existing equipment more efficiently.

3.8. When should I use a Switch vs. a Hub?


Hubs: Hubs are used in small, entry-level environments and systems. They typically cost less than switches, but also
offer a lower throughput rate than switches.


Switches: Used in data-intensive, high-bandwidth applications such as backup, video editing, and document
scanning. Due to their redundant data paths and superior manageability, switches are used with large amounts
of data in high-availability environments.
Are there reasons to use Switches instead of Hubs in a SAN?
Switches provide several advantages in a SAN environment:
 Failover Capabilities: If a single switch fails in a Switched Fabric environment, the other switches in the
fabric remain operational. A hub-based environment typically fails if a single hub on the loop fails.
 Increased Manageability: Switches support the Fiber Channel Switch (FC-SW) standard, making
addressing independent of the subsystem's location on the fabric and providing superior fault isolation
along with high availability. FC-SW also allows hosts to better identify subsystems connected to the
switch.
 Superior Performance: Switches facilitate "multiple-transmission data flow", in which each fabric
connection can simultaneously maintain a 100MB/sec throughput. A hub offers a single data flow with an
aggregate throughput of 100MB/sec (see the short arithmetic sketch after this list).
 Scalability: Interconnection switches provide thousands of connections without degrading bandwidth. A
hub-based loop is limited to 126 devices.
 Availability: Switches support the online addition of subsystems (servers or storage) without requiring
re-initialization or shutdown. Hubs require a Loop Initialization (LIP) to reacquire subsystem addresses
every time a change occurs on the loop. A LIP typically takes 0.5 seconds and can disable a tape
system during the backup process.

3.9. TruTechnology
Most SAN solutions utilize Fiber Channel technology, which provides higher speeds and greater distances. SCSI
devices, however, can function on a SAN by utilizing a SCSI-to-Fiber Channel bridge.

3.9.1. TruFiber
We call INLINE Corporation's Fiber Channel storage TruFiber because these systems feature Fiber Channel technology
from the host connections, through the controllers, to the drives. Many other Fiber Channel storage providers take
you down to slower SCSI, even in their high-end solutions. With INLINE TruFiber you know you are getting Fiber
Channel throughout.

3.9.2. TruCache
When it comes to performance, the single most important factor for any storage system is how well it makes use
of higher speed cache memory to enhance disk IO operations. Cache is used to increase the speed of read and
write operations as well as allow dual operation writes in applications such as mirroring. While the use of cache
provides an incredible performance gain, there is also an incredible risk associated with it. File system corruption
and lost data can result if the cache is not managed and maintained properly. For this reason, INLINE Corporation
utilizes our TruCache technology in high availability, redundant controller configurations. When you deploy a dual
controller system from INLINE, you are assured cache integrity because the system simultaneously mirrors all
cache and maintains complete coherency. In fact, INLINE Corporation differentiates itself from most other
vendors by offering independent paths from two different controllers to the same disk simultaneously, while
supporting reads and writes from both controllers. TruCache ensures high performance and data integrity when
you operate in an Active/Active (multi-controller) mode of operation.

3.9.3. TruMap
When offering multiple ports to hardware RAID controllers, one often-overlooked feature is port control. On the
SanFoundation and MorStor product lines you have up to 128 two-gigabit host connections. Flexibility in mapping
the ports on each controller makes management infinitely easier. TruMap gives you the ability to map each port
on a controller using one of three methods: One-to-One, One-to-Any, or Any-to-One. You can choose the
appropriate mapping scheme based on your needs, such as security, bandwidth provisioning (QoS), function,
and network segregation. This allows you to maintain bandwidth for mission-critical and sensitive
applications as well as ensure minimum or maximum data rates to a specific LUN.


3.9.4. TruMask
Network security has never been more important than it is now. Because today's storage implementations are
often on a network, in addition to being directly attached, storage arrays must have their own level of security to
ensure data integrity and privacy. To answer this need, INLINE Corporation utilizes our TruMask option to protect
your valuable data. TruMask gives you control over which arrays, and even LUNs, can be viewed by individual
hosts and storage management applications. Because TruMask works down at the LUN level you have the ability
to mix different data security classifications within a single array. When TruMask is invoked, the storage array
looks at each computer connected and instantaneously determines which LUNs a computer can see as well as
access. TruMask is a key component in a successful SAN installation.
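The masking decision described above can be pictured as a lookup from host identity to the set of LUNs that host is permitted to see. The Python sketch below is a hypothetical illustration of that idea only; the WWNs, LUN numbers and table layout are invented for the example and do not represent INLINE's implementation.

# Per-host view of which LUNs the array exposes (all identifiers are made up).
MASKING_TABLE = {
    "10:00:00:00:c9:2b:aa:01": {0, 1, 2},   # database host sees LUNs 0-2
    "10:00:00:00:c9:2b:aa:02": {3},         # web host sees only LUN 3
}

def visible_luns(host_wwn: str) -> set:
    """Return the LUNs the array presents to this host; unknown hosts see nothing."""
    return MASKING_TABLE.get(host_wwn, set())

def can_access(host_wwn: str, lun: int) -> bool:
    return lun in visible_luns(host_wwn)

if __name__ == "__main__":
    print(can_access("10:00:00:00:c9:2b:aa:02", 0))   # False: LUN 0 is masked off
    print(visible_luns("10:00:00:00:c9:2b:aa:01"))    # {0, 1, 2}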

3.9.5. TruSwap
Today's data storage solutions need to be not only highly available and highly reliable but they also need to be
easily maintained. INLINE Corporation, realizing this need, has designed all of our data storage arrays to be truly
user friendly, even when it comes to maintenance. Our TruSwap technology allows hot-swap removal and
replacement of components while the array is operational. During normal operation, an INLINE array can be
serviced and maintained without ever shutting down the array and interrupting data access to the users. Every
component involved in data integrity is hot swappable and can be removed and replaced in less than 5 minutes.
INLINE solutions are quite different from other arrays because they do not require special tools, complex cabling,
a specially trained engineer and worst of all - downtime. Most of the servicing can be performed quickly and easily
by untrained personnel. With INLINE Corporation's TruSwap you stay online while you replace the necessary
component in less than 5 minutes.

3.10. Features of a SAN


 Backup Capacity: Increasing data storage requirements and the need for 100% availability of
applications have overwhelmed SCSI backups across the LAN.
 Capacity Growth: Both IDC and Gartner Group estimate that data is growing at a rate of over 88%
annually. To put this in perspective, a 750GB enterprise in 2000 will require roughly 5TB in 2003 (the
compounding is worked in the sketch after this list).
 System Flexibility/Cost: A SAN is a storage-centric network, providing easy scalability, allowing servers
and storage to be added independently of each other. Additional devices, including disk arrays and tape
backup devices can be added to the SAN without disrupting servers or the network.
 Availability/Performance: The use of storage data transmission protocols, such as SCSI, permits the
transfer of large amounts of data with limited latency and overhead.
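The 750GB-to-5TB projection in the Capacity Growth item follows directly from compounding 88% annual growth over the three years from 2000 to 2003, as the short calculation below confirms.

capacity_gb = 750.0
for year in range(2000, 2003):       # three growth years: 2000 -> 2003
    capacity_gb *= 1.88              # 88% annual growth
print(f"Projected capacity in 2003: {capacity_gb / 1000:.1f} TB")   # ~5.0 TB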

3.11. SANs : High Availability for Block-Level Data Transfer


A storage area network, or SAN, is a dedicated, high performance storage network that transfers data between
servers and storage devices, separate from the local area network. With their high degree of sophistication,
management complexity and cost, SANs are traditionally implemented for mission-critical applications in the
enterprise space. In a SAN infrastructure, storage devices such as NAS, DAS, RAID arrays or tape libraries are
connected to servers using Fiber Channel. Fiber Channel is a highly reliable, gigabit interconnect technology that
enables simultaneous communication among workstations, mainframes, servers, data storage systems and other
peripherals. Without the distance and bandwidth limitations of SCSI, Fiber Channel is ideal for moving large
volumes of data across long distances quickly and reliably.
In contrast to DAS or NAS, which are optimized for data sharing at the file level, the strength of SANs lies in their
ability to move large blocks of data. This is especially important for bandwidth-intensive applications such as
database, imaging and transaction processing. The distributed architecture of a SAN also enables it to offer
higher levels of performance and availability than any other storage medium today. By dynamically balancing
loads across the network, SANs provide fast data transfer while reducing I/O latency and server workload. The
benefit is that large numbers of users can simultaneously access data without creating bottlenecks on the local
area network and servers.
SANs are the best way to ensure predictable performance and 24x7 data availability and reliability. The
importance of this is obvious for companies that conduct business on the web and require high volume
transaction processing. Another example would be contractors that are bound to service-level agreements (SLAs)
and must maintain certain performance levels when delivering IT services. SANs have a wide variety of built-in
failover and fault-tolerance features to ensure maximum uptime. They also offer excellent scalability for large
enterprises that anticipate significant growth in information storage requirements. And unlike direct-attached
storage, excess capacity in SANs can be pooled, resulting in a very high utilization of resources.

3.12. Server-Free Backup and Restore


Backup and restore are among the major headaches for most IT departments today. Constantly increasing
storage and bandwidth requirements result in increases in backup time that often exceed the capability of
available systems to move critical data to a backup device. In the event of data loss, restoring data from system
backup can be a major issue, due not only to the excessive time to restore data, but also due to the compromises
made in the backup process that can result in inconsistent or incomplete file sets.
With the advent of Fiber Channel SANs, new methods of managing data are possible. LAN-Free backup allows
multiple servers to directly access and share multiple tape libraries without transporting data over the LAN. This
dramatically decreases server and network loads and can greatly increase device utilization of backup storage
devices.
Predicated on LAN-Free backup, Server-Free backup expands on these capabilities by using the Storage Router
to take over data movement operations on the SAN, under control of a backup server. This has immediate
benefits, resulting in less server utilization and greater backup bandwidth.
Traditional backup applications copy data into the server, building a memory image of the data to be backed up,
and then transfer that image to the backup device. In Server-Free backup, the server only has to build a list of
devices and blocks related to each file by accessing the operating system's file allocation tables. This list is then
transferred to the data mover device, which then performs the actual data movement. The benefits are obvious,
as the server no longer incurs the memory usage and double copies of data across the system's buses and storage
interfaces. Processor workload is also substantially reduced, as management of the data movement and file
system conversions are simplified.
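The flow just described can be sketched in a few lines of Python: the backup server only assembles a list of extents (device, starting block, block count) from the file system metadata and hands that list to the data mover, which copies the blocks between disk and tape over the SAN. The class names, extent format and sample file table below are hypothetical illustrations, not the interfaces of an actual backup product.

from dataclasses import dataclass

@dataclass
class Extent:
    device: str          # source disk device
    start_block: int
    block_count: int

def build_extent_list(file_table, files):
    """Stand-in for walking the operating system's file allocation tables."""
    extents = []
    for name in files:
        extents.extend(file_table.get(name, []))
    return extents

class DataMover:
    """Stand-in for the Storage Router that copies blocks disk-to-tape on the SAN."""
    def copy_to_tape(self, extents, tape):
        # The real device issues the reads and writes itself; the server stays idle here.
        return sum(e.block_count for e in extents)

if __name__ == "__main__":
    table = {"orders.db": [Extent("disk0", 4096, 2048), Extent("disk0", 9000, 512)]}
    work = build_extent_list(table, ["orders.db"])
    print(DataMover().copy_to_tape(work, "tape-drive-1"), "blocks moved by the SAN, not the server")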

3.13. Backup Architecture Comparison


Regardless of the backup architecture used, all backup methods are supported. This includes file and image
backup and restore. Additionally, compatibility between backup sets is maintained, so that tapes created by one
backup architecture can be read by any of the other methods in use.
The key to Server-Free Backup is a Data Mover device. The data mover must play several roles in
fulfilling the functions required. In addition to being an addressable device, it must be able to act as both an
initiator and a target, to receive data and commands as well as to read and write data to and from disk and tape.
A Storage Router, when used as a connection point for tape libraries, is the ideal candidate for acting as a Data Mover, not
only providing the base capabilities required, but also providing intrinsic segmentation of data transported on the
SAN.

3.14. SAN approach for connecting storage to your servers/network?


Fault Tolerance
Each server in a traditional network can be considered an island of data. If the server is unavailable, then access
to the data on that server is not possible. In traditional networks, most storage devices are physically connected to
servers using a SCSI connection. SCSI, an acronym for Small Computer Standard Interface, is a hardware
interface that enables a single expansion board in a computer to connect multiple peripheral devices (disk drives,
CD ROMs, Tape Drives, etc.). Since access to data attached in this manner is only available if the server is
operating, a potential for a single point of failure exists. In a SAN environment, if a server were to fail, access to
the data can still be accomplished, because other servers will have access to that data through the SAN.
In LAN/WAN environments, completely fault-tolerant access to data requires mirrored servers. This is a costly
proposition to many organizations. Also, a mirror server approach places a large amount of traffic on the network,
or requires a proprietary approach for data replication.

Disaster Recovery
SANs allow greater flexibility in Disaster Recovery. They provide a higher data transfer rate over greater
distances than conventional LAN/WAN technology. Therefore, backups or recovery to/from remote locations can
be done during a relatively short window of time. Since storage devices are accessible by any server attached to
the SAN, a secondary data center could immediately recover from a failure should a primary data center go
offline.

Network Performance Enhancement


Gaining access to data in a network without SAN attached storage requires data transfer over a traditional
network. Many applications today are created in multi-tier configurations. Take the example of a corporate intranet
in which a user can request access to data. The server takes this request and accesses the data via a network
attached CD-ROM tower where the data is actually stored. When the data request is complete, the data would
have traveled over the network twice. This puts double the traffic load on the network; first when it is transferred
from the CD-ROM to the server, and again when the server sends the data to the user who requested it. This
increased traffic may lead to poor network performance. If the CD-ROM tower were connected to a SAN, the data
would travel over the network only once.
A SAN would allow off-loading of the traffic between servers and the Network Attached Storage (in this case, the
CD ROM tower is the NAS device). Since the data traffic between server and storage device is no longer seen on
the same network as the general user population, the data traffic does not travel twice over the LAN. This cuts
the network traffic in half, an immediate benefit to users on the LAN.

Scalability
In today's computing environments, the demand for large amounts of high-speed storage is increasing at
phenomenal rates. This demand brings new problems to IT departments. Of major concern is the physical
location of the storage devices. The traditional connection of storage is through SCSI connections. However, SCSI
has physical distance limitations that could make it impossible to connect the necessary storage devices to the
servers. SAN technology breaks this physical distance limitation by allowing you to locate your storage miles
away as opposed to only a few feet.

Manageability
Many organizations have groups whose tasks are dedicated to specific functions. It is common to find NT
Administrators, Novell Administrators or Unix Administrators all in the same company. All of these Administrators
have two things in common: they all use a network to communicate to the clients they serve and they all require
disk storage.
For an organization's networking needs, you will often find a Network Manager or Network Group. They maintain
the installed base of hubs, switches and routers. The Network Manager ensures the network is operating
effectively and makes plans for future growth.
Few organizations have groups whose responsibilities include managing the storage resources. It is ironic that a
company's most crucial resource, data storage, often has no formal group to manage it effectively. As things
stand, each type of system administrator is required to monitor the storage attached to their servers, perform
backups and plan for growth.
Storage Management in a SAN environment could offload the responsibility of maintaining the storage devices to
a dedicated group. This group can perform backups over the SAN, alleviating LAN/WAN traffic for all types of
servers. The group could allocate disk space to any server regardless of the type. The SAN managers could
actively monitor the storage systems of all platforms and take immediate corrective action, whenever needed.

Data Transfer Performance


Current SCSI specifications only allow for data throughput rates of up to 160 MB/s. SANs allow for data transfer
rates of up to 100 MB/s full duplex. This means that the effective transfer rate between devices can reach 200
MB/s (100 MB/s in each direction). Parallel connections can be used in SANs to increase performance. Initiatives
are underway to further increase this throughput. This is truly the next generation data transfer mechanism.
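The full-duplex arithmetic above, and the effect of adding parallel connections, can be expressed as a small Python helper; the link counts are example values only.

def effective_rate_mb_s(links: int, per_direction_mb_s: float = 100.0, full_duplex: bool = True) -> float:
    """Effective transfer rate across a set of parallel SAN links."""
    per_link = per_direction_mb_s * (2 if full_duplex else 1)
    return links * per_link

print(effective_rate_mb_s(1))   # 200.0 MB/s for a single full-duplex link
print(effective_rate_mb_s(2))   # 400.0 MB/s with two parallel links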

Cost Effectiveness
Each server requires its own equipment for storage devices. The storage cost for environments with multiple
servers running either the same or different operating systems can be enormous. SAN technology allows an
organization to reduce this cost through economies of scale. Multiple servers with different operating systems can
access storage in RAID clusters. SAN technology allows the total capacity of storage to be allocated where it is
needed. If requirements change, storage can be reallocated from devices with an excess of storage to those with
too little storage. Storage devices are no longer connected individually to a server; they are connected to the
SAN, from which all devices gain access to the data.

Storage Pool
Instead of putting an extra 10GB on each server for growth, a storage pool can be accessed via SAN, which
reduces the total extra storage needed for projected growth.
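As a rough illustration of the pooling argument, the sketch below compares dedicated per-server headroom with a single shared reserve. The server count and the size of the pooled reserve are assumptions chosen only to show the shape of the saving.

servers = 20
per_server_headroom_gb = 10          # the "extra 10GB on each server" approach
pooled_headroom_gb = 60              # assumed shared reserve sized for realistic peak demand

print("dedicated headroom:", servers * per_server_headroom_gb, "GB")   # 200 GB sliced across servers
print("pooled headroom:   ", pooled_headroom_gb, "GB")                 # one shared SAN reserve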

Summary
SANs connect storage devices and provide high-speed, fault-tolerant access to data. A SAN is different from a
LAN/WAN, in that a LAN/WAN is solely a communications highway. SANs are usually built with Fiber Channel,
and are set up in either a point-to-point, loop, or switched topology.

3.15. Evolution of SANs


Today
Today, SANs can be effectively deployed within an enterprise, without risking loss of infrastructure investment, or
compromising the integrity of the stored data. Many top-tier firms have successfully installed SANs, and continue
to do so.
With any emerging technology, prospective users need to be informed and kept current on specific industry
standards and when they will be adopted and approved by industry associations, equipment manufacturers and
developers. Several hardware, software and industry consortiums, including SNIA (Storage Networking Industry
Association), FCIA (Fiber Channel Industry Association), and FCA (Fiber Channel Alliance) represent the SAN
industry. In order to promote and adopt universal SAN standards, these groups co-develop universal hardware
and software protocols.
The SAN industry has universally adopted Fiber Channel Arbitrated Loop (FC-AL) as a protocol for exchanging
data within switched and non-switched environments. (See "What is Fiber Channel Arbitrated Loop?") As a result
of these standards, FC-AL accounts for the majority of implemented SANs today. FC-AL allows more than two
devices to communicate through common bandwidth, and allows greater flexibility and support than other
topologies. The available bandwidth of the loop is determined by the amount of traffic generated in the loop. Each
device in the FC-AL SAN must arbitrate for access to the loop before sending any data. Fiber Channel provides a
superset of commands that allows the orderly and efficient transmission of data, and ensures its integrity.
Although Switched Fabric SANs have been successfully implemented, they currently do not have a complete set
of standards agreed on by the industry. Users looking to invest in SAN technology are advised to implement FC-
AL topology, at least until standards have been set for Switched protocol. Switch manufacturers are currently
releasing products that are FC-AL compatible, and are fabric upgradeable. Despite the delay in producing a fully
supportable set of Switched Fabric standards, users can still benefit substantially from the performance gains
within SAN architectures by using the FC-AL topology.

The Future
The potential of SAN technology is limitless. Advances in both cabling and Fiber Channel technology occur on a
regular basis. Unlike any other existing data transport mechanisms, fiber-optic technology offers a substantial
increase in bandwidth capacity. Fiber-optic cabling transmits data through optical fibers in the form of light. A
single hair-thin fiber is capable of supporting 100 trillion bits per second.
Currently, SAN backbones can support 1.0625Gbps throughput; 2Gbps throughput will be available
shortly, and further leaps will occur more frequently in the next few years. As bandwidth becomes a
commodity, data exchange will be liberated from size constraints, and storage will soon be measured in petabytes
(equal to 1000 terabytes). To meet the demand for fiber interfaces, storage vendors are now designing their
products with fiber backplanes, controllers and disk modules.
Future offerings include "serverless" backup technology, which liberates the traditional server interface from
backup libraries, to enable faster backups. Currently, heterogeneous platforms can only share the physical
storage space within a SAN. As new standards and technologies emerge, UNIX, NT, and other open systems will
enable data sharing through a common file system. Some major vendors in the SAN field are presently
developing products designed for 4Gbps throughput.


3.16. Comparison of SAN with Available Data Protection Technologies


In a SAN environment, storage devices such as Tape Drives and RAID arrays are connected to many kinds of
servers via a high-speed interconnection, such as Fiber Channel. This setup allows for any-to-any communication
among all devices on the SAN. It also provides alternative paths from server to storage device. In other words, if a
particular server is slow or completely unavailable, another server on the SAN can provide access to the storage
device.
A SAN also makes it possible to mirror data, making multiple copies available. The high-speed interconnection
that links servers and storage devices essentially creates a separate, external network that's connected to the
LAN but acts as an independent network. There are a number of advantages to SANs and the separate
environments they create within a network. SANs allow for the addition of bandwidth without burdening the main
LAN. SANs also make it easier to conduct online backups without users feeling the bandwidth pinch. And, when
more storage is needed, additional drives do not need to be connected to a specific server; rather, they can
simply be added to the storage network and can be accessed from any point. Another reason SANs are making
big waves is that all the devices can be centrally managed. Instead of managing the network on a per-device
basis, storage can be managed as a single entity, making it easier to deal with storage networks that could
potentially consist of dozens or even hundreds of servers and devices.

Fig. 3.16 – SAN implementation on LAN

The interconnection of choice in today's SAN is Fiber Channel, which has been used as an alternative to SCSI in
creating high-speed links among network devices. Fiber Channel was developed by ANSI in the early 1990s,
specifically as a means of transferring large amounts of data very quickly. Fiber Channel is compatible with SCSI,
IP, IEEE 802.2, ATM Adaptation Layer for computer data, and Link Encapsulation, and it can be used over copper
cabling or fiber-optic cable. Currently, Fiber Channel supports data rates of 133Mbits/sec, 266Mbits/sec,
531Mbits/sec, and 1.0625Gbits/sec. A proposal to bump speeds to 4Gbits/sec is on the drawing board. The
technology supports distances of up to 10 kilometers, which makes it a good choice for disaster recovery, as
storage devices can be placed offsite.
SANs based on Fiber Channel may start out as a group of server systems and storage devices connected by
Fiber Channel adapters to a network. As the storage network grows, hubs can be added, and as SANs grow
further in size, Fiber Channel switches can be incorporated.
Fiber Channel supports several configurations, including point-to-point and switched topologies. In a SAN
environment, the Fiber Channel Arbitrated Loop (FCAL) is used most often to create this external, high-speed
storage network, due to its inherent ability to deliver any-to-any connectivity among storage devices and servers.
An FCAL configuration consists of several components, including servers, storage devices, and a Fiber Channel
switch or hub. Another component that might be found in an arbitrated loop is a Fiber Channel-to-SCSI bridge,
which allows SCSI-based devices to connect into the Fiber Channel-based storage network. This not only
preserves the usefulness of SCSI devices but also does it in such a way that several SCSI devices can connect to
a server through a single I/O port on the server. This is accomplished through the use of a Fiber Channel Host
Bus Adapter (HBA). The HBA is actually a Fiber Channel port. The Fiber Channel-to-SCSI bridge multiplexes
several SCSI devices through one HBA.


The FCAL provides not only a high-speed interconnection among storage devices but also strong reliability. In
fact, you can remove several devices from the loop without any interruption to the data flow.
The major benefit of SAN is its ability to share devices among many servers at high speeds and across a variety
of operating systems. This is particularly true in a centralized Data Center type environment. However, SANs are
expensive, difficult to configure and costly to manage. The costs of SAN implementation would make them
prohibitive in a geographically diverse environment, such as branch offices or retail locations.

3.17. SAN Solutions


3.17.1. SAN Hardware Solutions

3.17.1.1. ADIC SAN Solutions


Traditional Distributed Model
Distributed backup offers high speed and good network performance; however, since data is stored in isolated
systems throughout the network, the cost to manage and scale this model rises as data demands increase.

Fig. 3.17.1.1.1 - Distributed backup becomes costly to manage as data grows.

Traditional Backup over the LAN


Backup over the LAN offers centralized data storage and control. However, network speed and performance
decrease as network traffic increases, particularly in organizations that provide 24x7 network data services.

Fig. 3.17.1.2. - Network speed and performance suffer as backup traffic increases.

SAN Backup Includes Best of Both Worlds


ADIC's Open SAN Backup offers all the benefits of traditional models—with none of the drawbacks.
SAN Backup moves large backup data transfers off your LAN, freeing your network for critical business
transactions while reducing backup times significantly. Backup data is consolidated in a library of shared tape
drives, where you can centralize data control and scale storage easily and economically.


Fig. 3.17.1.3. - SAN backup protects LAN performance and scales easily and cost-effectively.

3.17.1.2. SAN by SUN


Introducing the SUN StorEdge Open SAN
Simplify your Storage Area Network (SAN) with the Sun StorEdge Open SAN Architecture. The comprehensive,
unified SAN architecture scales from the workgroup to the data center and enables you to optimize storage
utilization, improve performance, simplify management, and increase availability while lowering TCO. Plus, it's
optimized for Sun environments and supports heterogeneous operating systems. The architecture provides a
common full-fabric SAN infrastructure combined with high-availability storage products to help ease SAN
management and allow you to consolidate storage resources on the network. What's more, the Sun ONE
software strategy makes it easier to design, deploy, and manage a massively scalable SAN. And the open SAN
management software provides a centralized management platform for viewing and managing the SAN storage
environment.

Overview
Storage growth continues to escalate, yet IT departments have to manage more data with constant or declining
resources. Sun helps you meet this challenge with a comprehensive set of products and services that eases
storage area network (SAN) management and consolidates storage resources on the network. The Sun StorEdge
Open SAN architecture delivers on the promise of SANs by simplifying SAN management, optimizing resource
utilization, and driving down total cost of ownership (TCO).

Flexibility
The Sun StorEdge Open SAN architecture has flexibility designed in to allow it to meet a wide range of customer
requirements. Whether your SAN needs are small or large, simple or more challenging, local or worldwide, the
Sun StorEdge Open SAN architecture can support your design today and grow with you in the future.

Manageability
Sun has taken a leadership role in designing, promoting, and adopting open standard based SAN management.
Taken together with existing management interfaces and tools, Sun is able to deliver simple to use
heterogeneous management software as well as enable third party software vendors to provide additional choice
for our customers.

Low Total Cost of Ownership


The Sun StorEdge Open SAN architecture enables storage consolidation, disaster recovery, shared file systems,
multi-host high availability, and improved backup and recovery solutions. Solutions built around the Sun StorEdge
Open SAN architecture can simplify management, reduce training requirements, improve utilization, increase
efficiency and improve reliability. All of these benefits can reduce costs.


Compatibility
The Sun StorEdge Open SAN architecture has as a particular focus implementing and taking advantage of open
standards. Whether through early adoption of management standards, using SCSI and Fiber Channel standards
or ensuring that switches interoperate, openness is a key design goal and practice throughout the architecture.

Availability
The Sun StorEdge Open SAN architecture enables extreme levels of availability. From the component level
through best practices, the architecture is capable of meeting your availability requirements.

Performance
The Sun StorEdge Open SAN architecture offers very high performance. The architecture supports 1 and 2 Gb
Fiber Channel today and will incorporate 10 Gb Fiber Channel in the future. Trunking capabilities between
switches, a high performance shared file system, and load balancing on hosts are some of the means to provide a
powerful set of building blocks to construct a SAN capable of world record performance.

3.17.1.3. Features of SUN StorEdge


 Variable disk allocation unit (DAU) and I/O schemes
 2 Gb infrastructure support.
 The Sun StorEdge Open SAN architecture supports heterogeneous hosts.
 Backward compatible with 1 Gb products
 Brocade or McData operability in a SAN 4.2 environment.
 Failure Detection and Switchover Failover
 Support for Explicit LUN Failover (ELF)
 Heterogeneous load balancing.
 High-performance SAN file system

3.17.1.4. Benefits of SUN StorEdge


 Flexible architecture allows application requirements to dictate SAN design.
 Allows greater flexibility when creating larger SANs.
 Flexible configuration optimizes performance for your specific environment helping you to attain the best
return on your SAN investments.
 Allows better storage utilization for data consolidation and large cluster configurations.
 Sun StorEdge Traffic Manager software reduces system administrator costs by simplifying the process of
adding bandwidth to a server.
 Improved throughput for data-intensive applications offers better utilization of servers.
 Common building blocks and rules allow customers to build their SAN to best reduce complexity and
management costs.
 Helps lower system management costs by simplifying the process of adding bandwidth to servers.
 Provides ease-of-management and better return on storage investments
 Gives customers the ability to choose key SAN components with the confidence that everything will work
together.
 Provides the assurance that your data can be restored regardless of the retention period and the
environment in which you need to restore it. This is particularly important in regulatory environments
where records may be retained for a number of years.
 Allow customers to avoid SAN problems and protect system availability.
 Increases SAN design flexibility and reduces total cost of ownership (TCO) through storage
consolidation and data management.
 No single point of failure.
 Use Sun Cluster to further improve system availability. Eliminate single point of failure on non-Sun
systems.
 Highly scalable performance.


 Performance can be scaled easily while ensuring availability.
 Provides enhanced productivity and faster information distribution.
 Dramatically reduces risk of data loss in the event of an outage and greatly improves mean time to
recovery.
 Improves total cost of ownership (TCO) as large amounts of data can be accessed from lower-cost
media.
 Reduces the cost of server memory and storage capacity.
 Lowers TCO by consolidating resources, simplifying system management, and minimizing administrator
training.

3.17.2 SAN Management Software Solutions

3.17.2.1. SAN by VERITAS


VERITAS empowers IT personnel with the heterogeneous SAN software solutions needed to meet the storage
requirements of applications, departments, external organizations, and individual end-users. The result of more
than ten years of storage virtualization expertise, VERITAS solutions allow storage administrators to take full
advantage of SAN infrastructures. Our SAN solutions speed recovery of applications and data after a fault and let
you quickly and easily manage storage capacity requirements. VERITAS solutions enable storage infrastructure
performance to meet application requirements--and ensure the survivability of your data.

Data Protection
VERITAS data protection solutions deliver robust, scalable storage management, backup, and recovery--from the
desktop to the data center--for heterogeneous environments. Organizations of every size rely on VERITAS for
comprehensive data protection.
With our data protection solutions, there's no need to use multiple backup products for UNIX, Windows, and
database backup. And you'll never have to rely on end users to copy critical corporate data from desktops and
mobile laptops onto a networked file server. VERITAS data protection solutions streamline, scale, and automate
backup throughout your organization.
VERITAS products safeguard the integrity of all corporate data on all platforms and in all databases. VERITAS is
the world's most powerful data protection solution for fast, reliable, enterprise-wide backup and recovery.

Disaster Recovery
Disaster recovery is a business essential. Companies large and small need their data protected, accessible, and
uninterrupted in the event of a disaster. VERITAS disaster recovery solutions are based on software products that
work together efficiently and seamlessly across all platforms and applications. And our solutions are flexible
enough to grow along with your business. As you build your disaster recovery plan, VERITAS can provide you
with a layer of protection at every stage.

High Availability
Maintaining high levels of access to information across heterogeneous environments without compromising a
quality user experience can challenge any IT organization. VERITAS high availability solutions protect the user
experience from servers to storage. IT staff can use VERITAS products to build higher levels of availability
throughout the data center, even at levels once thought too expensive, complex to install, or difficult to manage.

The VERITAS VERTEX Initiative


The VERITAS VERTEX Initiative includes a wide range of solutions for use with NetBackup™ DataCenter that
can translate into business success for your corporate IT environment. In today's rapidly changing business
environment, IT departments are being challenged to manage more data within shorter time frames. The
VERITAS VERTEX Initiative delivers alternate backup methods using frozen-image or "snapshot" technology. It's
the future of data protection.

3.17.2.2. Veritas SAN Applications


Reducing Total Cost of Storage with SANs


SANs reduce costs by providing a better management infrastructure for storage in a centralized data center. Two
examples of cost savings due to centralized SANs are peripheral sharing and capacity management.

Peripheral sharing
According to a June 1999 Dataquest survey, 56% of respondents reported using less than 50% of RAID capacity
due to the inability to share the devices among many servers. As a result, they estimate an IT manager in a
distributed storage environment can manage only one-third the storage capacity managed in a centralized
environment.
The most obvious way in which SANs help reduce costs is by facilitating the sharing of sophisticated peripherals
between multiple servers. External storage is commonplace in data centers, and sophisticated peripherals are
generally used to provide high performance and availability. An enterprise RAID system or automated tape library
can be 5 to 10 times more expensive than a single server, making it prohibitively expensive to use a one-to-one
device-attach approach. Even with multiple channel controllers in the peripheral, the cost equation is often not
attractive.
Fiber Channel-based storage networking provides three key features to facilitate peripheral sharing. First, flexible
many-to-many connectivity using Fiber Channel hubs and switches improves the fan-out capabilities of a
peripheral, allowing multiple servers to be attached to each channel. Second, the increased distance capabilities
of fiber optic cables break the distance restrictions of SCSI, allowing servers to be located up to 10Km from the
peripheral. Finally, Fiber Channel hubs and switches support improved isolation capabilities, facilitating non-
disruptive addition of new peripherals or servers. This avoids unnecessary downtime for tasks such as installing a
new I/O card in a server.
However, storage management software is also required in combination with Fiber Channel networks to deliver
true SAN functionality. Software tools are used to allocate portions of an enterprise RAID to a server in a secure
and protected manner, avoiding data corruption and unwanted data access. Storage management software can
also provide dynamic resource sharing, allocating a tape drive in an automated tape library to one of many
attached servers during a backup session on an as-needed basis.

Capacity Management
With traditional locally attached storage, running out of disk space means that new storage must be physically
added to a server either by adding more disks to an attached RAID or adding another I/O card and a new
peripheral. This is a highly manual and reactive process, and leads IT managers to deploy large amounts of
excess capacity on servers to avoid downtime due to re-configuration or capacity saturation.
SANs allow many on-line storage peripherals to be attached to many servers over a FC network. Using tools to
monitor disk quotas and free space, administrators can detect when a server is about to run out of space and take
action to ensure storage is available. Using storage allocation software, free space on any RAID can be allocated
to a hot server, putting the storage where it's needed most. As existing SAN-attached peripherals become
saturated, new peripherals can be added to the SAN hubs or switches in a non-disruptive way allowing free space
to be allocated as needed.
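A minimal sketch of the monitoring step described above is shown below: watch free space on the volumes a server uses and flag any that drop below a threshold, so an administrator (or an automated policy) can allocate more SAN storage before the server saturates. The mount points and threshold are example values.

import shutil

def volumes_needing_space(mount_points, min_free_fraction=0.10):
    """Return mount points whose free space has fallen below the threshold."""
    low = []
    for mp in mount_points:
        usage = shutil.disk_usage(mp)          # total, used and free bytes for the volume
        if usage.free / usage.total < min_free_fraction:
            low.append(mp)
    return low

if __name__ == "__main__":
    print(volumes_needing_space(["/"]))        # on a real host: the server's data volumes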

Increased Availability without Exponential Costs


Delivering high levels of availability without exponential costs is a key requirement for the new internet-driven data
center. SANs promise three techniques to achieve this: multi-server availability clusters, virtualization of SAN
resources to minimize application disruption, and the automation of manual storage tasks to avoid reactive
management.

Virtualization of Physical Storage to Minimize Application Disruption


Minimizing disruptions to applications while storage configurations change is key to achieving near-continuous
availability and performance. This can be a challenge, as new storage must be added to keep up with capacity
demands, or storage configurations must be changed to optimize performance or improve availability levels. With
traditional locally attached storage, external RAID controllers can be used to isolate configuration changes from
the host, maintaining application uptime. However, downtime still has to be scheduled to add new RAIDs and map
the new storage to applications. External RAID controllers also come at a price premium relative to JBOD.
As noted, SAN-attached, external storage can be added to Fiber Channel hubs or switches and portions of the
new storage can be mapped to one or more servers. The characteristics of Fiber Channel allow these new
storage peripherals to be added without breaking a SCSI chain. However, the server application is still unaware of
this new storage, since it must be stopped and re-started to access new volumes. Storage virtualization software,
such as advanced logical volume managers, can allow an existing application volume to dynamically grow to
include the new SAN attached storage. This completes the process of adding new storage to a server without
disrupting application up-time. With logical volume management, an application volume can physically exist in one
or more peripherals or peripheral types. Virtualizing physical storage into logical volumes is key to minimizing
disruptions.
SANs will also allow a large number of varying types of storage to be available to a server farm. Available storage
will vary in terms of cost, performance, location, and availability attributes. By virtualizing physical SAN-attached
storage in terms of its attributes, administrators will be able to add and re-configure storage based on its
properties rather than performing configuration through device-level mapping tools. Allowing administrators to
dynamically reconfigure and tune storage while applications are on-line improves application performance and
dramatically reduces the likelihood of unplanned downtime. In addition, these attributes allow administrators to set
policies that automatically allocate unused storage to servers and applications where necessary.

Reducing the Cost of Availability with Multiple Paths to Storage


Implementing high levels of availability requires that server applications recover from failures as quickly as
possible.
Traditionally, server clusters with replicated data sets have been used to ensure that when failover software
re-starts the application on a new server, an up-to-date copy of the data is readily available and the application
can be brought back quickly. Advanced clustering tools allow fail-over between multiple servers, providing more
flexible and robust implementations. However, this increases the cost of storage, as every server must have a
copy of the data locally attached.
Fiber Channel networks facilitate many-to-many connectivity between multiple servers and multiple peripherals.
This means that each server can have a physical path to the storage of each server in an availability cluster.
When an application fail-over occurs, a path from the new server can be provisioned to the failed server's data,
and the application can be re-started. Since the storage does not need to be replicated behind each server, it is
possible to implement increasing levels of availability without significantly increasing storage costs.
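The failover flow above can be illustrated with a short sketch: when a server in the availability cluster fails, its storage (already reachable over the SAN) is re-mapped to a surviving node and the application is restarted there, with no data copy. All of the names below are hypothetical.

def failover(app, failed_server, cluster, storage_map):
    """Pick a surviving server, point it at the failed server's SAN storage, restart the app."""
    survivor = next(s for s in cluster if s != failed_server)
    storage_map[survivor] = storage_map.pop(failed_server)   # re-map the SAN path; no data is copied
    return f"{app} restarted on {survivor} using {storage_map[survivor]}"

if __name__ == "__main__":
    mapping = {"server-a": "raid-lun-7"}
    print(failover("orders-db", "server-a", ["server-a", "server-b"], mapping))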

Automation of Manual Tasks using Policies


Several of the examples already discussed demonstrate how storage management tools, combined with SAN-
attached storage, can lower the administrative cost of application availability. Increasing the automation of these
tasks will keep a lid on management costs. Policy management refers to the use of a policy administration tool, in
which an IT manager assigns high level rules to storage management applications and storage resources, as well
as to policy agents which enforce those rules.
For example, a capacity management application could define that 50% of all new storage added to a SAN is
allocated to a hot application. The policy agents running on the SAN would detect the new storage, automatically
map the volumes to the hot server, and grow the application volumes to include the new capacity. Similarly, a
data protection application can define that every time a file system grows to a certain size, a backup is performed
to the highest performance, most-available tape library attached to the SAN.
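Both of these example policies can be written down as small rules evaluated by the policy agents, as in the hedged Python sketch below. The rule names, the 500GB threshold and the event format are illustrative assumptions, not a specific product's policy engine.

from typing import Optional

def on_new_storage(new_capacity_gb: float) -> dict:
    """Policy: allocate 50% of any newly added SAN capacity to the 'hot' application."""
    return {"allocate_to_hot_app_gb": new_capacity_gb * 0.5,
            "return_to_free_pool_gb": new_capacity_gb * 0.5}

def on_filesystem_growth(fs_size_gb: float, threshold_gb: float = 500.0) -> Optional[str]:
    """Policy: trigger a backup to the fastest available tape library once a file
    system crosses the size threshold."""
    return "start_backup(fastest_available_library)" if fs_size_gb >= threshold_gb else None

if __name__ == "__main__":
    print(on_new_storage(1000))        # 500 GB to the hot server, 500 GB back to the free pool
    print(on_filesystem_growth(620))   # backup triggered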
SANs provide many more resources to monitor and configure data, thus creating a more complex storage
environment. By using policies to automate storage management, IT managers can ensure that the benefits of
SANs are fully realized, while the total cost of managing these new systems does not increase as additional levels
of complexity are introduced.

3.17.2.3. Example for Increasing Availability Using Clustering


A new promotion for an eCommerce site requires that the organization’s data center facilitate additional business
processes for a short time, both internally and externally via the Internet. In support of the campaign, twenty new
servers must be brought on-line to deal with the short-term increase in demand, even though the additional
storage capacity will peak after a period of months, and then slow down until the next promotion. As a result, the
data center manager must be prepared to do two things: re-purpose application servers and plan for multi-
dimensional growth.
Re-purposing servers to accommodate temporary peaks in data access can be very labor intensive and
potentially costly. Data must be replicated to many servers and storage configurations have to be tuned for each
server. Performing extensive re-configurations results in a loss of productivity as use of the application servers is
suspended and a significant amount of administrative labor is invested in the process.


Fig. 3.17.2.3.1 – Overview of SAN Clustering


However, as shown in the diagram above, implementing a SAN can allow server farms to share access to a
storage farm. With storage management tools, applications can be moved to different servers and still have
access to their data. For read-only applications, a single copy of data can be shared between multiple application
servers, removing the necessity of replicating data. And because this can all be done while applications are on-
line, productivity losses are minimized.

Fig. 3.17.2.3.2 – Implementation of SAN


SAN architectures can also accommodate multi-dimensional growth. Capacity management techniques can be
used to ensure that new storage can be added continuously, so server applications always have the storage capacity they
need. If more processing power is needed, more servers can be added to the SAN to provide better access to
stored data. For higher read performance access to data, multiple copies of data can be created on the SAN, thus
eliminating bottlenecks to a single disk.

3.17.2.4. VERITAS SAN Solutions


Online Storage Management: The SAN Virtualization Layer
Virtualization of physical storage is necessary to ensure that SAN applications remain on-line as storage
configurations change. VERITAS has been delivering enterprise class storage virtualization with VERITAS
Volume Manager (VxVM) for UNIX for over ten years, and is developing the volume manager bundle for
Microsoft’s Windows 2000. VxVM has many intrinsic features that allow it to immediately take advantage of SAN
configurations. Some key capabilities applicable to SANs are:
 Create, synchronize and fail-over to a Remote Mirror while the application remains on-line.
 Dynamic growth or shrinkage of application volumes allowing non-disruptive addition or deletion of SAN
storage
 Performance optimization to allow hot disk locations to be moved or RAID configurations to be changed
while the application remains on-line.
 Capability to assign ownership of disk groups to a single server, preventing unwanted storage access
from another server on the SAN
 Dynamic Multi-Pathing to provide non-disruptive path-level fail-over and load balancing over multiple
Fiber Channel links between a server and storage peripheral.
VxVM can perform all of these operations for both JBOD and RAID peripherals on a SAN today and even mix and
match between peripheral types. By building applications on top of VxVM, these intrinsic virtualization features
can be made available without the server application being aware of the physical SAN configuration. This includes
other VERITAS applications such as VERITAS File Server, Foundation Suite Editions for Oracle, and other third-
party applications.

LAN-Free Backup: Reducing the Backup Window


Improving backup and recovery performance, and minimizing disruption to applications, is often considered the
“killer app” for this first generation of SANs. The most often requested capability is to share tape libraries, and
tape drives in those libraries, between multiple servers. This is sometimes also called “LAN Free Backup” since
most backup data is now transferred to tape using a SAN topology instead of a LAN. VERITAS provides two LAN
Free Backup solutions using the VERITAS Shared Storage Option add-on feature:
 VERITAS BackupEXEC SSO for NT and NetWare (department and workgroup application)
 VERITAS NetBackup 3.2 SSO for NT and UNIX (enterprise applications)
There are four benefits of LAN Free Backup. Since the many-to-many connectivity of Fiber Channel allows a tape
library to be shared by multiple servers, LAN Free Backup amortizes the cost of that resource over multiple
servers, making it much more affordable for mid-range UNIX or NT servers to have direct access to a library. It
also minimizes disruption by moving backup traffic off the production LAN and onto the SAN, avoiding
saturating the client-server LAN with backup traffic and allowing normal LAN operation to continue. Removing
backup traffic from the LAN also increases performance by reducing the backup window, since data is backed-up
and restored via a 1Gbps Fiber Channel-based SAN rather than across a 10/100 Mbps Ethernet network. In
addition, VERITAS' LAN Free Backup products increase automation by intelligently scheduling backup jobs -
dynamically sharing tape libraries based on specific backup policies. Since tape resources can be dynamically
allocated to backup sessions on each server, intelligent scheduling can optimize the use of shared drives.
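The dynamic drive sharing described above can be reduced to a simple allocation loop: backup jobs from several servers draw tape drives from one shared library and return them when finished. The sketch below is illustrative scheduling logic only, not the VERITAS Shared Storage Option implementation.

from collections import deque
from typing import Optional

class SharedTapeLibrary:
    def __init__(self, drives):
        self.free_drives = deque(drives)

    def acquire(self, server: str) -> Optional[str]:
        """Give the next free drive to a backup session, or None if all drives are busy."""
        return self.free_drives.popleft() if self.free_drives else None

    def release(self, drive: str) -> None:
        self.free_drives.append(drive)

if __name__ == "__main__":
    library = SharedTapeLibrary(["drive-0", "drive-1"])
    d0 = library.acquire("unix-db-server")
    d1 = library.acquire("nt-file-server")
    print(d0, d1, library.acquire("mail-server"))   # third request must wait: None
    library.release(d0)                             # drive returns to the shared pool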

Clustering for Improved Availability


As an example, many internet-centric organizations rely on the availability of data to acquire and retain their
customers. A customer stays as long as the site is available and responsive. When disappointed by downtime,
that customer will go elsewhere and rarely return.
Improving availability through clustering while containing costs is also a key capability of SAN architectures.
Applications must be able to fail-over from one server to any other in a server farm and re-map the application
data to the new server, avoiding the necessity to replicate the storage to every server. VERITAS Cluster Server
(VCS) provides the ability to create multiple 32-node application fail-over clusters today in a SAN environment for
Solaris and HP/UX, and multiple 64-node NT clusters, all managed from one common cluster console. VCS
for Solaris and HP/UX are available today, and VCS for NT will be available by the end of 1999.
To implement a VCS SAN installation, the application must be able to configure paths to all storage on the SAN.
This can be done in two ways. The first is to use VCS in conjunction with VERITAS Volume Manager (VxVM).
VxVM permits all servers to see the storage on the SAN, but the storage isn’t explicitly mapped to the server
application unless it has ownership of the data. Tight integration between VCS and VxVM allows VCS to quickly
re-map the failed server's storage to the new server. The second technique requires VCS to be aware of how to
configure SAN equipment to setup the new path.

LAN Free HSM to Shared Tape Libraries


One key element of managing storage costs is optimizing the use of expensive storage. Hierarchical Storage
Management (HSM) is a technique common in mainframe environments in which infrequently accessed data is
automatically migrated to lower performance peripherals or removable media. Although HSM has traditionally
been seen as expensive to implement, SAN configurations can be used to lower this barrier. VERITAS Storage
Migrator/Shared Storage Option for UNIX allows multiple HSM servers to share automated tape libraries. Storage
Migrator is implemented as an add-on product to VERITAS NetBackup, leveraging VERITAS NetBackup SSO
SAN installations to provide even more SAN capabilities. In effect, Storage Migrator provides LAN Free HSM.
Just as in LAN Free Backup, LAN Free HSM to shared tape libraries amortizes the costs of sophisticated tape
libraries over multiple servers, and shares those costs between backup/restore and HSM applications. It also
minimizes disruption to client-server traffic on the LAN as more HSM traffic is transferred over the high-speed
SAN. Like LAN Free Backup, HSM over a SAN also increases automation by intelligently scheduling HSM
sessions to shared tape drive resources.

3.17.2.5. VERITAS SAN 2000: The Next Generation


To create the next stage in the evolution of SAN, VERITAS is creating new technologies to improve the ability of
SANs to virtualize storage resources, increase automation, and adapt to change. Building on storage
management standards as they evolve, VERITAS is focusing on the following areas:
Creating a standard services and API layer that will make existing storage management applications SAN-aware.
For example, VCS will be able to dynamically re-configure SANs to provision paths to storage. Or VERITAS
Volume Manager will be able to discover performance and availability attributes of SAN-attached storage to
determine the best software RAID configuration. Providing a common service layer will allow all VERITAS
applications to exploit SANs over multiple operating systems.
Embedding intelligent storage management functions into new types of SAN devices. With the many-to-many
connectivity of Fiber Channel networks and the increases in processing power, new types of devices can
implement storage management functions such as storage migration, virtualization, or replication. For example,
VERITAS Volume Manager can be used to provide virtualization functionality in a RAID controller.
Creating central, SAN-wide management applications that can improve visualization of SAN resources and
increasingly automate storage management. Optimizing performance and availability across multiple servers and
storage peripherals in a SAN requires a centralized management application to co-ordinate functions such as
capacity management and automatic capacity allocation.
Creating clustered versions of the VERITAS Volume Manager and VERITAS File System products to realize
multi-server shared data access in a SAN; some high-performance applications will benefit from being distributed
over multiple servers and operating on shared read/write data.
Creating extensions to VERITAS data protection products that maximize application availability in a SAN. Using
secondary hosts to backup snapshots of a production application server’s data, or using embedded SAN fabric
copy agents to move data directly between on-line and off-line storage, will minimize the disruption to application
clients during a backup operation, increasing the overall information availability of the data center.

3.17.2.6. Tivoli Storage Manager


When it comes to enterprise storage management for heterogeneous computing environments, no other solution
can match IBM Tivoli's Storage Manager (TSM). TSM supports the broadest range of platforms, devices and
protocols in the industry, and offers highly reliable, automated backup and recovery, archiving, hierarchical
storage management (HSM) and disaster recovery. TSM offers automated backup that is not only consistent,
timely and accurate, but flexible. With TSM you can schedule backups to take advantage of off-peak times,
prioritize the order in which various data is backed up, and store only new or changed files to reduce network
traffic.
With TSM, you are able to dramatically reduce costs. With TSM's HSM feature, you can automatically move data
to the most cost-effective media in the storage hierarchy, including disk, optical, and tape. Plus, it greatly eases
administrator and user tasks with intuitive graphical interfaces. In fact, TSM simplifies data recovery so users can
easily restore files without administrator involvement.

Features:
 Flexible Storage Architecture
 Open and Interoperable Solutions
 Exploit Data Assets
 Rapid Access to Data
 Growth and Capacity Management

Business continuance benefits


Data protection:
 Reduce backup window and increase tape utilisation
 Faster, more effective restore process


 Decrease server CPU utilization


Disaster tolerance:
 Offer remote vaulting/mirroring over 10 km
 Provide no single point of failure (SPOF)
 Increase availability, including automatic path selection/failover
 Enhance load balancing

Business efficiency benefits:


Storage consolidation:
 Improve storage asset utilization
 Provide capacity-on-demand
 Reduce management cost
Data sharing and access:
 Allow any-to-any access with a single view of data
 Increase return on assets
 Improve reliability, availability and serviceability

3.17.2.7. Tivoli SANergy


With Tivoli SANergy, customers can efficiently centralize their disk storage resources to reduce administration
overhead, improve network bandwidth performance, and achieve greater ROI. Tivoli SANergy enables users
implementing storage area networks (SANs) to transparently share access to common storage, volumes, and
files. Centralized disk storage volumes and files can be shared by multiple computers running UNIX®, Mac OS®
and/or Microsoft® Windows® to provide higher bandwidth and lower CPU overhead than server-based sharing
over an IP network. The resulting high-performance shared storage environment can significantly reduce IT costs
by consolidating storage space and eliminating the replicated data common to multihost environments. Tivoli
SANergy helps you reach your full SAN potential because it can:
 Simplify SAN storage centralization and administration through the power of heterogeneous sharing at
the volume, file, and byte level
 Extend industry-standard networking to utilize the high bandwidth of any SAN media, including Fiber
Channel, SCSI, SSA, iSCSI, and InfiniBand
 Enable storage centralization without the performance limiting overhead of server-based file sharing
 Increase data availability by eliminating the single-point-of-failure potential of server-based sharing
 Reduce the total amount of storage required in a SAN by eliminating redundant or replicated data
 Reduce the number of disk volumes required in a SAN and improve the efficient deployment of unused
space
 Increase the server-to-storage scalability ratio, eliminating the expense and bottleneck of dedicated file
servers
 Use industry-standard file systems and SAN and local area network (LAN) protocols
 Work with almost any SAN and LAN product, regardless of hardware and software
Tivoli SANergy eliminates the one-to-one relationship between the number of SAN-connected computers and the
number of disk volumes needed by those computers. Tivoli SANergy transparently enables multiple computers to
share single disk volumes on the SAN-storage. In fact it allows many combinations of computers running
Windows NT, Windows 2000, MacOS, Irix, Solaris, AIX, Tru64, Red Hat Linux and DG/UX to all share the exact
same disk volumes at the same time - across platforms. And if the applications running on those computers are
capable, Tivoli SANergy even enables the transparent sharing of the exact same files at the same time across
platforms.

Features, Advantages and Benefits

 Single file system - Uses the native file system on the MDC or any other Tivoli SANergy-enabled third-party file system. Benefit: eliminates the need to manage multiple file systems, regardless of the number of computers connected to the SAN.
 LAN-flexible - Utilizes any LAN hardware and software. Benefit: continues using your existing LAN to handle metadata traffic and low-bandwidth data.
 SAN-flexible - Utilizes any SAN hardware and software. Benefit: works equally well with Fiber Channel, SCSI, SSA, iSCSI or InfiniBand SANs with components from any manufacturer.
 Heterogeneous operation - Supports true file sharing across heterogeneous networks. Benefit: works with the mix of computers and operating systems used today.
 Enterprise Management - Enables management control through the Web and SNMP. Benefit: enables immediate control through most SAN management consoles.

3.17.2.8. SAN-speed sharing for Application Files


For many high-bandwidth applications in collaborative work environments, sneaker-net has been faster than any
wired network, because LANs suffer from the bandwidth-crippling overhead of network protocols.
With Tivoli SANergy, multiple high-bandwidth workstations running file-sharing-capable applications can all share the
same application files simultaneously, at full SAN speed. These applications include CAD, 3D modeling, and
design; graphics, RIP engines, and digital printing; animation and multimedia creation packages; and video and
film editing and compositing programs. Sneaker net is eliminated because the network is fast enough to handle
bandwidths of dozens of megabytes per second to each workstation. As a result, Tivoli SANergy can improve
collaboration, enhance operational flexibility and efficiency, simplify workflow, and increase productivity.

Inside Tivoli SANergy


Using patented techniques, Tivoli SANergy is implemented as a file system extension. It leverages the
distributed data sharing capabilities embedded within the Windows NT, Windows® 2000, UNIX, and Macintosh
operating systems. Tivoli SANergy redirects the data portion of standard network file input/output off the LAN
and to the SAN. Normal networking protocols (CIFS or NFS) establish access to shared files across a standard
LAN. The data itself flows at a much higher bandwidth over the more efficient SAN. SAN-connected storage
media is formatted in either NTFS, UFS, or EXT2 FS or any other file system that supports the SANergy open
API.
Tivoli SANergy extends the standard Windows NT, Windows 2000, Sun Solaris, or Red Hat Linux® file server to
act as the metadata controller (MDC) for shared storage. This MDC manages the access to storage across the
SAN by the computers running Tivoli SANergy client software.
The MDC manages access to common storage by providing the necessary file system metadata when requested
by the client computers. Hosts can then access the storage directly through their SAN connection. In a
heterogeneous sharing environment, this metadata sharing is critical to ensure the coherency of files being used
across the SAN. Metadata sharing also enables the continued use of all the network-access security mechanisms
already built into today's operating systems. With the addition of two new application programming interfaces
(APIs), developers can leverage Tivoli SANergy by adding new MDC platforms and additional file systems. One
API allows file system vendors to enable a SANergy MDC to share access to their volumes and files, extending
the current SANergy support for NTFS, UFS, EXT2 FS, and any Tivoli Ready for SANergy-certified third-party
file system. The other API enables operating system vendors, such as those that make NAS servers, to add Tivoli
SANergy MDC support to their products. This could enable NAS servers to provide faster and more
efficient file sharing to their hosts. These two APIs are available through the Tivoli Ready program, in addition to
the already existing SANergy Simple Network Management Protocol (SNMP) APIs for administration and
configuration.

Enterprise-ready Availability
Tivoli SANergy High Availability is an add-on feature to the Windows NT and Windows 2000 versions of Tivoli
SANergy. It ensures that critical data remains available in the event of an MDC failure. If a Tivoli SANergy MDC
for Windows NT or Windows 2000 fails, the spare MDC running SANergy High Availability seamlessly assumes
the duties of the failed MDC. MDC dependent Tivoli SANergy hosts running Windows NT, Windows 2000, and
UNIX automatically remap their drives. Most network-aware applications, including database servers, carry on
without interruption. Tivoli SANergy High Availability is an essential component for SANs supporting corporate
databases, Web servers, and other business-critical applications.


Enterprise-ready Management
In addition to using native and HTML based interfaces, administrators can use any SNMP management console
to manage Tivoli SANergy. A custom SANergy management information base (MIB) is included to support the
use of consoles, such as Tivoli Netview, HP OpenView, or SunNet Manager.

3.18. Fiber Channel


3.18.1. Introduction of Fiber Channel
Fiber Channel is a high performance serial interface that was developed by the computer industry specifically to
address data storage needs. Because it is an open standard that is protocol independent, Fiber Channel works
equally well regardless of operating environment. The additional benefits of Fiber Channel include increased
bandwidth, scalability and distance.
Fiber Channel runs at 200 MB per second, so applications can access data and run more quickly. By using
multiple Fiber Channel connections, or loops, total bandwidth increases by 200 MB/s per loop: eight loops
provide 8 x 200 MB/s, or a total bandwidth of 1600 MB per second.
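A quick back-of-the-envelope check of that aggregate-bandwidth claim (a Python sketch; the per-loop figure is the one quoted above):

MB_PER_LOOP = 200        # per-loop throughput quoted above
for loops in (1, 4, 8):
    print(loops, "loop(s):", loops * MB_PER_LOOP, "MB/s aggregate")   # 8 loops -> 1600 MB/s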

3.18.2. Advantages of Fiber Channel


 Cost Effective
 Designed specifically for storage
 Protocol independent
 Greater bandwidth
 Not host dependent
 Highly scalable in Storage Area Network Framework
 Greater distance between devices

3.18.3. Fiber Channel Topologies


There are many ways to set up a Fiber Channel network. Here we consider three basic topologies for
Fiber Channel.

3.18.3.1. Point-to-Point
This topology uses Fiber channel without a loop overhead, to increase performance and simplify cabling between
a RAID storage box and a host.
In a point-to-point
configuration, there are only two devices and they are directly connected to each other. This is used in instances
where it is necessary to locate the physical storage in a different location from the server.
Reasons for this type of configuration could include security or environmental concerns.

Fig. 3.18.3.1 – Point-to-Point Topology

3.18.3.2. Fiber Channel Arbitrated Loop (FC-AL)


This topology allows you to attach up to 127 nodes without hubs and switches. FC-AL is a time-shared, full-
bandwidth, distributed topology where each port includes the minimum necessary connection function. Depending
on the distance requirements, workstations or servers can be connected to a single disk or a disk loop with either
optical Fiber or copper media.
To understand a loop configuration, picture a circle with several points around it. Each point represents a device
on a Fiber Channel Loop. Devices connected in this manner are said to be in a Fiber Channel Arbitrated Loop
(FC-AL). In this configuration, each device is connected to the next device and is responsible for repeating data
from the device before it, to the device after it. Should a device on a FC-AL fail, then no devices on the FC-AL will
be able to transmit data.

Fig. 3.18.3.2 – FC-AL Topology

3.18.3.3. Switched Fabric


In this topology, N_Ports (Node Ports) are connected to F_Ports (Fabric Ports) on a FC Switch. This connection
allows for a large number of devices to be connected together, and provides high throughput, low latency and
high availability. Depending on switch vendor support, fabric switches may be interconnected to support up to 16
million-plus N_Ports on a single network.
In a switch configuration, a device called a switch is the connection point for all devices attached to the SAN. The
switch internally provides the functionality of a FC-AL. The main advantage is the ability to bypass and isolate a
failed device.

Fig. 3.18.3.3 – Switched Topology

3.18.4. How do SCSI tape drives connect to a Fiber Channel SAN?


Because virtually every SAN implementation uses Fiber Channel technology, an industry standard network
interface is necessary. Fiber Channel connectivity requires a Host Bus Adapter (HBA) be attached to every server
and storage device on the SAN. Each port uses a pair of fibers for two-way communications, with the transmitter
(TX) connecting to a receiver (RX) at the other end of the Fiber Channel cable.

3.18.5. What is an Interconnect?


An Interconnect is the physical pipeline used for high-speed, high-bandwidth connection within a storage area
network. It is capable of accessing data at speeds 100 times faster than current networks. It connects all of the
pieces of a SAN, as well as providing scalability, connectivity, performance and availability. It is what connects the
different components of a SAN. I/O buses and networks are both examples of interconnects.


3.18.6. Scalable Fiber Channel Devices


Fiber Channel is easy to install and scale. A Fiber Channel device does not have to be connected directly to a
host. Instead, hubs and switches can be used to create Fiber Channel storage networks. Each loop supports up
to 126 devices on a single connection. This means that extremely high storage capacity is available by adding
devices to the loop. By contrast, SCSI buses can handle only 16 connections.
When it comes to cabling, Fiber Channel gives users a choice of copper or fiber optic. Both have a small diameter
allowing for more flexibility when running cable in tight spaces. Fiber Channel also allows for distances of up to
10km between devices when using fiber optics.

3.18.7. Features of Fiber Channel


 Hot-pluggability – Fiber Channel drives can be installed or removed while the host system is operational,
which is crucial in high-end and heavy-use server systems where there is little or no downtime.
 ANSI standard compliance for serial port interface – Fiber Channel does not require special adapters,
which can be expensive.
 Speed – In its intended environment, Fiber Channel is the fastest option available.
 Cost effectiveness – Relative to other high-end solutions, Fiber Channel is inexpensive because it does
not require special adapters.
 Loop resiliency – Fiber channel provides high data integrity in multiple-drive systems, including Fiber
Channel RAID.
 Longer cable lengths – Relative to LVD, Fiber Channel can maintain data integrity through significantly
longer cables. This makes configuring multiple devices easier.

3.18.8. Why Fiber Channel?


Fiber Channel is the solution for IT professionals who need reliable, cost-effective information storage and
delivery at blazing speeds. With development started in 1988 and ANSI standard approval in 1994, Fiber Channel
is the mature, safe solution for gigabit communications.
Today's data explosion presents unprecedented challenges incorporating data warehousing, imaging, integrated
audio/video, networked storage, real-time computing, collaborative projects and CAD/CAE. Fiber Channel is
simply the easiest, most reliable solution for information storage and retrieval.
Fiber Channel, a powerful ANSI standard, economically and practically meets the challenge with these
advantages:
 Price Performance Leadership - Fiber Channel delivers cost-effective solutions for storage and
networks.
 Solutions Leadership - Fiber Channel provides versatile connectivity with scalable performance.
 Reliable - Fiber Channel, a most reliable form of communications, sustains an enterprise with assured
information delivery.
 Gigabit Bandwidth Now - Gigabit solutions are in place today! On the horizon is two gigabit-per-second
data delivery.
 Multiple Topologies - Dedicated point-to-point, shared loops, and scaled switched topologies meet
application requirements.
 Multiple Protocols - Fiber Channel delivers data. SCSI, TCP/IP, video, or raw data can all take
advantage of high-performance, reliable Fiber Channel technology.
 Scalable - From single point-to-point gigabit links to integrated enterprises with hundreds of servers,
Fiber Channel delivers unmatched performance.
 Congestion Free - Fiber Channel's credit-based flow control delivers data as fast as the destination
buffer is able to receive it.
 High Efficiency - Real price performance is directly correlated to the efficiency of the technology. Fiber
Channel has very little transmission overhead. Most important, the Fiber Channel protocol is specifically
designed for highly efficient operation in hardware.
Corporate information is a key competitive factor, and Fiber Channel enhances IT departments' ability to access
and protect it more efficiently.


In fact, multiple terabytes of Fiber Channel interfaced storage are installed every day! Fiber Channel works
equally well for storage, networks, video, data acquisition, and many other applications. Fiber Channel is ideal for
reliable, high-speed transport of digital audio/video. Aerospace developers are using Fiber Channel for ultra-
reliable, real-time networking.
Fiber Channel is a fast, reliable data transport system that scales to meet the requirements of any enterprise.
Today, installations range from small post-production systems on Fiber Channel loop to very large CAD systems
linking thousands of users into a switched, Fiber Channel network.
Fiber Channel is ideal for these applications:
 High-performance storage
 Large data bases and data warehouses
 Storage backup systems and recovery
 Server clusters
 Network-based storage
 High-performance workgroups
 Campus backbones
 Digital audio/video networks

Fig. 3.18.8 – Overview of Enterprise Fiber Channel Switch

3.18.9. Fiber Channel System


Fiber Channel systems are assembled from adapters, hubs, storage, and switches. Host bus adapters are
installed into hosts like any other SCSI host bus adapter. Hubs link individual elements together to form a shared
bandwidth loop. Disk systems integrate a loop into the backplane. A port bypass circuit provides the ability to hot
swap Fiber Channel disks and Fiber Channel links to a hub. Fiber Channel switches provide scalable systems of
almost any size.


Fig. 3.18.9 – Fiber Channel systems are built from familiar elements
IT systems today require an order of magnitude improvement in performance. High-performance, gigabit Fiber
Channel meets this requirement. Fiber Channel is the most reliable, scalable, gigabit communications technology
today. It was designed by the computer industry for high-performance communications, and no other technology
matches its total system solution.

3.18.10. Technology Comparisons


Fiber Channel is a product of the computer industry. Fiber Channel was specifically designed to remove the
barriers of performance existing in legacy LANs and channels. In addition to providing scalable gigabit
technology, the architects provided flow control, self-management, and ultra-reliability.
Gigabit Ethernet is designed to enable a common frame from the desktop to the backbone. However, Fiber
Channel is designed to be a transport service independent of protocol. Fiber Channel's ability to use a single
technology for storage, networks, audio/video, or to move raw data is superior to the common frame feature.
ATM was designed as a wide area network technology with the ability to provide quality of service for fractional bandwidth
service. The feature of fractional bandwidth with assured Quality of Service is attractive for some applications.
For the more demanding applications, Class 4 Fiber Channel provides guaranteed delivery and gigabit bandwidth
as well as fractional bandwidth quality of service.
Fiber Channel's use in both networks and storage provides a price savings due to economies of scale associated
with larger volumes. Users can expect their most cost-effective, highest-performance solutions to be built using
Fiber Channel.
As shown in Table below, Fiber Channel is the best technology for applications that require high-bandwidth,
reliable solutions that scale from small to very large.

Fiber Channel vs. Gigabit Ethernet vs. ATM

 Technology application - Fiber Channel: storage, network, video, clusters; Gigabit Ethernet: network; ATM: network, video
 Topologies - Fiber Channel: point-to-point, loop hub, switched; Gigabit Ethernet: point-to-point hub, switched; ATM: switched
 Baud rate - Fiber Channel: 1.06 Gbps; Gigabit Ethernet: 1.25 Gbps; ATM: 622 Mbps
 Scalability to higher data rates - Fiber Channel: 2.12 Gbps, 4.24 Gbps; Gigabit Ethernet: not defined; ATM: 1.24 Gbps
 Guaranteed delivery - Fiber Channel: yes; Gigabit Ethernet: no; ATM: no
 Congestion data loss - Fiber Channel: none; Gigabit Ethernet: yes; ATM: yes
 Frame size - Fiber Channel: variable, 0-2 KB; Gigabit Ethernet: variable, 0-1.5 KB; ATM: fixed, 53 B
 Flow control - Fiber Channel: credit based; Gigabit Ethernet: rate based; ATM: rate based
 Physical media - copper and fiber for all three
 Protocols supported - Fiber Channel: network, SCSI, video; Gigabit Ethernet: network; ATM: network, video

3.18.11. LAN Free Backup using Fiber Channel


Introduction
The continuing growth of managed data creates frustrating and costly storage management problems for IT
managers and network administrators. With corporate data volumes doubling every year, one of the most difficult
challenges facing IT managers is backing up continuously expanding data. Scheduled downtime, or slow time for
data backup, is no longer tolerated.
In the past, there have been two basic ways to back-up data on a group of systems.
The first method, known as local or distributed backup, involves directly attaching a backup device to each
system. The second method, known as centralized backup, involves attaching a central backup device to a single
server. Backups for all other systems are then directed across a local area network (LAN), through the single
server, to the central backup device.
Today there is a third approach emerging that is generally referred to as LAN-free backup. LAN-free backup
involves attaching a central back-up device to a Storage Area Network (SAN) which all attached servers share.
LAN Free Backup is the ideal solution for backing up and storing mission critical data without utilizing valuable
LAN bandwidth. Using SAN configurations such as LAN Free Backup to move, manage, and protect your critical
data without congesting the LAN easily eliminates bottlenecks associated with storage operations over the LAN.
LAN Free Backup solution is a SAN configuration that connects storage elements directly with backup devices
using Fiber Channel switches, host bus adapters (HBAs) and management software. The leaders in the Fiber
Channel industry have joined together to deliver a completely interoperable solution. LAN Free Backup solution
consists of Adaptec’s AFC-9110G Fiber channel HBA, Brocade’s Silkworm 2050 Fabric switch, Chaparral’s
FS1310 router, and ADIC’s FastStor 22 tape library. The LAN Free Backup solution has been certified with
Veritas Backup Exec v8.5 management software. It combines the benefits of high-speed transfer rates, easy
management, and cost effectiveness into a single solution.
Working together, Adaptec, Brocade, ADIC, Chaparral, and Veritas have performed rigorous testing to ensure
interoperability and ease of use. This LAN Free Backup solution comes complete with a recipe book, which
includes an installation guide, and technical support.

Fig. 3.18.11 – Implementing LAN free backup

3.18.11.1. Distributed Backup


The fastest backup method for a server's internal disk drives is to attach a backup device directly to the
server. This method is known as local or distributed backup. Figure below shows a group of systems in a typical
distributed backup configuration.

Fig. 3.18.11.1 – Distributed Backup Model


For small environments, distributed backup works very well. As the number of servers requiring back up
increases, distributed backup starts exhibiting problems. The first problem is cost; a second and far more
serious problem is managing the backup process.
Distributed backup requires IT technicians to touch each system physically to perform backup operations. If the
server data exceeds the tape capacity, then the IT individual must monitor the operation and reload new tapes at
the proper time.
In larger organizations, distributed backup is not viable due to the lack of centralized management and the high
administrative cost associated with managing multiple, discrete backup operations.

3.18.11.2. Centralized Backup


Centralized backup limits the management overhead to a single storage repository. Here, the problem isn't
managing a centralized backup repository; rather, the challenge is getting the data to it. Conventional centralized
backup solutions rely on an IP network as the data path. The problem with this approach is that the TCP/IP
processing associated with transporting the sheer volume of data completely consumes the server CPU. This results in long
backup cycles that exceed the scheduled backup window. So, centralized backups often overflow into user
uptime - resulting in poor network response and generally unacceptable server performance.
Figure below illustrates a typical centralized backup scenario where the Ethernet LAN is the transport mechanism
for the backup data. It is interesting to note that data has to travel through two servers before transferring to the
tape unit. This is sometimes referred to as the centralized backup two-copy problem.
To achieve 4.8 MB/sec the server attached to the tape storage device must be an extremely high performance
machine. Currently, it is believed the sweet spot of the backup market is a 2 to 4 drive tape library backing up 6 to
10 NT servers. Total data to be backed up ranges between 200 GB and 600 GB.
The problem with a centralized backup method, as seen above, is poor LAN performance. With transfer rates
ranging anywhere from 1.6 to 4.8 MB/sec, a full backup of a 500GB site takes anywhere from 29 to 87 hours. This
time does not include daily incremental backups.
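The backup-window figures quoted above follow directly from the transfer rates; here is a quick Python check (decimal units assumed, 1 GB = 1000 MB):

def backup_hours(total_gb, mb_per_sec):
    return total_gb * 1000 / mb_per_sec / 3600   # hours for a full backup

print(round(backup_hours(500, 4.8)))   # -> 29 hours at 4.8 MB/s over the LAN
print(round(backup_hours(500, 1.6)))   # -> 87 hours at 1.6 MB/s over the LAN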


Fig. 3.18.11.2 – Centralized Backup Model

The major advantage of a centralized method is the ease of management. Advanced backup management
products allow scheduling multiple backups in advance, which can proceed without operator intervention.
Backups can generally occur during slower weekend periods. For small to medium environments that do not have
heavily loaded LANs, conventional centralized backup is probably the most cost effective and easily managed
backup method.

3.18.11.3. SAN Backup


A third system backup method uses a dedicated storage network for storage operations. Figure below illustrates
a typical configuration.

Fig. 3.18.11.3.1 – SAN Backup Model

This concept of a dedicated storage network is known as a Storage Area Network or SAN. Backup methods
based on SANs offer all of the management advantages that centralized backup solutions offer coupled with high
data transfer rates generally associated with directly attached or distributed backup solutions.
SANs offer great promise but are relatively new to the market. In addition to increasing backup operations
efficiency, SANs allow storage to decouple from the server. Decoupling storage from the server allows IT
individuals more flexibility in controlling storage resources. Currently the only practical interconnect for a SAN is
Fiber Channel. Figure below shows a LAN-free Backup implementation based on a Fiber Channel SAN.
In Figure below, a backup operation involves copying data from internal server storage and writing it to the tape
library. In this case, data copies occur only once before data is written to tape.
Since backup data does not traverse the network stack, the CPU utilization is much lower than with the
centralized backup method. Given that the maximum transfer rate for a 1 Gigabit Fiber Channel interconnect is
around 100 Mbytes/sec, the limiting factor for SAN backup performance is now the tape drive transfer rate. A FC-
based SAN can fully backup a 500GB site in about 15 hours using a two-drive tape library. Using a four-drive tape
library, the backup can be done in about 7.5 hours.
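Once the Fiber Channel link is no longer the bottleneck, the backup window scales with the number of tape drives. The Python sketch below reproduces the 15-hour and 7.5-hour figures; the per-drive rate of roughly 4.7 MB/s is an assumption inferred from those figures, not a quoted specification:

def tape_limited_hours(total_gb, drives, mb_per_sec_per_drive=4.7):
    # per-drive rate is an assumption chosen to match the figures quoted above
    return total_gb * 1000 / (drives * mb_per_sec_per_drive) / 3600

print(round(tape_limited_hours(500, 2), 1))   # -> ~14.8 hours with a two-drive library
print(round(tape_limited_hours(500, 4), 1))   # -> ~7.4 hours with a four-drive library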
Figure below also shows a Fiber Channel to SCSI Router. Since native FC tape libraries are relatively new, this
enables using of legacy SCSI tape libraries.

Fig. 3.18.11.3.2 – Servers storage and FC HBA’s

With Fiber Channel providing 100 Mbytes/sec today (moving to 200 Mbytes/sec in the near future), there is more
than enough backup application bandwidth. The high bandwidth of Fiber Channel also allows sharing external
storage. Figure 5 shows a SAN configuration with external storage and an attached tape library. There are
numerous advantages to having storage external to the servers, including storage sharing, the ability to scale
storage independently, easier storage manageability, and so on. A full discussion of these advantages exceeds this
document’s scope.

Fig. 3.18.11.3.3 – Server with FC HBA’s

Having storage external to the server introduces the possibility of performing a server-less backup. In a server-
less backup, the server issues a SCSI third party copy command to the backup device. The backup device then
becomes a SCSI initiator and copies the data directly from the storage elements. This has the advantage of not
requiring servers to copy data from the storage element and send it to the backup device. The server is not part of
the data movement and can therefore devote all its compute cycles to serving applications.


3.18.12. Conclusion
A Fiber Channel, LAN-free backup solution offers all management advantages of a centralized backup scheme
coupled with the high performance of distributed backup. For cost sensitive solutions, a Fiber Channel hub can
replace the switch. Fiber Channel hubs are less expensive than switches but do not scale well for configurations
that involve external storage.
LAN-free backup using Fiber Channel is an excellent solution for environments that have a heavily congested
LAN and need to perform system backups without impacting LAN performance.
LAN-free backup is a first step into SAN technology. With the addition of external storage, the true power of SANs
can be realized. Applications such as storage centralization, virtualization, and clustering allow IT environments to
reach new levels of reliability, scalability, and maintainability.

3.18.13. LAN Free Backup Solution Benefits


Assurance of an interoperable solution
All products are rigorously tested for interoperability with all types of Fiber Channel devices and configurations to
deliver the utmost in compatibility, performance, and reliability - now and in the future.

Ease of installation
Recipe book that includes an installation guide and users manual makes installation easy. Installation and
technical support are available through a single point of contact.

Flexibility of design
As storage demand grows component selection is not limited to one brand.

Best of breed components


Adaptec - the industry leader in host storage business
Brocade - the industry leader in Fiber Channel switching
Chaparral - providing industry leading performance in Fiber Channel to SCSI connectivity
ADIC - the industry leader in tape auto-loaders

3.18.14. Fiber Channel Strategy for Tape Backup Systems


The basic architecture for tape backup and archive systems has remained unchanged since the introduction and
subsequent wide-scale implementation of the LAN over 15 years ago. Fiber Channel technology allows for
the implementation of new tape backup system architectures that will greatly benefit systems administrators,
network engineers, and ultimately the customers that rely on these systems.
Two new architectures, Fiber Channel-based LAN-free and server-less backup, achieve large increases in
overall tape backup system throughput by eliminating many of the bottlenecks.

3.18.14.1. Stage - 1 (LAN Free Backup)


LAN-free backup is the application of Fiber Channel technology to the tape storage sub-system to increase
performance in the overall storage system by eliminating the need to pull data over the LAN and through a file or
application server. Typically, it is deployed using a tape server, tape library, and disk-based storage all attached
directly to Fiber Channel infrastructure as shown in the diagram below. The tape library is attached to the Fiber
Channel network by using a bridge device, such as the ATTO FiberBridge, which also acts as a hardware buffer
for incoming data.


Fig. 3.20.1 – LAN Free Backup Model

In this design, the tape server can stream data directly from the storage to the bridge device at 85 to 90 MB/sec.
The only bottlenecks are the speed of the tape library itself and the realized throughput of the tape server.

Advantages of LAN-Free Approach


The advantage of the LAN-free backup approach is increased throughput to the tape devices and, hence, shorter
backups. By removing the Ethernet bottleneck (2 - 8 MB/sec), the performance envelope is most affected by the
throughput of the tape units themselves, usually between 12 - 20 MB/sec. This improves performance by 2.5 to 10
times, an immediately realizable gain in efficiency. Tape RAID software can also be used to boost
performance by aggregating tape device bandwidth in the same manner as disk RAID.
LAN-free backup can also leverage existing assets, keeping cost of entry to SAN architectures low. Thus LAN-
free backup can be viewed as an upgrade to the existing tape storage sub-system, rather than as an entirely new
installation.
By deploying existing assets in a new manner, thereby extending their lifetime, a
greater ROI is realized on those assets. Time to implement is also significantly shortened, which delivers more
immediate benefit to the system and reduces the overall cost of implementation.
Besides the obvious financial benefits, LAN-free backup brings with it service improvements that positively affect
end users. Since the LAN is not involved, heavy loads are no longer placed on the LAN or on application
servers during backups. This enables higher overall service levels in the LAN and better application response
times.
LAN-free backup helps ensure that backups are complete without disruption to other systems. This in turn
reduces adverse effects of backups on normal business operations.

3.18.14.2. Stage - 2 (Server-Less Backup)


Server-less backup takes the LAN-free solution one step further. In this environment, the tape server is relegated
to the role of system coordinator rather than data mover. A copy device such as the ATTO FiberBridge takes on
the task of actually moving data from the disk storage to the tape library.


Fig. 3.20.2 – Server Free Backup Model

Typically, there are several major elements to a server-less backup solution. First is the hardware infrastructure
deployed for LAN-free backup. Second, a bridge device such as the ATTO FiberBridge, capable of acting as a
copy device or independent data movement unit, is needed to actually move the data. Finally, special control
software such as Legato's Celestra issues commands to the copy device and ensures smooth operation of
the system. A tape server is still necessary, but it acts as a place to house the control software rather than as a
system device dedicated to moving data.
The copy device follows a philosophy similar to that of the network computing devices sometimes referred to as network
appliances. It is a specialized device with sufficient and specialized resources to perform a specific rather than
general activity within a network or SAN. In the case of server-less backup, the copy device
needs enough compute power and memory to support the movement of large blocks of data. The copy
device must also support connections to the other devices that may be involved in moving the data, in this
case disk drives and tape libraries. Finally, the device must provide a software interface that allows it to interact
with software applications that wish to control, manage, and track the movement of data in the SAN. Currently,
the Extended Copy command interface is the most popular interface for these types of applications.
In general the market has looked to bridge devices since many of them, including the ATTO FiberBridge, have
these attributes.
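To make the division of labour concrete, here is a hedged Python sketch of a third-party-copy job: the control server only builds a descriptor list of source extents and a destination, and hands it to the copy device. It mirrors the idea of the SCSI Extended Copy command, not its actual on-the-wire format; all names are illustrative.

from dataclasses import dataclass

@dataclass
class CopySegment:
    src_device: str   # disk LUN on the SAN
    src_lba: int      # starting logical block address
    blocks: int       # number of blocks to copy
    dst_device: str   # tape drive behind the bridge / copy device

def build_copy_job(file_extents, disk_lun, tape_drive):
    """Translate a file's disk extents into segments for the copy device to move."""
    return [CopySegment(disk_lun, lba, count, tape_drive) for lba, count in file_extents]

# the control server sends this list to the copy device; the server never touches the data
job = build_copy_job([(0, 2048), (8192, 4096)], "lun0", "tape0")
for segment in job:
    print(segment)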

Advantages of Server-Less Approach to End-Users


There are a number of major advantages to this approach. While not as dramatic as the advantages of LAN-free
backup, gains may be found from server-less backup that are not available in any other architecture.
Removing the remaining bottleneck in the system will create performance gains in an important area: the tape
server. Even in LAN-free backup, the backup server's performance is directly related to the memory, I/O, and
CPU performance of the backup server itself. This inhibitor to optimal performance is eliminated as the data
moves through the high-performance copy device, optimized for data movement, rather than through a general-
purpose computer bound by multiple needs and a non-specific architecture.
With server-less backup, cost savings may be realized by the elimination of expensive, high-end servers and their
replacement with relatively inexpensive copy devices such as the ATTO FiberBridge and low-end control servers.
In fact, since software such as Celestra can usually share space on another server, the dedicated tape server
may be eliminated altogether for additional savings.
This architecture also makes possible the ability to stream the same data to several tape libraries at once even if
geographically separated without the necessity of copying and moving the tapes themselves. This provides an
effective element in a disaster recovery plan.
Finally, the system becomes simpler overall. A general-purpose server that requires significant administration and
maintenance is replaced with a standalone device that needs virtually no maintenance and can be replaced quickly
in case of failure. This, coupled with the aforementioned cost savings, results in a lower Total Cost of
Ownership for the backup sub-system and system infrastructure as well.


3.18.14.3. Suggested Deployment Strategy


This is a suggested deployment plan that migrates new customers from a non-shared, parallel SCSI-based
solution to a server-less backup solution:
 Upgrade Existing Systems to LAN-free Backup - by deploying a bridge device and other Fiber
Channel components, current systems can be upgraded to a LAN-free backup architecture. This has the
advantage of proving out the hardware infrastructure before moving to server-less backup. It is
advantageous to start here because the step forward is not as radical and yet provides an immediate
enhancement.
 Upgrade Existing Systems to Server-less backup - upgrade the LAN-free systems to serverless
where desired. Since this is likely done through software updates to the bridge device as opposed to
hardware additions, this provides a method of achieving this functionality at a low incremental cost with
little risk. Any risk is also mitigated by the ability to fall back on the existing LAN-free backup system.
 Add Capacity - as need for more backup capacity grows, add more inexpensive copy devices and
additional tape libraries. It is at this point that the cost effectiveness of this solution becomes apparent.
Instead of having to add additional servers (that require extensive administration) and upgrades to the
LAN, inexpensive copy devices such as the ATTO FiberBridge are added.
 Add faster tape units - as the speeds of tape devices themselves increase, so will the overall efficacy
and performance of the tape backup sub-system. Since the tape devices themselves are the bottleneck,
increases in performance will be immediately realized when the performance of the tape units is
increased. The current architecture places control of system performance with the LAN and server rather
than the tape unit itself. LAN-free and Serverless backup architectures shift control of bandwidth and
hence system performance to the high-speed Fiber Channel network, high-bandwidth copy device, and
tape library.
By implementing a migration strategy away from the current server- and LAN-oriented backup systems toward a
Fiber Channel based shared tape system, improvements in overall system performance and reductions in total
cost of ownership of the system will be achieved. By extending the lifetime of existing assets while increasing
overall system performance and reducing operating costs, a better return on investment will be realized.
Finally, end-users and companies will be served better by experiencing less disruption to their overall systems
and hence regular business operations. For this reason alone, implementation of this strategy is worthwhile.

3.19. iSCSI
3.19.1 Introduction of iSCSI
With the release of Fiber Channel and the SANs based on it, the storage world staked its future on network access
to storage devices. Almost everyone announced that the future belonged to storage area networks. For several
years the FC interface was the only standard for such networks, but today many realize that this is no longer so.
SANs based on FC have some disadvantages, namely price and the difficulty of accessing remote devices. At
present several initiatives are being standardized that are meant to solve or diminish these
problems. The most interesting of them is iSCSI.

Fig. 3.19.1 – iSCSI


The word iSCSI can often be seen in the press and in the advertisements of leading storage device makers. However,
different sources have very different views: some consider iSCSI the indisputable leader for data storage systems
in the near future, while others had already given it up for lost before it was born.
iSCSI (Internet Small Computer System Interface) is a TCP/IP-based protocol for establishing and managing
connections between IP-based storage devices, hosts and clients.

3.19.2. Advantages of iSCSI


 iSCSI provides storage consolidation, improves disk capacity utilization, eases storage management,
and consolidates data backup. It allows access from any server or client inside a building, across a
campus, throughout a metropolitan area, or around the world, over an existing IP infrastructure or across
a dedicated IP infrastructure for storage traffic.
 iSCSI overcomes the distance limitations posed by traditional direct attached storage solutions.
 iSCSI enables the deployment of cost-effective storage area networks based on technologies already
supported and understood (i.e., SCSI, IP, Ethernet, SNMP).
 iSCSI is an extremely interoperable technology built upon time-tested underlying standards including SCSI,
TCP/IP, and Ethernet.

3.19.3. Advantages of iSCSI on SAN:


High Availability
Multiple paths between servers and storage enable a constant connection, even if some lines go down.

Scalability
The switched architecture of SANs enables IT managers to expand storage capacity without shutting down
applications.

Maximize Storage Resource Investment


SANs allow you to share disk and tape devices across heterogeneous platforms.


Uses Well-Understood Ethernet Technology

Fig. 3.19.4 – Overview of Ethernet Technology

3.19.4. iSCSI describes:


 A transport protocol for SCSI that operates on top of TCP
 A new mechanism for encapsulating SCSI commands on an IP network
 A protocol for a new generation of data storage systems that natively use TCP/IP
It is well known, however, that the rules of packet delivery differ for IP and SCSI. In IP, packets are delivered without
a strict order, and the protocol is also responsible for data recovery, which consumes additional resources. In SCSI,
by contrast, as a channel interface, all packets must be delivered one after another without delay, and a breach of
the order may result in data loss. Although, according to some experts, this problem brings ambiguity into the
practical use of the iSCSI technology, devices already exist today that prove its viability. The engineers developing
iSCSI managed to solve this problem to a large degree: the iSCSI specification calls for a longer packet header
that carries additional information, which speeds up packet reassembly by a great margin.
According to a senior system engineer at a Utah university, the only obstacle to the popularization of Ethernet
as a base technology for building storage area networks is its relatively high latency (close to 75
microseconds), caused by the peculiarities of the TCP/IP stack. This can be a crucial problem in high-end systems
when thousands of files are accessed simultaneously.
Experts working on iSCSI pay careful attention to the problem of latency. Although many techniques have been
developed to reduce the factors that cause delays in IP packet processing, the iSCSI
technology is positioned for mid-range systems.
iSCSI has developed quite rapidly. The need for a new standard was so strong that within 14 months of the iSCSI
proposal to the IETF in February 2000, many devices appeared demonstrating their ability to interoperate. Draft 0
of iSCSI, published in July 2000, initiated practical implementation of the technology. In January 2001 the IP Storage Forum
was created within the SNIA (Storage Networking Industry Association), and it already had 50 members within half a
year; a product released in April 2000 soon won the Enterprise Networking Product Prize.
So what is it about iSCSI that attracts the majors of the IT industry, who do not even seem to consider the
contradictions of this standard?
Here are the most important applications and functions that can be realized with networked data storage systems:


First, those that can already be realized effectively using existing methods:
 Consolidation of data storage systems
 Data backup
 Server clusterization
 Replication
 Recovery in emergency conditions
And here are the new capabilities that can be realized effectively with IP Storage:
 SAN geographic distribution
 QoS
 Safety
In addition, new storage systems for which iSCSI is native provide further advantages:
 A single technology for connecting storage systems, servers and clients within the LAN, WAN and SAN
 The industry's extensive experience with Ethernet and SCSI technologies
 The possibility of substantial geographic separation of storage systems
 The possibility of using the management tools of TCP/IP networks
To transfer data to storage devices with the iSCSI interface, it is possible to use not only the cabling, switches and
routers of existing LANs/WANs but also ordinary network cards on the client side. This, however, comes at a
considerable cost in processor power on the client that uses such a card. According to the developers, a software
iSCSI implementation can reach Gigabit Ethernet data rates, but at a significant CPU load of about 100%.
That is why it is recommended to use special network cards that offload TCP stack processing from the host CPU.
At present (June 2002), such cards are produced by Intel.
The Intel PRO/1000T IP Storage Adapter is offered at USD 700. It contains a powerful XScale processor and 32 MB of
memory, and offloads the calculations related to iSCSI and TCP/IP, as well as TCP and IP frame checksums, to the
integrated processor. According to the company, it can deliver up to 500 Mbit/s at only a 3-5% CPU
load on the host system.

3.19.5. How iSCSI Works


When an end user or application sends a request, the operating system generates the appropriate SCSI
commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A
packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a
packet is received, it is decrypted (if it was encrypted before transmission), and disassembled, separating the
SCSI commands and request. The SCSI commands are sent on to the SCSI controller, and from there to the
SCSI storage device. Because iSCSI is bi-directional, the protocol can also be used to return data in response to
the original request.
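A highly simplified Python sketch of that encapsulation step follows. The READ(10) CDB layout is standard SCSI, but the PDU header here is deliberately abbreviated and does not follow RFC 3720's 48-byte basic header segment; it only illustrates the idea of wrapping a SCSI command for transport over TCP.

import struct

def read10_cdb(lba, blocks):
    # standard 10-byte SCSI READ(10) command descriptor block
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def wrap_in_pdu(cdb, lun=0, task_tag=1):
    # toy 16-byte header (opcode, flags, CDB length, LUN, task tag) + the CDB;
    # a real iSCSI PDU uses the 48-byte header defined by the standard
    header = struct.pack(">BBHQI", 0x01, 0x80, len(cdb), lun, task_tag)
    return header + cdb

pdu = wrap_in_pdu(read10_cdb(lba=1024, blocks=8))
print(pdu.hex())   # these bytes would be written to the TCP connection (port 3260)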
iSCSI is one of two main approaches to storage data transmission over IP networks; the other method, Fiber
Channel over IP (FCIP), translates Fiber Channel control codes and data into IP packets for transmission
between geographically distant Fiber Channel SANs. FCIP (also known as Fiber Channel tunneling or storage
tunneling) can only be used in conjunction with Fiber Channel technology; in comparison, iSCSI can run over
existing Ethernet networks. A number of vendors, including Cisco, IBM, and Nishan have introduced iSCSI-based
products (such as switches and routers).

3.19.6. Applications that can take advantage of these iSCSI benefits include:
 Disaster recovery environments for stored data that needs to be mirrored/recovered in a remote location
can take advantage of the distance extensions that iSCSI enables over an IP network.
 Fiber Channel server and storage extensions.
 Storage backup over an IP network enables systems to maintain backups online and always be ready
and available to restore the data.
 Storage virtualization and storage resource management applications can create a shared storage
environment for all users on a global IP network.
Any application can now take advantage of data from remote sites that are accessible over an IP network,
expanding the usefulness of this data to E-commerce applications.


3.19.7. iSCSI under a microscope

Fig. 3.18.7.1 - IP network with iSCSI devices used

Here, each server, workstation and storage device support the Ethernet interface and a stack of the iSCSI
protocol. IP routers and Ethernet switches are used for network connections.
A SAN makes it possible to use the SCSI protocol in network infrastructures, thus providing high-speed,
block-level data transfer between multiple elements of a data storage network.
The Internet Small Computer System Interface also provides a block data access, but over TCP/IP networks.
The architecture of pure SCSI is based on the client/server model. A client, for example a server or workstation,
initiates requests to read or write data from a target, for example a data storage system.
Commands sent by the client and processed by the server (the target) are placed into a Command Descriptor Block
(CDB). The server executes the command, and its completion is indicated by a special signal. Encapsulation
and reliable delivery of CDB transactions between initiators and targets over a TCP/IP network is the main
function of iSCSI, which must be accomplished over a medium untypical for SCSI: the potentially unreliable
medium of IP networks.
Below is a model of the iSCSI protocol layers, which gives an idea of the order in which SCSI
commands are encapsulated for delivery over a physical medium.

Fig. 3.18.7.2 – iSCSI Protocol Levels Model

The iSCSI protocol controls the transfer of data blocks and confirms that I/O operations are truly completed. In turn, it
operates over one or several TCP connections.

The iSCSI has four components:


 iSCSI Address and Naming Conventions.
 iSCSI Session Management.


 iSCSI Error Handling.


 iSCSI Security.

3.19.8. Address and Naming Conventions


Because iSCSI devices are participants in an IP network, they have individual Network Entities. Such a Network Entity
can have one or several iSCSI nodes.

Fig. 3.18.8 - Model of Network Entities

An iSCSI node identifies a SCSI device (within a Network Entity) that is reachable through the network. Each
iSCSI node has a unique iSCSI name (up to 255 bytes), formed according to rules similar to those adopted for
Internet nodes - for example, fqn.com.ustar.storage.itdepartment.161. Such a name is easy for people to read and
can be processed by the Domain Name System (DNS). An iSCSI name correctly identifies an iSCSI device
irrespective of its physical location; at the same time, while actually handling data transfer between devices it is
more convenient to use the combination of an IP address and a TCP port, which is provided by a Network Portal.
In addition to iSCSI names, the protocol supports aliases, which appear in administration systems so that system
administrators can identify and manage devices more easily.
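As an illustration of the two name forms defined by the iSCSI specification (the names below are placeholder
examples in the standard formats, not names taken from any particular installation): an iSCSI Qualified Name (iqn)
is built from a registration date and a reversed domain name, while an EUI-64 based name (eui) is built from a
16-hex-digit identifier.

iqn.2001-04.com.example:storage.itdepartment.disk161
eui.02004567A425678D

On a Linux initiator that uses the open-iscsi package, for example, such a name is typically kept in a small
configuration file (the exact path depends on the distribution):

InitiatorName=iqn.2001-04.com.example:server1     (contents of /etc/iscsi/initiatorname.iscsi)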

3.19.9. Session Management


An iSCSI session consists of a Login Phase and a Full Feature Phase, and is ended with a special logout
command.
The iSCSI Login Phase is analogous to the Fiber Channel Port Login process (PLOGI). It is used to negotiate
various parameters between two network entities and to confirm the access rights of the initiator. If the iSCSI
Login Phase completes successfully, the target confirms the login to the initiator; otherwise the login is not
confirmed and the TCP connection is closed.
As soon as the login is confirmed, the iSCSI session moves to the Full Feature Phase. If more than one TCP
connection has been established, iSCSI requires that each command/response pair travel over a single TCP
connection. Thus each individual read or write command is carried out without having to trace the request across
different flows. However, different transactions can be delivered over different TCP connections within one
session.


Fig. 3.19.9 - iSCSI Write example

At the end of a transaction the initiator sends or receives the final data, and the target sends a response confirming
that the data was transferred successfully.
The iSCSI logout command is used to end a session; it carries information on the reason for the completion. It can
also indicate which connection should be dropped in the case of a connection error, in order to close troublesome
TCP connections.

3.19.10. Error Handling


Because data delivery errors are relatively likely in some of the IP networks where iSCSI may operate, especially
WANs, the protocol provides a number of measures for handling errors.
For error handling and recovery to work correctly, both the initiator and the target must be able to buffer commands
until they are acknowledged. Each end must be able to selectively recover a lost or damaged PDU within a
transaction in order to resume the data transfer.

The hierarchy of error handling and recovery after failures in iSCSI is as follows:
 At the lowest level, an error is detected and data is recovered at the SCSI task level, for example by
retransmitting a lost or damaged PDU.
 At the next level, the TCP connection carrying a SCSI task may fail; in this case an attempt is made to
recover the connection.
 Finally, the iSCSI session itself can fail. Terminating and recovering a session is usually not required if
recovery works correctly at the other levels, but if it is required, all TCP connections must be closed, all
tasks and outstanding SCSI commands must be completed, and the session must be restarted through a
new login.

3.19.11. Security
Because iSCSI can be used in networks where data can be accessed illegally, the specification allows for different
security methods. Encryption mechanisms such as IPsec, which operate at lower layers, require no additional
negotiation because they are transparent to higher layers, including iSCSI. Various solutions can be used for
authentication, for example Kerberos or private key exchange, and an iSNS server can be used as a repository of
keys.

3.19.12. Adaptec iSCSI


iSCSI Snap Servers, storage arrays and HBAs are flexible, cost-effective and easy-to-manage. Ideal for building
iSCSI-based networked storage infrastructures for remote offices, email and other databases, or as primary
storage for data that doesn't require the high performance of Fiber Channel SANs, they provide a high-ROI
storage option for businesses of all sizes.

3.19.12.1. Storage Systems


 Snap Server 18000 - 2U 2TB - 30TB iSCSI-enabled IP SAN RAID solution
 Snap Server 4500 - 1U 1.6TB - 3.6TB iSCSI-enabled IP SAN RAID solution
 Snap Server 4200 - 1U 640GB iSCSI-enabled IP SAN RAID solution
 Adaptec iSA1500 Storage Array - 1U, 4-drive iSCSI to SATA RAID solution

3.19.12.2. HBAs
 Adaptec 7211C (Copper) - 1Gb ASIC-based iSCSI copper adapter with full protocol offload
 Adaptec 7211F (Fiber Optic) - 1Gb ASIC-based iSCSI fiber optic adapter with full protocol offload

3.19.12.3. Adaptec 7211F (Fiber Optic)


The Adaptec 7211F (fiber optic) delivers a high-performance, interoperable connection into iSCSI SANs in Gigabit
Ethernet environments. Unlike NIC-based implementations, the Adaptec 7211F (fiber optic) offers complete
ASIC-based TCP/IP and iSCSI offload, enabling lower CPU utilization and the best price/performance. The
Adaptec 7211F (fiber optic) provides best-in-class interoperability and is ideal for applications such as storage
consolidation, LAN-free and server-free backup, database and e-mail deployment, as well as remote replication
and disaster recovery.

Highlights
 The premier choice for connectivity
 High-speed iSCSI SAN connectivity with minimal CPU utilization
 Fully offloads protocol processing from the host CPU
 Enables any enterprise that uses standard Ethernet technology to consolidate storage, increase data
availability, and reap the benefits of SANs
 Ideal for environments where storage consolidation, LAN-free backup, and remote replication are required

Benefits
 Delivers outstanding iSCSI performance using familiar, affordable technology
 Ideal for environments where storage consolidation, LAN-free backup, and remote replication are
required. Database, e-mail, and disaster recovery applications are perfectly suited to iSCSI SANs with iSCSI HBAs.
 Fully offloads protocol processing from the host CPU
 High-speed iSCSI SAN connectivity with minimal CPU utilization
 Enables any enterprise that uses standard Ethernet technology to consolidate storage, increase data
availability, and reap the benefits of SANs
 Enables low latency SCSI "blocks" to be transported via Ethernet and TCP/IP

3.19.13. Conclusion
I am quite sure that Fiber Channel will not disappear in the near future and that the FC SAN market will continue to
develop. At the same time, the IP Storage protocols will make it possible to use storage area networks effectively
in applications for which FC cannot provide an effective implementation. With the FCIP and iFCP protocols, data
storage networks can be geographically distributed. And iSCSI will make it possible to bring the advantages of the
SAN to areas that popular technologies have so far served ineffectively or not at all.

3.19.13.1. P.S.
The concept of the World Wide Storage Area Network (WWSAN) is based on the rapid development of data
storage networks.


WWSAN provides for an infrastructure that supports high-speed access to, and storage of, data distributed all over
the world. The concept is very close to that of the WWW but is based on different services. One example is
serving a manager who travels around the world giving presentations: WWSAN transparently moves his "mobile"
data as its owner moves, so that wherever the manager happens to be, he always has high-speed access to the
data he needs, without the complicated and inefficient synchronization the WWW would require.
The concept of building the World Wide Storage Area Network fits in excellently with the development of modern
IP Storage technologies.

3.19.13.2. Terms and abbreviations:


 SAN - Storage Area Network
 CDB - command descriptor block.
 PDU - Protocol Data Unit.
 SNIA - Storage Networking Industry Association.
 DNS - Domain Name System.
 PLOGI - Fiber Channel Port Login.
 iSCSI - Internet Small Computer Systems Interface
 FCIP - Fiber Channel over TCP/IP
 iFCP - Internet Fiber Channel Protocol
 iSNS - Internet Storage Name Service
 WWSAN - World Wide Storage Area Network
 QoS - Quality of Service (usually characterizes a network in terms of latency and bandwidth).

3.19.14. Others (iFCP, FCIP)


The IP Storage (IPS) working group was created within the Internet Engineering Task Force (IETF) to develop
network storage technologies; its work covers the following areas:
 iSCSI (Internet Small Computer Systems Interface)
 FCIP (Fiber Channel over TCP/IP)
 iFCP (Internet Fiber Channel Protocol)
 iSNS (Internet Storage Name Service)
In January 2001 the IP Storage Forum was established within SNIA (Storage Networking Industry Association).
Today the Forum includes three subgroups - FCIP, iFCP and iSCSI - each representing a protocol being
developed under the IETF.
FCIP is a TCP/IP-based tunneling protocol designed to connect geographically distant FC SANs without affecting
the FC and IP protocols.
iFCP is a TCP/IP-based protocol for connecting FC data storage systems using IP infrastructure together with, or
instead of, FC switching and routing elements.
The diagram below shows how networks based on these three protocols are positioned.


Fig. 3.19.14 - Diagram of IP Storage networks

3.19.14.1. Fiber Channel over IP


The least revolutionary of these three protocols is Fiber Channel over IP: it brings no changes to the SAN structure
or to the organization of storage systems. The main idea of this protocol is the functional integration of
geographically remote storage networks.

Here is the stack of the FCIP protocol:

Fig. 3.19.14.1 - Lower levels of the FCIP protocol

FCIP effectively solves the problem of geographical distribution and integration of SANs over large distances. The
protocol is entirely transparent to existing FC SANs and relies on the infrastructure of modern MAN/WAN networks.
To merge geographically remote FC SANs with this new functionality, you only need FCIP gateways and a
connection to a MAN/WAN network.


A geographically distributed SAN based on FCIP is seen by the SAN devices as an ordinary FC network, while the
MAN/WAN network it is connected to sees it simply as ordinary IP traffic.

3.19.14.2. FCIP IETF IPS Working Group Draft Standard specifies:


 The rules for encapsulating FC frames for delivery over TCP/IP;
 The rules for using this encapsulation to create a virtual connection between FC devices and elements of an
FC network;
 The TCP/IP environment that supports creation of the virtual connection and tunneling of FC traffic through
an IP network, including security, data integrity, and data-rate issues.
Typical applied problems that can be successfully solved with FCIP include remote backup, data recovery, and
shared data access. With high-speed MAN/WAN communications one can also implement synchronous data
mirroring and shared distributed access to data storage systems.

3.19.14.3. iFCP
The Internet Fiber Channel Protocol provides FC traffic delivery over the TCP/IP transport between iFCP
gateways. In this protocol the FC transport layer is replaced with the transport of the IP network, and the traffic
between FC devices is routed and switched by TCP/IP means. The iFCP protocol allows existing FC data storage
systems to be connected to an IP network with support for the network services that these devices require.

Here is how the iFCP protocol stack looks:

Fig. 3.19.14.3 - Lower levels of the iFCP protocol

According to the specification, iFCP:

 Maps FC frames onto a predetermined TCP connection for delivery;
 Terminates the FC message-delivery and routing services in the iFCP gateway device, so that the network
structures and components of the FC SANs are not merged into one FC SAN but are managed by TCP/IP
means;
 Dynamically creates IP tunnels for FC frames.
An important feature of iFCP is that it provides FC device-to-device connections over an IP network, which is a
more flexible scheme than SAN-to-SAN coupling. For example, if iFCP maintains a TCP connection between a
pair of N_Ports of two FC devices, that connection can have its own QoS level, different from the QoS level of
another pair of FC devices.

3.19.15. How to Build an iSCSI SAN


Not all organizations can afford to build a storage-area network. Why spend hundreds of thousands of dollars and
lots of management hours, the thinking goes, if you can get by with your existing direct-attached storage that's
more of an inconvenience than a problem?
But don't discount a SAN so easily: With the advent of iSCSI, storage networks are becoming more affordable for
organizations of all sizes. You won't get all the benefits of a high-end Fiber Channel SAN with iSCSI, but you will
have the immediate advantage of remote, centralized storage.


iSCSI sends SCSI commands over an IP network. As long as the machine requesting data and the machine
serving the data both understand iSCSI, the requesting machine will see drives and data on the server as "local."
This lets you expand the data in your data server (or group of servers) and not throw disks into every network app
server.
In iSCSI parlance, an initiator is a device or software that maps SCSI into IP: It wraps SCSI commands in an IP
packet and ships them to an iSCSI target. The target machine unwraps iSCSI packets from IP and acts upon the
iSCSI commands. It returns an iSCSI response or multiple responses, which are usually blocks of data.
The server is your application server, and the storage box is the machine serving up iSCSI drives. (We're using
storage box to represent anything from a Linux software iSCSI target to a full-blown SAN with iSCSI support.)
You need a gigabit copper network for an iSCSI SAN. If you try running iSCSI over a 100-Mbps network, you'll be
disappointed. Even assuming your network connection maintains 100 percent utilization, 100 Mbps is roughly
equivalent to 12.5 MB per second of disk transfer. Because iSCSI has a request/response for every packet
transferred, and network performance degrades before 100 percent saturation, the best performance you'll
realistically get is about 6.25 MBps of throughput. That's a rough estimate that includes time to wrap and unwrap
data packets and responses to each packet.
Bottom line: 6.25 MBps of data transfer is not good, considering that most drives run in the 40- to 320-MBps
transfer range. Besides, Gigabit Ethernet is affordable: Gigabit adapters start at $60; switches, $120. And Gigabit
Ethernet has the throughput for treating iSCSI as a local drive.
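As a rough worked example of the arithmetic behind those figures (the halving for per-packet request/response
overhead is an illustrative assumption consistent with the estimate above, not a measured value):

100 Mbps Fast Ethernet:      100 / 8  = 12.5 MB/sec raw; roughly half usable = about 6.25 MB/sec
1,000 Mbps Gigabit Ethernet: 1000 / 8 = 125 MB/sec raw; with the same overhead, still roughly 60 MB/sec

That order-of-magnitude difference is why Gigabit Ethernet, unlike Fast Ethernet, has the headroom to treat an
iSCSI volume like a local drive.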

Fig. 3.19.15 – iSCSI over Ethernet Network


Don't put your iSCSI SAN on your regular IP network, either. There's plenty of traffic running over that already, and
iSCSI is bandwidth-intensive. Also consider whether your servers have enough CPU power to handle iSCSI.
Unwrapping and reassembling iSCSI packets can take a lot of CPU time: iSCSI expects its data in order, while the
underlying IP network can deliver packets out of order, so the TCP/IP stack must reorder and reassemble them on
top of its already intensive processing. So if your server CPU is moderately utilized, you'll need an HBA (Host Bus
Adapter) or TOE (TCP Offload Engine). These devices take care of some, or all, of the iSCSI and TCP processing
without burdening the CPU.
HBAs are storage-only cards that connect your machine to a target using iSCSI. TOEs are TCP-only cards that
off-load TCP processing from the CPU. A TOE is useful in iSCSI because of the high volume of packets
transferred, while an HBA also processes the SCSI commands--another data-intensive job. HBAs cost $600 to
$1,200 each, so add them only to machines that need the extra CPU headroom.
And check with your HBA vendor to ensure that its product supports regular TCP/IP communications (most don't).
If it doesn't, buy a separate gigabit NIC for that machine if it will handle any management for your storage
network. Ideally, the NIC should sit on a separate machine on the gigabit SAN--but not participating in storage
work--for network management.


3.19.16. Setup
Part of iSCSI's appeal is you don't need specialized networking knowledge--like you do with Fiber Channel
SANs--to set it up. It's relatively simple to build and configure.
First, set up your IP network. Install Ethernet cards or HBAs, and remember to put one in your storage server if
you have a target that requires a card or blade to make it iSCSI-ready. You have several options: At the low end
is the UNH-iSCSI open-source project that builds a target for Linux. You can install it on any Linux machine,
customize the config file and have an iSCSI target. Fill the box with drives and use it as your storage box.
Alternatively, you can buy premade storage boxes that are iSCSI targets with plenty of room for drives. This is a
good place to start if your budget is tight. You'll need to choose the number of drives, the type of drive (SCSI, ATA
or Fiber Channel) and how much expandability you need in the device, as well as the amount of throughput.
Another option is to make your existing Fiber Channel SAN and NAS equipment iSCSI-compatible, with iSCSI
cards for SANs and iSCSI gateways for NAS products.
Next, run cables to your gigabit switch. Remember, you're creating a separate IP network from your backbone. IP
networking is much the same no matter the medium--configure the network using your OS guidelines.
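As a minimal sketch of that step on a Linux host (the interface name eth1 and the 192.168.100.0/24 range are
assumptions - use whichever NIC and private subnet you have set aside for storage traffic):

On the application server:  # ifconfig eth1 192.168.100.11 netmask 255.255.255.0 up
On the storage box:         # ifconfig eth1 192.168.100.21 netmask 255.255.255.0 up
Verify the storage network before configuring iSCSI:  # ping 192.168.100.21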

3.19.17. Pain-Free Initiation


Configuring iSCSI differs from product to product, and it's different for the initiator and the target systems. Once
you've got your storage IP network running and devices can ping each other, setting up the iSCSI portion is easy:
just enable the iSCSI functionality.
Using the UNH Linux target software, you can modify the target configuration file to set up drives, portals and
targets. Windows and dedicated storage devices have interface tools for this configuration.
Once you've defined the targets and the drives they offer, set up the initiators. If a given server doesn't have an
HBA acting as an initiator, you'll have to install a software initiator, such as the Microsoft iSCSI Initiator. This adds
value even if you have an HBA installed. Some HBAs can't use the Microsoft Initiator, though, so check with your
HBA vendor.
Some vendors have written their own iSCSI initiators that are delivered with their cards. There are initiators and
targets for both Windows and Linux available. Windows has an "iSCSI Initiator" icon in the Control Panel for
automatically discovering targets, and you can set up undiscovered targets, too. Linux has a config file you modify
manually to set up iSCSI, and there's a handy how-to for this purpose. If there's an HBA on your storage machine,
check your HBA vendor's documentation for help.
Once the initiators are configured, try to contact the targets. Beware: If you set up the targets after the initiators,
you have to rescan the iSCSI network at each initiator to see the targets.
To check that both TCP/IP and iSCSI are working, the iSCSI protocol maps a command to do an "iSCSI ping." It's
a SCSI command wrapped in an IP packet that doesn't request any operation from the target--just a response.
Next, select the drives you want each initiator to use, which should be visible as local drives. Format them and
begin using them like any other new drive in the system.
Over time, you can augment or replace software initiators with HBAs or TOEs, and you can swap out your lower-
end target with anything up to a full-blown SAN. You can even upgrade to 10-Gbps Ethernet for greater
throughput, if necessary.
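As a concrete sketch of the initiator-side steps on Linux: distributions that ship the open-iscsi package replace the
hand-edited config file described above with the iscsiadm utility. The IP address, target name and device path
below are placeholders for whatever your storage box actually reports:

# iscsiadm -m discovery -t sendtargets -p 192.168.100.21
(lists the target IQNs offered by the storage box)
# iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 -p 192.168.100.21 --login
(the target's drive now appears as a local SCSI disk, for example /dev/sdb)
# fdisk /dev/sdb
# mkfs.ext3 /dev/sdb1
# mount /dev/sdb1 /mnt/iscsi

If your initiator is an HBA or the Microsoft Initiator, the equivalent discovery and login steps are done through the
vendor's management tool or the Windows Control Panel applet instead.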

3.19.18. SAN Components


 Fiber Channel Switches (SAN Fabric)
 SAN Fabric Management and Monitoring Software
 SAN Fabric Security and Access Control Software
 Storage Devices
 Hosts and Host Bus Adapters (HBA)
 Cabling and Cable Connectors
 Gigabit Interface Converters (GBICs)


4. SAN Setup by WILSHIRE


4.1. Hardware Details
The components of the SAN comprise the following:
JNI FCE – 6410N 64-Bit HBA (Host Bus Adapter)
Brocade SilkWorm 2800 Switch
ATTO FIBER-BRIDGE 2200 R/D
SCSI Tape drive and a SCSI disk subsystem.

4.1.1. JNI FCE 6410N Fiber Channel HBA


The FiberStart line of PCI-to-Fiber Channel host bus adapters (HBAs) is designed to support mission-critical
applications by enabling high-speed data transfers between PCI-based servers and a Fiber Channel link. The
HBAs are designed with a high-performance cut-through architecture, low CPU utilization, a highly efficient
physical layer design and a modular software structure. The stability and reliability of the JNI FiberStart line of
HBAs remain unchallenged, and its interoperability allows for cross-platform implementation.
The FCE-6410 is bundled with two JNI proprietary software products, EZ Fiber and the PC Driver Suite. EZ Fiber
is a powerful graphical management and configuration utility that makes installing and maintaining JNI
HBAs as easy as point-and-click. The PC Driver Suite is an integrated suite of software drivers that enables the
HBA to operate with a wide variety of operating systems.
These connectivity options, along with its industry-leading price/performance and straightforward installation
process, make the FCE-6410 adapter an obvious choice when implementing Fiber Channel in a heterogeneous
environment.
 Supports switched fabric, arbitrated loop and point-to-point topologies
 Full-speed, full-duplex Fiber Channel interface
 Reduced bottlenecks in distributed and clustered environments, with measured performance over 98.5
MB/second (half-duplex)
 LUN-level zoning/mapping enabled through the EZ Fiber Management Utility
 Runs on Windows NT, Windows 2000, Novell NetWare, Red Hat Linux, HP-UX, AIX, Solaris and Mac OS

4.1.2. Brocade SilkWorm 2800 Switch


The Brocade SilkWorm switches create an intelligent storage-networking infrastructure for mission-critical Storage
Area Networks (SANs). The SilkWorm 2800 enterprise-class Fiber Channel switch is designed to address the
SAN requirements of very large workgroups and enterprises. This switch supports business-critical SAN
applications, such as LAN-free backup, storage consolidation, remote mirroring, and high-availability clustering
configurations.
The SilkWorm 2800 switch is also completely interoperable with entry-level switches, enabling cost-effective
“pay-as-you-grow” migration to more advanced SAN environments.
The SilkWorm 2800 switch supports scalability through the networking of multiple switches, and a fabric operating
system (OS) that enables heterogeneous device connectivity, automatic data routing and re-routing, self-healing,
and scalable connectivity. The 16-port SilkWorm 2800 complements the SilkWorm 2400 switch by delivering higher
density connectivity for extremely large SAN fabrics. Up to 239 switches can be networked together, providing
over 2,000 ports of interconnectivity.

Technical Highlights
 16-port switch delivers an industrial-strength framework for enterprise SAN fabrics.
 Each port delivers 100 MB/sec full-duplex line speed.
 Offers superior interoperability with a wide range of servers and storage devices.
 Fabric OS provides powerful fabric management capabilities.


 Provides swappable, redundant power supplies and cooling fans for high reliability, availability, and
serviceability.
 Rack mount, Desktop or Drop-in

Specifications And Features


 16 universal ports automatically determine the port type for loop, point-to-point devices, or an
InterSwitch Link (ISL).
 Scalability: up to 239 switches in a full fabric architecture.
 Performance: switch bandwidth is 16 Gb/sec end-to-end.
 Non-blocking architecture delivers full-speed data delivery irrespective of traffic conditions. Cut-through
routing provides a maximum latency of two microseconds from switch port to switch port.
 448 dynamically allocated frame buffers.
 Fabric switches support unicast, multicast (256 groups), and broadcast; hot-pluggable, industry-standard
GBICs.
 Redundant power supply, GBICs, and rack mount kit.

General
 Supports seamless connectivity to Fiber Channel Arbitrated Loop and full switch fabric configurations.
 Supports disk, tape and removable devices.

Fiber Channel Connectivity


 16 GBIC Fiber Channel ports
 Support for Fiber optic and copper GBICs, short wave and long wave optics.

4.1.3. ATTO Fiber-Bridge 2200 R/D


A Fiber Channel-to-SCSI bridge with advanced management, connectivity and fabric support - a Fiber
Channel-to-SCSI bridge for high-demand environments.
The ATTO FiberBridge 2200R/D is an intelligent Fiber Channel-to-SCSI bridge in a very versatile enclosure. It
provides Fiber Channel performance at 1.0625 Gigabit (100 MB/sec) transfer rates. Tightly coupled with two
independent SCSI ports, it delivers a sustained throughput of 98 MB/sec and supports the latest Fiber Channel
features, including full fabric connectivity.

Technical Highlights
 Single GBIC Fiber channel port.
 Dual independent SCSI buses.
 RS-232, Ethernet and Fiber channel In-band configuration, Management and Monitoring.
 Support for Full Duplex and class 2 transfers.
 Rackmount, Desktop or Drop-in.

Specifications and Features


 One Fiber channel GBIC port.
 Two independent SCSI buses - Ultra2 LVD or High voltage Differential models available.
 LEDs show Fiber channel activity, SCSI bus activity, unit ready and power status.
 Up to 5,000 I/Os per second and a sustained throughput of 98 MB/sec.
 RS-232 Serial port provides command line & menu interface for diagnostic information and configuration
options.
 On-board Ethernet provides SNMP and Telnet based monitoring and configuration.
 Supports full Duplex operations.


General
 Reliably attaches SCSI devices to Fiber channel arbitrated loop and fabric infrastructures.
 Supports disk, tape and removable devices.

Fiber Channel Connectivity


 One GBIC Fiber channel port
 Support for Fiber optic and copper GBICs, short wave and long wave optics.

SCSI Connectivity
 Two independent SCSI buses.

Local and Network Management


 10/100 BaseT Ethernet port for LAN-based Management.
 Out-of-band support for Telnet, FTP and SNMP over Ethernet.
 RS-232 serial port for local management using Fiberbridge services.
 Command-line and menu-based ASCII text management interface for RS-232 and Telnet.
 LEDs for Fiber channel activity, SCSI activity, power and system ready.
 ATTO BridgeTools, Java-based graphical software for configuration and management of the ATTO
Fiberbridge products over Fiber Channel, Ethernet or RS-232.
The ATTO FiberBridge 2200R/D is operating-system independent.

4.1.4. Hardware Installation


Minimum System Requirements
 Windows 2000
 Windows NT version 4.0 with service pack 4 (SP4)
 Solaris version 2.6,7,8.

4.1.5. Installing the Adapter card


 Shut down the system, power down peripherals, and unplug the power cord.
 Unplug any peripheral devices from the system unit.
 To install the adapter card, locate an unused 64-bit expansion slot. PCI slots are typically
white or ivory. If an expansion slot cover is over the slot opening, unscrew and remove the bracket.
Note: The HBAs can be installed in either a 32-bit or 64-bit PCI system, but
64-bit performance will not occur on a 32-bit system.
 Insert the adapter card in the available slot.
 Attach a loop back plug to the FC port.
 Plug power cord, cables and peripherals back into the computer and power up.

Connecting Cables and Devices


Connecting devices to your new adapter card may require a variety of cables and/or adapters.

Optical Interface Connector


If your new adapter card has an optical connection, the interface uses a Fiber optic cable with standard SC Fiber
optic connectors at each end. The optical cables should be plugged into the optical Fiber channel (FC) connector.
See the figure below for the location of the transmitter port (TX) and receiver port (RX) on the optical SC
connector.


4.2. Software Installation


4.2.1. Installation in Solaris 9
Insert JNI CD
cd /cdrom/jni/solaris , you'll see a file jni.pkg.
To add the package, # pkgadd -d /cdrom/jni/solaris/jni.pkg.
Mount the floppy containing the patches, go to /floppy/san
Copy the two patches 108434-01 & 108435-01 to the /opt directory.
cp 108434-01 /opt/108434-01.Z
cp 108435-01 /opt/108435-01.Z
Unzip both the files now.
To add the patches
# patchadd -u 108434-01
# patchadd -u 108435-01 , -u is for updating.
# cd /kernel/drv , here you'll have to edit some files.
Go to st.conf, (for SCSI tape drive),
Line 145: set the LUN number to 10, (depends upon the ID of the tape drive).
Go to sd.conf (for SCSI disk),
where target=1, set LUN=11 (again depends on the id of the device).
Go to jnic.conf,
Line 139: uncomment it for defining the node name and give the node number as you see in the NT machine's
EZ Fiber utility, FC Target 0.
Line 152: uncomment it for defining the port name and give the port number, repeat the same procedure.
Line 166: uncomment it and give port binding = "0000ef";
Now init 0 and boot with boot -r.
Now if your Tape drive is ready, you can see the files in /dev/rmt, and can take backups as usual. If you have any
disks, go to Format Command, you can see the disk.
Insert the JNI CD again:
# cd /cdrom/jni/solaris/EZFiber ; ls
Run the file install.sh.
To start EZ Fiber:
# cd /opt/jni/EZFiber/standalone ; ls
Run the file ezf ( ./ezf ).
The EZ Fiber GUI tool is now ready, and the devices in the SAN can be seen here just as in NT 4.0.
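A few quick checks can confirm that the devices are usable before you rely on them (the device paths are
illustrative - substitute the entries that actually appear on your system):

# mt -f /dev/rmt/0 status          (reports the status of the SCSI tape drive behind the bridge)
# tar cvf /dev/rmt/0 /export/home  (writes a small test backup to the tape)
# format                           (the SAN-attached disk should appear in the disk list)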

4.2.2. Installation in NT 4.0


Go to the Control Panel and then to SCSI Adapters.
There you'll find an Add Driver option; load the CD and add the driver.
Now run the EZ Fiber software and you can see the devices in the SAN (LUN numbers).


5. Emerging Technologies
5. Introduction to InfiniBand
InfiniBand is a new High Speed, Enterprise Wide, I/O technology. InfiniBand provides for high performance I/O
used in networked computing platforms and defines the requirements for creating an InfiniBand network. The
benefits of InfiniBand over existing technologies include more scale for growth, higher speed data transfer and
easy integration with legacy systems.
Today's bus-based architecture is limited in its ability to meet the needs of the evolving data center. The speed of
the Peripheral Component Interconnect (PCI) bus, the 'gateway' between external communications (the Internet)
and the CPU, has not increased in tandem with CPU speed and Internet traffic, creating a bottleneck. InfiniBand
(Infinite Bandwidth) promises to eliminate this bottleneck. InfiniBand, a switched-fabric architecture for I/O
systems and data centers, is an open standard that implements a network for I/O connectivity, thereby de-
coupling the I/O path from the computing elements of a configuration (the CPU and memory). InfiniBand allows
for improvements in network performance, processor efficiency, reliability, and scalability. Despite these
compelling benefits, the enormous investment in PCI-based architectures will make a phased implementation of
InfiniBand necessary.

Fig. 5 – Overview of InfiniBand System

5.1 InfiniBand Advantages


InfiniBand allows for greater network performance, processor efficiency, reliability, and scalability. The following
outlines how each of these are achieved.
Network Performance--InfiniBand has been designed to solve the problem of meeting I/O demand, which is
being generated by high-end computing concepts, such as clustering, fail-safe, and 24X7 availability. The
architecture is intended to minimize disruption of existing paradigms and business practices. The specification
creates three different performance classes--1X, 4X, and 12X. Each 1X link can transmit 2.5 Gbps in each direction.


Even in its slowest configuration, InfiniBand's throughput is on par with the fastest PCI bus, SCSI, Gigabit
Ethernet, and Fiber Channel technology. Thus, implementation of the highest-class InfiniBand architecture will
increase throughput by twelve times or more. InfiniBand enables systems to keep up with the ever-increasing
customer requirements for reliability, availability, and scalability, increased bandwidth, and support for Internet
technology.
Processor Efficiency--InfiniBand's channel adapters are intelligent. This allows them to offload much of the
communications processing from the operating systems and CPU. InfiniBand shifts the burden of processing I/O
from the server's CPU onto the InfiniBand network, freeing up the CPU for other processing.
Reliability--Reliability is superior to today's PCI model because data can take many paths across the InfiniBand
architecture. For example, a processor could have two ports; each port would connect to one of two switches. In
the event one of the links failed, all traffic could be rerouted over the other operating link. By building a network of
redundant pathways using multiple switches, reliability can be achieved.
Scalability--The center of the Internet data center shifts from the server to a switched fabric in an InfiniBand
architecture. Servers, networking, and storage all access a common fabric. Each of these devices can scale
independently based on the needs of the data center.

5.2 InfiniBand Architecture


According to the 1.0.a specifications, IBA is described as "... a first order interconnect technology for
interconnecting processor nodes and I/O nodes to form a system area network. The architecture is independent
of the host operating system (OS) and processor platform."
InfiniBand is much like a phone system--it is able to handle thousands of messages at any given time, as
opposed to a shared bus, which is able to handle only one message at a time.
The specification defines the various nodes in a subnet (a single InfiniBand network). Nodes of a subnet include:
 Routers--interconnect components for routing traffic across subnets or to non-InfiniBand networks.
 Switches--interconnect components for intra-subnet routing.
 Channel Adapters (CAs)--devices that terminate a link and execute transport-level functions for CPU nodes
and I/O nodes.
InfiniBand is an open standard that implements a network for I/O connectivity, thereby decoupling the I/O path
from the computing elements of a configuration (the CPU and memory). As illustrated in the Figure, InfiniBand
server elements consist of CPUs and memory. Together with the server, switch technology forms a network to
which I/O devices are attached. This core framework is otherwise known as an InfiniBand subnet.

5.3 InfiniBand Layers


The InfiniBand architecture is divided into multiple layers where each layer operates independently of one
another. As shown in Figure, InfiniBand is broken into the following layers: Physical, Link, Network, Transport,
and Upper Layers.


Fig. 5.3 – InfiniBand Layer Model

5.3.1. Physical Layer


InfiniBand is a comprehensive architecture that defines both electrical and mechanical characteristics for the
system. These include cables and receptacles for fiber and copper media; backplane connectors; and hot swap
characteristics. InfiniBand defines three link speeds at the physical layer, 1X, 4X, 12X. Each individual link is a
four wire serial differential connection (two wires in each direction) that provides a full duplex connection at 2.5
Gb/s.

Fig. 5.3.1 – Physical Layer


The data rates and pin counts for these links are shown in the table below.


InfiniBand Link Rates Table

InfiniBand Link   Signal Count   Signaling Rate   Data Rate   Fully Duplexed Data Rate
1X                4              2.5 Gb/s         2.0 Gb/s    4.0 Gb/s
4X                16             10 Gb/s          8 Gb/s      16.0 Gb/s
12X               48             30 Gb/s          24 Gb/s     48.0 Gb/s
Note: The bandwidth of an InfiniBand 1X link is 2.5 Gb/s. The actual raw data bandwidth is 2.0 Gb/s (data is
8b/10b encoded). Due to the link being bi-directional, the aggregate bandwidth with respect to a bus is 4 Gb/s.
Most products are multi-port designs where the aggregate system I/O bandwidth will be additive.
InfiniBand defines multiple connectors for “out of the box” communications. Both fiber and copper cable
connectors are defined as well as a backplane connector for rack-mounted systems.
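Spelling out the arithmetic behind the link-rate table and the note above:

2.5 Gb/s signaling rate x 8/10 (8b/10b encoding) = 2.0 Gb/s of data per direction
2.0 Gb/s x 2 directions                          = 4.0 Gb/s aggregate per 1X link
x 4 or x 12 parallel links                       = 16 Gb/s (4X) and 48 Gb/s (12X) aggregate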

5.3.2 Link Layer


The link layer (along with the transport layer) is the heart of the InfiniBand Architecture. The link layer
encompasses packet layout, point-to-point link operations, and switching within a local subnet.

Packets
There are two types of packets within the link layer, management and data packets. Management packets are
used for link configuration and maintenance. Device information, such as Virtual Lane support is determined with
management packets. Data packets carry up to 4k bytes of a transaction payload.

Switching
Within a subnet, packet forwarding and switching is handled at the link layer. All devices within a subnet have a
16 bit Local ID (LID) assigned by the Subnet Manager. All packets sent within a subnet use the LID for
addressing. Link Level switching forwards packets to the device specified by a Destination LID within a Local
Route Header (LRH) in the packet. The LRH is present in all packets.

QoS
QoS is supported by InfiniBand through Virtual Lanes (VL). These VLs are separate logical communication links
which share a single physical link. Each link can support up to 15 standard VLs and one management lane (VL
15). VL15 is the highest priority and VL0 is the lowest. Management packets use VL15 exclusively. Each device
must support a minimum of VL0 and VL15 while other VLs are optional. As a packet traverses the subnet, a
Service Level (SL) is defined to ensure its QoS level. Each link along a path can have a different VL, and the SL
provides each link a desired priority of communication. Each switch/router has a SL to VL mapping table that is
set by the subnet manager to keep the proper priority with the number of VLs supported on each link. Therefore,
the IBA can ensure end-to-end QoS through switches, routers and over the long haul.

Credit Based Flow Control


Flow control is used to manage data flow between two point-to-point links. Flow control is handled on a per VL
basis allowing separate virtual fabrics to maintain communication utilizing the same physical media. Each
receiving end of a link supplies credits to the sending device on the link to specify the amount of data that can be
received without loss of data. Credit passing between each device is managed by a dedicated link packet to
update the number of data packets the receiver can accept. Data is not transmitted unless the receiver advertises
credits indicating receive buffer space is available.

Data integrity
At the link level there are two CRCs per packet, Variant CRC (VCRC) and Invariant CRC (ICRC) that ensure data
integrity. The 16-bit VCRC includes all fields in the packet and is recalculated at each hop. The 32-bit ICRC
covers only the fields that do not change from hop to hop. The VCRC provides link level data integrity between
two hops and the ICRC provides end-to-end data integrity. In a protocol like Ethernet, which defines only a single
CRC, an error can be introduced within a device, which then recalculates the CRC. The check at the next hop
would reveal a valid CRC even though the data has been corrupted. InfiniBand includes the ICRC so that when a
bit error is introduced, the error will always be detected.


5.3.3 Network Layer


The network layer handles routing of packets from one subnet to another (within a subnet, the network layer is not
required). Packets that are sent between subnets contain a Global Route Header (GRH). The GRH contains the
128-bit IPv6 address for the source and destination of the packet.
The packets are forwarded between subnets through a router based on each device’s 64 bit globally unique ID
(GUID). The router modifies the LRH with the proper local address within each subnet. Therefore the last router in
the path replaces the LID in the LRH with the LID of the destination port. InfiniBand packets do not require the
network layer information and header overhead when used within a single subnet (which is a likely scenario for
InfiniBand system area networks).

5.3.4 Transport Layer


The transport layer is responsible for in-order packet delivery, partitioning, channel multiplexing and transport
services (reliable connection, reliable datagram, unreliable connection, unreliable datagram, raw datagram). The
transport layer also handles transaction data segmentation when sending and reassembly when receiving. Based
on the Maximum Transfer Unit (MTU) of the path, the transport layer divides the data into packets of the proper
size. The receiver reassembles the packets based on a Base Transport Header (BTH) that contains the
destination queue pair and packet sequence number. The receiver acknowledges the packets and the sender
receives the acknowledge and updates the completion queue with the status of the operation. There is a
significant improvement that the IBA offers at the transport layer: all of these functions are implemented in hardware.
InfiniBand specifies multiple transport services for data reliability; for a given queue pair, one transport service is used.

5.4 InfiniBand Technical Overview


InfiniBand is a switch-based point-to-point interconnect architecture developed for today’s systems with the ability
to scale for next generation system requirements. It operates both on the PCB as a component-to-component
interconnect as well as an “out of the box” chassis-to-chassis interconnect. Each individual link is based on a four-
wire 2.5 Gb/s bidirectional connection. The architecture defines a layered hardware protocol (Physical, Link,
Network, Transport Layers) as well as a software layer to manage initialization and the communication between
devices. Each link can support multiple transport services for reliability and multiple prioritized virtual
communication channels.
To manage the communication within a subnet, the architecture defines a communication management scheme
that is responsible for configuring and maintaining each of the InfiniBand elements. Management schemes are
defined for error reporting, link failover, chassis management as well as other services to ensure a solid
connection fabric.

5.4.1 InfiniBand Feature Set


 Layered Protocol - Physical, Link, Network, Transport, Upper Layers
 Packet Based Communication
 Quality of Service
 Three Link Speeds
o 1X - 2.5 Gb/s, 4 wire
o 4X - 10 Gb/s, 16 wire
o 12X - 30 Gb/s, 48 wire
 PCB, Copper and Fiber Cable Interconnect
 Subnet Management Protocol
 Remote DMA Support
 Multicast and Unicast Support
 Reliable Transport Methods - Message Queuing
 Communication Flow Control - Link Level and End to End


5.5 InfiniBand Elements


The InfiniBand architecture defines multiple devices for system communication: a channel adapter, switch, router,
and a subnet manager. Within a subnet, there must be at least one channel adapter for each end node and a
subnet manager to set up and maintain the link. All channel adapters and switches must contain a Subnet
Management Agent (SMA) required for handling communication with the subnet manager.

Fig. 5.5 – InfiniBand Architecture

5.5.1 Channel Adapters


A channel adapter connects InfiniBand to other devices. There are two types of channel adapters, a Host Channel
Adapter (HCA) and a Target Channel Adapter (TCA).
An HCA provides an interface to a host device and supports all software Verbs defined by InfiniBand. Verbs are
an abstract representation, which defines the required interface between the client software and the functions of
the HCA. Verbs do not specify the application-programming interface (API) for the operating system, but define
the operation for OS vendors to develop a usable API.
A TCA provides the connection to an I/O device from InfiniBand with a subset of features necessary for each
device’s specific operations.

5.5.2 Switch
Switches are the fundamental component of an InfiniBand fabric. A switch contains more than one InfiniBand port
and forwards packets from one of its ports to another based on the LID contained within the layer two Local Route
Header. Other than management packets, a switch does not consume or generate packets. Like a channel
adapter, switches are required to implement a SMA to respond to Subnet Management Packets. Switches can be
configured to forward either unicast packets (to a single location) or multicast packets (addressed to multiple
devices).


Fig. 5.5.2 – Overview of InfiniBand Switch Model

5.5.3 Router
InfiniBand routers forward packets from one subnet to another without consuming or generating packets. Unlike a
switch, a router reads the Global Route Header to forward the packet based on its IPv6 network layer address.
The router rebuilds each packet with the proper LID on the next subnet.

5.5.4 Subnet Manager


The subnet manager configures the local subnet and ensures its continued operation. There must be at least one
subnet manager present in the subnet to manage all switch and router setups and for subnet reconfiguration
when a link goes down or a new link comes up. The subnet manager can be within any of the devices on the
subnet. The Subnet Manager communicates to devices on the subnet through each dedicated SMA (required by
each InfiniBand component).
There can be multiple subnet managers residing in a subnet as long as only one is active. Non-active subnet
managers (Standby Subnet Managers) keep copies of the active subnet manager’s forwarding information and
verify that the active subnet manager is operational. If an active subnet manager goes down, a standby subnet
manager will take over responsibilities to ensure the fabric does not go down with it.

5.6 InfiniBand Support for the Virtual Interface Architecture (VIA)


The Virtual Interface Architecture is a distributed messaging technology that is both hardware independent and
compatible with current network interconnects. The architecture provides an API that can be utilized to provide
high-speed and low-latency communications between peers in clustered applications.
InfiniBand was developed with the VIA architecture in mind. InfiniBand offloads traffic control from the software
client through the use of execution queues. These queues, called work queues, are initiated by the client, and
then left for InfiniBand to manage. For each communication channel between devices, a Work Queue Pair (WQP
- send and receive queue) is assigned at each end. The client places a transaction into the work queue (Work
Queue Entry - WQE, pronounced “wookie”), which is then processed by the channel adapter from the send queue
and sent out to the remote device. When the remote device responds, the channel adapter returns status to the
client through a completion queue or event.
The client can post multiple WQEs, and the channel adapter’s hardware will handle each of the communication
requests. The channel adapter then generates a Completion Queue Entry (CQE) to provide status for each WQE
in the proper prioritized order. This allows the client to continue with other activities while the transactions are
being processed.


Fig. 5.6 – InfiniBand Virtual Interface Architecture

5.7 InterConnect of Choice For HPC and Data Center


InfiniBand is a high performance, switched fabric interconnect standard for servers. The technology is deployed
worldwide in server clusters ranging from two to thousands of nodes. From Prudential Financial to Sandia
National Laboratories, InfiniBand has become the standard interconnect of choice for HPC environments and is
quickly becoming the preferred standard in high performance, enterprise data centers.
Founded in 1999, the InfiniBand Trade Association (IBTA) is comprised of leading enterprise IT vendors including
Agilent, Dell, Hewlett-Packard, IBM, InfiniCon, Intel, Mellanox, Network Appliance, Oracle, Sun, Topspin and
Voltaire. The organization completed its first specification in October 2000. In the past 12 months all major
system vendors have announced InfiniBand products and hundreds of products have completed interoperability
testing and are commercially available.
More recently the IBTA announced an effort to extend the technology’s signaling rate beyond its current 30Gbps
limitation to 120Gbps, maintaining InfiniBand’s leadership as the high performance standard interconnect.

5.7.1 Beyond Servers


InfiniBand implementations are prominent in server clusters where high-bandwidth and low latency are key
requirements. In addition to server clusters, InfiniBand unifies the compute, communications and storage fabric in
the data center. Several InfiniBand blade server designs have been announced by major server vendors,
accelerating the proliferation of dense computing. InfiniBand draws on existing technologies to create a flexible,
scalable, reliable I/O architecture that interoperates with any server technology on the market. With industry-wide
adoption, InfiniBand continues to transform the entire computing market.
In addition to servers, InfiniBand enables both block and file based storage systems with a high performance
interface that directly connects to the server cluster. This unification of servers and storage ultimately delivers
higher performance with lower overall total cost of ownership by utilizing a single network for both clustering and
storage connectivity.
InfiniBand is also being used for embedded computing, an area in which proprietary components are being
replaced by higher-performance, standardized, off-the-shelf equivalents. InfiniBand benefits embedded
applications economically and technically, through its inherent resiliency, scalability and highly efficient
communications.
The InfiniBand Trade Association is led by a steering committee comprised of Agilent, Hewlett-Packard, IBM,
InfiniCon, Intel, Mellanox, Sun, Topspin and Voltaire. The first version of the specification for the technology was
completed in October 2000. Since then, more than 70 companies have announced the availability of InfiniBand products.


InfiniBand Architecture has created an opportunity for server design innovation, including dense server
blade implementations. InfiniBand Architecture draws on existing technologies to create a flexible, scalable,
reliable I/O architecture that interoperates with any server technology on the market. With broad adoption,
InfiniBand is transforming the industry.

5.7.2 A Single, Unified I/O Fabric


Ethernet. Fiber Channel. Ultra SCSI. Proprietary interconnects. Given that these and other I/O methods address
similar needs, and are being implemented in data centers worldwide, it's easy to wonder why so much I/O
technology innovation continues in this already-crowded arena. To understand, one needs only to look at the
complexity in interconnect configurations in today's Internet data centers. Servers are often connected to three or
four different networks redundantly, with enough wires and cables spilling out to give them the look of an
overflowing I/O pasta machine. By creating a unified fabric, InfiniBand takes I/O outside of the box and provides a
mechanism to share I/O interconnects among many servers. InfiniBand does not eliminate the need for other
interconnect technologies. Instead, it creates a more efficient way to connect storage and communications
networks and server clusters together, while delivering an I/O infrastructure that produces the efficiency, reliability
and scalability that data centers demand.

5.7.3 Identifying the Need


Before the emergence of personal computers (PCs), mainframes featured scalable performance and a "channel-
based" model that delivered a balance between processing power and I/O throughput. Data centers provided
reliable data processing in a world of predictable workloads. The primary concern of the data center manager was
system uptime, as failures led to loss of productivity. The industry transitioned from the model of mainframes and
terminals to the client server age, where intelligence is shared between intelligent PCs and racks of powerful
servers. With this transition came the advent of the "PC server," a concept that started with a network-connected
PC turned on its side. This has evolved into an ever-rising specialization of "N-tier" server implementations,
architectures that have applications distributed across a range of systems. The heart of the data center, where
mission critical applications live, still relies on servers featuring the proprietary interconnects first seen in early
mainframe systems. Today, data center managers are looking for more functionality from standard interconnect
server models.

5.7.4 Network Volume Expands


The Internet's impact on the industry has been as big as the PC's, fundamentally changing the way CIOs manage
their compute complexes. In a world where eighty percent of computing historically resided locally on a PC,
Internet traffic and the rise of applications driven by Internet connectivity have created a model where more than
eighty percent of computing is done over the network. This has created a wave of innovation in Ethernet local
area network (LAN) connectivity, moving 10 Mbps LAN infrastructures to speeds of up to 1 Gbps. The first wave
of Internet connectivity also led to investment of trillions of dollars in communications infrastructure, greatly
expanding the ability to transfer large amounts of data anywhere in the world. This has created the foundation for
an explosion in applications addressing virtually every aspect of human interaction. It also creates unique
challenges for the data center, the "mission control" of information processing. The world of predictable workloads
has now been turned into an increasingly unpredictable environment. Once, downtime meant only a loss in
productivity. Now an array of other factors, such as decreased consumer confidence and lost sales, complicate
the mix. Business success depends on data center performance and flexibility today, and this reliance will only
increase as firms escalate their dependence on connectivity for business results.

5.7.5 Trend to Serial I/O


Traditionally, servers have relied on shared bus architecture for I/O connectivity, starting with the industry
standard architecture (ISA) bus. For the past decade, servers have also utilized myriad iterations of the peripheral
component interconnect (PCI) bus. Bus architectures have proven to be an efficient transport for traffic in and out
of a server chassis, but as the cost of silicon has decreased, serial I/O alternatives have become more attractive.
Serial I/O provides point-to-point connectivity, a "siliconization" of I/O resources, and increased reliability and
performance. As serial I/O has become a financially viable alternative, new opportunities have been created for
the industry to address the reliability and scalability needs of the data center.
To meet the demands of changing data center environments, companies have new server platform requirements:
 Increased platform density for scaling more performance in a defined physical space
 Servers that can scale I/O and processing power independently


 Racks of servers that can be managed as one unit
 Servers that can share I/O resources
 True "plug-and-play" I/O connectivity

5.7.6 InfiniBand: Adrenaline for Data Centers


InfiniBand answers these needs and meets the increasing demands of the enterprise data center. The
architecture is grounded in the fundamental principles of channel-based I/O, the very I/O model favored by
mainframe computers. InfiniBand channels are created by attaching host channel adapters and target channel
adapters through InfiniBand switches. Host channel adapters are I/O engines located within a server. Target
channel adapters enable remote storage and network connectivity into the InfiniBand fabric. This interconnect
infrastructure is called a "fabric" based on the way input and output connections are constructed between host
and targets. All InfiniBand connections are created with InfiniBand links, which use either copper wire or fiber optics
for transmission. Seemingly simple, this design creates a new way of connecting servers in a data center. With
InfiniBand, new server deployment strategies become possible.

5.7.7 Independent Scaling of Processing and Shared I/O


One example of InfiniBand’s impact on server design is the ability to design a server with I/O removed from the
server chassis. This enables independent scaling of processing and I/O capacity, creating more flexibility for data
center managers. Unlike today's servers, which contain a defined number of I/O connections per box, InfiniBand
servers can share I/O resources across the fabric. This method allows a data center manager to add processing
performance when required, without the need to add more I/O capacity (the converse is also true). Shared I/O
delivers other benefits as well. As data center managers upgrade and add storage and networking connectivity to
keep up with traffic demand, there's no need to open every server box to add network interface cards (NICs) or
Fiber channel host bus adapters (HBAs). Instead, I/O connectivity can be added to the remote side of the fabric
through target channel adapters and shared among many servers. This saves uptime, decreases technician time
for data center upgrades and expansion, and provides a new model for managing interconnects. The ability to
share I/O resources also has an impact on balancing the performance requirements for I/O connectivity into
servers. As networking connections become increasingly powerful, data pipes that could saturate one server
can be shared among many servers to effectively balance server requirements. The result is a more efficient use
of computer infrastructure and a decrease in the cost of deployment of fast interconnects to servers.

5.7.8 Raising Server Density, Reducing Size


Removal of I/O from the server chassis also has a profound impact on server density (the amount of processing
power delivered in a defined physical space). As servers transition into rack-mounted configurations for easy
deployment and management, floor space is at a premium. Internet service providers (ISPs) and application
service providers (ASPs) were among the first companies faced with the problem of finding enough room to
house server racks to keep up with processing demand. "Internet hotels"– buildings that house little more than
racks of servers – are commonplace. As the impact of Internet computing grows, the density requirements of
servers become more widespread.
By removing I/O from the server chassis, server designers can fit more processing power into the same physical
space. Using InfiniBand, server manufacturers have recently produced sub-1U server designs (a U is a
measurement of rack height equating to 1.75 inches). More importantly, compute density–the amount of
processing power per U–increases through the expansion of available space for processors inside a server.
Additionally, the new modular designs improve serviceability and provide for faster provisioning of incremental
resources like CPU modules or I/O expansion.
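
As a rough, back-of-the-envelope illustration of what higher density means in practice (the rack size and server heights below are assumptions, not figures from this paper), halving the height of each server doubles the number of servers a rack can hold:

# Back-of-the-envelope rack density: assumed 42U rack, 1U = 1.75 inches (from the text).
U_INCHES = 1.75
RACK_UNITS = 42   # a common full-height rack (assumption)

for height_u in (2, 1, 0.5):            # 2U, 1U, and a "sub-1U" (half-U) design
    servers = int(RACK_UNITS / height_u)
    print(f"{height_u}U servers: {servers} per rack "
          f"({RACK_UNITS * U_INCHES:.1f} inches of mounting space)")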

5.7.9 Clustering and Increased Performance


Data center performance is now measured in the performance of individual servers. With InfiniBand, this model
shifts from individual server capability to the aggregate performance of the fabric. InfiniBand enables the
clustering and management of multiple servers as one entity. Performance scales by adding additional boxes,
without many of the complexities of traditional clustering. Even though more systems can be added, the cluster
can be managed as one unit. As processing requirements increase, additional power can be added to the cluster
in the form of another server or "blade." Today's server clusters rely on proprietary interconnects to effectively
manage the complex nature of clustering traffic. With InfiniBand, server clusters can be configured for the first
time with an industry standard I/O interconnect, creating an opportunity for clustered servers to become
ubiquitous in data center deployments. With the ability to effectively balance processing and I/O performance
through connectivity to the InfiniBand fabric, data center managers can react more quickly to fluctuations in traffic
patterns, upswings in data center processing demand, and the need to retool to meet changing business needs.
The net result is a more agile data center with the inherent flexibility to tune performance to an ever-changing
landscape.

5.7.10 Enhanced Reliability


The returns on investment associated with InfiniBand go beyond enhanced performance, shared I/O and server
density improvements. Since the advent of mainframe computing, the most important data center requirement has
been the resiliency of the compute complex. As this requirement increases with the advent of Internet
communications, a more reliable server platform design is required. InfiniBand increases server reliability in a
multitude of ways.
Channel-Based Architecture
Because InfiniBand is grounded on a channel-based I/O model, connections between fabric nodes are inherently more reliable than conventional I/O technologies.

Message-Passing Structure
The InfiniBand protocol utilizes an efficient message-passing structure to transfer data. This moves away from the traditional "load/store" model used by the majority of today's systems and creates a more efficient and reliable transfer of data.

Natural Redundancy
InfiniBand fabrics are constructed with multiple levels of redundancy in mind. Nodes can be attached to the fabric over multiple links for link redundancy: if a link goes down, the fault is limited to that link, and the additional link keeps the node connected to the fabric. Creating multiple paths through the fabric provides intra-fabric redundancy: if one path fails, traffic can be rerouted to the final endpoint destination. InfiniBand also supports redundant fabrics for the ultimate in fabric reliability; with multiple redundant fabrics, an entire fabric can fail without creating data center downtime.
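
The rerouting behaviour described above can be sketched as a tiny graph search. The topology, node names, and the reachable() helper below are purely illustrative assumptions, not part of the InfiniBand specification; the point is only that with two links and two switches, losing one link still leaves a path from host to target.

# Toy model of intra-fabric redundancy: with two links and two switches,
# losing one link still leaves a path from the host to the storage target.
# The topology and names are illustrative, not from the InfiniBand spec.
from collections import deque

fabric = {                                  # adjacency list: node -> neighbours
    "host_hca":    {"switch_a", "switch_b"},
    "switch_a":    {"host_hca", "switch_b", "storage_tca"},
    "switch_b":    {"host_hca", "switch_a", "storage_tca"},
    "storage_tca": {"switch_a", "switch_b"},
}

def reachable(graph, src, dst, failed_links=()):
    """Breadth-first search that ignores links listed as failed (pairs of nodes)."""
    down = {frozenset(link) for link in failed_links}
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node]:
            if nxt not in seen and frozenset((node, nxt)) not in down:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(fabric, "host_hca", "storage_tca"))                              # True
print(reachable(fabric, "host_hca", "storage_tca", [("host_hca", "switch_a")]))  # still True, rerouted via switch_b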

5.7.11 End User Benefit


With more than 200 worldwide deployments in development or completed, customers are quickly garnering a
return on investment. In 2003 Prudential Insurance replaced proprietary fabrics with InfiniBand and reduced its
costs by an order of magnitude over previous UNIX solutions. Burlington Coat Factory doubled the performance
of its Oracle database by replacing large-scale UNIX systems with InfiniBand and commercial off-the-shelf
servers.

5.7.12 Industry-Wide Effort


Building a successful computer industry standard takes collaboration and cooperation. Through the InfiniBand
Trade Association, some of the industry's most competitive companies are working together to further enhance
InfiniBand architecture. They are leading the shift to a fabric-based I/O architecture, and know InfiniBand benefits
everyone in the server world.

5.8. Relationship between InfiniBand and Fiber Channel or Gigabit Ethernet

InfiniBand Architecture is complementary to Fiber Channel and Gigabit Ethernet. InfiniBand architecture is
uniquely positioned to become the I/O interconnect of choice for data center implementations. Networks such as
Ethernet and Fiber Channel are expected to connect into the edge of the InfiniBand fabric and benefit from better
access to InfiniBand architecture enabled compute resources. This will enable IT managers to better balance I/O
and processing resources within an InfiniBand fabric.


6. Enterprise Backup Storage


6. Introduction to Enterprise Backup Storage
The explosive growth of data in the world around us is straining today's computing environments. Application
advancements, hard disk capacity doublings, network bandwidth, and connectivity improvements are all
contributing to the management challenges felt by today’s information technology managers. This growth
presents particularly interesting challenges to tape backup technology, since protecting the data on today’s rapidly
growing storage subsystems is an absolute requirement.
Data is typically thought of and measured by its physical capacity. This physical capacity has fueled tremendous
growth in the primary storage industry (hard drives and hard-drive-based subsystems). The successes of large
primary disk suppliers such as IBM, EMC and Network Appliance have been the result. One often overlooked
piece of the storage boom is the effect that this explosion has had on the market for secondary storage, or
removable media storage.
Of the removable storage types, tape has continued to evolve through the years, and it is still the hands-down
leader in the cost-for-capacity category. This fact has created an ever growing need for larger and higher-
performance tape drives and automation subsystems. To effectively select a tape technology in today’s crowded
tape marketplace, it is important for end users to understand the underlying technology and some of the history of
tape. Then, end users must apply some common feature-and-benefit analysis and some individualized needs
analysis to ensure that a tape technology choice does not leave the business hanging as time goes by.
Roadmaps end, technology improvements can be delayed or never realized, companies falter, and new
technologies can be introduced which out-date the older technologies. Any of these happenings can leave an end
user in a precarious position when they affect the tape technology in which the end user has invested.
This white paper provides a simple and understandable look at today's most prevalent mid-range tape
technologies. It looks at the history and evolution of each technology and examines how each technology
accomplishes the advances and features necessary to compete in the mid-range tape marketplace today. This
paper does not discuss a number of very expensive tape technologies, as they are typically not cost competitive
in the mid-range space. By the same token, many low-end tape technologies are excluded from discussion,
primarily because of the types and sizes of the customers that they target. All of the midrange tape technologies
studied in this paper are available in stand-alone and automated library offerings. However, regardless of the
offering, the tape drive technology specifications remain the same.

6.1 Recording Methods


6.1.1 Linear Serpentine Recording

Fig. 6.1.1.1 - Linear Serpentine Recording

All DLT and LTO tape products write linear serpentine data tracks parallel to the edge of the tape (Figure 6.1.1.1). In
these technologies, half-inch tape moves linearly past a head assembly that houses the carefully aligned read and
write heads. To create the serpentine pattern on the tape, the head assembly moves up or down to precise
positions at the ends of the tape. Once the head assembly is in position, the tape motion is resumed and another
data track is written parallel to and in between the previously written tracks. Both DLT and LTO technologies
position the read heads slightly behind the write heads to accomplish a read-while-write-verify. Older DLT and
LTO technologies use the edge of the tape or a pre-written servo-track as a tracking reference during read and
write operations. The new Super DLT technology, however, uses an optical assist servo technology, called Pivotal
Optical Servo, to align its heads to the proper tracks.
The Use of Azimuth to Increase Linear Capacity
Azimuth is defined as an angle measured in degrees, going clockwise from a reference point. In many tape and disk applications, azimuth has long been used to increase storage densities. When azimuth recording is used, tracks can be pushed closer together on the tape, eliminating the guard bands that used to be required between adjacent tracks. The guard bands were eliminated, for example, in DLT's transition from the DLT 4000 to the DLT 7000 and DLT 8000 technologies.
The DLT 4000 used normal linear recording, in which the head assembly operated in one position perpendicular
to the tape, writing data blocks in a true linear pattern. The DLT 7000 and DLT 8000 incorporated a modified
linear serpentine method called Symmetrical Phase Recording (SPR). The SPR method allows the head
assembly to rotate into three different positions, thereby allowing data blocks to be written in a herringbone or
SPR pattern, as shown in Figure 6.1.1.2 below. This method yields a higher track density and higher data capacity,
eliminating the wasted space for guard bands. A third vertical head position (zero azimuth) allows the DLT 7000
and DLT 8000 drives to read DLT 4000 tapes.

Fig. 6.1.1.2 - Logical diagram of normal Linear and SPR Linear Recording.

6.1.2 Helical Scan


Sony AIT and Exabyte Mammoth employ a helical scan recording method in which data tracks are written at an
angle with respect to the edge of an 8 mm tape. This is achieved by wrapping magnetic tape partially around an
angled, rotating drum. The read and write heads are precisely aligned in the drum and protrude very slightly from
its smooth outer surface. As the tape is moved past the rotating drum, the heads create an angled data track on
the tape.

Fig. 6.1.2 - Helical-Scan Recording

Read heads are positioned just behind the write heads, allowing read-while-write verify, which ensures the data
integrity of each data stripe. A special servo head on the drum and track on the tape are used for precise tracking
during subsequent read operations. All helical-scan tape drives use azimuth to maximize the use of the tape
media. Rather than moving the head assembly itself like linear devices do, helical recording creates azimuth by
mounting the heads at angles with respect to each other.


6.2 Tape Drive Performance


6.2.1 Tape Loading and Cartridge Handling
In all tape drive systems, the tape must be pulled from the cartridge, guided through the tape path, and then
pulled across the read-write head assembly. Linear and helical tape technologies differ significantly in their
methods of tape handling and loading, but in every case, tapes must be handled properly to avoid high error
rates, tape damage, and, in the worst case, loss of data.

6.2.2 Linear Drive Mechanisms


When the tape cartridge is inserted into a linear tape drive, a load mechanism inside the drive engages with a
positioning tab at the beginning of the tape, which pulls the tape out of the cartridge and onto a take-up hub inside
the drive compartment. As the read or write operation is performed, the tape is spooled between the take-up hub
inside the drive and the cartridge supply reel inside the media cartridge. This is one reason why linear tape drives
are much larger than helical scan drives, which employ a dual-spool cartridge design.
It is very important that linear tape cartridges not be dropped or roughly handled because the tape inside may
slacken or shift on the spool. This may cause problems with loading the tape or may cause edge damage on the
media, since the leader may fail to engage when inserted into the tape drive. If this leader-latching problem
occurs, the tape cartridge is typically rendered useless, and the drive may even require repair, which is
particularly problematic in automated tape library environments.

Fig. 6.2.2 - Diagram of Linear Tape Drive

6.2.3 Helical-Scan Drive Mechanisms


Sony AIT and Exabyte Mammoth drives employ a more common method of tape loading. When the tape cartridge
is inserted, drive motors engage the cartridge hubs and work with tape loading guides to position tape into the
tape path. As the read or write operation is performed, the tape is spooled from one cartridge hub to the other.
Because of this, Sony AIT and Mammoth tape cartridges are much less sensitive to rough handling and dropping.
For best results, users should follow the manufacturer’s recommendations for storage and handling of data
cartridges.

Fig. 6.2.3 - Diagram of a Helical-Scan Drive


6.2.4 Tape Tension and Speed Control


In all tape drives, the tape must be precisely moved through the tape path and across the heads during read or
write operations. Also, the relative speed between the tape and the heads must be precisely controlled.
AIT and pre-Mammoth Exabyte tape drives employ traditional servo-driven capstan-and-pinch-roller designs to
control tape speed. These designs use a capstan, or a controlled motorized cylinder, to pinch the tape against a
freewheeling roller, pulling the tape through the tape path at a regulated speed. The take-up and supply hubs are
used to spool and unwind the tape, but the precise tape speed is controlled at the capstan point.
Exabyte Mammoth drives employ an entirely new capstan-less design in which the tape speed is completely
controlled by closed-loop, servo-driven take-up and supply hubs. The speed of the hubs is engineered to be
constantly and precisely varied as the diameter of the two spools changes. For instance, the take-up hub speed
must decrease steadily as the tape spool gets larger in order to maintain a constant tape speed across the heads.
The goal of the capstan-less design is to reduce tape stress caused by the capstan-and-pinch-roller system.
However, Mammoth field studies have not proven this method to significantly improve reliability.
Linear recording technology controls tape speed with a system that is very similar to that of Mammoth tape drives.
Tape speed is controlled using a servo mechanism and pick-up and take-up spools. These linear mechanisms
employ a very tight and positive control of the spool-to-deck mechanism, which forces the spool gears into the
corresponding deck gears. In all tape handling systems, tape tension is required to ensure that the tape is held
firmly against the head assembly as it traverses the tape path. This tension leads to tape-head wear. In general,
the tape tension in linear drives is over twice that of helical scan drives. However, other factors such as head
material, media composition, and cleaning practices will also have an effect on tape-head wear.
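
The hub-speed behaviour described above follows from the simple relation v = omega x r: to keep the linear tape speed v constant, the hub's rotational speed must fall as the wound radius r grows. A minimal sketch with made-up numbers (not drive specifications):

# Keeping linear tape speed constant from a reel: v = omega * r, so omega = v / r.
# The speed and radii below are illustrative assumptions, not drive specifications.
import math

TAPE_SPEED_IPS = 150.0                  # target linear tape speed, inches per second

for radius_in in (0.5, 1.0, 1.5):       # wound-tape radius as the spool fills
    rpm = (TAPE_SPEED_IPS / radius_in) * 60 / (2 * math.pi)
    print(f"radius {radius_in:.1f} in -> about {rpm:.0f} rpm to hold {TAPE_SPEED_IPS:.0f} ips")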

6.2.5 Tape Speed and Stress


Linear drives move tape at a relatively fast rate, typically over 150 inches per second (ips). The helical scan drives
use a much slower tape speed of less than one ips through the tape path and past the rapidly rotating drum
assembly. Interestingly, the relative tape speed is nearly equal in both helical-scan and linear technologies.
Tape stress is a function of many system variables, some of which include tape speed, tape path control
mechanisms (usually guide rollers), capstan pressure, and media contamination. It is important to understand
how each drive technology minimizes this tape stress. Linear tape drives utilize a straighter tape path but a much
higher tape speed, making the guide-roller system critical to minimize edge wear on the media. On the other
hand, helical scan drives use a much slower tape speed but a more complex tape path.

6.2.6 Data Streaming and Start/Stop Motion


A tape drive’s ability to continuously read or write data, or “stream” data, is a key performance and reliability
differentiator. A drive’s performance will suffer dramatically if the drive is not supplied with data at a rate sufficient
to keep it streaming. In cases where these conditions are not met, the drive will need to stop the forward tape
motion, reverse the position, bring the tape back to speed, and then restart the write operation.
Linear technologies, with higher tape speeds, do not operate well in start-stop mode. Each start-stop operation
requires the mechanism to stop the tape from greater than 150 ips, rewind well past the last data written, ramp
the speed back to greater than 150 ips, and then resume writing. The amount of time spent performing a stop-
rewind-start motion dramatically impacts the overall tape system’s throughput. In an attempt to minimize this,
high-performance linear technologies employ powerful reel motor systems. The reel-motor system results in linear
drives having larger physical footprints and higher power consumption ratings than helical-scan devices.
Helical-scan drives, in addition to being smaller and using less energy, can perform the stop-rewind-start
sequences very quickly. This is owing to their slower tape speeds and their constantly rotating drum mechanisms.
While continued stop-start motion is detrimental to any drive, the reliability impact is greater on devices with
higher tape speeds because of the mechanical stress placed on the system and the media.
All four of these tape drive technologies use data buffering techniques to minimize the need to perform stop-start
activities. Linear technologies must use larger buffers since the performance and reliability penalty for a stop-start
operation is so much higher than with helical-scan products. Mammoth and AIT drives will typically out-perform
DLT and LTO drives in applications where drive streaming is not possible.
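
A rough way to see why buffer size and host data rate matter: while the host is slower than the drive, the buffer drains at the difference between the two rates, and once it empties the drive must stop and reposition. The sketch below uses illustrative figures, not vendor specifications.

# How long a drive's buffer can keep it streaming when the host is slower than
# the drive. All figures are illustrative assumptions, not vendor specifications.
def seconds_until_buffer_empties(buffer_mb, drive_mb_s, host_mb_s):
    drain_rate = drive_mb_s - host_mb_s          # MB/s by which the buffer shrinks
    return float("inf") if drain_rate <= 0 else buffer_mb / drain_rate

print(seconds_until_buffer_empties(buffer_mb=8,  drive_mb_s=12, host_mb_s=8))   # 2.0 s, then a stop-rewind-start
print(seconds_until_buffer_empties(buffer_mb=64, drive_mb_s=12, host_mb_s=8))   # 16.0 s, a larger buffer buys time
print(seconds_until_buffer_empties(buffer_mb=8,  drive_mb_s=12, host_mb_s=14))  # inf, host keeps the drive streaming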

6.2.7 Media Load Time and File Access Time


Media load and file access times are important factors to consider as per-tape capacities rise or when tape drives
are integrated into robotic tape libraries. Media load time is defined as the amount of time between cartridge
insertion and the drive becoming ready for host system commands. File access time is defined as the time
between when the drive receives a host-system command to read a file and the time when the drive begins to
read the data.
File access times are typically expressed as averages, since the requested file might be located in the middle of
the tape or at either end. Times are usually specified as the time required to reach the middle. Drive vendors
typically state specifications for both media load and file access. The specifications for the four mid-range tape
technologies are shown in the following table.

Media Load and File Access Time *


Tape Drive Media Load Time Average File Access Time
Exabyte Mammoth 20 seconds 55 seconds
Exabyte Mammoth-2 17 seconds 60 seconds
Quantum DLT 8000 40 seconds 60 seconds
Quantum Super DLT 40 seconds 70 seconds
HP LTO Surestore Ultrium 230 15 seconds 71 seconds
IBM LTO 3580 Ultrium 15 seconds 65 seconds
Seagate Viper 200 LTO Ultrium 10 seconds 76 seconds
Sony AIT-1 10 seconds 27 seconds
Sony AIT-2 10 seconds 27 seconds
Sony AIT-3 10 seconds 27 seconds

* Times obtained from drive manufacturers’ published information.
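
For a restore that begins with a cartridge mount, a useful first-order figure is the media load time plus the average file access time. The sketch below simply adds the two columns of the table above for a few of the drives.

# Approximate "time to first byte" after a cartridge mount:
# media load time + average file access time, taken from the table above.
drives = {
    "Exabyte Mammoth":              (20, 55),
    "Quantum DLT 8000":             (40, 60),
    "HP LTO Surestore Ultrium 230": (15, 71),
    "Sony AIT-3":                   (10, 27),
}

for name, (load_s, access_s) in drives.items():
    print(f"{name}: about {load_s + access_s} seconds from mount to data")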


The Sony AIT drives offer a much faster media load time and file access time, making these technologies an
obvious choice for applications requiring fast data retrieval. The AIT time advantage is due in part to the unique
Memory In Cassette (MIC) feature, which consists of an electrically erasable programmable read-only memory
chip, called Flash EEPROM, built into the Sony AME tape cartridge. The flash memory stores information
previously stored in a hidden file written before a tape’s Logical Beginning Of Tape (LBOT). Through the use of
the MIC feature, Sony’s AIT drives reduce wear and tear on mechanical components during the initial load
process and offer faster file access. Similar cartridge-memory technology is now used in today's LTO tape drives.

6.2.8 Data Capacity


Data capacity is measured by the amount of data that can be recorded on a single tape cartridge. Tape
manufacturers maximize capacity by increasing the bit density on a given area of tape or by increasing the length
of the tape in the cartridge. Hardware data compression is also used to increase capacity, and any valid tape
technology comparison must show both native and compressed values. Each manufacturer uses a different data
compression algorithm resulting in different compression ratios:

Data Compression *
Tape Type Algorithm Ratio
Exabyte Mammoth IDRC 2:1
Exabyte Mammoth-2 ALDC 2.5:1
Quantum DLT DLZ 2:1
Quantum Super DLT DLZ 2:1
HP/IBM/Seagate LTO ALDC 2:1
Sony AIT ALDC 2.6:1


* Data compression obtained from drive manufacturers’ published information.


Native and compressed capacities for each type of tape are shown in the table below. The comparisons made
here are based on the maximum tape lengths available at the time of this writing.

Capacity
Media Type Native Capacity Compressed Capacity
Exabyte Mammoth 20 GB 40 GB
Exabyte Mammoth-2 60 GB 150 GB
Quantum DLT 8000 40 GB 80 GB
Quantum Super DLT 110 GB 220 GB
HP LTO Surestore Ultrium 230 100 GB 200 GB
IBM LTO 3580 Ultrium 100 GB 200 GB
Seagate LTO Viper 200 Ultrium 100 GB 200 GB
Sony AIT-1 (Extended Length) 35 GB 91 GB
Sony AIT-2 50 GB 130 GB
Sony AIT-3 100 GB 260 GB
* Tape capacities obtained from drive manufacturers’ published information.
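
The compressed column is just the native capacity scaled by the vendor's assumed compression ratio from the earlier table (compressed is approximately native times ratio); a quick check against three of the rows above:

# Compressed capacity = native capacity * assumed compression ratio
# (native capacities and ratios taken from the two tables above).
def compressed_gb(native_gb, ratio):
    return native_gb * ratio

print(compressed_gb(100, 2.6))   # Sony AIT-3:  100 GB * 2.6:1 -> 260 GB
print(compressed_gb(110, 2.0))   # Super DLT:   110 GB * 2:1   -> 220 GB
print(compressed_gb(60, 2.5))    # Mammoth-2:    60 GB * 2.5:1 -> 150 GB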

6.2.9 Data Transfer Rate


Data transfer rate is defined as the speed at which data is written to tape from the drive’s internal buffer. This is
usually measured in megabytes per second (MB/sec.). If the data transfer from the host system to the drive is
significantly slower than the drive’s transfer rate (after compression), a great deal of start-stop tape motion will
occur while the drive waits for more data. Start-stop activity, sometimes referred to as "shoe shining" because the
tape goes back and forth across the head, will adversely impact the drive's throughput performance and can
dramatically increase wear on the drive’s mechanical subsystem. Therefore, it is important to keep the tape
drive’s cache buffer supplied with data for drive streaming. Buffer sizes are selected by the manufacturers to
minimize start-stop activities. However, larger buffer sizes cannot eliminate start-stops in situations where there
exists a performance mismatch between the host system and the drive.

Data Transfer Rates


Drive Type Native Compressed
Exabyte Mammoth 3 MB/sec. 6 MB/sec.
Exabyte Mammoth-2 12 MB/sec. 30 MB/sec.
Quantum DLT 8000 6 MB/sec. 12 MB/sec.
Quantum Super DLT 11 MB/sec. 22 MB/sec.
HP LTO Surestore Ultrium 230 15 MB/sec. 30 MB/sec.
IBM LTO 3580 Ultrium 15 MB/sec. 30 MB/sec.
Seagate LTO Viper 200 Ultrium 16 MB/sec. 32 MB/sec.
Sony AIT-1 (Extended Length) 3 MB/sec. 7.8 MB/sec.
Sony AIT-2 6 MB/sec. 15.6 MB/sec.
Sony AIT-3 12 MB/sec. 31.2 MB/sec.

* Data transfer rates obtained from drive manufacturers’ published information.
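
Native capacity and native transfer rate together give a lower bound on how long it takes to fill one cartridge, assuming the host keeps the drive streaming the whole time. A quick calculation using values from the tables above:

# Minimum time to fill one cartridge = native capacity / native transfer rate,
# assuming continuous streaming (values taken from the tables above).
def hours_to_fill(native_gb, native_mb_per_s):
    return native_gb * 1000 / native_mb_per_s / 3600   # GB -> MB, then seconds -> hours

print(f"Super DLT (110 GB @ 11 MB/s):   {hours_to_fill(110, 11):.1f} h")   # ~2.8 h
print(f"LTO Ultrium (100 GB @ 15 MB/s): {hours_to_fill(100, 15):.1f} h")   # ~1.9 h
print(f"Sony AIT-3 (100 GB @ 12 MB/s):  {hours_to_fill(100, 12):.1f} h")   # ~2.3 h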

6.3 Reliability
In general, tape drive reliability can mean many things to many people. Tape drive vendors have notoriously
slanted tape technology specifications in order to lure users to their technology. Following are two sets
of reliability specifications often used in mid-range tape technology competition.

6.3.1 Mean Time between Failure (MTBF)


One method of measuring tape drive reliability is specified by Mean Time Between Failure (MTBF). This is a
statistical value relating to how long, on average, the drive mechanism will operate without failure. In reality, drive
reliability varies greatly and cannot be accurately predicted from a manufacturer’s MTBF specification.
Environmental conditions, cleaning frequency, and duty cycle can significantly affect actual drive reliability. In
addition, manufacturers usually do not include head life in the MTBF specification, and their duty cycle
assumptions vary. Tape drive manufacturers often add a disclaimer to the MTBF specification that the
figures should only be used for general comparison purposes. Head life specifications (in hours) are subject to
some of the same interpretation problems as MTBF, but when combined with other reliability specifications, they
offer a good comparison of performance in high duty-cycle environments. The table below shows how reliability
specifications compare.

MTBF and Head Life Statistics *


Tape Drive MTBF Head Life
Exabyte Mammoth 250,000 hours @ 20% duty cycle 30,000 hours
Exabyte Mammoth-2 300,000 hours @ 20% duty cycle 50,000 hours
Quantum DLT 8000 250,000 hours @ 100% duty cycle 50,000 hours
Quantum Super DLT 250,000 hours @ 100% duty cycle 30,000 hours
HP LTO Surestore Ultrium 230 250,000 hours @ 100% duty cycle **
IBM LTO 3580 Ultrium 250,000 hours @ 100% duty cycle 60,000 hours
Seagate LTO Viper 200 Ultrium 250,000 hours @ 100% duty cycle **
Sony AIT-1 250,000 hours @ 40% duty cycle 50,000 hours
Sony AIT-2 250,000 hours @ 40% duty cycle 50,000 hours
Sony AIT-3 400,000 hours @ 100% duty cycle 50,000 hours

* Rates obtained from drive manufacturers’ published information.
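
Because the quoted MTBF figures assume different duty cycles, they are not directly comparable. One rough way to normalize them (an assumption for illustration, not an industry-standard method) is to estimate expected failures per year as powered-on hours multiplied by duty cycle and divided by MTBF:

# Rough duty-cycle normalization of the MTBF figures above (illustrative method):
# expected failures per year ~= (hours in a year * duty cycle) / MTBF.
HOURS_PER_YEAR = 8760

def failures_per_year(mtbf_hours, duty_cycle):
    return HOURS_PER_YEAR * duty_cycle / mtbf_hours

print(f"{failures_per_year(250_000, 0.20):.4f}")   # 250,000 h @ 20% duty  -> ~0.007 failures/yr
print(f"{failures_per_year(250_000, 1.00):.4f}")   # 250,000 h @ 100% duty -> ~0.035 failures/yr
print(f"{failures_per_year(400_000, 1.00):.4f}")   # 400,000 h @ 100% duty -> ~0.022 failures/yr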

6.3.2 Annual Failure Rate (AFR)


An excellent real-world indicator of a drive’s reliability is the Annual Failure Rate (AFR) of a drive technology’s
field population. As in MTBF calculations, the user’s results are averaged regardless of environmental conditions,
cleaning frequency, and duty cycle, which can significantly affect actual drive reliability. Therefore, these numbers
should be used only for general comparison purposes. Vendors calculate AFR numbers based on how many
failed drives they have returned to the factory from the installed base. The vendor then averages those numbers
over each year for that tape technology. LTO tape technologies are so new that they have not yet been able to
produce any AFR data.

Tape Drive Annual Failure Rates


Drive Type Approximate AFR
Exabyte Mammoth 2.5%
Quantum DLT 4.5%
HP/IBM/Seagate LTO **
Sony AIT 1.5%

** Technology too new to be quantified.

6.3.3 Data Integrity


Data integrity is specified as the bit error rate (BER), which gives the number of permanent errors per total
number of bits written. Mammoth, DLT, AIT, and LTO drives all incorporate a read-while-write-verify error
detection, a cyclic redundancy check (CRC), and an error correction code (ECC) algorithm to ensure a BER of
1 in 10^17, that is, one permanent error per 10^17 (100 quadrillion) bits read. The Sony AIT drive is the only product that incorporates a third-level error
correction code (in addition to first and second level) for increased data integrity.
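
A 1-in-10^17 bit error rate corresponds to an enormous volume of data per expected permanent error; a quick conversion (decimal units, illustration only):

# Data written per expected permanent error at a bit error rate of 1 in 10**17.
BITS_PER_ERROR = 10**17

terabytes = BITS_PER_ERROR / 8 / 10**12     # bits -> bytes -> decimal terabytes
print(f"about {terabytes:,.0f} TB ({terabytes / 1000:.1f} PB) per expected permanent error")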


6.4 Media Types


Two basic types of media are used in today’s mid-range tape drives: Metal Particle (MP) and Advanced Metal
Evaporated (AME). MP technology is used in DLT, LTO, and first-generation Exabyte drives, as well as in many other
tape technologies, such as video tape. AME technology is used for Mammoth and AIT media. Both media
types contain a base film and a recording layer of magnetic metal material. MP tape is a relatively old technology
and has evolved to support ever-increasing bit densities. Sony’s new AME media, on the other hand, has key
features that significantly improve its recording characteristics and its head-to-tape interface reliability, making it
the most advanced media type being used today.
The MP recording layer is composed of magnetic material mixed with a binder and other additives, such as
lubricants. AME media’s recording layer is made entirely of magnetic cobalt material. The highly metallic surface
of AME media allows higher recording densities and improved signal-to-noise ratios. AME media also employs a
very smooth diamond-like carbon (DLC) coating, which significantly reduces drive-head wear and head
contamination.

6.4.1 Media Reliability


Media reliability is often summarized with pass specifications and use specifications. However, media experts and
real-world users agree that media pass and use specifications are largely theoretical and generated primarily for
marketing purposes. Even Quantum has stated, “The relevance of the media use spec is under review” (DLT
Forum, 17 August 1999). The best way to judge media’s durability is to evaluate its formulation. The smoother
and more pure the media, the less friction is generated between the tape and head, resulting in longer-lasting
media.
For a comparison of the stated specifications of media uses and passes in today’s mid-range tape technologies, a
clarification of terms is required. Note the distinction between the terms “passes” and “uses.” For purposes of
comparison here, one “use” is defined as the filling of a tape to capacity, and a “pass” is defined as the running of
the tape over the head in one direction.
The media-use specification is the more valid way to compare the drives. This is because the pass specifications
are not comparable; there are too many differences between helical-scan and linear technologies. In helical-scan
devices, there are only two passes required for one use. In linear devices, multiple passes are required for one
use.
A single use of a DLT tape, for example, involves numerous tape passes over the read-write heads. Specifically,
in a DLT 8000 device, the head can write four channels at once, and the tape can accept 208 channels, requiring
the tape to be passed over the head 52 times to fill a tape. Helical drives, like Mammoth and AIT, require only one
pass to fill a tape and one to rewind it, for a total of two. To determine the number of uses a tape may endure, the
listed pass specification must be divided by the number of passes necessary to fill a tape. For example, the DLT
8000 media-use number can be calculated by dividing 1 million passes by 52. This equals 19,230 (Quantum lists
15,000 in their specifications). AIT’s 15,000 and Mammoth’s 10,000 media-use numbers are deduced by dividing
30,000 and 20,000 passes by two, respectively. Therefore, based purely on specifications, the media used by all
of the drives are approximately equal in durability.
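
The uses-from-passes arithmetic in the preceding paragraph can be written out directly; the pass ratings and passes-per-use figures below are the ones quoted in the text.

# Media uses = rated passes / passes needed for one full use of the tape.
# Pass ratings and passes-per-use are the figures quoted in the text above.
def media_uses(rated_passes, passes_per_use):
    return rated_passes // passes_per_use

print(media_uses(1_000_000, 52))   # DLT 8000: 52 passes to fill a tape -> 19,230 uses
print(media_uses(30_000, 2))       # AIT (helical, fill + rewind)       -> 15,000 uses
print(media_uses(20_000, 2))       # Mammoth (helical, fill + rewind)   -> 10,000 uses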

Media Use Specifications *


Drive Type Media Type Media Uses
Exabyte Mammoth AME 10,000
Quantum DLT MP 15,000
Quantum Super DLT MP 17,850
HP/IBM/Seagate Ultrium LTO MP **
Sony AIT AME 15,000

* Rates obtained from drive manufacturers’ published information.


** Information not published by the drive manufacturer.

6.4.2 Media and Backward Compatibility


Exabyte originally designed Mammoth-1 drives to read both AME and MP media as well as to be compatible with
their existing MP 8 mm tapes. This backward compatibility, however, forces special read-write head requirements
to read both AME and MP media. It also necessitates special cleaning practices by the user. For example, if an
MP tape is read by the Mammoth drive, the drive will not accept another tape until a cleaning cartridge is inserted.
Cleaning is required because the MP media binder chemistry is prone to leave debris on the heads and in the
tape path. This raises a reliability question for Mammoth drives reading MP tapes on a consistent basis. Exabyte
has not published any specifications or test reports that quantify reliability when using the Mammoth drive in this
mode. The implications of cleaning are even less appealing when using the drive with a mixed media set in a tape
library environment where backup software does not recognize the difference in media types. It is perhaps more
realistic for Mammoth users to transition to AME media and avoid the problems associated with using MP media.
As tape technologies evolve, a drive manufacturer must weigh the size of its installed base and the willingness of
that base to switch to a new media type as the manufacturer introduces new tape drives. In general, new tape
drives utilize new media types to take advantage of the latest head and media components. Unfortunately,
compression algorithms and media types have been continued long past their usable life just to extend the
installed base's backward-read (and sometimes write) capabilities.
Sony’s third generation AIT product, AIT-3, is the first tape drive to double the transfer rates of previous-
generation media. For example, an AIT-1 cartridge in an AIT-3 drive will achieve double the transfer rate of that
same cartridge in an AIT-1 drive. (That transfer rate is higher than an AIT-2 cartridge in an AIT-2 drive, but still not
as high as an AIT-3 cartridge in an AIT-3 drive.) However, an AIT-2 cartridge in an AIT-3 drive will duplicate the
transfer rate available for AIT-3 cartridges in AIT-3 drives.
Other technologies have always forced the previous generation speeds when using the older media. So, while it is
appealing to be able to read the older tape with the newer drives, most customers have ended up transitioning
their media pool over to the newer tapes. Backup windows become unpredictable when new and old media are
mixed inside an automated tape library. However, tape library manufacturers like Spectra Logic are now providing
solutions in which a user can logically partition old and new media in one tape library. Logical partitioning such as
this can help to leverage the end user’s original investment in the older tapes.

6.5 Drive Cleaning


As tape technologies advance, the recording density on each square millimeter of the tape increases, the
distance between the head and tape decreases, and the physical head gap shrinks. Dust, media particles, and
other contaminants can enter the head-to-tape interface area and cause high error rates, which slow
performance, decrease capacity per tape, and eventually lead to drive failure. Tape drive manufacturers have
traditionally addressed these issues by specifying periodic cleaning with a fabric or rough media cleaning
cartridges. All of the drives examined here have an LED cleaning light on the front of the drive, which flashes
when the drive needs to be cleaned. In addition, Exabyte specifies that a cleaning cartridge be loaded into the
Mammoth drive every 72 tape-motion hours. Quantum DLT drives have no recommended cleaning interval other
than when the cleaning LED flashes.
Sony has taken a different approach to keeping the AIT drive’s tape path and heads clean. First, the AIT drive
does not rely on external fans in the library or the system cabinet to cool the AIT drive and components; those
types of fans force airborne dust through the drive and the critical head-tape interface. AIT drive cooling is
achieved via an internal, variable-speed fan that cools only the drive circuitry and base plate without pulling air
through the tape path. Second, the AME media formulation and the DLC coating significantly reduce media
surface debris that can clog heads. These features allow Sony AIT drives to operate with virtually no manual
cleaning, eliminating maintenance problems and significantly reducing the drive’s overall operating costs. Finally,
a built-in head-cleaning wheel is automatically activated by an error-rate-monitoring device to ensure a clean
head-to-tape interface and maximum performance. (Occasional cleaning of AIT read and write heads with
approved Sony cleaning media may be required for excessive head contamination.)

6.6 Technology Roadmaps


When choosing a tape drive technology, an end user should consider the migration path of the technology. A
future migration path should offer higher performance and capacity while ensuring backward-read compatibility
with previously written tapes. With typical corporate data volume growing at 60 percent per year, a user would not
want to buy into a technology near the end of its life cycle and then be stuck with the lower performance and
lower capacity of an older technology. For a look at the past and the proposed future of the different mid-range
tape technologies, see the roadmaps compared at the end of this section. (See the table below for drive
performance roadmaps.)
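
Compounding that growth rate shows why the migration path matters: at 60 percent per year, a data set roughly quadruples in three years and grows more than tenfold in five, while a drive near the end of its roadmap stays at a fixed capacity. A small sketch (the 1 TB starting volume is an arbitrary assumption):

# Compound 60%-per-year data growth from an arbitrary 1 TB starting point.
GROWTH_RATE = 0.60
volume_tb = 1.0

for year in range(1, 6):
    volume_tb *= 1 + GROWTH_RATE
    print(f"year {year}: {volume_tb:.1f} TB")
# year 3 -> ~4.1 TB, year 5 -> ~10.5 TB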
Historically, tape vendors have struggled to continue along their roadmaps. These challenges have stemmed from
a wide variety of causes, including financial difficulties owing to loss of market share, falling prices, and
product development delays. Product development problems have arisen from a number of sources. Sony stands
alone in having full ownership control over its deck manufacturing, head technology, and media; several
companies, however, have been very dependent upon other companies to release their next product.
Three generations have historically been the industry norm for tape drive evolution. Evolving semiconductor
technologies, compression algorithms, heads, and media processes have made it very difficult for drive vendors
to extend the older technologies past three generations while remaining competitive with newer drive products
and backward compatible with the existing installed base.

Roadmaps of Drive Performance (Native Transfer Rates) *


AIT: AIT-1-XL 3 MB/sec (1998); AIT-2 6 MB/sec (1999); AIT-3 12 MB/sec (2001); AIT-4 24 MB/sec (2003); AIT-5 48 MB/sec (2005); AIT-6 96 MB/sec (2007)
DLT: DLT 7000 5 MB/sec (1998); DLT 8000 6 MB/sec (1999); Super DLT 220 11 MB/sec (2001); Super DLT 320 16 MB/sec (2003); Super DLT 640 32 MB/sec (2005); Super DLT 1280 50 MB/sec (2006); Super DLT 2400 100 MB/sec (2007)
Mammoth: Mammoth 3 MB/sec (1998); Mammoth 3 MB/sec (1999); Mammoth-2 12 MB/sec (2000); 2001 and later **
HP LTO: Surestore Ultrium 230 15 MB/sec (2001); 2003 and later **
IBM LTO: 3580 Ultrium 15 MB/sec (2001); 2003 and later **
Seagate LTO: Viper 200 Ultrium 16 MB/sec (2001); 2003 and later **

* Highest data transfer rates of tape drive technologies as publicly stated by drive vendors.
** Information not published by the drive vendor.


Roadmaps of Drive Capacity (Native) *


AIT: AIT-1-XL 35 GB (1998); AIT-2 50 GB (1999); AIT-3 100 GB (2001); AIT-4 200 GB (2003); AIT-5 400 GB (2005); AIT-6 800 GB (2007)
DLT: DLT 7000 35 GB (1998); DLT 8000 40 GB (1999); Super DLT 220 110 GB (2001); Super DLT 320 160 GB (2003); Super DLT 640 320 GB (2005); Super DLT 1280 640 GB (2006); Super DLT 2400 1.2 TB (2007)
Mammoth: Mammoth 20 GB (1998); Mammoth 20 GB (1999); Mammoth-2 60 GB (2000); 2001 and later **
HP LTO: Surestore Ultrium 230 100 GB (2001); 2003 and later **
IBM LTO: 3580 Ultrium 100 GB (2001); 2003 and later **
Seagate LTO: Viper 200 Ultrium 100 GB (2001); 2003 and later **

* Highest native capacities of tape drive technologies as publicly stated by drive vendors.
** Information not published by the drive vendor.
This typically leaves engineers with the problem of backward compatibility. Oftentimes, backward
compatibility issues make it difficult to remain competitive with other technologies of the time. In the early years of
DLT technology, the capacity and transfer rate between DLT generations doubled. However, now that it’s mature,
the jump from DLT 7000 to 8000 yielded an incremental increase of only 5 GB in capacity and 1 MB/sec. in
transfer rate.
Quantum Corporation recently launched its next generation DLT product: Super DLT. Super DLT technology
incorporates more channels, new thin film M-R heads, a new optical servo system, and advanced media
formulations. This new DLT product required significant engineering innovation. The major challenges that
created on-schedule delivery difficulties include the new servo positioning architecture, a new head design, new
media formulations, and much higher internal data rates than the previous DLT architecture. Additionally,
pressure to maintain backward read and write compatibility only increased the engineering complexity. The first
Super DLT drives did not offer backward compatibility to previous DLT generations.
With AIT, Sony remains in the forefront of all mid-range tape technologies, holding the highest capacity and
performance specifications for the last several years. Sony has continued to drive the cost of AIT drives down,
offering users the best cost-for-performance figures in this class. The December 2001 release of AIT-3 marks the
third generation of Sony’s AIT technology. Sony has published a roadmap, which extends through AIT-6,
expecting to double capacity and performance every two years.
Exabyte’s Mammoth drive had experienced some lengthy production delays but is shipping in volume quantities
today. Exabyte’s Mammoth technology showcased numerous industry firsts and was the company’s first attempt
at designing and manufacturing a deck mechanism and head assemblies without Sony’s expertise. During the
production delays, Exabyte allowed Quantum’s DLT and Sony’s AIT to capture Mammoth’s previous generation
customers as the customers’ needs increased when no new products were being offered by Exabyte. The
company’s financial woes were only continuing to grow, and Exabyte very recently made the decision to merge
with Ecrix Corporation.
In today’s marketplace, companies that deliver solid products on schedule have gained market share and have
become standards. Exabyte delivered a number of products from 1987-1992, and gathered more than 80 percent
of the mid-range market share. Those products included the EXB-8200, EXB-8500, EXB-8200C, EXB-8500C,
EXB-8205, EXB-8505, and EXB-8505XL. Exabyte owes its key success to those initial products, which offered
higher performance at a moderate price while playing in a market with very little competition. However, Exabyte’s
inability to deliver Mammoth until nearly three years after announcing the product opened the door for other
technologies.
Quantum’s DLT drives were able to deliver better throughput at a time when storage capacities were exploding.
The DLT 2000, DLT 2000XT, and DLT 4000 drives were able to offer better capacity, performance, and reliability
than the first Exabyte products, allowing them to capture the market share previously owned by Exabyte. Again,
delivering a product in a landscape with little competition allowed Quantum to gain more than 80 percent of the
market between 1992 and 1996. Availability and engineering delays for DLT 7000 and follow-up DLT products
have now opened the door for newer technologies.

6.7 Tape Technologies


The Major Tape technologies
 DAT
 DLT
 LTO
 AIT

6.7.1 DAT

6.7.1.1 HP – DAT 72 Tape Drive

Overview
The HP StorageWorks DAT 72 tape drive is the fifth generation of HP's popular DDS tape drives, built on the
success of four previous generations of DDS technology and providing unprecedented levels of capacity,
reliability and cost of ownership. The DAT 72 delivers a capacity of 72 GB on a single data cartridge and a
transfer rate of 21.6 GB/hr (assuming a 2:1 compression ratio). This drive reads and writes DAT 72, DDS-4, and
DDS-3 formats, making it the perfect upgrade from earlier generations of DDS.
The StorageWorks DAT 72 tape drive is the ideal choice for small and medium businesses, remote offices, and
workgroups. The DAT 72 drive comes in four models -- internal, external, hot-plug, and offline hot-swap array
module - plus it fits in HP's 3U rack-mount kit, making it compatible with virtually any server environment.

Fig. 6.7.1.1 – HP DAT Tape Drive

Features & Benefits


 Industry-standard DDS technology: As the most popular server backup technology of all time, DDS
offers a proven track record of dependability
 Up to 36 GB native capacity on a single tape (with DAT 72): Provides enough capacity for a small server
or workstation
 Transfer rate of over 10 GB/hr native (with DAT 40 and DAT 72): Backs up a whole cartridge worth of
data in less than four hours


 HP One-Button Disaster Recovery (OBDR): Restores your entire system at the touch of a button without
the need for system disks or software CDs
 Small, half-height form-factor: Fits easily into most servers and workstations, including HP ProLiant and
AlphaServers with hot-plug drive bays
 Wide choice of models: Comes in internal, external, hot-plug, offline-hot swap array module, and rack-
mount configurations, providing a suitable option for any server
 Automatic head cleaner: Minimizes the need for manual cleaning with a cleaning cartridge
 Lowest media price of any tape technology: Reduces the overall cost of ownership
 Broad compatibility with a wide range of servers, operating systems, and backup software: Suits almost
every operating environment
 HP StorageWorks Library and Tape Tools utilities: Helps make installation, management, and
troubleshooting a breeze
 Includes TapeWare XE: Provides a complete, easy-to-use backup solution that includes disaster
recovery capabilities

Specification
System feature Description
Capacity Up to 36 GB native capacity on a single tape -- 72 GB at 2:1 compression
Media  DAT 72 media – 170m, 4mm tape, Metal Particle (MP++++) formulation (Blue cartridge
shell for ease of identification in mixed media archives where older versions of DDS
media may be in use)
 DDS4 - read and write compatibility
 DDS3 - read and write compatibility
Media Format Recording method - 4 mm helical scan
Recording Format - DAT 72, DDS-4, DDS-3 (ANSI/ISO/ECMA)
Data Compression - Lempel-Ziv (DCLZ)
Error Detection/Correction - Reed-Solomon
Data Encoding Method - Partial Response Maximum Likelihood (PRML)
Buffer size 8 MB
Performance Sustained Transfer Rate (native) 3 MB/s
Sustained Transfer Rate (with 2:1 data 6 MB/s
compression)
Burst Transfer Rate 6 MB/s (asynchronous)
40 MB/s (synchronous)
Data Access Time 68 s
Average Load Time 15 s
Average Unload Time 15 s
Rewind Time 120 s (end to end)
Rewind Tape Speed 1.41 m/s
Reliability MTBF - 125,000 hours at 100% duty cycle
Uncorrected Error Rate - 1 x 10^-17 (1 error per 10^17 bits read)
Interface SCSI Interface - Wide Ultra SCSI-3 (LVD/SE)
SCSI Connector -
Internal: 68-pin wide HD LVD
External: 68-pin wide LVDS, thumbscrew
Array module: 80-pin SCA (SCSI and power)
Termination -
No terminator is required for internal model (assumes use of terminated cable).
External model requires termination with multimode terminator (included with product).
Array module requires termination with multimode terminator (ordered separately - p/n
C2364A).


6.7.1.2 T9940 Tape Drives


The StorageTek® capacity-centric T9940 tape drive is ideal for applications that demand high throughput and
capacity. The StorageTek® T9940B tape drive stores up to 200 gigabytes of native data on a single tape cartridge
at rates as high as 30 megabytes per second.

Fig. 6.7.1.2 – StorageTek Tape Drive

Benefits

Reduced batch and backup windows


With its native data transfer rate of 30 megabytes per second, or up to 70 megabytes per second with
compression, the T9940B drive helps you store more data in less time to meet your shrinking production batch
and backup windows.
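
As a rough sizing example (assuming the drive streams at its rated speed for the whole job), backing up one cartridge's worth of data, 200 GB, would take on the order of two hours at the native rate and under an hour at the compressed rate:

# Approximate backup window for one T9940B cartridge's worth of data (200 GB),
# assuming the drive streams at its rated speed for the whole job.
def hours(gigabytes, mb_per_s):
    return gigabytes * 1000 / mb_per_s / 3600

print(f"native, 30 MB/s:          {hours(200, 30):.1f} h")   # ~1.9 h
print(f"compressed data, 70 MB/s: {hours(200, 70):.1f} h")   # ~0.8 h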

Increased productivity
The high capacity T9940 tape drives minimize cartridge mounts, require fewer cartridges to manage for disaster
recovery and improve automation efficiency.

Lower storage costs


T9940 tape drives do the work of multiple typical mid-range drives. This enables you to minimize extra hardware,
reduce SAN complexity and simplify management, to help reduce your costs.

Lower backup costs


The T9940B drive’s low dollar-per-gigabyte media helps you cut your total costs of backup. With data
compression, it can store as much as 400-800 gigabytes on a single cartridge.

Standard Features

Tape compatibility
The T9940B drive provides backward read compatibility with 9940A cartridges. It can rewrite StorageTek 9940A
tape cartridges with three times more data, for extended investment protection.

StorageTek VolSafe media support


The T9940B drive supports VolSafe® cartridges, a high-capacity write-once-read-many (WORM) storage solution.
VolSafe is a non-erasable, non-rewritable tape media for archiving critical data.

Multi-platform connectivity


T9940 drives run on today’s popular operating environments. The T9940B supports two gigabit FICON, ESCON,
and two-gigabit Fiber Channel connectivity. The T9940A drive supports ESCON, SCSI and one-gigabit Fiber
Channel connectivity.

FICON for distance


The FICON interface drives support distances up to 100 kilometers (versus nine kilometers for ESCON) without
channel extensions or significant throughput reduction. This enables remote storage applications and increases
your disaster recovery options.

SAN-readiness
A native two-gigabit fabric-aware Fiber Channel interface makes the T9940B drive ready for the demands of high-
speed SAN environments and storage server networks.

Specification
 Tape load and thread to ready: 18 sec (formatted)
 Average file access time (first file): 41 sec
 Average access time: 59 sec
 Maximum/average rewind time: 90/45 sec
 Unload time: 18 sec
 Data transfer rate, native (uncompressed): 30 MB/sec
 Data transfer rate (compressed): 70 MB/sec
 Capacity, native (uncompressed): 200 GB
 Interface: 2 Gb Fiber Channel, ESCON, ESCON for VSM, 2Gb FICON for FICON and FICON express
channels
 Burst transfer rate:
Channel rate (Fiber Channel): 200 MB/sec (maximum instantaneous)
Interface (Fiber Channel): N & NL port, FC-PLDA (Hard and soft AL-PA capability), FC-AL-2 FCP-2, FC-TAPE
Read/write compatibility interface: Proprietary format
Emulation modes: Native, T9940A, 3490E, 3590

6.7.2 DLT
6.7.2.1 Tandberg DLT 8000 Autoloader
10 Cartridge version
The Tandberg DLT Autoloader brings you the productivity and security of an automated tape solution, as well as
the proven reliability and scalability of DLT technology.
Unattended Backup – High Speed, Capacity and Reliability
The Tandberg DLT8000 autoloader is one of the highest capacity (4U half-width) autoloaders available. This
highly reliable autoloader contains a single Tandberg DLT8000 tape drive and holds up to 10 DLTtape™IV data
cartridges supporting random or sequential access. Based on the widely accepted DLTtape™ technology
renowned for its ultimate reliability, you can be assured that your mission critical data is well protected.

Automated Storage Management – Made Easy


The Tandberg DLT Autoloader has an easy-to-read LCD readout, which provides information about the tape
drive, media status, robotics, installation and configuration. The removable cartridge magazine (7 cartridges in the
magazine - 3 in fixed slots) provides easy handling and allows for exchanging weekly backups in one simple step.
The Tandberg DLT autoloader also supports an automatic drive cleaning feature which does not require operator
intervention. In addition, the optional barcode reader provides an inventory of stored data and automated media
tracking so that access to stored data is fast and simple.
Integration and Administration – Made Easy


The Tandberg DLT Autoloader fits easily on a desktop or server and the 4U size allows two units to be placed
side by side in a standard 19" rack. Drive exchange and upgrade can be carried out in the field by the user in less
than 30 minutes. With simple, comprehensive diagnostics and user-friendly configuration and installation, the
Tandberg DLT autoloader provides the industry’s easiest maintenance for the network administrator.

Special Functions
 Up to 800GB* storage capacity
 Up to 43GB*/hr transfer rate
 Available with Tandberg DLT8000 drive
 Fits easily on a desk, in a rack or on top of a server
 Removable magazine for easy storage management
 Optional barcode reader for fast cartridge inventory and data retrieval
 Added security with TapeAlert™
 Supported by all major software suppliers
 Data Capacity Native: 400GB
 Data Capacity Compressed (2:1): 800GB
 Transfer Rate Native: 6 MB/s / 360MB/min / 21.6GB/hr
 Transfer Rate Compressed (2:1): 12MB/s / 720MB/min / 43.2GB/hr
 SCSI Interface: SCSI-2, Fast/Wide LVD/SE
 Tape Capacity: 10-cartridge capacity

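The Special Functions figures above are internally consistent; the minimal sketch below simply re-derives the per-hour and per-cartridge numbers from the 6 MB/s drive rate and the 10-cartridge magazine (Python, illustrative only; the 40 GB per-cartridge figure follows from dividing the quoted 400 GB native total by 10 cartridges).

native_rate_mb_s = 6      # DLT8000 native transfer rate, from the list above
cartridges = 10           # DLTtape IV cartridges held by the autoloader
native_total_gb = 400     # quoted native capacity of the autoloader

print(native_rate_mb_s * 3600 / 1000)       # 21.6 GB/hr native, as quoted
print(native_rate_mb_s * 2 * 3600 / 1000)   # 43.2 GB/hr at 2:1 compression
print(native_total_gb / cartridges)         # 40.0 GB native per DLTtape IV cartridge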
6.7.2.2 SUN StorEdge L8500 Tape Library


The Sun StorEdge L8500 Tape Library provides robust enterprise tape automation capabilities to support the
backup/restore of your valuable data in a complex SAN environment.
The L8500 is ideal for enterprise-wide data consolidation efforts, and can also help to reduce administrative
overhead and lower the total cost of storage ownership.
And because of Sun's stringent testing requirements, you're assured of a quality product optimised for the Solaris
environment.
Sun's Fiber Channel tape drives incorporate FCP-2 error recovery, which detects, retries, and recovers from
errors on the interface to eliminate problems - all without interruption in service or performance degradation.
The Sun StorEdge L8500 Tape Library is an integral part of a policy-based data management strategy, and when
combined with Sun servers, arrays and software, it offers a complete, end-to-end, Sun solution.
Fig. 6.7.2.2 – StorEdge Tape Library

Features
Upgradable to support future drive types. Future ability to connect multiple libraries via redundant Pass-Through
Ports (PTPs).
Conversion kit for some customer-owned drives.
Ease of service.
Sun FC drives support FCP-2 error recovery.
Small footprint, high slot density (more than 50 slots/sq. ft.). Service/operator areas limited to front
and back.
Remote monitoring via TCP/IP or optional local touch-screen panel.
Supports true mixed media and drives, including 9840B/C, 9940B and LTO 2.
Multiple robots.

Benefits
Protects customer investment; can accommodate growth without scheduled downtime, supporting the high-
availability demands of enterprise customers.
Protects your current customer investment.
Near zero scheduled downtime.
No interruption in backup performance, which is transparent to user.
Conserves valuable data center floor space.
Ease of management.
Customers can select the appropriate drives for their application and migrate to new drive types without having to
manage physical partitions - so there's only one library to manage.
Reduces the queuing effect found in libraries with single robots; multiple robots can handle more requests in
parallel.

Specification
 Availability:
Non-disruptive serviceability: Standard N+1 power for drives, robotics, and library electronics, allowing
replacement while the library is operating. 2N power is optional.
 Capacity and Performance:
Number of cartridge slots: 1,448 customer-usable slots (minimum) to 6,500 customer-usable slots (maximum)
Number of tape drives: Up to 64 drives of any combination
Cartridge access port (CAP): Standard 39-slot CAP; optional 39 additional slots (78 total)
 Hardware: Sun Blade 1000, 1500, 2000, 2500; Sun Fire V210, V240, V250, 280R, V440, V480, V880, V1280, E2900; Sun Fire 4800, 4810, E4900, 6800, E6900; Sun Fire 12K, 15K, E20K, E25K; Netra 240, 440, 1280; Ultra 60 & 80; Sun Enterprise 220R, 250, 420R, 450, x500, 10000
 Management:
Media management: full mixed media, any cartridge can be placed in any cell, no required partitions
Digital vision system: Unique digital vision camera system performs continuous calibration and reads
bar codes
Operator panel: Standard remote monitoring and control; touch-screen is optional
Automatic clean: Dedicated cleaning cartridge slots for tape drive cleaning for multiple drive types by
library or software command
Automatic self discovery: Auto-discovery and auto-configuration for all drive, media types, slots, and
Cartridge Access Ports
Continuous automation calibration: No periodic maintenance or alignment required
 Performance (cross-checked in the sketch after this list):
Throughput per hour, native (uncompressed), per drive: 9840C: 30 MB/sec; 9940B: 30 MB/sec; LTO-2: 30 MB/sec
Throughput per 64 drives: 9840C: 6.9 TB/hr; 9940B: 6.9 TB/hr; LTO-2: 6.9 TB/hr
Average cell-to-drive time: 6.25 sec per robot
Mean Time To Repair (MTTR): 30 minutes or less
Mean Exchanges Between Failures (MEBF): 2,000,000 exchanges
Mean Time Between Failures (MTBF), drives:
9840C FC: Power On: 290,000 hr @ 100% duty cycle; Tape Load: 240,000 hr @ 10 loads/day (100,000 loads); Tape Path Motion (TCM): 216,000 hr @ 70% TCM duty cycle; Head Life: 8.5 yr @ 70% TCM duty cycle
9840B FC: Power On: 290,000 hr @ 100% duty cycle; Tape Load: 240,000 hr @ 10 loads/day (100,000 loads); Tape Path Motion (TCM): 196,000 hr @ 70% TCM duty cycle; Head Life: 8.5 yr @ 70% TCM duty cycle
LTO-2 FC: MTBF: 250,000 hr @ 100% duty cycle; MCBF: 100,000 cartridge load/unload cycles; Head Life: 60,000 tape motion hours
 Software:
Operating System: Solaris 8 U4 Operating System or later; Solaris 9 Operating System
Supported software (Sun Enterprise and Application): Sun StorEdge Enterprise Backup Software 7.1 and later; Sun StorEdge Utilisation Suite (SAM-FS) Software 4.1 and later; Sun StorEdge SFS 4.4 and later
Supported software (Third-Party): VERITAS NetBackup 5.0 and later; ACSLS 7.1 and later

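The per-library throughput quoted in the Performance item above follows directly from the per-drive rate. A minimal check, assuming all 64 drives stream concurrently at their native rate (decimal units, overhead ignored):

drives = 64
per_drive_mb_s = 30   # native rate quoted above for 9840C, 9940B and LTO-2

aggregate_tb_per_hr = drives * per_drive_mb_s * 3600 / 1_000_000
print(round(aggregate_tb_per_hr, 1))   # 6.9 TB/hr, matching the quoted per-library figure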
6.7.2.3 HP MSL6000 Tape Libraries


The portfolio of HP StorageWorks MSL6000 Tape Libraries provides centralized backup to a single automated device, freeing valuable IT resources for more strategic work. It is ideal for medium to large IT networks, with or without a storage area network (SAN), that are experiencing uncertain data growth. The MSL6000 Tape Libraries provide change without chaos, simplicity without compromise, and growth without limits to meet even the most demanding IT network requirements.
The MSL6000 Tape Libraries offer maximum flexibility with a best-in-class choice of drive technology, including the new HP LTO Ultrium 960 and SDLT 600 in addition to the Ultrium 460 tape drives. These technologies are available both for new library purchases and as upgrades to currently installed MSL5000 and MSL6000 Tape Libraries.
Fig. 6.7.2.3 – HP Tape Library

The MSL6000 Tape Libraries are easily managed through an intuitive GUI control panel and integrated remote
web management, allowing simple management capabilities from any remote or on-site location. In addition, each
library is available with HP's world-class diagnostic tool, HP Library and Tape Tools, at no additional charge. Fully
tested and certified in HP's Enterprise Business Solutions (EBS), the MSL6000 tape libraries can be up and
running quickly in a wide range of fully supported configurations.
The MSL6000 Tape Libraries provide growth without limits by offering maximum investment protection through
scalability. To move from a direct attach to network attached storage configuration, a simple installation of a Fiber
Channel interface card makes the conversion a snap. In addition, the MSL6000 Tape Libraries will scale to larger
configurations by enabling a single library to grow and change with capacity and technology as needs require. Not
only will the MSL6000 Tape Library scale within the family, but it can also be scaled with MSL5000 Tape Libraries
using a pass-through mechanism for up to 16 drives and 240 slots.
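To put the 16-drive / 240-slot maximum in perspective, the rough sizing sketch below assumes Ultrium 960 (LTO-3) media at 400 GB native per cartridge and the drive's 80 MB/s native rate quoted later in this chapter; real configurations with other drive types or compression will differ.

slots, drives = 240, 16
cartridge_native_gb = 400   # assumed: LTO-3 native capacity (half the 800 GB 2:1 figure quoted later)
drive_native_mb_s = 80      # assumed: Ultrium 960 native rate (top of its Data Rate Matching range)

print(slots * cartridge_native_gb / 1000)                  # 96.0 TB native across 240 slots
print(round(drives * drive_native_mb_s * 3600 / 1e6, 1))   # ~4.6 TB/hr aggregate native throughput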

Features
 Scalable: Multi-unit stacking allows the library to grow with your storage requirements. You can start
with a direct-attach configuration, and easily change to network storage environment with only an
interface card upgrade.
 Flexible: Available with a broad choice in tape technology, including: Ultrium 960, Ultrium 460, and
SDLT 600, and with either a SCSI or Fiber Channel interface. Upgrade to new technology with easy to
install upgrade kits
 Manageable: User-friendly GUI control panel and web interface make library management easy from
any remote or local location.
 Reliable: Tape libraries provide consistent backup and automatically change tapes with robotics rating
of 2 million Mean Swaps before Failure.
 Compact: 5U and 10U modules offer the highest storage density in their class.
 Affordable: Buy only the storage you need now and add more later.
 Evolutionary: Drives can be upgraded as technology progresses.
 Compatible: All MSL6000 libraries work with industry-leading servers, operating systems, and backup software, and are fully tested through the HP Enterprise Business Solutions group for complete certification

Benefits
 Flexible: Investment protection by providing instant interface and drive technology upgrades without
hassle
 Manageable: Manage the library from any local or remote location and reduce the administrative burden
 Scalable: Investment protection by providing seamless capacity enhancement

6.7.2.4 EXABYTE 430 Tape Library


 High capacity, high performance automated data storage
 Up to 4 drives, 30 cartridges
 2.4 to 4.5 TB capacity (compressed)
 86.5 to 173 GB/hr transfer rate
 Accommodates either VXA-2 or M2 tape drives
 Optimized for rack-mount installations

Fig. 6.7.2.4 – EXABYTE Tape Library

Impressive! The 430 tape library is the most affordable, mid-range automated data storage solution designed for
mid-size data centers running IBM and HP/Compaq servers.
The power of mid-range automation now comes with a choice. The 4 drive, 30 slot 430 library can be configured
to meet your unique system needs with either VXA-2 or M2 tape drives for up to 5TB of data storage.
Don't pay for more than you need. The 430 library with VXA-2 is designed to meet the needs of organizations limited by
both budget and network bandwidth. Running at speeds up to 173GB/hr, the 430 with VXA-2 has adequate
performance for many mid-range data center environments, priced thousands less than the nearest competitor.
If your data center system architecture is optimized for speed, the 430 configured with M2 tape drives delivers the
advantages of a higher performance tape drive.

Extended service options include:


 For the VXA-2 library: PN 1010915-000 for 430 VXA, 5x9 Next Business Day On-site Service
 For the M2 library: PN 1010918-000 for 430 M2, 5x9 Next Business Day On-site Service

6.7.2.5 Scalar 10K by ADIC


With its capacity-on-demand scalability, built-in storage network support, and high-availability architecture, the
Scalar 10K brings the efficiencies of consolidation to your backup.

Fig. 6.7.2.5 – Tape Library by ADIC

The Scalar 10K’s unique capacity-on-demand scalability lets you scale your storage capacity more easily and
quickly than you can with any other library. Capacity-on-demand systems ship with extra capacity that you can
activate, in 100-tape increments, using a software key. You pay only for the capacity you use.
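Conceptually, capacity-on-demand behaves like a license gate placed in front of the physically installed slots. The sketch below is a hypothetical illustration of that model, not ADIC's actual interface; the function name and the example slot counts are ours, and only the 100-tape increment comes from the description above.

INCREMENT = 100   # slots activated per software key, per the 100-tape increments described above

def usable_slots(installed_slots, keys_purchased):
    # Slots the library will actually use: limited by purchased keys, capped by the installed hardware.
    return min(installed_slots, keys_purchased * INCREMENT)

print(usable_slots(700, 3))    # 300 - only the licensed capacity is active
print(usable_slots(700, 10))   # 700 - keys beyond the installed hardware add nothing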
For high-capacity or mixed-media needs, the Scalar 10K offers traditional library configurations. These maximum
capacity models have up to 15,885 tape slots and allow you to combine LTO, SDLT, and AIT technology in the
same chassis.
The Scalar 10K is the first library to offer integrated storage network support, with certified interoperability that
means seamless operation in new or existing SANs. The system supports multiple protocols and heterogeneous
fabrics at the same time. Integrated SAN management services, such as serverless backup and data-path
conditioning, provide better backup in storage networks.
The Scalar 10K’s high-availability architecture, which includes true 2N power and dual data paths, is designed to
meet the reliability demands of data consolidation. Features to ensure maximum system uptime include auto-
calibration, self-configuration, and magazine-based loading of up to 7.9TB (native) at once.
For more information on the Scalar 10K, please see the Scalar 10K microsite.

Features and Benefits


 Capacity-on-demand scalability: lets you instantly activate additional storage while paying only for the
storage you actually use
 Enterprise size: high drive count for maximum performance and flexible configurations
 Proven connectivity: storage network interoperability means seamless integration into new or existing
SANs
 Intelligent SAN support: integrated storage networking support—including serverless backup, data path
conditioning, and built-in firewall—for easier installation and operation, higher performance and reliability
 High availability: features include true 2N power, dual SCSI library control channels, hot-swap drives
 Reliability: diagnostic options include real-time health checks, email and pager alerts, and phone-home
event reporting
 Rapid configuration: five slots-per-second inventory speed; auto-discovery and auto-calibration of all
components
 Investment protection: supports LTO, SDLT, and AIT in single- or multi-technology installations for
maximum flexibility and performance
 Virtual libraries: provides up to 16 “virtual library” partition options for multiple applications support
 Responsive service: one-year on-site service provided by ADIC technical Assistance Center (ATAC) with
24-hour, worldwide service and support

6.7.3 LTO (Linear Tape Open)

6.7.3.1 HP Ultrium960 Tape Drive


The HP StorageWorks Ultrium 960 Tape Drive represents HP's third-generation of LTO tape drive technology.
Positioned as the highest capacity drive in the StorageWorks family, the Ultrium 960 features an 800 GB
compressed (2:1) storage capacity using new LTO 3 media. By doubling the capacity of current Ultrium drives,
HP customers now require fewer data cartridges to meet their storage needs, significantly reducing their IT costs
and increasing their ROI. The Ultrium 960 also features a compressed (2:1) data transfer rate of 160 MB/s. Using
a new 16-channel recording head, an Ultra320 SCSI interface, and increases in media tracks and bit density, the
HP Ultrium 960 sets a new performance benchmark as the world's fastest tape drive. For customers that
continually experience shrinking backup windows, the Ultrium 960 is the ideal direct-attach backup storage
solution for enterprise and mid-range servers, as well as entry-level hard drive arrays.
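As a rough illustration of those figures (a sketch only, assuming continuous streaming and ignoring protocol overhead; the 400 GB native capacity and 80 MB/s native rate are simply half of the 2:1 figures quoted above):

native_gb, native_mb_s = 400, 80            # native capacity and rate (half the 2:1 figures)
compressed_gb, compressed_mb_s = 800, 160   # 2:1 compressed figures quoted above
ultra320_bus_mb_s = 320                     # Ultra320 SCSI bus bandwidth

print(round(native_gb * 1000 / native_mb_s / 3600, 1))           # ~1.4 h to fill a cartridge, native
print(round(compressed_gb * 1000 / compressed_mb_s / 3600, 1))   # ~1.4 h at 2:1 compression
print(ultra320_bus_mb_s / compressed_mb_s)                       # 2.0 - at most two drives per bus at full compressed rate

The last line also shows why the Ultra320 interface matters here: at 160 MB/s, a single drive already consumes half the bus bandwidth.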

Fig. 6.7.3.1 – HP LTO Tape Drive

The Ultrium 960 supports the industry's most comprehensive list of compatible hardware and software platforms.
Each drive option includes a single-server version of HP Data Protector (license) and Yosemite TapeWare (CD)
backup software, as well as support for HP StorageWorks One-Button Disaster Recovery (OBDR) and HP
StorageWorks Library and Tape Tools (L&TT). The Ultrium 960 Tape Drive is fully read and write compatible with
all second-generation Ultrium media, and adds a further degree of investment protection with the ability to read all
first-generation Ultrium media as well. The Ultrium 960 also represents HP's first tape drive solution to deliver
support for Write-Once, Read-Many (WORM) media. This feature allows customers to easily integrate a cost-
effective solution to secure, manage, and archive compliant data records to meet stringent industry regulations.
HP customers can now manage all of their backup and archiving data protection needs with just one drive.

Features
 800 GB Capacity: The Ultrium 960 tape drive is a high capacity drive that stores 800 GB on a single
cartridge with 2:1 compression.
 160 MB/s Performance: The world's fastest tape drive with sustainable data transfer rates to 160 MB/s at
2:1 compression.
 Data Rate Matching (DRM): Allows the tape drive to dynamically and continuously adjust its speed, from 27 MB/s to 80 MB/s, to match the speed of the host or network (see the sketch after this list).
 LTO Open Standard: Drive technology based on an open standard that provides for media compatibility
across all brands of LTO Ultrium products.
 Server Compatibility: Qualified on HP ProLiant, Integrity, 9000, NonStop, and AlphaServer platforms, as well as many servers from other leading vendors such as Dell, IBM, and Sun.
 Software Compatibility: Extensive list of supported backup and archiving software applications from HP,
CA, VERITAS, Yosemite, Legato, Tivoli, and many more.
 Support for WORM Media: Able to read and write to new Write-Once Read-Many (WORM) HP Ultrium
Data Cartridges
 Management and Diagnostics Software Included: HP StorageWorks Library and Tape Tools software
provides a single application for managing and troubleshooting your tape drive, media and configuration.
 Backup Software Included: Includes a single-server version of Yosemite TapeWare XE (CD) and HP
OpenView Data Protector (license)
 One-Button Disaster Recovery (OBDR) Supported: Firmware-based disaster recovery feature that can
restore an entire system using a single Ultrium 960 tape drive and data cartridge

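The Data Rate Matching feature flagged above is essentially a clamp on the drive's streaming speed. The sketch below is a hypothetical model of that behaviour, not HP's actual firmware logic; only the 27 to 80 MB/s range comes from the feature list.

DRM_MIN_MB_S, DRM_MAX_MB_S = 27, 80   # adjustment range quoted in the feature list above

def drive_speed(host_rate_mb_s):
    # Match the drive's streaming speed to the host, clamped to the DRM range.
    # Streaming at the host's rate avoids the stop/rewind/restart ("shoe-shining")
    # cycles that wear the media and slow the backup.
    return max(DRM_MIN_MB_S, min(DRM_MAX_MB_S, host_rate_mb_s))

print(drive_speed(45))    # 45 - drive streams at the host rate
print(drive_speed(10))    # 27 - slow host; drive drops to its minimum speed
print(drive_speed(200))   # 80 - fast host; drive tops out at its maximum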
Benefits
 High capacity drive allows customers to back up more data with fewer data cartridges: High capacity drive reduces the costs associated with data protection by requiring fewer data cartridges to complete backups.
 Ultra fast performance can back up more data in less time: High performance drive allows customers to scale their backup capacities without having to increase their backup windows.
 Data Rate Matching optimizes performance while reducing tape and media wear: Data Rate Matching
optimizes the performance of the tape drive by matching the host server or network's data transfer rate,
putting less stress on the tape drive and media.
 LTO open standard provides customers with more choices: The LTO open standard ensures
compatibility across all brands of Ultrium tape drives, giving customers a greater choice of Ultrium
solutions without losing investment protection.
 Comprehensive hardware and software qualification increases customers' agility to adapt to new environments as needed: Support for heterogeneous hardware and software platforms provides customers with a single tape drive solution for all environments.
 Investment protection through backward write and read compatibility: Backward read compatibility
ensures that files from generation one and two Ultrium data cartridges can be recovered using the HP
Ultrium 960 tape drive. Backward write compatibility allows the customer to create backups using
second-generation Ultrium media with their HP Ultrium 960 tape drive, maximizing their ROI for media
that was previously purchased.
 Easily integrate a secure method for archiving compliant records using Ultrium WORM media: With a
single HP Ultrium 960 tape drive and HP's comprehensive support for hardware and software platforms,
customers can easily integrate a WORM-based archiving solution into their current data protection
strategy using LTO Ultrium solutions.
 Complete set of management and diagnostics tools included with each tape drive option and available
via free download from HP.com: Tape drive management, performance optimization, and
troubleshooting are made simple using the HP StorageWorks Library and Tape Tools application that is included with the HP Ultrium 960 tape drive.
 Complete hardware and software solution in the box with each HP Ultrium 960 tape drive: HP Ultrium
960 tape drives ship with a choice of single-server backup software applications (HP OpenView Data
Protector and Yosemite TapeWare), tape drive media, and SCSI cables, providing the customer with a
complete data protection solution in the box.
 Simple and fast disaster recovery with One-Button Disaster Recovery (OBDR) included in the drive
firmware: HP Ultrium 960 tape drives include an HP-exclusive disaster recovery feature, One-Button
Disaster Recovery, that allows the customer to simply and quickly recover a server's operating system,
software applications, and data using a single HP Ultrium data cartridge.

6.7.3.2 IBM 3584 Tape Library


The IBM TotalStorage 3584 Tape Library leverages the drive technology of the IBM TotalStorage 3588 Ultrium Tape
Drive Model F3A, as well as the IBM TotalStorage 3592 Tape Drive Model J1A, within the same library. The 3584
has been designed to meet the needs of customers seeking solutions for data archiving, backup, disaster
recovery, and other storage needs. The 3584 tape library is designed to offer high performance, availability,
reliability, mixed media, and on demand storage capacity.

Fig. 6.7.3.2 – IBM Tape Library


 The 3584 Tape Library offers the IBM TotalStorage 3592 Tape Drive and the new IBM TotalStorage 3588 Ultrium
Tape Drive Model F3A, utilizing Linear Tape-Open (LTO) Ultrium 3 Tape Drive technology designed to
provide high capacity, throughput, and fast access performance
 Variety of drive technology offerings help increase storage density while protecting your technology
investment in supporting LTO Ultrium 1, LTO Ultrium 2, LTO Ultrium 3 and IBM 3592 tape drives and
media within the same library
 Introducing a second library accessor, the 3584 High Availability Frame Model HA1, designed to help
increase library availability and reliability
 Built-in storage management functions designed to help maximize availability and allow for dynamic
management of both cartridges and drives
 Designed to provide multi-petabyte capacity, high performance and reliability in an automated tape
library that is scalable to 192 tape drives and over 6200 cartridges for midrange to enterprise open
systems environments
 Patented Multi-Path Architecture designed to help increase configuration flexibility with logical library
partitioning while enabling system redundancy for high availability

6.7.3.3 Comparison IBM LTO Ultrium versus Super DLT Tape Technology

Introduction
This white paper is a comparison of Super DLTtape™ technology with the Ultrium technology developed by the
Linear Tape Open (“LTO”) technology providers, Seagate, HP and IBM. Its focus is on the merits of the two
technologies from a customer point of view, and as such it compares the features and benefits of the SDLT 220
drive with the three different implementations of Ultrium technology, taking into account the key factors a
customer considers when choosing a data protection solution. It draws on secondary data from respected industry
analysts such as IDC and Dataquest, independent third party test data, as well as extensive primary research
conducted with IT managers in departmental and enterprise IT environments.

Technology Overview
Super DLTtape is the latest generation of the award-winning DLTtape™ technology. The SDLT 220 drive is a single reel, half-inch magnetic tape drive with a native capacity of 110GB and a native transfer rate of 11 MB/sec. It is
manufactured by Quantum Corporation and by Tandberg Data, and is sold and marketed by most leading
vendors of servers and automated backup systems. It is backward read compatible with all DLTtape IV media
written on DLT 4000, DLT 7000, and DLT 8000 tape drives.
Ultrium tape drives are the single reel implementation of LTO technology, a new platform developed by Seagate,
HP and IBM. They also use half-inch magnetic media, have a native capacity of 100GB and are specified with
transfer rates of 15 MB/sec or 16 MB/sec. They are sold by HP and IBM’s captive server and automation
divisions, as well as by a subset of other vendors. Ultrium drives are not compatible with any previous tape
technology.
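Taking the headline numbers at face value, the minimal sketch below compares how long each technology needs to fill one cartridge when streaming continuously (illustrative only; compression, start/stop behaviour and overhead are ignored, and the lower 15 MB/sec Ultrium figure is used).

sdlt220_gb, sdlt220_mb_s = 110, 11    # SDLT 220 native capacity and rate, from above
ultrium_gb, ultrium_mb_s = 100, 15    # first-generation Ultrium native capacity and lower quoted rate

def fill_hours(capacity_gb, rate_mb_s):
    # Hours of continuous streaming to fill one cartridge (1 GB = 1000 MB).
    return capacity_gb * 1000 / rate_mb_s / 3600

print(round(fill_hours(sdlt220_gb, sdlt220_mb_s), 1))   # ~2.8 h per SDLT 220 cartridge
print(round(fill_hours(ultrium_gb, ultrium_mb_s), 1))   # ~1.9 h per Ultrium cartridge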

Open Standards
DLTtape drives and media have served the world’s mid-range backup and archiving needs for much of the last
ten years. With an installed base of over 1.7 million drives and over 70 million cartridges shipped to customers,
DLTtape systems are recognized as the de facto industry standard for mid-range backup. IDC’s latest reported
market share numbers indicate that DLTtape had a market share of 73% in the mid-range tape segment¹. The
chart below summarizes the installed bases of various competing mid-range tape technologies.

6.7.3.4 AML/2 LTO by ADIC


The AML/2 is the industry's premier solution for large data system storage. It offers the industry's largest capacity,
scaling to support hundreds of drives, more than 76,000 pieces of media, and as much as 5,000TB of data
(native). The library grows simply by adding new, barrier-free modules.
For true mission-critical applications, the AML/2 offers the only large-scale library in which every piece of media
can be accessed by two fully redundant robotics systems to ensure maximum continuous data availability.
The AML/2 supports more than a dozen media types. The AML/2's barrier-free scalability and mixed-media
support make it the preferred choice for a wide range of data management tasks, including:
 Backup and Restore
 Nearline and HSM
 Digital Asset Management
 Archiving
 Digital Broadcasting
 Data Acquisition
The AML/2 features a multi-host architecture that lets the library share data across all of your network applications
and architectures, including Storage Area Networks. It also supports partitioning of the library's storage into user-
defined virtual libraries.
Fig. 6.7.3.4 – Data Storage System by ADIC

Features and Benefits


 Industry's largest capacity data storage library - the only one with fully redundant robotics for continuous
data availability
 Industry-leading density, with more than 6,500GB per square foot
 Protects your investment by supporting simultaneous use of different media and drive types and
providing an easy migration path to new technologies
 Scales with your data growth using unique barrier-free expansion to eliminate all pass-through ports
 Multi-host architecture makes it easy to share the library between different applications and OS
 Provides for automated data and media management and allows easy remote monitoring and operation.
 Supported by one of the industry's premier global networks of integration, logistics, and field support
experts
The AML/2 holds up to 5,184TB and 400 drives, handles a mixture of drive and media types, scales without pass-
through ports, and offers the industry's only fully redundant robotics option.

6.7.3.5 Scalar 1000 AIT by ADIC


The Scalar 1000 is a high-performance, scalable tape library with integrated storage networking support. A
budget-friendly choice, the library reduces total cost of ownership (TCO) in high-growth environments.
The Scalar 1000 grows easily and economically, scaling from 118 tapes to more than a thousand. Unlike other
libraries, its unique barrier-free expansion eliminates the use of pass-through ports—maintaining performance
and reliability as the system scales.

Fig. 6.7.3.5 Tape Library by ADIC

To provide better backup in storage networks, the Scalar 1000 features management services that ease
installation and diagnostics, enhance security and availability, and make data management more efficient. These
tools include serverless backup, single-view connectivity, a built-in SAN firewall, and data-path conditioning
utilities that increase backup performance and reliability.
The Scalar 1000 supports LTO, SDLT/DLT, and AIT technologies in single- or mixed media configurations. It also
offers up to 16 “virtual library” partitions.

Features and Benefits


 Cost-effective scalability: pay as you grow by adding drives, cartridges, and expansion modules, as
needed
 Barrier-free growth: robotics extend through all modules for continuous high-speed service without pass-
through ports
 Proven connectivity: storage network interoperability means seamless integration into new or existing
SANs
 Intelligent SAN support: integrated storage networking support—including serverless backup, data path
conditioning, and built-in firewall—for easier installation and operation, higher performance and reliability
 Reliability: diagnostic options include real-time health checks, email and pager alerts, and phone-home
event reporting
 High performance: Supports up to 48 drives with a 1,000,000-cartridge exchange rating.
 Remote operation: browser-based management lets you perform all library operations and diagnostics
remotely
 Investment protection: supports LTO, SDLT/DLT, and AIT in single- or multi-technology installations for
maximum flexibility and performance
 Virtual libraries: provides up to 16 “virtual library” partition options for multiple applications support
 Responsive service: one-year on-site service provided by ADIC technical Assistance Center (ATAC) with
24-hour, worldwide service and support

6.7.3.6 AML/J by ADIC


The AML/J automated storage library offers a unique combination of solutions for large-system data storage
needs.
It’s the right solution for changing technology because mixing and changing media is easy. Supporting more than
a dozen technologies, the AML/J is designed for easy field integration of new drive technologies. And since it
supports concurrent operation of different technologies in the same library, older and newer technologies can
easily co-exist.
The AML/J is the right solution for data growth because it offers true scalability within a single library. Barrier-free
expansion modules let you scale the system easily and economically, from as few as two drives and 400
cartridges to as many as 226 drives and 7,500 cartridges.
Finally, the AML/J is the right solution for enterprise storage because the system makes it easy to share the
library between different applications and operating systems. Its multi-host architecture lets the AML/J become a
shared network resource. You can also partition the library into user-defined volumes.

Features and Benefits


 Grows economically with data using a unique barrier-free expansion system that eliminates pass-through
ports
 Provides the industry's widest choice of storage technologies, including simultaneous use of different
media
 Protects your organization's IT investment by offering an easy migration path to new technologies
 Offers an integrated multi-host architecture that makes it easy to share the library between different
applications and operating systems
 Provides for automated data and media management and allows easy remote monitoring and operation
 Supported by one of the industry's premier global networks of integration, logistics, and field support
experts
Fig. 6.7.3.6 – ADIC Storage Library
INDEX
1. Introduction to DAS........................................................................................................................... 1
1.1. Advantages of DAS................................................................................................................................. 1
1.2. Direct Attached Storage (DAS) Model...................................................................................................... 1
1.3. Ideal Situations for DAS .......................................................................................................................... 2
1.4. Adaptec Direct Attached Storage – SANbloc 2GB JBOD.......................................................................... 2
1.5. Connectivity............................................................................................................................................ 2
1.5.1. Enhanced IDE.................................................................................................................................. 3
1.5.1.1 PATA......................................................................................................................................... 4
1.5.1.2 SATA......................................................................................................................................... 4
1.5.1.3 Advantages of SATA over PATA ................................................................................................ 4
1.5.1.4. PATA vs. SATA ........................................................................................................................ 4
1.5.1.5. Hardware, Configurations & Pictures ......................................................................................... 5
1.5.2. SCSI................................................................................................................................................ 8
1.5.2.1. Introduction............................................................................................................................... 8
1.5.2.2. Advantages of SCSI.................................................................................................................. 9
1.5.2.3. Comparison of SCSI Technologies .......................................................................................... 10
1.5.2.4. Single - Ended vs. Differential.................................................................................................. 10
1.5.2.5. SCSI Devices that do not work together.................................................... 11
1.5.2.6. SCSI Termination.................................................................................................................... 11
1.5.2.7. Adaptec Ultra320 SCSI ........................................................................................................... 12
1.5.2.8. SCSI Controllers ..................................................................................................................... 12
1.5.3. Fiber Channel ................................................................................................................................ 12
1.5.3.1. Introduction............................................................................................................................. 12
1.5.3.2. Advantages of Fiber Channel .................................................................................................. 13
1.5.3.3. Comparing FC DAS Storage Solutions..................................................................................... 14
2.1 Introduction to NAS ....................................................................................................................... 15
2.2. Advantages of NAS:.............................................................................................................................. 15
2.3. What is Filer?........................................................................................................................................ 16
2.4. Strong standards for Network Attached Storage (NAS)........................................................................... 16
2.5. Network Attached Storage versus Storage Area Network ....................................................................... 17
2.6. NAS Plus Tape Based Data Protection .................................................................................................. 18
2.7. Streamlined Architecture ....................................................................................................................... 18
2.8. NAS Characteristics.............................................................................................................................. 19
2.9. NAS Applications and Benefits .............................................................................................................. 19
2.10. Business Benefits of NAS Gateways.................................................................................................... 19
2.11. Drawback ........................................................................................................................................... 20
2.12. File Storm NAS: .................................................................................................................................. 20
2.13 Benefits of Low end and workgroup NAS storage.................................................................................. 20
2.14. AS: Think Network Users .................................................................................................................... 21
2.15. Ns: Think Back-End/Computer Room Storage Needs........................................................................... 21
2.16 NAS Solutions ..................................................................................................................................... 21
2.16.1 NetApp NAS Solution .................................................................................................................... 21
2.16.1.1 NetApp Filers Product Comparison......................................................................................... 22
2.16.2. NAS by AUSPEX ......................................................................................................................... 24
2.16.3. NAS by EMC................................................................................................................................ 26
2.16.4. EMC NS Series/Gateway NAS Solution ........................................................................................ 27
2.16.5. NAS by SUN ................................................................................................................................ 28
2.16.6. Sun StorEdge N8400 and N8600 filers.......................................................................................... 28
2.16.7. Sun StorEdge 5310 NAS Appliance .............................................................................................. 28
2.16.8. NAS by ADIC ............................................................................................................................... 30


2.16.8.1. Benefits of Using a SAN Behind a NAS Storage Network ....................................................... 30
2.16.8.2. ADIC / Network Appliance Solution Overview......................................................................... 30
2.16.8.3. Benefits for ADIC-Network Appliance NAS backup solution to an enterprise:........................... 31
2.17. StorNext Storage Manager .................................................................................................................. 31
2.17.1. Benefits of StorNext Storage Manager .......................................................................................... 32
2.17.2. Features of StorNext Storage Manager ......................................................................................... 32
3. Introduction of SAN......................................................................................................................... 34
3.1. Advantages of Storage Area Networks (SANs) ...................................................................................... 35
3.2. Advantages of SAN over DAS ............................................................................................................... 35
3.3. Today’s SAN Topologies....................................................................................................................... 36
3.4. Difference between SAN and LAN......................................................................................................... 38
3.5. Difference between SAN and NAS ........................................................................................................ 38
3.6. How do I manage a SAN? ..................................................................................................................... 38
3.7. What is a SAN Manager?...................................................................................................................... 38
3.8. When should I use a Switch vs. a Hub? ................................................................................................. 38
3.9. TruTechnology...................................................................................................................................... 39
3.9.1. TruFiber......................................................................................................................................... 39
3.9.2. TruCache....................................................................................................................................... 39
3.9.3. TruMap .......................................................................................................................................... 39
3.9.4. TruMask ........................................................................................................................................ 40
3.9.5. TruSwap ........................................................................................................................................ 40
3.10. Features of a SAN .............................................................................................................................. 40
3.11. SANs : High Availability for Block-Level Data Transfer.......................................................................... 40
3.12. Server-Free Backup and Restore ........................................................................................................ 41
3.13. Backup Architecture Comparison......................................................................................................... 41
3.14. SAN approach for connecting storage to your servers/network? ........................................................... 41
3.15. Evolution of SANs ............................................................................................................................... 43
3.16. Comparison of SAN with Available Data Protection Technologies......................................................... 44
3.17. SAN Solutions .................................................................................................................................... 45
3.17.1. SAN Hardware Solutions .............................................................................................................. 45
3.17.1.1. ADIC SAN Solutions.............................................................................................................. 45
3.17.1.2. SAN by SUN......................................................................................................................... 46
3.17.1.3. Features of SUN StorEdge .................................................................................................... 47
3.17.1.4. Benefits of SUN StorEdge ..................................................................................................... 47
3.17.2 SAN Management Software Solutions............................................................................................ 48
3.17.2.1. SAN by VERITAS.................................................................................................................. 48
3.17.2.2. Veritas SAN Applications....................................................................................................... 48
3.17.2.3. Example for Increasing Availability Using Clustering............................................................... 50
3.17.2.4. VERITAS SAN Solutions ....................................................................................................... 51
3.17.2.5. VERITAS SAN 2000: The Next Generation ............................................................................ 53
3.17.2.6. Tivoli Storage Manager ......................................................................................................... 53
3.17.2.7. Tivoli SANergy ...................................................................................................................... 54
3.17.2.8. SAN-speed sharing for Application Files ................................................................................ 55
3.18. Fiber Channel ..................................................................................................................................... 56
3.18.1. Introduction of Fiber Channel........................................................................................................ 56
3.18.2. Advantages of Fiber Channel........................................................................................................ 56
3.18.3. Fiber Channel Topologies............................................................................................................. 56
3.18.3.1. Point-to-Point........................................................................................................................ 56
3.18.3.2. Fiber Channel Arbitrated Loop (FC-AL).................................................................................. 56
3.18.3.3. Switched Fabric .................................................................................................................... 57
3.18.4. How do SCSI tape drives connect to a Fiber Channel SAN? .......................................................... 57
3.18.5. What is an Interconnect? .............................................................................................................. 57


3.18.6. Scalable Fiber Channel Devices ................................................................................................... 58
3.18.7. Features of Fiber Channel ............................................................................................................ 58
3.18.8. Why Fiber Channel?..................................................................................................................... 58
3.18.9. Fiber Channel System .................................................................................................................. 59
3.18.10. Technology Comparisons ........................................................................................................... 60
3.18.11. LAN Free Backup using Fiber Channel........................................................................................ 61
3.18.11.1. Distributed Backup .............................................................................................................. 61
3.18.11.2. Centralized Backup ............................................................................................................. 62
3.18.11.3. SAN Backup ....................................................................................................................... 63
3.18.12. Conclusion ................................................................................................................................. 65
3.18.13. LAN Free Backup Solution Benefits ............................................................................................ 65
3.18.14. Fiber Channel Strategy for Tape Backup Systems....................................................................... 65
3.18.14.1. Stage - 1 (LAN Free Backup)............................................................................................... 65
3.18.14.2. Stage - 2 (Server-Less Backup) ........................................................................................... 66
3.18.14.3. Suggested Deployment Strategy.......................................................................................... 68
3.19. iSCSI.................................................................................................................................................. 68
3.19.1 Introduction of iSCSI ..................................................................................................................... 68
3.19.2. Advantages of iSCSI .................................................................................................................... 69
3.19.3. Advantages of iSCSI on SAN:....................................................................................................... 69
3.19.4. iSCSI describes:........................................................................................................................... 70
3.19.5. How iSCSI Works......................................................................................................................... 71
3.19.6. Applications that can take advantage of these iSCSI benefits include:............................................ 71
3.19.7. iSCSI under a microscope ............................................................................................................ 72
3.19.8. Address and Naming Conventions ................................................................................................ 73
3.19.9. Session Management................................................................................................................... 73
3.19.10. Error Handling............................................................................................................................ 74
3.19.11. Security...................................................................................................................................... 74
3.19.12. Adaptec iSCSI............................................................................................................................ 74
3.19.12.1. Storage Systems................................................................................................................. 75
3.19.12.2. HBAs .................................................................................................................................. 75
3.19.12.3. Adaptec 7211F (Fiber Optic)................................................................................................ 75
3.19.13. Conclusion ................................................................................................................................. 75
3.19.13.1. P.S. .................................................................................................................................... 75
3.19.13.2. Terms and abbreviations: .................................................................................................... 76
3.19.14. Others (iFCP, FCIP) ................................................................................................................... 76
3.19.14.1. Fiber Channel over IP.......................................................................................................... 77
3.19.14.2. FCIP IETF IPS Working Group Draft Standard specifies: ..................................................... 78
3.19.14.3. iFCP ................................................................................................................................... 78
3.19.15. How to Build an iSCSI SAN ........................................................................................................ 78
3.19.16. Setup ......................................................................................................................................... 80
3.19.17. Pain-Free Initiation ..................................................................................................................... 80
3.19.18. SAN Components....................................................................................................................... 80
4. SAN Setup by WILSHIRE................................................................................................................. 81
4.1. Hardware Details .................................................................................................................................. 81
4.1.1. JNI FCE 6410N Fiber Channel HBA................................................................................................ 81
4.1.2. Brocade SilkWorm 2800 Switch ...................................................................................................... 81
4.1.3. ATTO Fiber-Bridge 2200 R/D.......................................................................................................... 82
4.1.4. Hardware Installation...................................................................................................................... 83
4.1.5. Installing the Adapter card .............................................................................................................. 83
4.2. Software Installation.............................................................................................................................. 84
4.2.1. Installation in Solaris 9.................................................................................................................... 84
4.2.2. Installation in NT4.0........................................................................................................ 84
5. Introduction of InfiniBand ............................................................................................................... 85
5.1 InfiniBand Advantages ........................................................................................................................... 85
5.2 InfiniBand Architecture ........................................................................................................................... 86
5.3 InfiniBand Layers ................................................................................................................................... 86
5.3.1. Physical Layer................................................................................................................................ 87
5.3.2 Link Layer ....................................................................................................................................... 88
5.3.3 Network Layer................................................................................................................................. 89
5.3.4 Transport Layer............................................................................................................................... 89
5.4 InfiniBand Technical Overview ............................................................................................................... 89
5.4.1 InfiniBand Feature Set..................................................................................................................... 89
5.5 InfiniBand Elements ............................................................................................................................... 90
5.5.1 Channel Adapters ........................................................................................................................... 90
5.5.2 Switch............................................................................................................................................. 90
5.5.3 Router ............................................................................................................................................ 91
5.5.4 Subnet Manager ............................................................................................................................. 91
5.6 InfiniBand Support for the Virtual Interface Architecture (VIA) .................................................................. 91
5.7 InterConnect of Choice For HPC and Data Center .................................................................................. 92
5.7.1 Beyond Servers .............................................................................................................................. 92
5.7.2 A Single, Unified I/O Fabric.............................................................................................................. 93
5.7.3 Identifying the Need ........................................................................................................................ 93
5.7.4 Network Volume Expands ............................................................................................................... 93
5.7.5 Trend to Serial I/O........................................................................................................................... 93
5.7.6 InfiniBand: Adrenaline for Data Centers ........................................................................................... 94
5.7.7 Independent Scaling of Processing and Shared I/O.......................................................................... 94
5.7.8 Raising Server Density, Reducing Size ............................................................................................ 94
5.7.9 Clustering and Increased Performance ............................................................................................ 94
5.7.10 Enhanced Reliability...................................................................................................................... 95
5.7.11 End User Benefit ........................................................................................................................... 95
5.7.12 Industry-Wide Effort....................................................................................................................... 95
5.8. Relationship between InfiniBand and Fiber Channel or Gigabit Ethernet ................................................. 95
6. Introduction of Enterprise Backup Storage.................................................................................... 96
6.1 Recording Methods................................................................................................................................ 96
6.1.1 Linear Serpentine Recording ........................................................................................................... 96
6.1.2 Helical Scan.................................................................................................................................... 97
6.2 Tape Drive Performance ........................................................................................................................ 98
6.2.1 Tape Loading and Cartridge Handling.............................................................................................. 98
6.2.2 Linear Drive Mechanisms ................................................................................................................ 98
6.2.3 Helical-Scan Drive Mechanisms....................................................................................................... 98
6.2.4 Tape Tension and Speed Control .................................................................................................... 99
6.2.5 Tape Speed and Stress................................................................................................................... 99
6.2.6 Data Streaming and Start/Stop Motion ............................................................................................. 99
6.2.7 Media Load Time and File Access Time........................................................................................... 99
6.2.8 Data Capacity ............................................................................................................................... 100
6.2.9 Data Transfer Rate........................................................................................................................ 101
6.3 Reliability............................................................................................................................................. 101
6.3.1 Mean Time between Failure (MTBF).............................................................................................. 101
6.3.2 Annual Failure Rate (AFR) ............................................................................................................ 102
6.3.3 Data Integrity ................................................................................................................................ 102
6.4 Media Types........................................................................................................................................ 103
6.4.1 Media Reliability............................................................................................................................ 103
6.4.2 Media and Backward Compatibility ................................................................................................ 103
6.5 Drive Cleaning ..................................................................................................................................... 104
6.6 Technology Roadmaps ........................................................................................................................ 104
6.7 Tape Technologies .............................................................................................................................. 107
6.7.1 DAT.............................................................................................................................................. 107
6.7.1.1 HP – DAT 72 Tape Drive........................................................................................................ 107
6.7.1.2 T9940 Tape Drives ................................................................................................................ 109
6.7.2 DLT .............................................................................................................................................. 110
6.7.2.1 Tandberg DLT 8000 Autoloader.............................................................................................. 110
6.7.2.2 SUN StorEdge L8500 Tape Library......................................................................................... 111
6.7.2.3 HP MSL6000 Tape Libraries .................................................................................................. 113
6.7.2.4 EXABYTE 430 Tape Library ................................................................................................... 114
6.7.2.5 Scalar 10K by ADIC ............................................................................................................... 115
6.7.3 LTO (Linear Tape Open) ............................................................................................................... 116
6.7.3.1 HP Ultrium960 Tape Drive...................................................................................................... 116
6.7.3.2 IBM 3584 Tape Library........................................................................................................... 118
6.7.3.3 Comparison IBM LTO Ultrium versus Super DLT Tape Technology ......................................... 118
6.7.3.4 AML/2 LTO by ADIC .............................................................................................................. 119
6.7.3.5 Scalar 1000 AIT by ADIC ....................................................................................................... 120
6.7.3.6 AML/J by ADIC ...................................................................................................................... 121