ONTAP 9 Concepts
Contents
Deciding whether to use this guide
ONTAP platforms
Cluster storage
High-availability pairs
Network architecture
    Logical ports
    Support for industry-standard network technologies
Disks and aggregates
    Aggregates and RAID groups
    Root-data partitioning
Volumes, qtrees, files, and LUNs
Storage virtualization
    SVM use cases
    Cluster and SVM administration
    Namespaces and junction points
Path failover
    NAS path failover
    SAN path failover
Load balancing
Replication
    Snapshot copies
    SnapMirror disaster recovery and data transfer
    SnapVault archiving
    MetroCluster continuous availability
Storage efficiency
    Thin provisioning
    Deduplication
    Compression
    FlexClone volumes, files, and LUNs
Security
    Client authentication and authorization
    Administrator authentication and RBAC
    Virus scanning
    Encryption
    WORM storage
Application aware data management
ONTAP release model
Where to find additional information
Copyright information
Trademark information
ONTAP platforms
ONTAP data management software offers unified storage for applications that read and write data
over block- or file-access protocols, in storage configurations that range from high-speed flash, to
lower-priced spinning media, to cloud-based object storage.
ONTAP implementations run on NetApp-engineered FAS or AFF appliances, on commodity
hardware (ONTAP Select), and in private, public, or hybrid clouds (NetApp Private Storage or Cloud
Volumes ONTAP). Specialized implementations offer best-in-class converged infrastructure (FlexPod
Datacenter) and access to third-party storage arrays (FlexArray Virtualization).
Together these implementations form the basic framework of the NetApp data fabric, with a common
software-defined approach to data management and fast, efficient replication across platforms.
• FlexArray is a front end for third-party and NetApp E-Series storage arrays, offering a uniform
set of capabilities and streamlined data management. A FlexArray system looks like any other
ONTAP system and offers all the same features.
Cluster storage
The current iteration of ONTAP was originally developed for NetApp's scale-out cluster storage
architecture. This is the architecture you typically find in datacenter implementations of ONTAP.
Because this implementation exercises most of ONTAP’s capabilities, it’s a good place to start in
understanding the concepts that inform ONTAP technology.
Datacenter architectures usually deploy dedicated FAS or AFF controllers running ONTAP data
management software. Each controller, its storage, its network connectivity, and the instance of
ONTAP running on the controller is called a node.
Nodes are paired for high availability (HA). Together these pairs (up to 12 nodes for SAN, up to 24
nodes for NAS) comprise the cluster. Nodes communicate with each other over a private, dedicated
cluster interconnect.
Depending on the controller model, node storage consists of flash disks, capacity drives, or both.
Network ports on the controller provide access to data. Physical storage and network connectivity
resources are virtualized, visible to cluster administrators only, not to NAS clients or SAN hosts.
Nodes in an HA pair must use the same storage array model. Otherwise you can use any supported
combination of controllers. You can scale out for capacity by adding nodes with like storage array
models, or for performance by adding nodes with higher-end storage arrays.
Of course you can scale up in all the traditional ways as well, upgrading disks or controllers as
needed. ONTAP's virtualized storage infrastructure makes it easy to move data nondisruptively,
meaning that you can scale vertically or horizontally without downtime.
Single-node clusters
A single-node cluster is a special implementation of a cluster running on a standalone node. You
might want to deploy a single-node cluster in a branch office, for example, assuming the workloads
are small enough and that storage availability is not a critical concern.
In this scenario, the single-node cluster would use SnapMirror replication to back up the site's data
to your organization's primary data center. ONTAP Select, with its support for ONTAP running on
commodity hardware, would be a good candidate for this type of implementation.
High-availability pairs
Cluster nodes are configured in high-availability (HA) pairs for fault tolerance and nondisruptive
operations. If a node fails or if you need to bring a node down for routine maintenance, its partner
can take over its storage and continue to serve data from it. The partner gives back storage when the
node is brought back online.
HA pairs always consist of like controller models. The controllers typically reside in the same chassis
with redundant power supplies.
An internal HA interconnect allows each node to continually check whether its partner is functioning
and to mirror log data for the other’s nonvolatile memory. When a write request is made to a node, it
is logged in NVRAM on both nodes before a response is sent back to the client or host. On failover,
the surviving partner commits the failed node's uncommitted write requests to disk, ensuring data
consistency.
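The NVRAM logging and takeover behavior described above can be sketched in a few lines. This is an illustrative model only, not ONTAP code; the `HaNode` class and its attribute names are hypothetical.

```python
# Illustrative sketch of HA NVRAM log mirroring (not actual ONTAP code).
# Names such as HaNode and nvram_log are hypothetical.

class HaNode:
    def __init__(self, name):
        self.name = name
        self.nvram_log = []      # uncommitted writes (own and mirrored partner entries)
        self.disk = {}           # committed data
        self.partner = None

    def write(self, key, value):
        entry = (key, value)
        # Log locally and mirror to the partner's NVRAM before acknowledging.
        self.nvram_log.append(entry)
        self.partner.nvram_log.append(entry)
        return "ack"             # the client sees the ack only after both logs hold the entry

    def take_over(self):
        # Failover: the survivor commits the failed partner's uncommitted writes.
        for key, value in self.nvram_log:
            self.disk[key] = value
        self.nvram_log.clear()

a, b = HaNode("node-a"), HaNode("node-b")
a.partner, b.partner = b, a

a.write("block-7", "data-x")     # logged on both nodes before the ack
b.take_over()                    # node-a fails; node-b commits the mirrored entry
print(b.disk)                    # {'block-7': 'data-x'}
```

The point of the sketch is the ordering: no write is acknowledged until it exists in both nodes' logs, so a takeover never loses an acknowledged write.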
Connections to the other controller’s storage media allow each node to access the other's storage in
the event of a takeover. Network path failover mechanisms ensure that clients and hosts continue to
communicate with the surviving node.
To ensure availability, you should keep performance capacity utilization on either node at or below
50%, so that the surviving node can absorb its partner's workload after a failover. For the same
reason, you might want to configure no more than 50% of the maximum number of NAS virtual
network interfaces on each node.
Network architecture
The network architecture for an ONTAP datacenter implementation typically consists of a cluster
interconnect, a management network for cluster administration, and a data network. NICs (network
interface cards) provide physical ports for Ethernet connections. HBAs (host bus adapters) provide
physical ports for FC connections.
Logical ports
In addition to the physical ports provided on each node, you can use logical ports to manage network
traffic. Logical ports are interface groups or VLANs.
Interface groups
Interface groups combine multiple physical ports into a single logical “trunk port.” You might want
to create an interface group consisting of ports from NICs in different PCI slots to ensure against a
slot failure bringing down business-critical traffic.
An interface group can be single-mode, multimode, or dynamic multimode. Each mode offers
differing levels of fault tolerance. You can use either type of multimode interface group to load-
balance network traffic.
VLANs
VLANs separate traffic from a network port (which could be an interface group) into logical
segments defined on a switch port basis, rather than on physical boundaries. The end-stations
belonging to a VLAN are related by function or application.
You might group end-stations by department, such as Engineering and Marketing, or by project, such
as release1 and release2. Because physical proximity of the end-stations is irrelevant in a VLAN, the
end-stations can be geographically remote.
IPspaces
You can use an IPspace to create a distinct IP address space for each virtual data server in a cluster.
Doing so enables clients in administratively separate network domains to access cluster data while
using overlapping IP addresses from the same IP address subnet range.
A service provider, for example, could configure different IPspaces for tenants using the same IP
addresses to access a cluster.
SNMP traps
You can use SNMP traps to check periodically for operational thresholds or failures. SNMP traps
capture system monitoring information sent asynchronously from an SNMP agent to an SNMP
manager.
FIPS compliance
ONTAP is compliant with the Federal Information Processing Standards (FIPS) 140-2 for all SSL
connections. You can turn on and off SSL FIPS mode, set SSL protocols globally, and turn off any
weak ciphers such as RC4.
Disks and aggregates
• For business-critical applications that need the lowest possible latency and the highest possible
performance, you might create an aggregate consisting entirely of SSDs.
• To tier data with different access patterns, you can create a hybrid aggregate, deploying flash as
high-performance cache for a working data set, while using lower-cost HDDs or object storage
for less frequently accessed data. A FlashPool consists of both SSDs and HDDs. A FabricPool
consists of an all-SSD aggregate with an attached object store.
• If you need to segregate archived data from active data for regulatory purposes, you can use an
aggregate consisting of capacity HDDs, or a combination of performance and capacity HDDs.
Root-data partitioning
Every node must have a root aggregate for storage system configuration files. The root aggregate has
the RAID type of the data aggregate.
A root aggregate of type RAID-DP typically consists of one data disk and two parity disks. That's a
significant “parity tax” to pay for storage system files, when the system is already reserving two disks
as parity disks for each RAID group in the aggregate.
Root-data partitioning reduces the parity tax by apportioning the root aggregate across disk partitions,
reserving one small partition on each disk as the root partition and one large partition for data.
The more disks used to store the root aggregate, the smaller the root partition. That is also the case
for a form of root-data partitioning called root-data-data partitioning, which creates one small
partition as the root partition and two larger, equally sized partitions for data.
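The shrinking-root-partition arithmetic can be made concrete. This is a hedged illustration: the 368 GiB root aggregate size and the `root_partition_size_gib` function are assumptions for the example, not ONTAP values.

```python
# Hypothetical arithmetic sketch: spreading the root aggregate across more
# partitioned disks shrinks the per-disk root partition. The 368 GiB root
# aggregate size is an illustrative assumption, not an ONTAP figure.

def root_partition_size_gib(root_aggr_gib, disks, parity_disks=2):
    # RAID-DP reserves two parity partitions; the remaining data partitions
    # together must hold the root aggregate.
    data_partitions = disks - parity_disks
    return root_aggr_gib / data_partitions

for disks in (4, 8, 12, 24):
    size = root_partition_size_gib(368, disks)
    print(f"{disks} disks -> {size:.1f} GiB root partition per disk")
```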
Both types of root-data partitioning are part of the ONTAP Advanced Drive Partitioning (ADP)
feature. Both are configured at the factory: root-data partitioning for entry-level FAS2xxx, FAS9000,
FAS8200, FAS80xx, and AFF systems; root-data-data partitioning for AFF systems only.
FlexGroup volumes
In some enterprises a single namespace may require petabytes of storage, far exceeding even a
FlexVol volume's 100 TB capacity.
A FlexGroup volume supports up to 400 billion files with 200 constituent member volumes that
work collaboratively to dynamically balance load and space allocation evenly across all members.
There is no required maintenance or management overhead with a FlexGroup volume. You simply
create the FlexGroup volume and share it with your NAS clients. ONTAP does the rest.
Storage virtualization
You use storage virtual machines (SVMs) to serve data to clients and hosts. Like a virtual machine
running on a hypervisor, an SVM is a logical entity that abstracts physical resources. Data accessed
through the SVM is not bound to a location in storage. Network access to the SVM is not bound to a
physical port.
Note: SVMs were formerly called “vservers.” You will still see that term in the ONTAP command
line interface (CLI).
An SVM serves data to clients and hosts from one or more volumes, through one or more network
logical interfaces (LIFs). Volumes can be assigned to any data aggregate in the cluster. LIFs can be
hosted by any physical or logical port. Both volumes and LIFs can be moved without disrupting data
service, whether you are performing hardware upgrades, adding nodes, balancing performance, or
optimizing capacity across aggregates.
The same SVM can have a LIF for NAS traffic and a LIF for SAN traffic. Clients and hosts need only
the address of the LIF (IP address for NFS, SMB, or iSCSI; WWPN for FC) to access the SVM. LIFs
keep their addresses as they move. Ports can host multiple LIFs. Each SVM has its own security,
administration, and namespace.
In addition to data SVMs, ONTAP deploys special SVMs for administration:
• An admin SVM is created when the cluster is set up.
• A node SVM is created when a node joins a new or existing cluster.
• A system SVM is automatically created for cluster-level communications in an IPspace.
You cannot use these SVMs to serve data. There are also special LIFs for traffic within and between
clusters, and for cluster and node management.
Example
The following example creates a volume named “home4” located on SVM vs1 that has a
junction path /eng/home:
cluster1::> volume create -vserver vs1 -volume home4 -aggregate aggr1 -size
1g -junction-path /eng/home
[Job 1642] Job succeeded: Successful
Path failover
There are important differences in how ONTAP manages path failover in NAS and SAN topologies.
A NAS LIF automatically migrates to a different network port after a link failure. A SAN LIF does
not migrate (unless you move it manually after the failure). Instead, multipathing technology on the
host diverts traffic to a different LIF—on the same SVM, but accessing a different network port.
Subnets
A subnet reserves a block of IP addresses in a broadcast domain. These addresses belong to the
same layer 3 network and are allocated to ports in the broadcast domain when you create a LIF. It
is usually easier and less error-prone to specify a subnet name when you define a LIF address than
it is to specify an IP address and network mask.
ONTAP Selective LUN Map (SLM) limits the number of paths from the host to a LUN by default. A
newly created LUN is accessible only through paths to the node that owns the LUN or its HA partner.
You can also limit access to a LUN by configuring LIFs in a port set for the initiator.
Load balancing
Workload latency increases when the amount of work on a node exceeds its available resources. You
can manage an overloaded node by increasing the available resources (upgrading disks or CPU) or by
reducing the load (moving volumes or LUNs to different nodes as needed).
You can also use ONTAP storage quality of service (QoS) to guarantee that performance of critical
workloads is not degraded by competing workloads:
• You can set a QoS throughput ceiling on a competing workload to limit its impact on system
resources (QoS Max).
• You can set a QoS throughput floor for a critical workload, ensuring that it meets minimum
throughput targets regardless of demand by competing workloads (QoS Min).
• You can set a QoS ceiling and floor for the same workload.
Throughput ceilings
A throughput ceiling limits throughput for a workload to a maximum number of IOPS or MB/s. In
the figure below, the throughput ceiling for workload 2 ensures that it does not “bully” workloads 1
and 3.
A policy group defines the throughput ceiling for one or more workloads. A workload represents the
I/O operations for a storage object: a volume, file, or LUN, or all the volumes, files, or LUNs in an
SVM. You can specify the ceiling when you create the policy group, or you can wait until after you
monitor workloads to specify it.
Note: Throughput to workloads might exceed the specified ceiling by up to 10 percent, especially
if a workload experiences rapid changes in throughput. The ceiling might be exceeded by up to
50% to handle bursts.
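A throughput ceiling behaves like a per-window I/O budget. The following sketch shows the throttling idea only; `PolicyGroup` and `submit_io` are illustrative names, not the ONTAP implementation, and real QoS accounting is more sophisticated.

```python
# Minimal sketch of a QoS throughput ceiling (QoS Max), modeled as a simple
# per-second IOPS budget. PolicyGroup and submit_io are hypothetical names.

class PolicyGroup:
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.used = 0            # IOPS consumed in the current one-second window

    def new_window(self):
        self.used = 0            # the budget resets each second

    def submit_io(self):
        if self.used >= self.max_iops:
            return "queued"      # throttled: held until the next window
        self.used += 1
        return "served"

bully = PolicyGroup(max_iops=3)  # ceiling on the competing workload
results = [bully.submit_io() for _ in range(5)]
print(results)                   # ['served', 'served', 'served', 'queued', 'queued']
```

Once the budget for the window is spent, further I/O from the capped workload queues rather than crowding out other workloads.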
Throughput floors
A throughput floor guarantees that throughput for a workload does not fall below a minimum number
of IOPS. In the figure below, the throughput floors for workload 1 and workload 3 ensure that they
meet minimum throughput targets, regardless of demand by workload 2.
Tip: As the examples suggest, a throughput ceiling throttles throughput directly. A throughput
floor throttles throughput indirectly, by giving priority to the workloads for which the floor has
been set.
A workload represents the I/O operations for a volume, LUN, or, starting with ONTAP 9.3, file. A
policy group that defines a throughput floor cannot be applied to an SVM. You can specify the floor
when you create the policy group, or you can wait until after you monitor workloads to specify it.
Note: Throughput to a workload might fall below the specified floor if there is insufficient
performance capacity (headroom) on the node or aggregate, or during critical operations like
volume move trigger-cutover. Even when sufficient capacity is available and critical
operations are not taking place, throughput to a workload might fall below the specified floor by
up to 5 percent.
Adaptive QoS
Ordinarily, the value of the policy group you assign to a storage object is fixed. You need to change
the value manually when the size of the storage object changes. An increase in the amount of space
used on a volume, for example, usually requires a corresponding increase in the throughput ceiling
specified for the volume.
Adaptive QoS automatically scales the policy group value to workload size, maintaining the ratio of
IOPS to TBs|GBs as the size of the workload changes. That's a significant advantage when you are
managing hundreds or thousands of workloads in a large deployment.
You typically use adaptive QoS to adjust throughput ceilings, but you can also use it to manage
throughput floors (when workload size increases). Workload size is expressed as either the allocated
space for the storage object or the space used by the storage object.
Note: Used space is available for throughput floors in ONTAP 9.5 and later. It is not supported for
throughput floors in ONTAP 9.4 and earlier.
• An allocated space policy maintains the IOPS/TB|GB ratio according to the nominal size of the
storage object. If the ratio is 100 IOPS/GB, a 150 GB volume will have a throughput ceiling of
15,000 IOPS for as long as the volume remains that size. If the volume is resized to 300 GB,
adaptive QoS adjusts the throughput ceiling to 30,000 IOPS.
• A used space policy (the default) maintains the IOPS/TB|GB ratio according to the amount of
actual data stored before storage efficiencies. If the ratio is 100 IOPS/GB, a 150 GB volume that
has 100 GB of data stored would have a throughput ceiling of 10,000 IOPS. As the amount of
used space changes, adaptive QoS adjusts the throughput ceiling according to the ratio.
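The allocated-space and used-space examples above reduce to simple arithmetic. The function names below are illustrative; adaptive QoS performs this scaling internally.

```python
# The allocated-space and used-space policy examples above, as arithmetic.
# Function names are hypothetical; adaptive QoS does this scaling itself.

def ceiling_allocated(iops_per_gb, volume_size_gb):
    # Allocated-space policy: scale with the nominal volume size.
    return iops_per_gb * volume_size_gb

def ceiling_used(iops_per_gb, used_gb):
    # Used-space policy (the default): scale with data stored before
    # storage efficiencies are applied.
    return iops_per_gb * used_gb

print(ceiling_allocated(100, 150))  # 15000 IOPS for a 150 GB volume
print(ceiling_allocated(100, 300))  # 30000 IOPS after resizing to 300 GB
print(ceiling_used(100, 100))       # 10000 IOPS for 100 GB of stored data
```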
Replication
Traditionally, ONTAP replication technologies served the need for disaster recovery (DR) and data
archiving. With the advent of cloud services, ONTAP replication has been adapted to data transfer
between endpoints in the NetApp data fabric. The foundation for all these uses is ONTAP Snapshot
technology.
Snapshot copies
A Snapshot copy is a read-only, point-in-time image of a volume. The image consumes minimal
storage space and incurs negligible performance overhead because it records only changes to files
since the last Snapshot copy was made.
Snapshot copies owe their efficiency to ONTAP's core storage virtualization technology, its Write
Anywhere File Layout (WAFL). Like a database, WAFL uses metadata to point to actual data blocks
on disk. But, unlike a database, WAFL does not overwrite existing blocks. It writes updated data to a
new block and changes the metadata.
Snapshot copies are so efficient because ONTAP references metadata when it creates a Snapshot
copy rather than copying data blocks. Doing so eliminates the “seek time” that other systems incur in
locating the blocks to copy, as well as the cost of making the copy itself.
You can use a Snapshot copy to recover individual files or LUNs, or to restore the entire contents of a
volume. ONTAP compares pointer information in the Snapshot copy with data on disk to reconstruct
the missing or damaged object, without downtime or a significant performance cost.
A Snapshot policy defines how the system creates Snapshot copies of volumes. The policy specifies
when to create the Snapshot copies, how many copies to retain, how to name them, and how to label
them for replication. For example, a system might create one Snapshot copy every day at 12:10 a.m.,
retain the two most recent copies, name them “daily” (appended with a timestamp), and label them
“daily” for replication.
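The example policy above (a daily copy at 12:10 a.m., two copies retained, named and labeled “daily”) can be sketched as follows. `SnapshotPolicy` is a hypothetical class for illustration, not an ONTAP API.

```python
# Sketch of the example Snapshot policy described above. SnapshotPolicy is
# a hypothetical name; ONTAP manages this through Snapshot policies.
from datetime import datetime

class SnapshotPolicy:
    def __init__(self, prefix, keep, label):
        self.prefix, self.keep, self.label = prefix, keep, label
        self.copies = []

    def create_copy(self, when: datetime):
        name = f"{self.prefix}.{when:%Y-%m-%d_%H%M}"   # prefix plus timestamp
        self.copies.append({"name": name, "label": self.label})
        # Enforce the retention count: drop the oldest copies beyond `keep`.
        while len(self.copies) > self.keep:
            self.copies.pop(0)

policy = SnapshotPolicy(prefix="daily", keep=2, label="daily")
for day in (1, 2, 3):
    policy.create_copy(datetime(2024, 5, day, 0, 10))

print([c["name"] for c in policy.copies])
# ['daily.2024-05-02_0010', 'daily.2024-05-03_0010']
```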
SnapMirror disaster recovery and data transfer
The first time you invoke SnapMirror, it performs a baseline transfer from the source volume to the
destination volume. The baseline transfer typically involves the following steps:
• Make a Snapshot copy of the source volume.
• Transfer the Snapshot copy and all the data blocks it references to the destination volume.
• Transfer the remaining, less recent Snapshot copies on the source volume to the destination
volume for use in case the “active” mirror is corrupted.
Once a baseline transfer is complete, SnapMirror transfers only new Snapshot copies to the mirror.
Updates are asynchronous, following the schedule you configure. Retention mirrors the Snapshot
policy on the source. You can activate the destination volume with minimal disruption in case of a
disaster at the primary site, and reactivate the source volume when service is restored.
Because SnapMirror transfers only Snapshot copies after the baseline is created, replication is fast
and nondisruptive. As the failover use case implies, the controllers on the secondary system should
be equivalent or nearly equivalent to the controllers on the primary system to serve data efficiently
from mirrored storage.
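The baseline-then-incremental behavior can be sketched with lists of Snapshot copy names standing in for volumes. This is a conceptual illustration, not the SnapMirror wire protocol; `snapmirror_update` is a hypothetical function.

```python
# Conceptual sketch of SnapMirror baseline vs. update transfers. Lists of
# Snapshot copy names stand in for volumes; snapmirror_update is hypothetical.

def snapmirror_update(source_snaps, dest_snaps):
    # After the baseline, only Snapshot copies missing on the destination
    # are transferred.
    transferred = [s for s in source_snaps if s not in dest_snaps]
    return dest_snaps + transferred

source = ["snap.1", "snap.2"]
dest = snapmirror_update(source, [])      # baseline: everything transfers
source.append("snap.3")                   # a new Snapshot copy on the source
dest = snapmirror_update(source, dest)    # update: only snap.3 transfers
print(dest)                               # ['snap.1', 'snap.2', 'snap.3']
```

Because each update moves only the Snapshot copies the destination lacks, steady-state replication traffic is proportional to the changed data, not the volume size.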
SnapVault archiving
SnapVault is archiving technology, designed for disk-to-disk Snapshot copy replication for standards
compliance and other governance-related purposes. In contrast to a SnapMirror relationship, in which
the destination usually contains only the Snapshot copies currently in the source volume, a SnapVault
destination typically retains point-in-time Snapshot copies created over a much longer period.
You might want to keep monthly Snapshot copies of your data over a 20-year span, for example, to
comply with government accounting regulations for your business. Since there is no requirement to
serve data from vault storage, you can use slower, less expensive disks on the destination system.
As with SnapMirror, SnapVault performs a baseline transfer the first time you invoke it. It makes a
Snapshot copy of the source volume, then transfers the copy and the data blocks it references to the
destination volume. Unlike SnapMirror, SnapVault does not include older Snapshot copies in the
baseline.
Updates are asynchronous, following the schedule you configure. The rules you define in the policy
for the relationship identify which new Snapshot copies to include in updates and how many copies
to retain. The labels defined in the policy (“monthly,” for example) must match one or more labels
defined in the Snapshot policy on the source. Otherwise, replication fails.
Note: SnapMirror and SnapVault share the same command infrastructure. You specify which
method you want to use when you create a policy. Both methods require peered clusters and
peered SVMs.
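The label-matching rule can be shown in miniature: only source Snapshot copies whose label appears in the vault policy's rules are selected for transfer. The data shapes below are illustrative assumptions.

```python
# Sketch of SnapVault label matching: only source Snapshot copies whose
# label matches a rule in the vault policy are transferred and retained.
# The rule table and snapshot list are illustrative.

policy_rules = {"monthly": 240}   # label -> number of copies to retain

source_snaps = [
    {"name": "snap.hourly.1",  "label": "hourly"},
    {"name": "snap.monthly.1", "label": "monthly"},
    {"name": "snap.monthly.2", "label": "monthly"},
]

to_vault = [s["name"] for s in source_snaps if s["label"] in policy_rules]
print(to_vault)                   # ['snap.monthly.1', 'snap.monthly.2']
```

If no source label matched any rule, nothing would be selected, which is the sketch-level analogue of the replication failure the text describes.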
Storage efficiency
ONTAP offers a wide range of storage efficiency technologies in addition to Snapshot copies. Key
technologies include thin provisioning, deduplication, compression, and FlexClone volumes, files,
and LUNs. Like Snapshot copies, all are built on ONTAP's Write Anywhere File Layout (WAFL).
Thin provisioning
A thin-provisioned volume or LUN is one for which storage is not reserved in advance. Instead,
storage is allocated dynamically, as it is needed. Free space is released back to the storage system
when data in the volume or LUN is deleted.
Suppose that your organization needs to supply 5,000 users with storage for home directories. You
estimate that the largest home directories will consume 1 GB of space.
In this situation, you could purchase 5 TB of physical storage. For each volume that stores a home
directory, you would reserve enough space to satisfy the needs of the largest consumers.
As a practical matter, however, you also know that home directory capacity requirements vary greatly
across your community. For every large user of storage, there are ten who consume little or no space.
Thin provisioning allows you to satisfy the needs of the large storage consumers without having to
purchase storage you might never use. Since storage space is not allocated until it is consumed, you
can “overcommit” an aggregate of 2 TB by nominally assigning a size of 1 GB to each of the 5,000
volumes the aggregate contains.
As long as you are correct that there is a 10:1 ratio of light to heavy users, and as long as you take an
active role in monitoring free space on the aggregate, you can be confident that volume writes won't
fail due to lack of space.
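The overcommitment arithmetic from the home-directory example can be checked directly. The per-user consumption figures below (roughly one heavy 1 GB user per ten light 100 MB users) are illustrative assumptions, not measurements.

```python
# The thin-provisioning overcommitment arithmetic above. The 10:1 ratio of
# light users (~100 MB each) to heavy users (1 GB each) is an assumption
# for illustration.

volumes = 5000
nominal_gb = 1.0
aggregate_gb = 2048                       # a 2 TB aggregate

heavy = volumes // 11                     # ~1 heavy user per 10 light users
light = volumes - heavy
actual_gb = heavy * 1.0 + light * 0.1     # light users consume ~100 MB each

print(f"nominal commitment: {volumes * nominal_gb:.0f} GB")
print(f"actual consumption: {actual_gb:.0f} GB of {aggregate_gb} GB")
```

The nominal commitment (5 TB) far exceeds the physical 2 TB, yet actual consumption stays well under it, which is why monitoring free space on the aggregate is the operational price of overcommitting.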
Deduplication
Deduplication reduces the amount of physical storage required for a volume (or all the volumes in an
AFF aggregate) by discarding duplicate blocks and replacing them with references to a single shared
block. Reads of deduplicated data typically incur no performance charge. Writes incur a negligible
charge except on overloaded nodes.
As data is written during normal use, WAFL uses a batch process to create a catalog of block
signatures. After deduplication starts, ONTAP compares the signatures in the catalog to identify
duplicate blocks. If a match exists, a byte-by-byte comparison is done to verify that the candidate
blocks have not changed since the catalog was created. Only if all the bytes match is the duplicate
block discarded and its disk space reclaimed.
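The catalog-then-verify flow can be sketched with `hashlib` signatures. This is an illustration of the technique, not ONTAP's implementation: a real system works on fixed-size blocks and persists its catalog, and `dedupe` is a hypothetical function.

```python
# Sketch of signature-based deduplication with byte-by-byte verification.
# hashlib stands in for the block-signature catalog; real deduplication
# operates on fixed-size blocks, not arbitrary byte strings.
import hashlib

def dedupe(blocks):
    catalog = {}                       # signature -> index of the stored block
    refs = []                          # per-block: index of the block to read
    for i, block in enumerate(blocks):
        sig = hashlib.sha256(block).hexdigest()
        j = catalog.get(sig)
        # Verify byte-by-byte before discarding the candidate duplicate.
        if j is not None and blocks[j] == block:
            refs.append(j)             # share the existing block
        else:
            catalog[sig] = i
            refs.append(i)             # keep this block as the stored copy
    return refs

blocks = [b"alpha", b"beta", b"alpha", b"alpha"]
refs = dedupe(blocks)
print(refs)                            # [0, 1, 0, 0] -> two physical blocks stored
```

The byte-by-byte check mirrors the document's point: a signature match alone is never sufficient to reclaim a block.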
Compression
Compression reduces the amount of physical storage required for a volume by combining data blocks
in compression groups, each of which is stored as a single block. Reads of compressed data are faster
than in traditional compression methods because ONTAP decompresses only the compression groups
that contain the requested data, not an entire file or LUN.
You can perform inline or postprocess compression, separately or in combination:
• Inline compression compresses data in memory before it is written to disk, significantly reducing
the amount of write I/O to a volume, but potentially degrading write performance. Performance-
intensive operations are deferred until the next postprocess compression operation, if any.
• Postprocess compression compresses data after it is written to disk, on the same schedule as
deduplication.
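The read-side benefit of compression groups can be sketched with `zlib`: a read decompresses only the one group that holds the requested block. The group size and data below are illustrative assumptions, not ONTAP parameters.

```python
# Sketch of compression groups: blocks are compressed in groups, and a read
# decompresses only the group holding the requested block. The group size
# and sample data are illustrative.
import zlib

GROUP = 4                              # blocks per compression group (assumed)

def compress_groups(blocks):
    return [zlib.compress(b"".join(blocks[i:i + GROUP]))
            for i in range(0, len(blocks), GROUP)]

def read_block(groups, block_index, block_size):
    # Decompress only the one group that contains the block.
    group = zlib.decompress(groups[block_index // GROUP])
    offset = (block_index % GROUP) * block_size
    return group[offset:offset + block_size]

blocks = [bytes([i]) * 8 for i in range(8)]    # eight 8-byte blocks
groups = compress_groups(blocks)
print(read_block(groups, 5, 8) == blocks[5])   # True
```

Reading block 5 touched only the second group; the first group stayed compressed, which is the behavior the text contrasts with whole-file decompression.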
Security
ONTAP uses standard methods to secure client and administrator access to storage and to protect
against viruses. Advanced technologies are available for encryption of data at rest and for WORM
storage.
Client authentication and authorization

Authentication
You can create local or remote user accounts:
• A local account is one in which the account information resides on the storage system.
• A remote account is one in which account information is stored on an Active Directory domain
controller, an LDAP server, or a NIS server.
ONTAP uses local or external name services to look up host name, user, group, netgroup, and name
mapping information. ONTAP supports the following name services:
• Local users
• DNS
• External NIS domains
• External LDAP domains
A name service switch table specifies the sources to search for network information and the order in
which to search them (providing the equivalent functionality of the /etc/nsswitch.conf file on UNIX
systems). When a NAS client connects to the SVM, ONTAP checks the specified name services to
obtain the required information.
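The switch-table lookup can be sketched as an ordered search across sources, in the manner of /etc/nsswitch.conf. The table and the source data below are illustrative assumptions.

```python
# Sketch of a name service switch table: sources are consulted in the
# configured order until one returns an answer, like /etc/nsswitch.conf.
# The switch table and source contents are illustrative.

ns_switch = {"passwd": ["files", "ldap"]}    # lookup order for user info

sources = {
    "files": {"root": 0},                    # local users
    "ldap":  {"alice": 1001, "bob": 1002},   # external LDAP domain
}

def lookup_uid(user):
    for source in ns_switch["passwd"]:       # search in the configured order
        entry = sources[source].get(user)
        if entry is not None:
            return source, entry
    return None                              # no source had the answer

print(lookup_uid("alice"))                   # ('ldap', 1001)
print(lookup_uid("root"))                    # ('files', 0)
```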
Kerberos support
Kerberos is a network authentication protocol that provides “strong authentication” by encrypting
user passwords in client-server implementations. ONTAP supports Kerberos 5 authentication with
integrity checking (krb5i) and Kerberos 5 authentication with privacy checking (krb5p).
Authorization
ONTAP evaluates three levels of security to determine whether an entity is authorized to perform a
requested action on files and directories residing on an SVM. Access is determined by the effective
permissions after evaluation of the security levels:
Native file-level security exists on the file or directory that represents the storage object. You can
set file-level security from a client. File permissions are effective regardless of whether SMB or
NFS is used to access the data.
Administrator authentication and RBAC

Authentication
You can create local or remote cluster and SVM administrator accounts:
• A local account is one in which the account information, public key, or security certificate resides
on the storage system.
• A remote account is one in which account information is stored on an Active Directory domain
controller, an LDAP server, or a NIS server.
Except for DNS, ONTAP uses the same name services to authenticate administrator accounts as it
uses to authenticate clients.
RBAC
The role assigned to an administrator determines the commands to which the administrator has
access. You assign the role when you create the account for the administrator. You can assign a
different role or define custom roles as needed.
Virus scanning
You can use integrated antivirus functionality on the storage system to protect data from being
compromised by viruses or other malicious code. ONTAP virus scanning, called Vscan, combines
best-in-class third-party antivirus software with ONTAP features that give you the flexibility you
need to control which files get scanned and when.
Storage systems offload scanning operations to external servers hosting antivirus software from third-
party vendors. The ONTAP Antivirus Connector, provided by NetApp and installed on the external
server, handles communications between the storage system and the antivirus software.
• You can use on-access scanning to check for viruses when clients open, read, rename, or close
files over SMB. File operation is suspended until the external server reports the scan status of the
file. If the file has already been scanned, ONTAP allows the file operation. Otherwise, it requests
a scan from the server.
• You can use on-demand scanning to check files for viruses immediately or on a schedule. You
might want to run scans only in off-peak hours, for example. The external server updates the scan
status of the checked files, so that file-access latency for those files (assuming they have not been
modified) is typically reduced when they are next accessed over SMB. You can use on-demand
scanning for any path in the SVM namespace, even for volumes that are exported only through
NFS.
You typically enable both scanning modes on an SVM. In either mode, the antivirus software takes
remedial action on infected files based on your settings in the software.
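The on-access gating described above can be sketched as a scan-status cache keyed by file version: an open is allowed immediately on a cache hit, and otherwise waits for the external scan result. This is a minimal illustrative model, not the ONTAP implementation; the file names and status values are hypothetical.

```python
# Sketch of on-access scan gating (illustrative, not the ONTAP implementation).
scan_cache = {}  # (path, mtime) -> "clean" or "infected"

def external_scan(path):
    """Stand-in for the third-party antivirus server; flags one known name."""
    return "infected" if "eicar" in path else "clean"

def on_access_open(path, mtime):
    """Gate a client open: scan on cache miss, then allow or deny."""
    status = scan_cache.get((path, mtime))
    if status is None:                 # this version of the file not yet scanned
        status = external_scan(path)   # file operation suspended until this returns
        scan_cache[(path, mtime)] = status
    return status == "clean"           # allow only files that scanned clean

print(on_access_open("/vol1/report.docx", 100))  # scanned now, then allowed
print(on_access_open("/vol1/report.docx", 100))  # cache hit, allowed at once
print(on_access_open("/vol1/eicar.txt", 100))    # scanned, denied
```

Keying the cache on the modification time captures the point made above: a modified file must be rescanned, while an unmodified, already-scanned file incurs no scan latency on its next access.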
Encryption
ONTAP offers both software- and hardware-based encryption technologies for ensuring that data at
rest cannot be read if the storage medium is repurposed, returned, misplaced, or stolen.
NetApp Storage Encryption (NSE) supports self-encrypting HDDs and SSDs. You can use NetApp
Volume Encryption with NSE to “double encrypt” data on NSE drives, provided that you use the
Onboard Key Manager.
Consider using external KMIP key servers instead of the Onboard Key Manager in the following
cases:
• Your encryption key management solution must comply with Federal Information Processing
Standards (FIPS) 140-2 or the OASIS KMIP standard.
• You need a multi-cluster solution. KMIP servers support multiple clusters with centralized
management of encryption keys.
• Your business requires the added security of storing authentication keys on a system or in a
location different from the data. KMIP servers store authentication keys separately from your
data.
WORM storage
SnapLock is a high-performance compliance solution for organizations that use write once, read
many (WORM) storage to retain critical files in unmodified form for regulatory and governance
purposes.
A single license entitles you to use SnapLock in strict Compliance mode, to satisfy external mandates
like SEC Rule 17a-4, and a looser Enterprise mode, to meet internally mandated regulations for the
protection of digital assets. SnapLock uses a tamper-proof ComplianceClock to determine when the
retention period for a WORM file has elapsed.
You can use SnapLock for SnapVault to WORM-protect Snapshot copies on secondary storage. You
can use SnapMirror to replicate WORM files to another geographic location for disaster recovery and
other purposes.
Release types
• A feature release includes a set of new market-driven features, as well as fixes for bugs that
customers encountered in earlier releases. Each feature release is numbered “ONTAP x.y”, for
example, “ONTAP 9.0” or “ONTAP 9.1”.
• A service update release delivers timely fixes for critical bugs encountered in the field, for
customers who cannot wait for the next feature release. Each service update is numbered
“ONTAP x.yPz”, for example, “ONTAP 9.0P1” or “ONTAP 9.0P2”.
• A debug release is a hot-fix release delivered on an exception basis to a specific customer. These
releases are not normally planned and are provided only in case of extraordinary need. Each
debug release is numbered “ONTAP x.yPzDa”, for example, “ONTAP 9.0P1D1” or “ONTAP
9.0P2D2”.
Service update and debug releases are referred to as “P-release patches.”
A feature release is made available as a Release Candidate (RC), then as a General Availability (GA)
release:
• The Release Candidate designation indicates that NetApp completed internal testing as well as
early customer validation of the release.
• The General Availability designation indicates that a Release Candidate performed extraordinarily
well across multiple deployments, enabling customers to achieve 99.999% availability or better in
production environments.
Any configuration qualified for an ONTAP GA release is automatically qualified for all P-release
patches of that release. If support is introduced in a patch release, subsequent patch releases maintain
that support.
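The numbering scheme described above can be parsed mechanically. The helper below is an illustrative sketch (not a NetApp tool) that extracts the feature, patch, and debug components from a release string:

```python
import re

# Matches "ONTAP x.y", "ONTAP x.yPz", and "ONTAP x.yPzDa" release strings.
RELEASE_RE = re.compile(r"^ONTAP (\d+)\.(\d+)(?:P(\d+))?(?:D(\d+))?$")

def parse_release(name):
    """Return (major, minor, patch, debug); patch/debug are None if absent."""
    m = RELEASE_RE.match(name)
    if not m:
        raise ValueError(f"not an ONTAP release string: {name!r}")
    major, minor, patch, debug = m.groups()
    return (int(major), int(minor),
            int(patch) if patch else None,
            int(debug) if debug else None)

print(parse_release("ONTAP 9.1"))      # feature release
print(parse_release("ONTAP 9.0P2"))    # service update (P-release)
print(parse_release("ONTAP 9.0P1D1"))  # debug release
```

A string with a patch component but no debug component is a service update; one with both is a debug release.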
For complete information, see ONTAP release model.
ONTAP platforms
• NetApp Data Fabric Architecture Fundamentals
Describes how the NetApp Data Fabric unifies data management across distributed resources.
Cluster storage
• System administration
Describes cluster and SVM administration.
High-availability pairs
• High-availability configuration
Describes how to configure HA pairs.
• NetApp Technical Report 3450: High-Availability Pair Controller Configuration Overview and
Best Practices
Describes best practices for HA pair configuration.
Network architecture
• Network and LIF management
Describes network management and LIF configuration.
• NetApp Technical Report 4182: Ethernet Storage Design Considerations and Best Practices for
Clustered Data ONTAP Configurations
Describes how ONTAP network architectures are implemented.
• SAN administration
Describes how to configure and manage LUNs, igroups, and targets for SAN protocols.
Storage virtualization
• SMB/CIFS configuration express
Describes how to quickly configure CIFS/SMB client access.
Path failover
• Network and LIF management
Describes network management and LIF configuration.
• NetApp Technical Report 3441: Windows Multipathing Options with Data ONTAP: Fibre
Channel and iSCSI
Describes multipathing options available for iSCSI and Fibre Channel SAN.
Load balancing
• Logical storage management
Describes how to manage FlexVol volumes, qtrees, files, and LUNs.
• Performance management
Describes how to use QoS to guarantee workload performance.
Replication
• Volume disaster recovery express preparation
Describes how to quickly configure a SnapMirror destination volume.
• Data protection
Describes how to manage Snapshot copies on a local ONTAP system, and how to replicate
Snapshot copies to a remote system using SnapMirror.
• NetApp Technical Report 4015: SnapMirror Configuration and Best Practices Guide for ONTAP
9.1, 9.2
Describes SnapMirror best practices.
• NetApp Technical Report 3548: Best Practices for MetroCluster Design and Implementation
Describes MetroCluster best practices.
Storage efficiency
• Logical storage management
Describes how to manage FlexVol volumes, qtrees, files, and LUNs.
• NetApp Technical Report 3563: NetApp Thin Provisioning Increases Storage Utilization With On
Demand Allocation
Introduces ONTAP thin provisioning.
• NetApp Technical Report 3966: NetApp Data Compression and Deduplication Deployment and
Implementation Guide (Clustered Data ONTAP)
Describes ONTAP deduplication and compression.
• NetApp Technical Report 3742: Using FlexClone to Clone Files and LUNs
Describes how to use FlexClone to create space efficient copies of files and LUNs.
Security
• NFS management
Provides reference information for NFS file access.
• SMB/CIFS management
Provides reference information for CIFS file access.
• Antivirus configuration
Describes how to configure ONTAP virus scanning.
Copyright information
Copyright © 2018 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights of
NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to
NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of the
U.S. Government contract under which the Data was delivered. Except as provided herein, the Data
may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written
approval of NetApp, Inc. United States Government license rights for the Department of Defense are
limited to those rights identified in DFARS clause 252.227-7015(b).
Trademark information
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks of
NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx
Index

A
admin role
  and cluster administrator 15
admin SVM
  concepts 14, 15, 28
ADP
  concepts 12
Advanced Data Partitioning (ADP)
  concepts 12
aggregate
  and disk partitioning 12
  and disks 11
  and RAID groups 11
  and volumes 13
  concepts 11
  use cases 11
ALUA
  concepts 18
application aware data management
  overview 31
archiving
  characteristics of SnapVault 23
  concepts 30
Asymmetric Logical Unit Access (ALUA)
  concepts 18
authentication
  concepts 27, 28
authorization
  concepts 27, 28
  Storage-Level Access Guard 27

B
baseline transfer
  concepts 22
broadcast domain
  and VLANs 17
  concepts 17

C
CIFS share
  concepts 16
cloud storage
  with Cloud Volumes ONTAP 6
cluster
  administrator 15, 28
  concepts 7, 8
  interconnect 7, 9
  peering 22
comments
  how to send feedback about documentation 38
commodity hardware
  and ONTAP Select 6
compaction
  concepts 26
compression
  concepts 26

D
data fabric
  concepts 6
data protection
  concepts 22, 30
  MetroCluster continuous availability 24
  SnapMirror disaster recovery 22
  Snapshot copies 21
data SVM
  concepts 14
data transfer
  concepts 22
debug release
  ONTAP 32
deduplication
  concepts 25
disaster recovery
  concepts 22, 24
disk
  and aggregates 11
  and RAID groups 11, 12
disk encryption
  concepts 29
disk partitioning
  concepts 12
DNS load balancing
  concepts 10
documentation
  how to receive automatic notification of changes to 38
  how to send feedback about 38
DSM
  concepts 18

E
encryption
  disk encryption 29
  volume encryption 29
export
  NFS 16

F
FabricPool
  concepts 11
failover
  HA pairs 8
  network path 17, 18
failover group
  concepts 17
fault tolerance
  concepts 8
feature release
  ONTAP 32