Purity//FA 6.4.10 FlashArray Admin Guide
Administration Guide
Version 6.4.10
Copyright Statement
© 2023 Pure Storage (“Pure”), Portworx and its associated trademarks can be found here and its
virtual patent marking program can be found here. Third party names may be trademarks of their
respective owners.
The Pure Storage products and programs described in this documentation are distributed under a license agreement restricting the use, copying, distribution, and decompilation/reverse engineering of the products. No part of this documentation may be reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure Storage products and/or the programs described in this documentation at any time without notice.
THIS DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-
INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS
ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR
INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
Pure Storage, Inc. 2555 Augustine Drive, Santa Clara, CA 95054 http://www.purestorage.com
Direct comments to [email protected].
Version 1
Pure Storage Confidential - For distribution only to Pure Customers and Partners 2
Table of Contents
Chapter 1: About this Guide 20
What's New? 21
Organization of the Guide 24
A Note on Format and Content 25
Related Documentation 25
Contact Us 26
Documentation Feedback 26
Product Support 26
General Feedback 26
Chapter 2: FlashArray Concepts and Features 27
Arrays 27
Array Service Type 28
Connected Arrays 28
Hardware Components 29
Network Interface 30
Block Storage 32
Volumes 32
Volume Groups 33
Volume Snapshots 33
Volume Snapshots vs. Protection Group Snapshots 33
Eradication Delays 35
Eradication Delay Settings after an Upgrade 36
Extending or Decreasing an Eradication Pending Period 36
SafeMode 37
Automatic Protection Group Assignment for Volumes 38
SafeMode Status 39
Always-On Quality of Service 39
Hosts 39
Host Guidelines 41
Host Groups 41
Host Group Guidelines 42
Host-Volume Connections 42
Private Connections 43
Shared Connections 43
Breaking Private and Shared Connections 43
Logical Unit Number (LUN) Guidelines 44
Connection Guidelines 44
Connections 44
Protection Groups and Protection Group Snapshots 45
Protection Groups 45
Space Consumption Considerations 48
Protection Group Snapshots 48
File Storage 49
File Systems 50
Managed Directories 50
Exports 51
Auto Managed Policies 52
NFS Datastore 52
Local Users 52
NFSv3 and File Locking 53
NFS User Mapping 54
Directory Quotas 54
Snapshots 56
Previous Versions 56
Protection Plan 57
Hard Links and Symbolic Links (Symlinks) 59
Object Names 59
File and Directory Names 59
Virtual Interfaces 60
Authentication and Authorization 60
ACL and Mode_t Interoperability 60
Users and Security 61
Directory Service 61
Multi-factor Authentication 62
Multi-factor Authentication through SAML2 Single Sign-on 62
Multi-factor Authentication with RSA SecurID® Authentication 62
SSL Certificate 63
Industry Standards 63
Troubleshooting and Logging 64
Alerts 64
Audit Trail 65
User Session Logs 65
SNMP Agent and SNMP Managers 66
Remote Assist Facility 66
Event Logging Facility 67
Syslog Logging Facility 67
Chapter 3: Conventions 68
Object Names 68
Volume Sizes 69
IP Addresses 69
Storage Network Addresses 70
Chapter 4: GUI Overview 72
GUI Navigation 73
End User Agreement (EULA) 77
GUI Login 78
Logging in to the Purity//FA GUI 79
Logging in with Password Authentication 79
Logging in with SAML2 SSO Authentication 80
Chapter 1:
About this Guide
The Pure Storage® FlashArray User Guide is written for array administrators who view and manage the Pure Storage FlashArray storage system.
FlashArrays are administered through the Purity for FlashArray (Purity//FA) graphical user interface (GUI) or command line interface (CLI). Users should be familiar with system, storage, and networking concepts, and have a working knowledge of Windows or UNIX.
What's New?
The Purity//FA 6.4.x release line introduces new features and enhancements to increase functionality. The following have been implemented in 6.4.x releases.
6.4.10:
l Adds SafeMode™ Default Protection to the vVol SPBM interface. vVol Storage Policy Based Management (SPBM) introduces a new capability, "Default Protection", that allows users to influence the placement of volumes in default protection groups upon creation. To learn more about virtual volumes, including configuration steps, refer to the Pure Storage vSphere Web Client Plugin for vSphere User Guide on the Knowledge Base site at https://support.purestorage.com.
l NFS v4.1 for file services. FlashArray file services now supports version 4.1 of the NFS protocol, introducing new features and enhancements for interoperability and ease of use over NFS version 3, which many FlashArray users already use.
l Capacity metrics for subscription storage. For Evergreen//One customers, capacity metrics are now based on effective used capacity, a metric closer to host-written capacity, in line with the storage consumption billing model. With Evergreen//One™ subscription-based storage, “Evergreen//One” appears in the top left corner of the Purity//FA GUI and in the purearray list --service CLI command output. This release introduces the concept of array service type to distinguish subscription storage from purchased arrays. The purearray list --service CLI command returns FlashArray on a purchased array and Evergreen//One on subscription storage.
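For example, checking the service type from the CLI might look like the following. This transcript is illustrative only; the exact output columns may vary by release, so verify against the Purity//FA CLI Reference Guide:

```shell
# List the array service type (output formatting is an assumption)
pureuser@array> purearray list --service
Name     Service
array01  Evergreen//One
```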
l New SafeMode eradication pending period. Introduces a separate eradication pending period, with a default of 8 days, for array objects protected by SafeMode, in addition to the eradication pending period with a default of one day for other array objects. See "Eradication Delays" on page 35.
6.4.9:
l Introduces new CLI options to prevent scheduled protection group snapshot creation from overloading array performance. Adds support for throttling of snapshots to lessen the impact on array performance. Use the --allow-throttle and --dry-run options of the purepgroup snap and purevol snap commands.
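A sketch of how these options might be used together. The option names come from this section; the protection group name and suffix are illustrative, and the exact syntax should be confirmed in the Purity//FA CLI Reference Guide:

```shell
# Preview whether the snapshot would be throttled, without creating it
pureuser@array> purepgroup snap --dry-run pgroup01

# Take the snapshot, allowing Purity//FA to throttle it under load
pureuser@array> purepgroup snap --allow-throttle --suffix nightly pgroup01
```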
6.4.0:
l Automatic SafeMode protection for volumes on new arrays. When Purity//FA 6.4.0 is installed on a new array, by default all newly created and copied volumes automatically become members of a ratcheted protection group. Purity//FA automatically creates a protection group at the root of the array and in each new pod. Default protection groups are managed through the GUI (Protection > Array > Default Protection pane). For more information, see "Default Protection for Volumes" on page 195.
l ActiveWorkload. Multiple Synchronous Connections supports up to five synchronous connections between FlashArrays to support a hub-and-spoke topology for stretched pods. For more information, see "SafeMode" in "Pods" on page 133.
l Security Patches Mechanism. FlashArray users are now able to install critical security patches on their arrays without assistance from Pure Storage Technical Services. Pure1 users can install critical security patches through the Pure1 Edge Service. For more information, see puresw in the Purity//FA CLI Reference Guide.
Related Documentation
Refer to the following related guides to learn more about the FlashArray:
l Purity//FA CLI Reference Guide. The Purity//FA command line interface (CLI) is a non-graphical, command-driven interface used to query and administer the FlashArray. The Purity//FA CLI is comprised of built-in commands specific to the Purity//FA operating environment. Refer to the Purity//FA CLI Reference Guide for a description of the CLI and a detailed description of each command.
l Pure Storage REST API Guide. The Pure Storage REpresentational State Transfer (REST) API uses HTTP requests to interact with the FlashArray resources. The Pure Storage REST API Guide provides an overview of the REST API and a list of all available resources.
l Pure Storage SMI-S Provider Guide. Purity//FA includes the Pure Storage Storage Management Initiative Specification (SMI-S) provider, which allows FlashArray administrators to manage the array using an SMI-S client over HTTPS. The Pure Storage SMI-S Provider Guide describes the functionality the provider supports and provides information on connecting to the provider.
l Third-party plugin guides. Pure Storage packages and plug-ins extend the functionality of the FlashArray. Available packages and plug-ins include, but are not limited to, VSS Hardware Provider, FlashArray OpenStack Cinder Volume Driver,
FlashArray Storage Replication Adapter, Management Plugin for vSphere, and vRealize Operations Management Pack.
l Pure1 Manage User Guide. Pure1 Manage is an integrated cloud-based, mobile-friendly platform that lets you monitor and manage your Pure Storage arrays from anywhere with just a web browser. Pure1 Manage provides full-stack monitoring with visual summaries of array conditions, predictive analysis, capacity and workload planning, and support cases. Pure1 Manage includes the Pure1 Digital Marketplace, where you can directly purchase, manage, and renew Pure products and services.
All related guides are available on the Knowledge site at https://support.purestorage.com.
Contact Us
Pure Storage is always eager to hear from you.
Documentation Feedback
We welcome your feedback about Pure Storage documentation and encourage you to send
your questions and comments to <[email protected]>.
Product Support
If you are a registered Pure Storage user, log in to the Pure Storage Technical Services website
at https://support.purestorage.com to browse our knowledge base, view the status of your open
support cases, and view the details of past support cases.
You can also contact Pure Storage Technical Services at <[email protected]>.
General Feedback
For all other questions and comments about Pure Storage, including products, sales, service,
and just about anything that interests you about data storage, email <[email protected]>.
Chapter 2:
FlashArray Concepts and
Features
This chapter provides a brief introduction to the FlashArray hardware, networking, and storage
components and describes where they are managed in Purity//FA.
Purity//FA is the operating environment that manages the FlashArray. Purity//FA, which comes
bundled with the FlashArray, can be administered through a graphical user interface (Purity//FA
GUI) or command line interface (Purity//FA CLI).
The FlashArray can also be managed through the Pure Storage® REpresentational State Transfer (REST) API, which uses HTTP requests to interact with resources within Pure Storage. For more information about the Pure Storage REST API, refer to the Pure Storage REST API Reference Guide on the Knowledge site at https://support.purestorage.com.
Arrays
A FlashArray controller contains the processor and memory complex that runs the Purity//FA
software, buffers incoming data, and interfaces to storage shelves, other controllers, and hosts.
FlashArray controllers are stateless, meaning that all metadata related to the data stored in a FlashArray is contained in storage-shelf storage. Therefore, it is possible to replace the controller of an array at any time with no data loss.
The following are some array-specific tasks that can be performed through the Purity//FA GUI:
l Display array health through the Health > Hardware page.
l Monitor capacity, storage consumption, performance (latency, IOPS, bandwidth) metrics, and replication through the Analysis page.
l Change the array name and other configuration settings through the Settings > System page.
The same tasks can also be performed through the CLI purearray command.
The array service type is also shown through the purearray list --service CLI command.
Connected Arrays
A connection must be established between two arrays in order for data transfer to occur.
For example, two arrays must be connected in order to perform asynchronous replications.
When two arrays are connected to replicate data from one array to another, the array where data
is being transferred from is called the source array, and the array where data is being transferred
to is called the target array.
As another example, two arrays must be connected to perform ActiveCluster replication or ActiveDR replication.
Arrays are connected using a connection key, which is supplied from one array and entered into
the other array.
For asynchronous replication, once two arrays are connected, optionally configure network
bandwidth throttling to set maximum threshold values for outbound traffic.
Connected arrays are managed through the GUI (Storage > Array) and CLI (purearray connect command).
Network bandwidth throttling is configured through the GUI (Storage > Array) and CLI (purearray throttle command).
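The connection workflow described above might look like the following from the CLI. This is a sketch: the command names come from this section, but the option names, addresses, and key handling are illustrative assumptions to be verified in the Purity//FA CLI Reference Guide:

```shell
# On the source array: connect to the target using its management
# address and the connection key supplied by the target array
pureuser@source> purearray connect --management-address target.example.com \
    --connection-key <key-from-target>

# Optionally cap outbound replication traffic to the connected array
pureuser@source> purearray throttle --default-limit 100M target-array
```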
Hardware Components
Purity//FA displays the operational status of most FlashArray hardware components. The display is primarily useful for diagnosing hardware-related problems.
Status information for each component includes the functioning status, index numbers, speed at
which a component is operating, and reported temperature.
In addition to general hardware component operational status, Purity//FA also displays status
information for each flash module and NVRAM module on the array. Status information includes
module status, physical storage capacity, module health, and time at which a module became
non-responsive.
FlashArray hardware names are fixed. When they are powered on, FlashArray controllers and storage shelves automatically discover each other and self-configure to optimize I/O performance, data integrity, availability, and fault recoverability, all without administrator intervention.
Purity//FA visually identifies certain hardware components through LED lights and numbers. Controllers, flash module bays, NVRAM bays, and storage shelves contain LED lights that can be turned on and off with Purity//FA. Furthermore, storage shelves display LED numbers that uniquely identify shelves in multi-shelf arrays.
Hardware components are displayed and administered through the GUI (Health > Hardware)
and CLI (purehw command).
Flash modules and NVRAM modules are displayed through the GUI (Health > Hardware) and
CLI (puredrive command).
Each hardware component in a FlashArray has a unique name that identifies its location in the
array for service purposes.
The hardware component names are used throughout Purity//FA, for instance in the GUI Health
> Hardware page, and with CLI commands such as puredrive and purehw.
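As a sketch, listing hardware component and flash-module status from the CLI might look like this. The command names come from this section; the output columns and values are illustrative assumptions:

```shell
# List hardware components and their operational status
pureuser@array> purehw list
Name      Status  Identify  Slot  Index
CT0.ETH0  ok      off       -     0
SH0.BAY0  ok      off       0     0

# List flash module and NVRAM module details
pureuser@array> puredrive list
Name      Type  Status   Capacity
SH0.BAY0  SSD   healthy  1.10 T
```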
Network Interface
View and configure network interface, subnet, and DNS attributes through Purity//FA.
The Purity//FA network interfaces manage the bond, Ethernet, virtual, and VLAN interfaces used
to connect the array to an administrative network. See Figure 2-2.
Figure 2-2. Settings > Network
Each FlashArray controller is equipped with two Ethernet interfaces that connect to a data center network for array administration.
A bond interface combines two or more similar Ethernet interfaces to form a single virtual "bonded" interface with optional child devices. A bond interface provides higher data transfer rates, load balancing, and link redundancy. A default bond interface, named replbond, is created when Purity//FA starts for the first time.
Array administrators cannot create or delete bond interfaces. To create or delete a bond interface, contact Pure Storage Technical Services.
Apply a service to an interface to specify the type of network traffic the device serves. Each interface must have at least one service applied. Supported services include ds, file, iscsi, management, nvme-roce, nvme-tcp, and replication. For example, apply the replication service to the replbond bond interface to channel all replication traffic through that device.
View the network connection attributes, including interface, netmask, and gateway IP addresses, maximum transmission units (MTUs), and the network services attached to each network interface.
Enable or disable an interface through Purity//FA at any time. Disabling an interface while an administrative session is being conducted causes the session to lose its SSH connection and its ability to connect to the controller.
Configure the network connection attributes, including the interface, netmask, and gateway IP addresses, and the MTU. Ethernet and bond interface IP addresses are set explicitly, along with the corresponding netmasks. DHCP mode is not supported.
Manage the domain name system (DNS) domains that are configured for the array. Each DNS
domain can include up to three static DNS server IP addresses. DHCP mode is not supported.
Network interfaces and DNS settings are configured through the GUI (Settings > Network) and
CLI (purenetwork command for network interfaces, and puredns for DNS settings).
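For instance, viewing and configuring an interface from the CLI might look like the following. The command names come from this section; the attribute names and address values are illustrative assumptions, so confirm the syntax in the Purity//FA CLI Reference Guide:

```shell
# List network interfaces, their services, and connection attributes
pureuser@array> purenetwork list

# Assign an address to the replbond interface and enable it
# (illustrative values and option names)
pureuser@array> purenetwork setattr --address 192.0.2.10 \
    --netmask 255.255.255.0 --gateway 192.0.2.1 replbond
pureuser@array> purenetwork enable replbond

# View the configured DNS settings
pureuser@array> puredns list
```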
Note: Editing the following attributes is not supported on Cloud Block Store:
l Network interfaces, including bond, Ethernet, and VLAN interfaces
l Subnet netmasks
l DNS settings
Block Storage
Volumes
FlashArrays eliminate drive-oriented concepts such as RAID groups and spare drives that are
common with disk arrays. Purity//FA treats the entire storage capacity of all flash modules in an
array as a single homogeneous pool from which it allocates storage only when hosts write data
to volumes created by administrators. Therefore, creating a FlashArray volume only requires a
volume name, to be used in administrative operations and displays, and a provisioned size.
FlashArray volumes are virtual, so creating, renaming, resizing, and destroying a volume has no
meaning outside the array.
Create a single volume or multiple volumes at one time. Purity//FA administrative operations rely
on volume names, so they must be unique within an array.
Creating a volume creates persistent data structures in the array, but does not allocate any physical storage. Purity//FA allocates physical storage only when hosts write data. Volume creation is therefore nearly instantaneous. Volumes do not consume physical storage until data is actually written to them, so volume creation has no immediate effect on an array's physical storage consumption.
Rename a volume to change the name by which Purity//FA identifies the volume in administrative operations and displays. The new volume name is effective immediately and the old name is no longer recognized in CLI, GUI, or REST interactions.
Resize an existing volume to change the virtual capacity of the volume as perceived by the hosts. The volume size changes are immediately visible to connected hosts. If you decrease (truncate) the volume size, Purity//FA automatically takes an undo snapshot of the volume. The undo snapshot enters an eradication pending period, after which the snapshot is eradicated. During the eradication pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder. Increasing the size of a truncated volume does not restore any data that was lost when the volume was first truncated.
Eradication pending periods are configured in the Settings > System > Eradication Configuration pane. See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.
Copy a volume to create a new volume or overwrite an existing one. After you copy a volume,
the source of the new or overwritten volume is set to the name of the originating volume.
Destroy a volume if it is no longer needed. When you destroy a volume, Purity//FA automatically takes an undo snapshot of the volume. The undo snapshot enters an eradication pending period. During the eradication pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder. Eradicating a volume completely obliterates the data within the volume, allowing Purity//FA to reclaim the storage space occupied by the data. After the eradication pending period, the undo snapshot is completely eradicated and can no longer be recovered.
Limits and priority adjustments can be set on volumes to reflect the relative importance of their
workloads. The bandwidth limit enforces the maximum allowable throughput and the IOPS limit
enforces the maximum I/O operations processed per second. A priority adjustment increases or
decreases the performance priority of a volume relative to other volumes, when supported by
the FlashArray hardware.
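The limits described above can be sketched from the CLI as follows. The purevol command appears elsewhere in this guide, but the option names and values here are illustrative assumptions to be verified against the Purity//FA CLI Reference Guide:

```shell
# Cap a volume at 100 MB/s of throughput and 10,000 IOPS
# (illustrative option names and values)
pureuser@array> purevol setattr --bandwidth-limit 100M --iops-limit 10000 vol01
```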
Volume Groups
Volume groups organize FlashArray volumes into logical groupings. An action such as connecting to a host, applying a policy, configuring bandwidth or IOPS limits, or setting priority adjustments, when taken on the volume group, acts on all volumes within the group.
Volume group tasks are performed through the GUI (Storage > Volumes) or CLI (purevgroup command).
A volume can belong to only one volume group.
Volume Snapshots
Volume snapshots are immutable, point-in-time images of the contents of one or more volumes.
A protection group volume snapshot is a volume snapshot that is created from a group of volumes that are part of the same protection group. All of the volume snapshots created from a protection group snapshot are point-in-time consistent with each other.
Protection group snapshots can be manually generated on demand or enabled to automatically generate at scheduled intervals. After a protection group snapshot has been taken, it is either stored on the local array or replicated over to a remote (target) array.
Protection group volume snapshot tasks performed through the Storage > Volumes
page of the GUI or purevol command of the CLI are limited to copying snapshots. All
other protection group snapshot tasks are performed through the Storage > Protection
Groups page of the GUI or purepgroup command of the CLI.
For more information about protection groups and protection group snapshots, refer to
the Protection Groups and Protection Group Snapshots section. See Figure 2-3.
All volume snapshots are visible through the Storage > Volumes page.
Figure 2-3. Storage - Details Pane - Volumes - Snapshots
Create a volume snapshot to generate a point-in-time image of the contents of the specified volume(s). Volume snapshot names append a unique number assigned by Purity//FA to the name of the snapped volume. For example, vol01.4166. Optionally specify a suffix to replace the unique number.
The volume snapshot naming convention is VOL.NNN, where VOL is the name of the snapped volume and NNN is the unique number (or the optional user-specified suffix).
Eradication Delays
The eradication delays protect against the accidental deletion of data in a destroyed object.
When an object is destroyed, it enters an eradication pending period of between 1 and 30 days, after which the object is automatically eradicated. This applies to all individual data and configuration objects. An object in the eradication pending period can be manually eradicated prior to the end of the eradication pending period (unless the SafeMode manual eradication prevention feature is enabled).
Purity supports two types of eradication delays, one for SafeMode-protected objects and one for
other objects:
l Disabled delay: The eradication delay for SafeMode-protected objects on the array. Sets the length of the eradication pending period for SafeMode-protected objects. Only takes effect when SafeMode is enabled. Known as the "disabled" eradication delay because manual eradication is disabled on those objects. The default is 8 days; 14 days is recommended.
l Enabled delay: The eradication delay for objects for which eradication is enabled, that is, objects not protected by SafeMode. Sets the length of the eradication pending period for array objects not protected by SafeMode. The default is 1 day.
The eradication delays support both ActiveCluster and ActiveDR. In ActiveCluster, the destroy
time is stored within an object and is consistent for objects that are replicated across two arrays.
The eradication delays are displayed and configured through the GUI (Settings > System > Eradication Delay). In addition, the user may contact Pure Storage Technical Services to configure eradication delays.
SafeMode
SafeMode for Purity//FA is a family of features that adds additional security to provide ransomware protection for storage objects through the following means:
l Manual eradication prevention. Disables the ability to manually eradicate destroyed
objects. Only the expiration of a destroyed object’s eradication pending period can
cause eradication.
l Snapshot and replication protection. Prevents snapshot and replication schedules
from being disabled and retention period from being reduced.
l Volume protection. Ensures that volume data is protected by protection group snapshots, providing per-protection-group ransomware protection.
l Automatic volume protection for new arrays. Provides automatic protection group membership for newly created or copied volumes. A default protection group is automatically created for each pod and also for volumes that are not in a pod. Default protection is configured in the Protection > Array > Default Protection pane. See "Automatic Protection Group Assignment for Volumes" on the next page.
For best protection, Pure recommends enabling the retention lock feature in addition to extending the eradication pending period to seven days or more, which is configured through the GUI (Settings > System > Eradication Configuration) and CLI (purearray eradication-config setattr command).
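As a sketch, the recommended configuration above might be applied as follows. The command name comes from this section; the option name and duration format are illustrative assumptions:

```shell
# Extend the SafeMode ("disabled") eradication pending period to 14 days
# (option name and duration format are illustrative)
pureuser@array> purearray eradication-config setattr --disabled-delay 14d

# View the current eradication delay settings
pureuser@array> purearray eradication-config list
```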
To allow granular and flexible control of the SafeMode feature, FlashArray supports retention lock per protection group. The protection group retention lock is configured through the GUI (Protection > Protection Groups) and CLI (purepgroup retention-lock ratchet command).
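Ratcheting a protection group's retention lock might look like this. The command comes from this section; the protection group name is illustrative:

```shell
# Ratchet the retention lock on a protection group
# (once ratcheted, the restrictions listed below apply)
pureuser@array> purepgroup retention-lock ratchet pgroup01
```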
For protection groups, the retention lock is unlocked by default. By ratcheting the retention lock, all of the following are disallowed for a non-empty protection group:
l Destroying the protection group
l Manual eradication of the protection group and its container
l Member and target removal
l Decreasing the eradication delay
l Disabling snapshot or replication schedule
l Decreasing snapshot or replication retention or frequency
Note: Volumes protected through the SafeMode global volume protection feature are represented by an asterisk in the Protection > Members panel and by the purepgroup list CLI command, and are not listed by name.
For example, if the array default protection groups list contains pgroup-auto, pgroup-B, the pod default protection groups list is created with <pod-name>::pgroup-auto, <pod-name>::pgroup-B.
Purity//FA automatically creates each protection group in the pod default protection groups list. When a volume is created in or copied into a pod, the new volume is given membership in all protection groups contained in the pod default protection group list.
SafeMode Status
The protection group SafeMode pane indicates whether retention lock is enabled for the protection group. Retention Lock displays one of the following values:
l Ratcheted - The protection group is ratcheted; if the protection group is not empty, manual eradication is disabled and retention reduction is disallowed.
l Unlocked - The protection group is not ratcheted.
Similarly, the SafeMode status appears in the lower section of the left navigation pane and displays one of the following values:
l Enabled - Either global SafeMode is enabled, or at least one non-empty protection group is ratcheted.
l Disabled - Global SafeMode is not enabled and no non-empty protection groups are ratcheted.
Hosts
The host organizes the storage network addresses - the iSCSI Qualified Names (IQNs), NVMe
Qualified Names (NQNs), and Fibre Channel World Wide Names (WWNs) - that identify the host
computer initiators. The host communicates with the array through the Ethernet or Fibre Channel ports. The array accepts and responds to commands received on any of its ports from any of
the IQNs, NQNs, and WWNs associated with a host.
Note: Cloud Block Store accepts and responds only to the iSCSI Qualified Names
(IQNs); the NVMe Qualified Names (NQNs) and Fibre Channel World Wide Names
(WWNs) are not supported.
Purity//FA hosts are virtual, so creating, renaming, and deleting a host has no meaning outside
the array.
Create hosts to access volumes on the array. A Purity//FA host consists of a host name and
one or more IQNs, NQNs, or WWNs. Host names must be unique within an array.
Associate one or more IQNs, NQNs, or WWNs with the host after it has been created. The host
cannot communicate with the array until at least one IQN, NQN, or WWN has been associated
with it.
iSCSI Qualified Names (IQNs) follow the naming standards set by the Internet Engineering Task
Force (see RFC 3720). For example, iqn.2016-01.com.example:flasharray.491b30d0efd97f25.
NVMe Qualified Names (NQNs) follow the naming standards set by NVM Express. For example,
nqn.2016-01.com.example:flasharray.491b30d0efd97f25.
Fibre Channel World Wide Names (WWNs) follow the naming standards set by the IEEE Standards Association. WWNs consist of eight pairs of case-insensitive hexadecimal numbers,
optionally separated by colons. For example, 21:00:00:24:FF:4C:C5:49.
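The WWN format described above (eight pairs of case-insensitive hexadecimal digits, optionally colon-separated) can be checked and normalized with a small helper. This is an illustrative sketch only, not a Purity//FA API.

```python
import re

def normalize_wwn(wwn: str) -> str:
    """Validate a WWN and return it in uppercase, colon-separated form.
    Accepts eight pairs of hex digits with or without colons; raises
    ValueError for anything else."""
    raw = wwn.replace(":", "")
    if not re.fullmatch(r"[0-9A-Fa-f]{16}", raw):
        raise ValueError(f"not a valid WWN: {wwn!r}")
    raw = raw.upper()
    # Re-insert colons between each pair of hex digits.
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))
```

For example, both "21:00:00:24:FF:4C:C5:49" and "21000024ff4cc549" normalize to the same colon-separated form.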
Like hosts, IQNs, NQNs, and WWNs must be unique in an array. A host can be associated with
multiple storage network addresses, but a storage network address can only be associated with
one host.
Host IQNs, NQNs, and WWNs can be added or removed at any time.
Rename a host to change the name by which Purity//FA identifies the host in administrative oper-
ations and displays. Host names are used solely for FlashArray administration and have no sig-
nificance outside the array, so renaming a host does not change its relationship with host groups
and volumes. The new host name is effective immediately and the old name is no longer recog-
nized in CLI, GUI, or REST interactions.
Optionally, configure the Challenge-Handshake Authentication Protocol (CHAP) to verify the
identity of the iSCSI initiators and targets to each other when they establish a connection. By
default, the CHAP credentials are not set.
To ensure the array works optimally with the host, set the host personality to the name of the
host operating or virtual memory system. The host personality setting determines how the
Purity//FA system tunes the protocol used between the array and the initiator. For example, if
the host is running the HP-UX operating system, set the host personality to HP-UX. By default,
the host personality is not set. If your system is not listed as one of the valid host personalities,
do not set it.
Delete a host if it is no longer required. Purity//FA will not delete a host while it has connections
to volumes, either private or shared. You cannot recover a host after it has been deleted.
Host Guidelines
Purity//FA will not create a host if:
l The specified name is already associated with another host in the array.
l Any of the specified IQNs, NQNs, or WWNs are already associated with an existing
host in the array.
l The creation of the host would exceed the limit of concurrent hosts, or the creation of
the IQN, NQN, or WWN would exceed the limit of concurrent initiators.
Purity//FA will not delete a host if:
l The host has private connections to one or more volumes.
Purity//FA will not associate an IQN, NQN, or WWN with a host if:
l The creation of the IQN, NQN, or WWN would exceed the maximum number of con-
current initiators.
l The specified IQN, NQN, or WWN is already associated with another host on the
array.
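The host-creation guidelines above amount to a set of uniqueness and limit checks. The following is a minimal sketch of those rules; the names MAX_HOSTS and MAX_INITIATORS are illustrative placeholders, not real Purity//FA limits.

```python
# Hypothetical limits for illustration only; actual array limits differ.
MAX_HOSTS = 4
MAX_INITIATORS = 8

def can_create_host(name, addresses, hosts):
    """hosts maps each existing host name to the set of IQNs, NQNs, and
    WWNs associated with it. Returns True only if creating the new host
    would violate none of the guidelines above."""
    if name in hosts:
        return False  # name already associated with another host
    taken = {a for addrs in hosts.values() for a in addrs}
    if taken & set(addresses):
        return False  # an address is already associated with a host
    if len(hosts) + 1 > MAX_HOSTS:
        return False  # would exceed the concurrent host limit
    if len(taken) + len(addresses) > MAX_INITIATORS:
        return False  # would exceed the concurrent initiator limit
    return True
```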
Hosts are configured through the GUI (Storage > Hosts) and CLI (purehost command).
Host Groups
A host group represents a collection of hosts with common connectivity to volumes.
Purity//FA host groups are virtual, so creating, renaming, and deleting a host group has no mean-
ing outside the array.
Create a host group if several hosts share access to the same volume(s). Host group names
must be unique within an array.
After you create a host group, add hosts to the host group and then establish connections
between the volumes and the host group.
When a volume is connected to a host group, it is assigned a logical unit number (LUN), which
all hosts in the group use to communicate with the volume. If a LUN is not manually specified
when the connection is first established, Purity//FA automatically assigns a LUN to the con-
nection.
Once a connection has been established between a host group and a volume, all of the hosts
within the host group are able to access the volume through the connection. These connections
are called shared connections because the connection is shared between all of the hosts within
the host group.
Rename a host group to change the name by which Purity//FA identifies the host group in admin-
istrative operations and displays. Renaming a host group does not change its relationship with
hosts and volumes. The new host group name is effective immediately and the old name is no
longer recognized in CLI, GUI, or REST interactions.
Delete a host group if it is no longer required. You cannot recover a host group after it has been
deleted.
Host-Volume Connections
For a host to read and write data on a FlashArray volume, the two must be connected. Purity//FA
only responds to I/O commands from hosts to which the volume addressed by the command is connected.
Private Connections
Connecting a volume to a host establishes a private connection between the volume and the
host. You can connect multiple volumes to a host. Likewise, a volume can be connected to mul-
tiple hosts.
Disconnecting a volume from a host, or vice versa, breaks the private connection between the
volume and host. Other shared and private connections are unaffected.
Shared Connections
Connecting a volume to a host group establishes a shared connection between the volumes and
all of the hosts within that host group. You can connect multiple volumes to a host group. Like-
wise, a volume can be connected to multiple host groups.
Disconnecting a volume-host group connection breaks the shared connection between the
volume and all of the hosts within the host group. Other shared and private connections are unaf-
fected.
Connection Guidelines
Purity//FA will not establish a (private) connection between a volume and a host if:
l An unavailable LUN was specified.
l The volume is already connected to the host, either through a private or shared connection.
Purity//FA will not establish a (shared) connection between a volume and a host group if:
l An unavailable LUN was specified.
l The volume is already connected to the host group.
l The volume is already connected to a host associated with the host group.
Host-volume connections are performed through the GUI (Storage > Hosts and Storage >
Volumes) and CLI (purehgroup connect, purehost connect and purevol connect commands).
Connections
The Connections page displays connectivity details between the Purity//FA hosts and the array
ports.
The Host Connections pane displays a list of hosts, the connectivity status of each host, and the
number of initiator ports associated with each host. Connectivity statuses range from "None",
where the host does not have any paths to any target ports, to "Redundant", where the host has
the same number of paths from every initiator to every target port on both controllers.
The Target Ports pane displays the connection mappings between each array port and initiator
port. Each array port includes the following connectivity details: associated iSCSI Qualified
Name (IQN), NVMe Qualified Name (NQN), or Fibre Channel World Wide Name (WWN)
address, failover status, and communication speed. A check mark in the Failover column indic-
ates that the port has failed over to the corresponding port pair on the primary controller.
Host connections and target ports are displayed through the GUI (select Health > Connections)
and CLI (pureport list, purehost list --all, and purevol list --all commands).
Protection Groups
A protection group defines a set of volumes, hosts, or host groups (called members) that are pro-
tected together through snapshots with point-in-time consistency across the member volumes.
The members within the protection group have common data protection requirements and the
same snapshot, replication, and retention schedules.
Each protection group includes the following components:
l Source array. An array from which Purity//FA generates a point-in-time snapshot of
its protection group volumes. Depending on the protection group schedule settings,
the snapshot data is either retained on the source array or replicated over to and stored on the target array.
Create a protection group to add members (volumes, hosts, or host groups) that have common
data protection requirements. Pure Storage protection groups are virtual, so creating, renaming,
and destroying a protection group has no meaning outside the array. Protection group names
must be unique within an array.
Copy a protection group to restore the state of the volumes within a protection group to a pre-
vious protection group snapshot. The restored volumes are added as real volumes to a new or
existing protection group. Note that restoring volumes from a protection group snapshot does
not automatically expose the restored volumes to hosts and host groups.
Rename a protection group to change the name by which Purity//FA identifies the protection
group in administrative operations and displays. When you rename a protection group, the
name change is effective immediately and the old name is no longer recognized by Purity//FA.
Destroy a protection group if it is no longer needed.
Destroying a protection group implicitly destroys all of its snapshots. Once a protection group
has been destroyed, all snapshot and replication processes for the protection group stop and
the destroyed protection group begins its eradication pending period of 1 to 30 days.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the protection group snapshots.
During the eradication pending period, you can recover the protection group to bring the group
and its content back to its original state, or manually eradicate the destroyed protection group to
reclaim physical storage space occupied by the destroyed protection group snapshots.
Once reclamation starts, either because you have manually eradicated the destroyed protection
group, or because the eradication pending period has elapsed, the destroyed protection group
and its snapshot data can no longer be recovered.
The Time Remaining column displays the eradication pending period in hh:mm format, which
begins at the number of days in its eradication pending period and counts down to 00:00. When
the eradication pending period reaches 00:00, Purity//FA starts the reclamation process. The
Time Remaining number remains at 00:00 until the protection group or snapshot is completely
eradicated.
File Storage
File services are supported on FlashArray//C and FlashArray//X.
File services are administered through the Purity for FlashArray (Purity//FA) graphical user inter-
face (GUI) or command line interface (CLI). Users should be familiar with file system, storage,
and networking concepts, and have a working knowledge of Windows or UNIX.
Before you begin, contact Pure Storage Technical Services to have file services activated on the
FlashArray.
Note: The FlashArray//X50R2 model does not support both block storage and file storage
at the same time.
File Systems
A FlashArray can contain up to 50 separate file systems, each with a number of directories
which can be exported via supported protocols. Clients, using Active Directory or LDAP, can con-
nect and access these exports using SMB or NFS:
l SMB version 1.0 / 2.0 / 2.1 / 3.0 / 3.02 / 3.11
l NFS version 3 / 4.1
Note: SMB version 1.0 is deprecated and disabled by default due to security reasons.
Because it lacks encryption and protection, the best practice is to avoid the use of this ver-
sion. For more information, contact Pure Storage Technical Services.
During ActiveDR replication, a FlashArray can contain up to 50 separate file systems. Arrays
can replicate their file systems to a target array; however, if the target array already contains
50 file systems, no additional file systems can be created until the number of file
systems on the target array is reduced to fewer than 50.
A managed directory is a directory that allows attaching exports, quotas, and snapshot policies.
In addition, for these directories, metrics and space information are available. Managed dir-
ectories are created by an administrator and are limited to the top eight levels of directories,
counting the root directory as the first level.
Since managed directories and exports should be placed in useful places and clients only see
their own part of the file system, there is rarely a need for a massive number of separate file sys-
tems. Most of the time, one or a few file systems are sufficient.
File systems, directories, and files are dynamically allocated and do not require you to allocate
or partition any of the storage prior to use. Storage space is allocated when used and given back
to the combined pool of block and file storage when content is eradicated.
Creation, destruction, and eradication of a file system is a management-only operation through
the Storage > File Systems page. Alternatively, refer to the purefs command in the Purity//FA
CLI Reference Guide.
Managed Directories
Not every directory in a file system matters to an administrator. Define the ones that matter by
using managed directories. Only these directories can have policies attached. They also provide
space reporting and metrics.
When a new file system is created, the root directory is automatically created. This is a managed
directory named “root”. This directory can only be destroyed together with the entire file system.
All directories created through management (GUI or CLI) are managed. Directories created by
protocol clients are not managed, except when using the auto managed directory feature.
Managed directories can be created, up to eight levels deep, only as children of other managed
directories. Since the root directory is a managed directory, all directories in a file system will
have at least one managed directory as an ancestor that can provide access points, protection,
space reporting, metrics, and quota notifications and enforcement. Export, quota and snapshot
policies can be added to any managed directory.
A managed directory can only be deleted through management. To avoid accidental eradication
of content, the managed directory can only be deleted when no content exists and all shares are
either removed or disabled.
Client directories can be moved within the scope (tree) of a parent managed directory. However,
directories cannot be moved out of the scope of a managed directory or into the scope of
another managed directory.
Managed directories are managed through the Storage > File Systems page of the GUI, or the
CLI puredir command.
Exports
Exports (that is, shares) are entry points for clients to connect to the file system, using local users
for file, Active Directory, or LDAP for authentication and authorization. With NFS User Mapping
Disabled, exports can be accessed without directory services. Clients connect by using the file
service IP address or URL and the export name. For each protocol, export names must be
unique for the entire FlashArray. Exports and files can be made accessible for clients that use
SMB and, at the same time, clients that use NFS, using the same export name. When granted
access, clients only see the part of the file system that the export exposes, meaning the target
directory and its subdirectories. With Access Based Enumeration (ABE) enabled, SMB policies
allow directories and files to be hidden from clients that do not have sufficient permissions.
Exports are created by using SMB or NFS policies with rules, adding each policy to one or more
managed directories. A policy can be created, modified, temporarily disabled, and when no
longer needed, permanently removed.
Export policies can be reused to create many exports. Modifying a policy by adding or removing
rules, affects the exports for all directories where the policy is used.
SMB and NFS policies are managed through the Storage > Policies page of the GUI, or the CLI
purepolicy smb and purepolicy nfs commands. Exports are managed through the Storage > File Systems page or the CLI puredir export command.
NFS Datastore
FlashArray File can serve as a VMware NFS datastore. Using vSphere 7.0 or later, NFS data-
stores can be created using NFS exports on FlashArray through the NFS protocol version 3 or
4.1.
For getting started with NFS datastores on the FlashArray, refer to the VMware NFS Datastores
on FlashArray Quick Start Guide on the Knowledge site at https://support.purestorage.com, or
refer to the purepolicy command in the Purity//FA CLI Reference Guide.
Local Users
Local Users is a file services feature that allows you to use a locally stored directory of users and
groups, internal to the FlashArray, in place of an external authentication solution such as Active
Directory (AD) or LDAP. After users and groups are created on the array, clients are allowed to
connect to the FlashArray File domain and authenticate with their respective credentials.
Local Users for file is a separate concept from FlashArray local users. The main purpose of local
users for file is to access file systems via SMB or NFS protocols, while the purpose of FlashAr-
ray local users is to manage the array.
A user is a local user account which includes a username and password. Each user is a member
of one primary group. Before creating a user, its primary group must be created if it does not
already exist. A user can also be a member of other groups, denoted as secondary groups.
A group is a local group account under which one or more users can be gathered for simplified
management of permissions. For example, accounting, development, sales, and so on. A group
can have many members. Only users can be members of a group, not other groups. Before
deleting a group, all members must be removed from the group.
External members, user accounts or groups that reside on external AD or LDAP servers, can be
added to local groups as well. The purpose of this is to authenticate external users through the
local group, similar to local users, and authenticate the user within the array, rather than the
entire domain.
There are two built-in local user accounts: Administrator and Guest, and three built-in group
accounts: Administrators, Guests and Backup Operators. These built-in users and groups can-
not be removed or modified.
Permissions are managed from the client side, for example through Windows Explorer or Com-
puter Management, by adding and removing permissions to users or groups.
Local users for file are managed through the Settings > Access > File System tab of the GUI, or
the CLI pureds local command.
The Network Lock Manager (NLM) and Network Status Monitor (NSM) services are enabled by default with the NFS protocol service.
The NLM protocol depends on the NSM protocol to solve cases where the client or server
restarts, which would otherwise leave hanging locks. In cases where files are unintentionally left
in a locked state, run the puredir lock nlm-reclamation create command to release
all NLM locks for the entire array. By doing this, client applications are notified, allowing them to
reclaim the lock.
NFS version 4.1 introduces a number of new features and enhancements for interoperability and
ease of use over version 3 of NFS. The NFS 4.1 and SMB protocols do not use the NLM protocol
for file locking. Instead, file locking is handled internally on the array.
Directory Quotas
Directory quotas allow restriction of storage space for each managed directory, including all sub-
directories below. It is an always-on feature which, once a one-time initial scan of the file system
is complete, allows for instant quota enablement upon attaching a quota policy.
There are two types of directory quota limits: unenforced (soft quota) and enforced (hard quota).
The unenforced quota will be informative to the user or administrator and can be used to better
plan for future system resource upgrades, but will not affect operations. The enforced quota will
cause all future space-increasing operations to be rejected with ENOSPC (no space left) errors
until the quota overage has been mitigated. The enforced quota
size also provides information to the client about disk space.
Directory quotas are implemented by creating and attaching quota policies to managed dir-
ectories. Each managed directory can have no more than one quota policy attached, but each
policy can include multiple rules for quota limits.
Limits are defined so that there can be zero, one, or more unenforced limits per policy, and
optionally one enforced limit. When the enforced limit is used, all unenforced limits must be 80%
of the enforced limit, or lower.
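The 80% rule above can be expressed as a simple validity check on a quota policy. This is a sketch under the assumptions stated in the text, not the actual Purity//FA validation code.

```python
def valid_quota_policy(enforced, unenforced):
    """A policy may carry zero or more unenforced (soft) limits and at
    most one enforced (hard) limit. When an enforced limit is set, every
    unenforced limit must be at most 80% of it."""
    if enforced is None:
        return True  # no enforced limit: any soft limits are allowed
    return all(soft <= 0.8 * enforced for soft in unenforced)
```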
Note that clients might be able to briefly go beyond the set limit by continuing to write for an
amount of time after the hard quota has been reached. This is by design to avoid impacting I/O
performance, as directory quota on FlashArray runs as a background process. Normally, this
allows clients to continue writing for no more than 15 seconds beyond the set
quota. Under heavy load, this may extend up to three minutes.
When applying a quota limit to a managed directory already in use, the current usage must not
exceed the new enforced limit. This is to avoid unexpected ENOSPC errors. The “ignore usage”
option can be used to override this, and the quota will then be applied.
Quota limits can also be nested so that directories with individual quota limits exist below
another directory with a quota limit. The limits then become dynamic, so that the directories
below, while having their own quota limits, may also be limited by the quota that exists above, if
this limit were to be reached first.
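With nested quotas, the limit that actually stops a write is the smallest limit anywhere on the path from the directory up to the file system root. A minimal sketch of that behavior, assuming each entry in the list is the hard limit attached at one level (or None where no quota is attached):

```python
def effective_limit(limits_along_path):
    """Return the smallest quota limit found on the directory's ancestor
    chain, or None if no quota applies. An inner directory can be stopped
    by an outer quota before its own limit is reached."""
    present = [limit for limit in limits_along_path if limit is not None]
    return min(present) if present else None
```

For example, a directory with its own 1000 GB quota nested under a 200 GB quota is effectively capped at 200 GB.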
Directory quota is unaware of data reduction and deduplication. Logical file sizes are accounted
for, which means that sparse files, or empty space within files, are also counted. Space used for
snapshots is not counted towards the quota limit.
When a quota limit threshold is exceeded, an email notification will be sent to the owner of the
managed directory, either the user, the group, or both the user and the group, according to the
corresponding quota rule settings. That is, if the notification parameter is set to “group”, the
email will be sent to the email address associated with the group, and for “user”, to the email
address associated with the user.
There are three email severity levels:
l Informational: Exceeding a soft quota threshold, or 80% of a hard quota threshold,
generates a notification with informational severity.
l Warning: An important message is generated when exceeding 90% of a hard quota
threshold.
l Critical: Urgent message when a hard quota threshold is reached.
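The three severity levels can be sketched as a threshold check. The exact comparison semantics (reaching versus exceeding a threshold) are an assumption here; this is illustrative, not the Purity//FA alerting implementation.

```python
def quota_alert_severity(used, hard_limit, soft_limits=()):
    """Map current usage to an alert severity: critical at the hard
    limit, warning at 90% of it, informational at 80% of it or at any
    soft limit, otherwise no alert (None)."""
    if used >= hard_limit:
        return "critical"
    if used >= 0.9 * hard_limit:
        return "warning"
    if used >= 0.8 * hard_limit or any(used >= s for s in soft_limits):
        return "informational"
    return None
```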
The owner of a directory can be viewed and changed from a connected client with a chown/chgrp type operation. For example, with Windows directory properties, the directory owner can
be viewed or changed in the advanced part of the security view.
For email notifications to be sent, SMTP must be correctly configured in the Alert Routing panel
on the Settings > System page. Furthermore, the groups and users, found in directory services
such as Active Directory or LDAP, or through FlashArray File Local Users, must be populated with their associated email addresses. The attribute for email is typically found in the
preferences for each user and group.
Quota policies are managed through the Storage > Policies page or the CLI purepolicy
quota command. The relationship between quota policies and managed directories can be
managed through the Storage > File Systems page or with the CLI puredir quota command.
Snapshots
Snapshots give you the ability to retrieve earlier versions of folders and files in case of unwanted
changes or deletion of content.
A snapshot is a copy of the underlying file structure with files and content that are consistent to a
single point in time. When a snapshot is accessed, it appears to be a full copy at the time the
snapshot was taken, but with read-only access. The copy includes the directory with sub-
directories and files below.
Each snapshot is located in a separate subdirectory within the .snapshot directory, which is a
hidden directory. Snapshots are immutable and cannot be altered. Thus, files or directories must
be copied out of the snapshot directory before they can be used (for example, to restore con-
tent). Alternatively, with SMB, use the Previous Versions feature to access snapshot content.
Scheduled snapshots are managed via snapshot policies. In addition, snapshots can be created
manually by an administrator. In any case, a retention period can be set which defines when the
snapshot is eradicated. For scheduled snapshots, the retention period is required so that the
number of snapshots is kept within reasonable limits.
Previous Versions
Previous Versions is an SMB feature that allows the user to access previous versions of files
and directories based on snapshots of the respective data. Using software that supports the fea-
ture, for example Windows File Explorer, the user can select a previous version and then
choose to open or restore the selected content. Restoring files or directories overwrites the exist-
ing files and cannot be undone. Before restoring a file or directory, the user can select open to
make sure that it is the correct version.
Similarly, previous versions are accessible through SMB shares (exports) by adding a UTC
timestamp to the export name when accessing the share. This is the timestamp of the chosen
snapshot. For example, if a snapshot was taken July 10th of 2020 at 10:18:28 UTC, on the root
level of the share, the following path provides access to that version: \\server\share\@GMT-
2020.07.10-10.18.28
For snapshots that are taken on a sub-directory, the directory follows after the timestamp, such
as in the following example: \\server\share\@GMT-2020.07.10-10.18.28\folder4
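The @GMT timestamp path format shown above can be constructed programmatically. This helper is a sketch for illustration; the function name is not part of any Purity//FA or SMB API.

```python
from datetime import datetime, timezone

def previous_version_path(server, share, snap_time, subdir=None):
    """Build the Previous Versions UNC path for a snapshot taken at
    snap_time (a timezone-aware datetime). The timestamp is rendered in
    UTC as @GMT-YYYY.MM.DD-HH.MM.SS, appended to the share path."""
    stamp = snap_time.astimezone(timezone.utc).strftime("@GMT-%Y.%m.%d-%H.%M.%S")
    path = rf"\\{server}\{share}" + "\\" + stamp
    if subdir:
        path += "\\" + subdir
    return path
```

For a snapshot taken July 10, 2020 at 10:18:28 UTC, this reproduces the example paths above.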
The feature is available via the SMB protocol version 2.0 or later.
Protection Plan
A protection plan can be defined by creating a snapshot policy with the addition of one or more
rules. Attach the snapshot policy to the managed directory to be safeguarded with scheduled
snapshots. When the policy is attached (and enabled by default), the scheduler creates, des-
troys, and eradicates snapshots automatically in order to fulfill the protection plan at any given
time.
The name of each snapshot consists of the client name of the rule that triggered the snapshot,
with a counter added. For example: hourly.1.
Policies can be reused and attached to other managed directories. Modifying the rules for one
policy affects the scheduling of snapshots for all directories in which the policy is used. However,
snapshots already taken are not altered by modifying rules or policies.
Note: Modifying a policy may lead to additional snapshots being taken to fulfill a partially
complete protection plan.
Thinning Rules: At any specific point in time, a snapshot policy produces no more than one
snapshot for each attached directory, even in the presence of multiple rules. For each policy, the
first snapshot to be taken is the one with the longest keep-for time and thus gives full data pro-
tection. All other snapshots are, at that point in time, postponed.
The rule with the highest frequency (that is, the shortest "every"), is the base rule that determ-
ines the scheduled time slots. The minimum value is 5 minutes.
The scheduler determines the next scheduled snapshot using the time that a snapshot was
scheduled for, not the time that it was created. This prevents the scheduler from drifting in case
of delayed snapshots due to system load.
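The drift-free scheduling described above can be sketched as a fixed grid of time slots anchored at the first scheduled time: a delayed snapshot never shifts later slots. This is an illustrative model, not the Purity//FA scheduler.

```python
def scheduled_slots(start_s, every_s, count):
    """The first `count` scheduled times (in seconds), fixed on a grid
    anchored at start_s with period every_s."""
    return [start_s + i * every_s for i in range(count)]

def next_slot_after(start_s, every_s, now_s):
    """First grid slot at or after now_s. Because slots come from the
    grid, not from when the last snapshot was actually created, the
    schedule cannot drift under load."""
    if now_s <= start_s:
        return start_s
    elapsed = now_s - start_s
    periods = -(-elapsed // every_s)  # ceiling division
    return start_s + periods * every_s
```

For example, with a 5-minute period (the minimum "every" value), a snapshot created a few seconds late still leaves the next slot on the original 5-minute boundary.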
With the "at" parameter, snapshots can be taken at a selected time of day. When used, the
scheduled time "every" must be a multiple of a day (24-hour period).
Protection plans are defined as follows:
1 Create a snapshot policy and give it a name.
2 Add one or more rules to the policy, each rule specifying the following:
l Every: The elapsed time until a new snapshot should be created.
l Keep for: The time to keep the snapshot before automatic eradication.
l Client name: The client visible name for snapshots.
l At: Optionally, the time of day to create a snapshot (every must then be a multiple of
one day).
3 Attach the policy to one or more directories.
If you manually destroy a scheduled snapshot, it will no longer be managed by the scheduler. If
you recover this snapshot, it will be considered a manual snapshot, not a scheduled one. After
recovery, the snapshot is kept until it is manually destroyed or the optional keep-for period
expires.
If a manually destroyed snapshot results in a protection plan not being fulfilled, a new snapshot
is created to replace the destroyed one. This happens as soon as possible, usually within the
next thirty seconds. Because the goal is to satisfy the protection plan rather than the original
schedule, the schedule intentionally shifts in the following way: the newly created snapshot
exists for the defined keep-for period starting at this point in time, and the schedule for the
following snapshots is calculated from this new point in time.
For example, a protection plan with three rules:
• Hourly snapshots: Every one hour, keep for 24 hours, client name "hourly"
• Daily snapshots: Every one day, keep for 30 days, client name "daily"
• Weekly snapshots: Every one week, keep for 52 weeks, client name "weekly"
The scheduler provides maximum data protection by choosing the rule with the longest keep
time first. In this example, the weekly rule (keep for 52 weeks) creates the first snapshot, named
weekly.1. The presence of the week-long snapshot covers the need for any other snapshot for
one hour. At the next slot, the daily rule (keep for 30 days) creates daily.2, which again covers
the need for a snapshot for one hour. The hourly rule, having been postponed twice, then takes
hourly.3.
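The ordering in that example can be sketched as follows (a hypothetical illustration, not the actual scheduler): at startup, one snapshot is taken per base-rule slot, serving rules in descending keep-for order while shorter rules are postponed.

```python
from datetime import timedelta

def startup_sequence(rules):
    """Given (client_name, keep_for) rules, return the order in which each
    rule takes its first snapshot: one per base-rule slot, longest keep-for
    first, with shorter rules postponed in the meantime. Snapshot names get
    a sequential ordinal suffix, as in the example above."""
    ranked = sorted(rules, key=lambda r: r[1], reverse=True)
    return [f"{name}.{i}" for i, (name, _) in enumerate(ranked, start=1)]

rules = [
    ("hourly", timedelta(hours=24)),
    ("daily", timedelta(days=30)),
    ("weekly", timedelta(weeks=52)),
]
print(startup_sequence(rules))   # ['weekly.1', 'daily.2', 'hourly.3']
```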
Directory snapshots are managed through the Protection > Snapshots page, or the CLI
puredir snapshot command. Snapshot policies are managed through the Protection >
Policies page, or the CLI purepolicy snapshot command.
Object Names
File systems, managed directories, and policies can, like most objects in Purity//FA, be named.
For managed directories, the name does not have to be the same as the path directory name.
The full name of a managed directory consists of the file system name and managed directory
name (not path), separated by a colon (:). For example, FS1:Managed1.
The object names can be 1-63 characters in length. Valid characters are letters (A-Z and a-z),
digits (0-9), and the hyphen (-) character. The first and last characters of the name must be
alphanumeric, and the name must contain at least one letter or '-'. Names are case-insensitive
on input. For example, fs1, Fs1, and FS1 all represent the same file system. Purity//FA displays
names in the case in which they were specified when created or renamed.
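These constraints translate directly into a validation check. The sketch below (hypothetical helper names, not a Purity//FA API) encodes the rules stated above:

```python
import re

# 1-63 characters, letters/digits/hyphens only, alphanumeric first and last.
_NAME = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_name(name: str) -> bool:
    """True if `name` satisfies the object-name rules above, including the
    requirement that at least one character is a letter or a hyphen
    (so a purely numeric name is rejected)."""
    return bool(_NAME.match(name)) and any(c.isalpha() or c == "-" for c in name)

def same_name(a: str, b: str) -> bool:
    """Names are case-insensitive on input: fs1, Fs1, and FS1 are equal."""
    return a.casefold() == b.casefold()

print(is_valid_name("FS1"), is_valid_name("123"), same_name("fs1", "FS1"))
```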
Virtual Interfaces
Clients communicate with one or more file systems through virtual interfaces. The virtual
interface service named "File" is used for this purpose. Built-in failover functionality allows
clients to be automatically moved to another interface if one interface fails during operation or in
the process of a non-disruptive system update.
Directory Service
Additional Purity//FA accounts can be enabled by integrating the array with an existing directory
service, such as Microsoft Active Directory or OpenLDAP, allowing multiple users to log in and
use the array and providing role-based access control.
Configuring and enabling the Pure Storage directory service changes the array to use the
directory when performing user account and permission level searches. If a user is not found
locally, the directory servers are queried.
Directory service configuration is performed through the GUI (Settings > Users) and CLI
(pureds command).
Multi-factor Authentication
Multi-factor authentication (MFA) provides an additional layer of security used to verify users'
identities during login attempts.
For arrays with optional multi-factor authentication enabled, a third-party software package
verifies authentication requests for the array and also administers the array's authentication
policies.
Purity//FA supports MFA through the RSA SecurID® Authentication Manager and through
SAML2 single sign-on (SSO) with Microsoft® Active Directory Federation Services (AD FS),
Okta, Azure Active Directory (Azure AD), and Duo Security authentication identity management
systems.
SSL Certificate
Purity//FA creates a self-signed certificate and private key when the system is started for the first
time.
SSL certificate configuration includes changing certificate attributes, creating new self-signed
certificates to replace existing ones, constructing certificate signing requests, importing
certificates and private keys, and exporting certificates.
SSL certificate configuration is performed through the GUI (Settings > System) and CLI
(purecert command).
Industry Standards
Purity//FA includes the Pure Storage Storage Management Initiative Specification (SMI-S)
provider.
The SMI-S initiative was launched by the Storage Networking Industry Association (SNIA) to
provide a unifying interface for storage management systems to administer multi-vendor
resources in a storage area network. The SMI-S provider in Purity//FA allows FlashArray
administrators to manage the array using an SMI-S client over HTTPS.
SMI-S client applications optionally use the Service Location Protocol (SLP) as a directory
service to locate resources.
The SMI-S provider is optional and must be enabled before its first use. The SMI-S provider is
enabled and disabled through the GUI (Settings > System) and CLI (puresmis command).
For detailed information on the Pure Storage SMI-S provider, refer to the Pure Storage SMI-S
Provider Guide on the Knowledge site at https://support.purestorage.com.
For general information on SMI-S, refer to the Storage Networking Industry Association (SNIA)
website at https://www.snia.org.
Alerts
Alert, audit record, and user session messages are retrieved from a list of log entries that are
stored on the array.
To conserve space, Purity//FA stores a limited number of log entries on the array. Older
entries are deleted from the log as new entries are added. To access the complete list of
messages, configure the Syslog Server feature to forward all messages to your remote server.
An alert is triggered when there is an unexpected change to the array or to one of the Purity//FA
hardware or software components. Alerts are categorized by severity level as critical, warning,
or informational.
Alerts are displayed in the GUI and CLI. Alerts are also logged and transmitted to Pure Storage
Technical Services via the phone home facility. Furthermore, alerts can be sent as messages to
designated email addresses and as Simple Network Management Protocol-based (SNMP) traps
and informs to SNMP managers.
Phone Home Facility
The phone home facility provides a secure direct link between the array and the Pure
Storage Technical Services team. The link is used to transmit log contents and alert
messages to the Pure Storage Technical Services team.
If the phone home facility is disabled, the log contents are delivered when the facility is
next enabled or when the user manually sends the logs through the GUI or CLI.
Optionally configure the proxy host for HTTPS communication.
The phone home facility is managed through the GUI (Settings > System) and CLI
(purearray command).
Proxies are configured through the GUI (Settings > System) and CLI (purearray
setattr --proxy command).
Email
Alerts can be sent to designated email recipients. The list includes the built-in
flasharray-alerts@purestorage.com address, which cannot be deleted. Individual email
addresses can be added to and removed from the list, and transmission of alert
messages to specific addresses can be temporarily enabled or disabled without removing
them from the list.
The list of email alert recipients is managed through the GUI (Settings > System) and CLI
(purealert command).
SNMP Managers
If SNMP manager objects are configured on the array, each alert is transmitted to the
SNMP managers.
The SNMP manager objects are configured through the GUI (Settings > System) and CLI
(puresnmp command).
Alerts are displayed through the GUI (Health > Alerts) and the CLI (puremessage command).
Audit Trail
The audit trail represents a chronological history of the Purity//FA GUI, Purity//FA CLI, and
REST API operations that users have performed to modify the configuration of the array. For
example, changing the size of a volume, deleting a host, changing the replication frequency of a
protection group, or associating a WWN with a host each generates an audit record.
Audit trails are displayed through the GUI (Settings > Access) and the CLI (pureaudit
command).
User sessions are displayed through the GUI (Settings > Users) and the CLI (puremessage
command).
Remote assist sessions are controlled by the array administrator, who opens a secure channel
between the array and Pure Storage Technical Services, making it possible for a technician to
log in to the array. The administrator can check session status and close the channel at any
time.
Remote assist sessions are opened and closed through the GUI (Settings > System) and CLI
(purearray remoteassist command).
Proxies are configured through the GUI (Settings > System) and CLI (purearray setattr
--proxy command).
Chapter 3:
Conventions
Purity//FA is the operating environment that queries and manages the FlashArray hardware,
networking, and storage components. The Purity//FA software is distributed with the FlashArray.
Purity//FA provides two ways to administer the FlashArray: through the browser-based graphical
user interface (Purity//FA GUI) and the command-driven interface (Purity//FA CLI).
Purity//FA follows certain naming and numbering conventions.
Object Names
Valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) character. The first and
last characters of the name must be alphanumeric, and the name must contain at least one letter
or '-'.
Most objects in Purity//FA that can be named, including host groups, hosts, volumes, protection
groups, volume and protection group suffixes, SNMP managers, and subnets, can be 1-63
characters in length.
Array names can be 1-56 characters in length. The array name length is limited to 56 characters
so that the names of the individual controllers, which are assigned by Purity//FA based on the
array name, do not exceed the maximum allowed by DNS.
Names are case-insensitive on input. For example, vol1, Vol1, and VOL1 all represent the
same volume. Purity//FA displays names in the case in which they were specified when created
or renamed.
Pods and volume groups provide a namespace with unique naming conventions.
All objects in a pod have a fully qualified name that includes the pod name and object name. The
fully qualified name of a volume in a pod is POD::VOLUME, with double colons (::) separating
the pod name and volume name. The fully qualified name of a protection group in a pod is
POD::PGROUP, with double colons (::) separating the pod name and protection group name.
For example, the fully qualified name of a volume named vol01 in a pod named pod01 is
pod01::vol01, and the fully qualified name of a protection group named pgroup01 in a pod
named pod01 is pod01::pgroup01.
If a protection group in a pod is configured to asynchronously replicate data to a target array, the
fully qualified name of the protection group on the target array is POD:PGROUP, with a single
colon (:) separating the pod name and protection group name. For example, if protection group
pod01::pgroup01 on source array array01 asynchronously replicates data to target array
array02, the fully qualified name of the protection group on target array array02 is
pod01:pgroup01.
All objects in a volume group have a fully qualified name that includes the volume group name
and the object name, separated by a forward slash (/). For example, the fully qualified name of
a volume named vol01 in a volume group named vgroup01 is vgroup01/vol01.
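A small sketch (hypothetical helper, not a Purity//FA API) that splits a fully qualified name according to these separator conventions:

```python
def split_fqname(name: str):
    """Split a fully qualified name into (container, object, separator).
    '::' marks an object in a pod, ':' a managed directory or a replicated
    protection group on a target array, and '/' a volume inside a volume
    group; an unqualified name returns (None, name, None). Note '::' must
    be tested before ':'."""
    for sep in ("::", ":", "/"):
        if sep in name:
            container, obj = name.split(sep, 1)
            return container, obj, sep
    return None, name, None

print(split_fqname("pod01::vol01"))     # ('pod01', 'vol01', '::')
print(split_fqname("pod01:pgroup01"))   # ('pod01', 'pgroup01', ':')
print(split_fqname("vgroup01/vol01"))   # ('vgroup01', 'vol01', '/')
```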
Volume Sizes
Volume sizes are specified as an integer, optionally followed by one of the suffix letters K, M,
G, T, or P, denoting KiB, MiB, GiB, TiB, and PiB, respectively, where "Ki" denotes 2^10, "Mi"
denotes 2^20, and so on. If a suffix letter is not specified, the size is expressed in 512-byte
sectors.
Volumes must be between one megabyte and four petabytes in size. If a volume size of less
than one megabyte is specified, Purity//FA adjusts the volume size to one megabyte. If a volume
size of more than four petabytes is specified, the Purity//FA command fails.
Volume sizes cannot contain digit separators. For example, 1000g is valid, but 1,000g is not.
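These sizing rules can be sketched as a parser (a hypothetical helper for illustration; byte values assume the binary units defined above):

```python
SECTOR = 512
UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40, "P": 2**50}
MIN_BYTES = 2**20        # one megabyte (MiB) lower bound
MAX_BYTES = 4 * 2**50    # four petabytes (PiB) upper bound

def parse_volume_size(spec: str) -> int:
    """Parse a volume-size string: an integer with an optional K/M/G/T/P
    suffix; without a suffix the value counts 512-byte sectors. Digit
    separators are invalid, sizes under 1 MiB round up to 1 MiB, and
    sizes over 4 PiB are rejected."""
    suffix = spec[-1].upper()
    if suffix in UNITS:
        count = int(spec[:-1])   # int() rejects '1,000' and other separators
        size = count * UNITS[suffix]
    else:
        size = int(spec) * SECTOR
    if size > MAX_BYTES:
        raise ValueError("volume size exceeds four petabytes")
    return max(size, MIN_BYTES)

print(parse_volume_size("1000g") == 1000 * 2**30)   # True
```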
IP Addresses
FlashArray supports two versions of the Internet Protocol: IP Version 4 (IPv4) and IP Version 6
(IPv6). IPv4 and IPv6 addresses follow the addressing architecture set by the Internet
Engineering Task Force.
An IPv4 address consists of 32 bits and is entered in the form ddd.ddd.ddd.ddd, where each
ddd is a number from 0 to 255 representing a group of 8 bits; for example, 192.168.0.1.
Storage Network Addresses
Like host names, IQNs, NQNs, and WWNs must be unique within an array. A host can be
associated with multiple storage network addresses, but a storage network address can only be
associated with one host.
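The one-host-to-many-addresses relationship can be sketched as a simple mapping (hypothetical code, for illustration only):

```python
class AddressMap:
    """Each storage network address (IQN, NQN, or WWN) belongs to at most
    one host, while a host may own any number of addresses."""

    def __init__(self):
        self._owner = {}                      # address -> host name

    def attach(self, host: str, address: str) -> None:
        current = self._owner.get(address)
        if current is not None and current != host:
            raise ValueError(f"{address} already belongs to {current}")
        self._owner[address] = host

    def addresses_of(self, host: str):
        return sorted(a for a, h in self._owner.items() if h == host)

m = AddressMap()
m.attach("host1", "iqn.2001-04.com.example:h1")
m.attach("host1", "nqn.2014-08.com.example:h1")
print(m.addresses_of("host1"))
```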
Chapter 4:
GUI Overview
The Purity//FA graphical user interface (GUI) is a browser-based system used to view and
administer the FlashArray. See Figure 4-1.
Figure 4-1. Purity//FA GUI
Analysis
Displays historical array information, including storage capacity and I/O performance met-
rics, from various viewpoints.
Health
Displays array health, including hardware status, parity, alerts, and connections.
Settings
Displays array-wide system and network settings. Manage array-wide components,
including network interfaces, system time, connectivity and connection configurations,
and alert settings. Also displays user accounts, audit trails, user session logs, and
software details.
GUI Navigation
The dark gray navigation pane that appears along the left side of the Purity//FA GUI contains
links to the GUI pages. See Figure 4-2.
Click the Pure Storage® logo at the top of the navigation pane to toggle between the expanded
and collapsed views of the pane. Just below the Pure Storage logo are links to the GUI pages.
Click a link to analyze or configure the information that appears in the page to its right. For
example, click the Storage link to view information about the FlashArray storage objects, such
as hosts, host groups, volumes, protection groups, volume groups, pods, file systems, and
directories. The navigation pane includes links to the following external sites:
Help
Accesses the FlashArray user guides and launches the Pure1® community and Pure
Storage Technical Services portals.
End User Agreement
Displays the terms of the Pure Storage End User Agreement (EULA). For more inform-
ation about the Pure Storage End User Agreement, refer to End User Agreement (EULA).
Terms
Launches the Pure Storage Product End User Information page, which includes a link to
the Pure Storage End User Agreement.
Log Out
Logs the current user out of the Purity//FA GUI.
The lower portion of the navigation pane displays FlashArray and Purity//FA version information,
the SafeMode status, and the name of the user who is currently logged into the Purity//FA GUI.
The pane to the right of the navigation pane displays the information and configuration options
for the selected GUI link. The information on each page is organized into panels, charts, and
lists. See Figure 4-3.
Figure 4-3. Purity//FA GUI - Page and Buttons
The alert icons that appear to the far right of the title bar indicate the number of recent Warning
and Critical alerts, respectively. A recent alert represents one that Purity//FA saw within the past
24 hours and still considers an open issue that requires attention. Click anywhere in an alert row
to display additional alert details. To analyze the alerts in more detail, select Health > Alerts.
The Search field (magnifying glass) in the upper-right corner of the screen allows you to quickly
search for existing hosts, host groups, volumes, protection groups, volume groups, and pods on
the array. Type any part of the name (case-insensitive) in the field to display all matches, and
then click the name in the list of results to view its details in the Storage page. See Figure 4-4.
Figure 4-4. Purity//FA GUI - Navigation - Quick Search
Various panels, such as Storage > Volumes and Health > Alerts, contain lists of information.
The total number of rows in a list output is displayed in the upper-right corner of the list. Some
lists can be very large, extending beyond hundreds of rows. See Figure 4-5.
Figure 4-5. Purity//FA GUI - Navigation - List Output
Pagination divides a large list output into discrete pages. Pagination is enabled by default and
takes effect only when the list output exceeds 10 rows. To move through a paginated list, click <
to go to the previous page, or click > to go to the next page.
Until the agreement is accepted, logging in to the Purity//FA GUI displays the End User
Agreement pop-up window. Click Download Agreement to download a copy of the End User
Agreement from the array to your local machine. Accept the terms of the agreement by
completing the fields at the bottom of the agreement and clicking Accept. Only array
administrators (i.e., users with the Array Admin role) have the necessary permissions to
complete the fields at the bottom of the agreement and click Accept.
Accepting the agreement requires the following information:
• Name - Full legal name of the individual at the company who has the authority to
accept the terms of the agreement.
• Title - Individual's job title at the company.
• Company - Full legal name of the entity.
The name, title, and company name must each be between 1 and 64 characters in length.
If the agreement is not accepted, Purity//FA generates an alert notifying all Purity//FA alert
watchers that the agreement is pending acceptance. A warning alert also appears in the
Purity//FA GUI. Pure Storage is not notified of the alert. The alert remains open until the
agreement is accepted. Furthermore, whenever a user logs in to the Purity//FA GUI, the End
User Agreement window pops up as a reminder that the agreement is pending acceptance.
Once the terms of the agreement have been accepted, Purity//FA closes the alert and stops
generating the End User Agreement pop-up window.
GUI Login
Logging in to the Purity//FA GUI requires a virtual IP address or fully-qualified domain name
(FQDN) and a Purity//FA username and password; this information is provided during the
FlashArray installation. FlashArray is installed with one administrative account, with the
username pureuser. The initial password for the account is pureuser. For security purposes,
Pure Storage recommends changing the password for the account immediately upon first
login, through the pureadmin CLI command. Pure Storage tests the Purity//FA GUI with the two
most recent versions of the following web browsers:
• Apple Safari
• Google Chrome
• Microsoft Edge
Note: In case the SAML2 SSO service is temporarily unavailable, an array administrator
(such as pureuser) can access the array through the Local Access link on the login
page. This link is for emergency use only.
To log in, you supply your passcode, which is based on an RSA SecurID tokencode. See Figure
4-9.
Figure 4-9. Example RSA SecurID Tokencode
Contact your RSA SecurID administrators for your organization's passcode instructions. To log
into a FlashArray with multi-factor authentication enabled:
1 Open a web browser.
2 Type the virtual IP address or fully-qualified domain name of the FlashArray in the address
bar and press Enter. The Purity//FA GUI login screen appears.
3 In the Username field, type the FlashArray user name. For example, pureuser.
4 In the Passcode field, type your passcode obtained from the third-party authentication
software.
Chapter 5:
Dashboard
The Dashboard page displays a running graphical overview of the array's storage capacity or
effective used capacity (EUC), performance, and hardware status.
Figure 5-1. Dashboard
On subscription storage, the Capacity panel displays metrics based on effective used capacity.
Figure 5-2. Dashboard Capacity Pane on Subscription Storage
The Dashboard page includes the following panels:
• Capacity
• Recent Alerts
• Hardware Health
• Performance Charts
Capacity
The Capacity panel displays array size and storage consumption or effective used capacity
details. The percentage value in the center of the wheel is calculated as Used/Total. All
capacity values are rounded to two decimal places.
Purchased Arrays
On a purchased array, the capacity wheel displays the percentage of array space occupied by
data and metadata, and is broken down into the following components:
System
Physical space occupied by internal array metadata.
Replication Space
Physical system space used to accommodate pod-based replication features,
including failovers, resync, and disaster recovery testing.
Shared Space
Physical space occupied by deduplicated data, meaning that the space is shared with
other volumes and snapshots as a result of data deduplication.
Snapshots
Physical space occupied by data unique to one or more snapshots.
Unique
Physical space that is occupied by data of both volumes and file systems after data
reduction and deduplication, but excluding metadata and snapshots.
Empty
Unused space available for allocation.
The capacity panel also displays the following information for a purchased array:
Data Reduction
Ratio of mapped sectors within a volume versus the amount of physical space the
data occupies after data compression and deduplication. The data reduction ratio
does not include thin provisioning savings.
For example, a data reduction ratio of 5:1 means that for every 5 MB the host writes to
the array, 1 MB is stored on the array's flash modules.
Total Reduction
Ratio of provisioned sectors within a volume versus the amount of physical space the
data occupies after reduction via data compression and deduplication and with thin
provisioning savings. Total reduction is data reduction with thin provisioning savings.
For example, a total reduction ratio of 10:1 means that for every 10 MB of provisioned
space, 1 MB is stored on the array's flash modules.
Used
Physical storage space occupied by volume, snapshot, shared space, and system
data.
Total
Total physical usable space on the array.
Replacing a drive may result in a dip in usable space. This is intended behavior. RAID
striping splits data across an array for redundancy purposes, spreading a write across
multiple drives. A newly added drive cannot use its full capacity immediately but must
stay in line with the available space on the other drives as writes are spread across
them. As a result, usable capacity on the new drive may initially be reported as less
than expected because the array cannot write to the unallocatable space. Over time,
usable capacity fluctuates, but as data is written to the drive and spread across the
array, usable capacity eventually returns to expected levels.
Size
Total provisioned size of all volumes. Represents storage capacity reported to hosts.
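The two ratios can be illustrated with a small calculation (hypothetical figures matching the 5:1 and 10:1 examples above):

```python
def reduction_ratios(provisioned_mb: float, written_mb: float, physical_mb: float):
    """Data reduction compares host-written (mapped) data with the physical
    flash it occupies and excludes thin-provisioning savings; total
    reduction compares provisioned space with physical flash and therefore
    includes thin-provisioning savings."""
    return written_mb / physical_mb, provisioned_mb / physical_mb

# 100 MB provisioned, 50 MB written by hosts, 10 MB stored on flash:
data_reduction, total_reduction = reduction_ratios(100, 50, 10)
print(f"{data_reduction:.0f}:1 data reduction, {total_reduction:.0f}:1 total reduction")
```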
Subscription Storage
The capacity panel displays the following information for subscription storage:
Unique
Effective used capacity consumed by data of both volumes and file systems after removing
clones, but excluding metadata and snapshots.
Snapshots
Effective used capacity consumed by data unique to one or more snapshots.
Shared
Effective used capacity consumed by cloned data, meaning that the space is
shared with cloned volumes and snapshots as a result of data deduplication.
Used
Total effective used capacity containing user data, including Shared, Snapshots,
and Unique storage.
Estimated Total
Estimated total effective used capacity available from a host's perspective,
including both consumed and unused storage.
Provisioned Size
The sum of the sizes of all volumes on the array.
Displays a '-' when a file system on the array has unlimited provisioned size.
Virtual
The amount of data that the host has written to the volume as perceived by the
array, before any data deduplication or compression.
Recent Alerts
The Recent Alerts panel displays a list of alerts that Purity//FA saw within the past 24 hours and
considers open issues that require attention. The list contains recent alerts of all severity levels.
To view the details of an alert, click the alert message.
To view a list of all alerts including ones that are no longer open, go to the Health > Alerts page.
Hardware Health
The Hardware Health panel displays the operational state of the array controllers, flash
modules, and NVRAM modules.
To analyze the hardware components in more detail, click the image or go to the Health >
Hardware page.
Performance Charts
The performance charts display I/O performance metrics in real time.
Figure 5-3. Dashboard - Performance Graphs
The performance metrics are displayed along a scrolling graph; incoming data appears along
the right side of each graph every few minutes as older numbers drop off the left side. Each
performance chart includes Read (R), Write (W), and Mirrored Write (MW) values, representing
the most recent data samples rounded to two decimal places.
Hover over any of the charts to display metrics for a specific point in time.
The performance panel includes Latency, IOPS, and Bandwidth charts.
Latency
The Latency chart displays the average latency times for various operations.
• Read Latency (R) - Average arrival-to-completion time, measured in milliseconds, for
a read operation.
• Write Latency (W) - Average arrival-to-completion time, measured in milliseconds,
for a write operation.
• Mirrored Write Latency (MW) - Average arrival-to-completion time, measured in
milliseconds, for a write operation. Represents the sum of writes from hosts into the
volume's pod and from remote arrays that synchronously replicate into the volume's
pod. The MW value only appears if there are writes through ActiveCluster replication
being processed.
IOPS
The IOPS (Input/output Operations Per Second) chart displays I/O requests processed
per second by the array. This metric counts requests per second, regardless of how much
or how little data is transferred in each.
• Read IOPS (R) - Number of read requests processed per second.
• Write IOPS (W) - Number of write requests processed per second.
• Mirrored Write IOPS (MW) - Number of write requests processed per second.
Represents the sum of writes from hosts into the volume's pod and from remote arrays
that synchronously replicate into the volume's pod. The MW value only appears if
there are writes through ActiveCluster replication being processed.
Bandwidth
The Bandwidth chart displays the number of bytes transferred per second to and from all
file systems. The data is counted in its expanded form rather than the reduced form
stored in the array to truly reflect what is transferred over the storage network. Metadata
bandwidth is not included in these numbers.
• Read Bandwidth (R) - Number of bytes read per second.
• Write Bandwidth (W) - Number of bytes written per second.
• Mirrored Write Bandwidth (MW) - Number of bytes written into the volume's pod per
second. Represents the sum of writes from hosts into the volume's pod and from
remote arrays that synchronously replicate into the volume's pod.
By default, the performance charts display performance metrics for the past 1 minute. To display
more than 1 minute of historical data, select Analysis > Performance.
Chapter 6:
Storage
The Storage page displays configuration, space, and snapshot details for all types of FlashArray
storage objects.
The metrics that appear near the top of each page represent the capacity and consumption
details for the selected storage object. For example, the Storage > Array page displays array-
wide capacity usage. Likewise, the Storage > Pods page displays the capacity and consumption
details for all volumes within all pods in the FlashArray. Drill down to a specific storage object to
view additional details. For example, drill down to a specific volume to see its creation date and
unique serial number. See Figure 6-1 for the Storage tab on a purchased array.
Figure 6-1. Storage
On subscription storage, the reported metrics are based on effective used capacity (EUC). See
Figure 6-2 for the Storage tab on subscription storage, and "Subscription Storage" on page 85
for information on subscription capacity metrics.
Array
The Storage > Array page displays a summary of all storage components on the array. See Fig-
ure 6-3.
Figure 6-3. Storage > Array
The array summary panel (with the array name in the header bar) contains a series of rectangles
(technically known as hero images) representing the storage components of the array. The num-
bers inside each hero image represent the number of objects created for each of the respective
components. Click a rectangle to jump to the page containing the details for that particular stor-
age component.
The "Connecting Arrays" on page 188 and "Offload Targets" on page 182 panes are now under
the Protection > Arrays tab.
Hosts
The Hosts panel displays summary information, including host group association, interface, con-
nected volumes (both shared and private), provisioned size, and either storage consumption or
effective used capacity for each host on the array.
Host names that begin with an @ symbol represent app hosts. For more information about app
hosts, see "Installed Apps" on page 355. See Figure 6-4.
Figure 6-4. Storage > Hosts
From the Hosts page, click a host name to display its details. Figure 6-5 displays the details for
host ESXi-GRP-Cluster02-H0001, which is connected to one host group (ESXi-GRP-
Cluster02-HG003) and four volumes, and is a member of protection group PG002.
Figure 6-5. Hosts Page
Displays additional details specific to the selected host, including CHAP credentials and
host personality.
Host Groups
In the Storage > Hosts page, the Host Groups panel displays summary information, including
host associations, connected (shared) volumes, provisioned size, and either storage con-
sumption or effective used capacity, for each host group on the array.
From the Hosts page, click a host group name to display its details. Figure 6-6 displays the
details for host group ESXi-GRP-Cluster02-HG003, which is connected to two hosts and
three volumes.
Figure 6-6. Host Group Connected to Two Hosts and Three Volumes
Connected Volumes
Displays a list of volumes that have shared connections to the host group.
Protection Groups
Displays any protection groups to which the host group belongs.
Creating Hosts
Create hosts to access volumes on the array. Create a single host or multiple hosts at one time.
To create a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the menu icon and select Create... . The Create Host dialog box
appears.
3 In the Name field, type the name of the new host.
4 In the Personality field, select the name of the host operating or virtual memory system. If
your host personality does not appear in the list, select None.
5 To add the new host to a protection group, leave the Add to protection group after hosts are
created box checked (default; recommended).
6 Click Create.
7 If you selected the Add to protection group after hosts are created box, the Add to Pro-
tection Group dialog opens.
a To add the new host to an existing protection group, select that protection group in the
Add host(s) to field.
b To add the new host to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new
protection group.
c Click Create.
To create multiple hosts:
1 Select Storage > Hosts.
2 In the Hosts panel, click the menu icon and select Create... . The Create Host dialog box
appears.
3 Click Create Multiple…. The Create Multiple Hosts dialog box appears.
4 Complete the following fields:
l Name: Specify the template used to create the host names. Host names cannot con-
sist of all numeric values.
Place the hash (#) symbol where the numeric part of the host name should appear.
When Purity//FA creates the host names, the hash symbol is replaced with the host
number, beginning with the start number specified.
l In the Personality field, select the name of the host operating or virtual memory sys-
tem. If your host personality does not appear in the list, select None.
l Start Number: Enter the host number used to create the first host name.
l Count: Enter the number of hosts to create.
l Number of Digits: Enter the minimum number of numeric digits of the host number. If
the number of digits is greater than the start number, the host number begins with
leading zeros.
l Add to protection group after hosts are created: To add the new hosts to a pro-
tection group, leave this box checked (default; recommended).
5 Click Create.
6 If you selected the Add to protection group after hosts are created box, the Add to Pro-
tection Group dialog opens.
a To add the new hosts to an existing protection group, select that protection group in the
Add host(s) to field.
b To add the new hosts to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new
protection group.
c Click Create.
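The way the Name, Start Number, Count, and Number of Digits fields combine can be sketched in a few lines of Python. This is an illustrative sketch only, not Purity//FA code, and the function name is hypothetical:

```python
def expand_names(template, start, count, digits):
    # Replace the '#' placeholder with sequential, zero-padded numbers,
    # mirroring the Name / Start Number / Count / Number of Digits fields.
    if "#" not in template:
        raise ValueError("template must contain '#'")
    return [template.replace("#", str(n).zfill(digits))
            for n in range(start, start + count)]

print(expand_names("ESXi-H#", 1, 3, 4))
# ['ESXi-H0001', 'ESXi-H0002', 'ESXi-H0003']
```

Note how a Number of Digits value larger than the width of the start number yields the leading zeros described above.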
Creating Host Groups
To create a host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the menu icon and select Create... . The Create Host Group
dialog box appears.
3 In the Name field, type the name of the new host group.
4 To add the new host group to a protection group, leave the Add to protection group after
host groups are created: box checked (default; recommended).
5 Click Create.
6 If you selected the Add to protection group after host groups are created box, the Add to
Protection Group dialog opens.
a To add the new host group to an existing protection group, select that protection group
in the Add host group(s) to field.
b To add the new host group to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new
protection group.
c Click Create.
To create multiple host groups:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the menu icon and select Create... . The Create Host Groups
dialog box appears.
3 Click Create Multiple…. The Create Multiple Host Groups dialog box appears.
4 Complete the following fields:
l Name: Specify the template used to create the host group names. Host group names
cannot consist of all numeric values.
Place the hash (#) symbol where the numeric part of the host group name should
appear. When Purity//FA creates the host group names, the hash symbol is replaced
with the host group number, beginning with the start number specified.
l Start Number: Enter the number used to create the first host group name.
l Count: Enter the number of host groups to create.
l Number of Digits: Enter the minimum number of numeric digits of the host group num-
ber. If the number of digits is greater than the start number, the host group number
begins with leading zeros.
l Add to protection group after host groups are created: To add the new host groups
to a protection group, leave this box checked (default; recommended).
5 Click Create.
6 If you selected the Add to protection group after host groups are created box, the Add to
Protection Group dialog opens.
a To add the new host groups to an existing protection group, select that protection group
in the Add host group(s) to field.
b To add the new host groups to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new
protection group.
c Click Create.
3 In the Host Ports panel, click the menu icon and select Configure Fibre Channel WWNs....
The Configure Fibre Channel WWNs dialog box appears.
The WWNs in the Existing WWNs column of the dialog box represent the WWNs that
have been discovered by Purity//FA (i.e., the WWNs of computers whose initiators have
"logged in" to the array).
4 Click an existing WWN in the left column to add it to the Selected WWNs column.
Alternatively, to manually add a WWN, click Enter WWNs Manually and type the WWNs,
in comma-separated format, in the Port WWNs field.
5 Click Add.
Note: Configuring Fibre Channel WWNs is not supported on Cloud Block Store.
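Splitting and sanity-checking the comma-separated Port WWNs field can be sketched as follows. The helper name is hypothetical, and the pattern assumes the conventional Fibre Channel notation of eight colon-separated hex octets; the exact formats Purity//FA accepts may differ:

```python
import re

# Conventional FC WWN notation: eight colon-separated hex octets. This
# is illustrative only; Purity//FA's accepted formats may differ.
WWN_RE = re.compile(r"^([0-9A-Fa-f]{2}:){7}[0-9A-Fa-f]{2}$")

def split_wwns(field):
    # Split the comma-separated Port WWNs field and flag malformed entries.
    wwns = [w.strip() for w in field.split(",") if w.strip()]
    bad = [w for w in wwns if not WWN_RE.match(w)]
    if bad:
        raise ValueError(f"malformed WWNs: {bad}")
    return wwns

print(split_wwns("52:4A:93:7D:FF:6B:00:01, 52:4A:93:7D:FF:6B:00:02"))
```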
3 In the Details panel, click the menu icon and select Configure CHAP.... The Configure
CHAP dialog box appears.
4 Complete the following fields:
l Host User: Set the host user name for CHAP authentication.
l Host Password: Enter the host password for CHAP authentication. The password
must be between 12 and 255 characters (inclusive) and cannot be the same as the
target password.
l Target User: Set the target user name for CHAP authentication.
l Target Password: Enter the target password for CHAP authentication. The host pass-
word cannot be the same as the target password. The password must be between 12
and 255 characters (inclusive) and cannot be the same as the host password.
5 Click Save. To disable CHAP, clear the fields in the Configure CHAP dialog box and click
Save.
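The password rules above (12 to 255 characters inclusive, and host and target passwords must differ) can be expressed as a short validation sketch. The function name is hypothetical, not a Purity//FA API:

```python
def validate_chap(host_password, target_password):
    # Each CHAP password must be 12-255 characters (inclusive), and the
    # host and target passwords must differ.
    for label, pw in (("host", host_password), ("target", target_password)):
        if not 12 <= len(pw) <= 255:
            raise ValueError(f"{label} password must be 12-255 characters")
    if host_password == target_password:
        raise ValueError("host and target passwords must differ")
    return True
```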
3 In the Details panel, click the menu icon and select Add Preferred Arrays.... The Add Pre-
ferred Arrays dialog box appears.
4 From the Available Arrays column, click the arrays you want to add as preferred arrays for
the host.
5 Click Add.
Renaming a Host
To rename a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the rename icon for the host you want to rename. The Rename Host
dialog box appears.
3 In the Name field, enter the new name of the host.
4 Click Rename.
Deleting a Host
You cannot delete a host if it is connected to any volumes, whether through private or shared
connections. Before deleting a host, disconnect all volumes from the host.
To delete a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the delete icon for the host you want to delete. The Delete Host dia-
log box appears.
3 Click Delete. Any volumes that were connected to the host are disconnected, and the
deleted host no longer appears in the Host panel.
3 From the Member Hosts panel, click the remove host (x) icon next to the host you want to dis-
connect. The Remove Host dialog box appears.
4 Click Remove.
2 In the Host Groups panel, click the menu icon and select Download CSV to save the host_
groups.csv file to your local machine.
Volumes
The Storage > Volumes page displays summary information for all volumes on the array. See
Figure 6-7.
Figure 6-7. Storage > Volumes Page
l Volumes Overview
l Working with Volumes
l Destroying and Eradicating Volumes
l Working with Volume-Host Connections
l Working with Volume Snapshots
l Working with Volume Groups
l Destroying and Eradicating Volume Groups
Volumes Overview
The Volumes panel displays a list of all volumes on the array.
Volume names that include a double colon (::) represent volumes inside pods. Volume names
that include a forward slash (/) represent volumes inside volume groups.
Volume names that begin with an @ symbol represent app volumes. For more information about
app volumes, see "Installed Apps" on page 355.
The Volumes page also displays volumes, volume snapshots, and volume groups that have
been destroyed and are pending eradication.
The Volumes and Volume Groups panels are organized into the following three tabs:
l Space - Displays information about the provisioned (virtual) size, snapshots, and
either physical storage consumption or effective used capacity for each volume or
volume group.
l QoS - Displays the bandwidth limit, the IOPS limit, the last priority adjustment, and pri-
ority of each volume or volume group (the priority field appears only in the Volume
panel). If the bandwidth limit is not set, the value appears as a dash (-), representing
unlimited throughput. If the IOPS limit is not set, the value appears as a dash (-), rep-
resenting unlimited IOPS.
l Details - Displays general information about each volume, including the number of
hosts to which the volume is connected either through private or shared connections,
and the unique serial number of the volume.
Storage Containers
A volume can reside in one of the following types of storage containers: root of the array (""),
pod, or volume group. The simplest array configuration is one that contains volumes at
the root of the array. Each pod and volume group is a separate namespace for the volumes it
contains.
Pods are created and configured to store volumes and protection groups that need to be fully
synchronized with other arrays.
Each volume in a pod consists of the pod namespace identifier and the volume name, separated
by a double colon (::). The naming convention for a volume inside a pod is POD::VOL, where:
l POD is the name of the container pod.
l VOL is the name of the volume inside the pod.
For example, the fully qualified name of a volume named vol01 inside a pod named pod01 is
pod01::vol01.
For more information about pods, see "Pods" on page 133.
Volume groups organize volumes into logical groupings. If virtual volumes are configured, a
volume group is automatically created for each virtual machine that is created.
Each volume in a volume group consists of the volume group namespace identifier and the
volume name, separated by a forward slash (/). The naming convention for a volume inside a
volume group is VGROUP/VOL, where:
l VGROUP is the name of the container volume group.
l VOL is the name of the volume in the volume group.
For example, the fully qualified name of a volume named vol01 inside a volume group named
vgroup01 is vgroup01/vol01.
For more information about volume groups, see "Volume Groups" on page 112.
Volumes that reside in one storage container are independent of the volumes that reside in other
containers. For example, a volume named vol01 is completely unrelated to a volume named
vgroup01/vol01.
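The naming conventions above can be summarized in a short sketch that classifies a fully qualified volume name by its container. The function name is hypothetical:

```python
def classify_volume(fqn):
    # '::' marks a pod volume, '/' marks a volume-group volume; otherwise
    # the volume lives at the root of the array.
    if "::" in fqn:
        pod, vol = fqn.split("::", 1)
        return ("pod", pod, vol)
    if "/" in fqn:
        vgroup, vol = fqn.split("/", 1)
        return ("vgroup", vgroup, vol)
    return ("root", "", fqn)

print(classify_volume("pod01::vol01"))    # ('pod', 'pod01', 'vol01')
print(classify_volume("vgroup01/vol01"))  # ('vgroup', 'vgroup01', 'vol01')
print(classify_volume("vol05"))           # ('root', '', 'vol05')
```

Because the container is part of the fully qualified name, vol05, vgroup01/vol05, and pod01::vol05 are three unrelated volumes.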
In Figure 6-8, volume vol05 represents a volume named vol05 that resides on the root of the array,
volume vgroup01/vol05 represents a volume named vol05 that resides in volume group
vgroup01, volume vgroup02/vol05 represents a volume named vol05 that resides in volume
group vgroup02, and volume pod01::vol05 represents a volume named vol05 that resides in
pod pod01. Though all four volumes have "vol05" in their names, they are completely inde-
pendent of one another.
Figure 6-8. Volumes
Virtual Volumes
VMware Virtual Volumes (vVols) storage architecture is designed to give VMware administrators
the ability to perform volume operations and apply protection group snapshot and replication
policies to FlashArray volumes directly through vSphere.
On the FlashArray side, virtual volumes are created and then connected to VMware ESXi hosts
or host groups via a protocol endpoint (also known as a conglomerate volume). The protocol
endpoint itself does not serve I/Os; instead, its job is to form connections between FlashArray
volumes and ESXi hosts and host groups.
Each protocol endpoint can connect multiple virtual volumes to a single host or host group, and
each host or host group can have multiple protocol-endpoint connections.
LUN IDs are automatically assigned to each protocol endpoint connection and each virtual
volume connection. Specifically, each protocol endpoint connection to a host or host group cre-
ates a LUN (PE LUN), while each virtual volume connection to a host or host group creates a
sub-LUN. The sub-LUN is in the format x:y, where x represents the LUN of the protocol end-
point through which the virtual volume is connected to the host or host group, and y represents
the sub-LUN assigned to the virtual volume.
In Figure 6-9, one virtual volume named pure_VVol and one protocol endpoint named pure_
PE are connected to host host01. The virtual volume is identified by the sub-LUN (7:1).
Figure 6-9. Virtual Volumes
Note: Virtual volumes are primarily configured through the vSphere Web Client plugin.
For more information about virtual volumes, including configuration steps, refer to the Pure Stor-
age vSphere Web Client Plugin for vSphere User Guide on the Knowledge site at
https://support.purestorage.com.
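The x:y sub-LUN address described above can be decomposed with a one-line split; this sketch (hypothetical function name) mirrors the 7:1 example:

```python
def parse_sublun(address):
    # 'x:y' -> (PE LUN, sub-LUN): x is the LUN of the protocol endpoint
    # through which the virtual volume is connected, y is the sub-LUN
    # assigned to the virtual volume itself.
    pe_lun, vvol = address.split(":")
    return int(pe_lun), int(vvol)

print(parse_sublun("7:1"))  # (7, 1)
```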
Volume Details
From the Volumes page, click a volume name to display details, including connected hosts
(through both shared and private connections), provisioned size, either storage consumption or
effective used capacity, and serial number, for the specified volume on the array.
Figure 6-10 displays the details for volume ESXi-Cluster02-vol001, which is connected to
three hosts and two host groups, and is a member of protection group PG003.
Figure 6-10. Volume Details
Volume Snapshots
Displays a list of volume snapshots. A volume snapshot is a point-in-time image of the
contents of a volume. There are various ways to create volume snapshots: as a single
volume or multiple volumes at the same time (atomically) through the Storage > Volumes
page, or as part of protection group snapshots through the Protection > Protection
Groups page.
Details
Displays the unique details for the volume, such as volume creation date, unique serial
number, and QoS information including bandwidth limit, IOPS limit, and DMM priority
adjustment. If the volume was created from another source, such as a volume snapshot,
the Source field displays the name of the source from where the volume was created.
Volume Groups
The Volume Groups panel displays a list of volume groups that have been created on the array.
Volume groups organize FlashArray volumes into logical groupings.
If virtual volumes are configured, each volume group on the array represents its associated vir-
tual machine, and inside each of those volume groups are the FlashArray volumes that are
assigned to the virtual machine. Volume groups that are associated with virtual machines have
names that begin with "vvol-" and end with the virtual machine name. For more information
about virtual volumes, including configuration steps, refer to the Pure Storage vSphere Web Cli-
ent Plugin for vSphere User Guide on the Knowledge site at https://sup-
port.purestorage.com.
Volume groups can also be created through the Volumes page. Once a volume group has been
created, create new volumes directly in the volume group or move existing ones into the volume
group.
In the Volume Groups panel, click the name of a volume group to display its details, such as pro-
visioned size, storage consumption or effective used capacity, and a list of volumes that reside
in the volume group.
Figure 6-11 displays the details for volume group vgroup01, which contains five volumes.
Figure 6-11. Volume Groups
exceeds the IOPS limit, throttling occurs. If set, the IOPS limit must be between 100
and 100M. By default, the QoS IOPS limit is unlimited.
The IOPS limit of a volume group represents the aggregate IOPS for all the volumes
in the volume group.
QoS IOPS limits are not enforced on volumes or volume groups that do not have the
IOPS limit set.
l DMM Priority Adjustment
A DMM priority adjustment can be applied to volumes or volume groups to increase
or decrease their relative performance priority, when supported by FlashArray hard-
ware such as Direct Memory Modules. For example, use a DMM priority adjustment
to configure a higher performance priority for volumes that run critical, latency-sens-
itive workloads or to configure a lower priority for volumes that run workloads with
less latency sensitivity.
To apply a priority adjustment, use the optional QoS Configuration > DMM Priority
Adjustment fields when creating the volume or use the Configure QoS dialog for an
existing volume. Priority values are 10 (high), 0 (default), and -10 (low). By default,
all volumes have the same priority value of 0. Adjustment values are +10 (higher pri-
ority), 0 (no change or default priority), and -10 (lower priority). Volumes can also be
set to a specific priority with the equals sign, = 10 for high priority, = 0 for default pri-
ority, and = -10 for low priority.
In general, volumes that are members of a volume group inherit the priority adjust-
ment of their volume group. However, if a volume has a priority value set with the '='
operator (for example, =+10), it retains that value and is unaffected by any volume
group priority adjustment settings.
Notes:
l If all volumes are set to the same priority, even the higher priority (10),
then all volumes have the same relative priority and no volume receives a
performance priority.
l If various volumes have priority values of 10, 0, and -10, then the volumes
with a value of 10 receive performance priority. Those volumes with values
0 and -10 are treated equally (and do not receive priority).
l If various volumes have priority values of 0 and -10, then the volumes with
a value of 0 receive performance priority.
l If various volumes have priority values of 10 and -10, then the volumes
with a value of 10 receive performance priority.
l If various volumes have priority values of 10 and 0, then the volumes with
a value of 10 receive performance priority.
l +10 and -10 are maximum and minimum priority values, respectively.
Applying a +10 adjustment to a volume that already has a priority value of
10 has no effect. Similarly, applying a -10 adjustment to a volume that
already has a priority value of -10 has no effect.
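The inheritance and clamping rules above can be condensed into a short sketch. The tuple encoding of '=' versus adjustment settings is an assumption made for illustration, not a Purity//FA interface:

```python
def effective_priority(setting, vgroup_adjustment=0):
    # setting is ('=', v) for a value pinned with the equals sign, or
    # ('adj', v) for an adjustment. Pinned values ignore the volume
    # group; adjustments combine with it and clamp to -10..10.
    op, value = setting
    if op == "=":
        return value
    return max(-10, min(10, value + vgroup_adjustment))

print(effective_priority(("=", 10), vgroup_adjustment=-10))   # 10: pinned
print(effective_priority(("adj", 0), vgroup_adjustment=-10))  # -10: inherited
print(effective_priority(("adj", 10), vgroup_adjustment=10))  # 10: clamped
```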
7 Optionally click Protection Configuration (Optional) to view, add, or remove default pro-
tection groups and to enable or disable default protection for the new volume. The current
default protection group list for the new volume is shown in the Data Protection field.
a To add additional protection groups for the new volume, click the Edit icon on the right of
the Data Protection field. The Select Protection Groups dialog appears, with Available
Protection Groups listed on the left. Protection groups that are already listed in the cur-
rent default protection group list have their check boxes grayed out. The Selected Pro-
tection Groups column lists the protection groups to which the new volume will be
assigned.
l To add the new volume to an additional protection group, in the Available Pro-
tection Groups column, select the check box for that protection group. The pro-
tection group is then listed in the Selected Protection Groups column on the
right.
l To remove a protection group, click the 'x' icon on the right of the protection
group row.
l To remove all protection groups from the Selected Protection Groups column,
click Clear all.
b When the Selected Protection Groups column is correct, click Select.
c To enable default protection for the new volume, leave the Use Default Protection
check box enabled (recommended). To disable default protection for the new volume,
uncheck the Use Default Protection check box.
8 Click Create.
To create multiple volumes:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Create... . The Create Volumes dialog
box appears.
3 Click Create Multiple…. The Create Multiple Volumes dialog box appears.
4 Complete the following fields:
l In the Pod or Volume Group field, select the pod or volume group to where the
volumes will be created.
l Name: Specify the template used to create the volume names. Volume names can-
not consist of all numeric values.
Place the hash (#) symbol where the numeric part of the volume name should
appear. When Purity//FA creates the volume names, the hash symbol is replaced
with the volume number, beginning with the start number specified.
l Provisioned Size: Specify the provisioned (virtual) size of the volume. The volume
size must be between one megabyte and four petabytes. The provisioned size is
reported to hosts.
l Start Number: Enter the volume number used to create the first volume name.
l Count: Enter the number of volumes to create.
l Number of Digits: Enter the minimum number of numeric digits of the volume num-
ber. If the number of digits is greater than the start number, the volume number
begins with leading zeros.
l Bandwidth Limit: Optionally set the maximum QoS bandwidth limit. The bandwidth
limit applies to each of the volumes created in this set of volumes. Whenever through-
put exceeds the bandwidth limit, throttling occurs. If set, bandwidth limit must be
between 1 MB/s and 512 GB/s.
l IOPS Limit: Optionally set the maximum QoS IOPS limit. The IOPS limit applies to
each of the volumes created in this set of volumes. Whenever the number of I/O oper-
ations per second exceeds the IOPS limit, throttling occurs. If set, the IOPS limit must
be between 100 and 100M.
l DMM Priority Adjustment: Optionally select +10 to give the volumes a higher priority
or -10 for a lower priority.
5 Optionally click Protection Configuration (Optional) to view, add, or remove default pro-
tection groups and to enable or disable default protection for the new volumes. The current
default protection group list for the new volumes is shown in the Data Protection field.
a To add additional protection groups for the new volumes, click the Edit icon on the right
of the Data Protection field. The Select Protection Groups dialog appears, with Available
Protection Groups listed on the left. Protection groups that are already listed in the cur-
rent default protection group list have their check boxes grayed out. The Selected Pro-
tection Groups column lists the protection groups to which the new volumes will be
assigned.
l To add the new volumes to an additional protection group, in the Available Pro-
tection Groups column, select the check box for that protection group. The pro-
tection group is then listed in the Selected Protection Groups column on the
right.
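The QoS ranges given above for the Bandwidth Limit and IOPS Limit fields can be checked with a small sketch. The function name is hypothetical, and decimal units (1 MB/s = 10^6 B/s) are an assumption for illustration:

```python
MB, GB = 10**6, 10**9

def validate_qos(bandwidth=None, iops=None):
    # None means the limit is unset (unlimited). If set, bandwidth must
    # fall in 1 MB/s-512 GB/s and IOPS in 100-100M, per the ranges above.
    if bandwidth is not None and not MB <= bandwidth <= 512 * GB:
        raise ValueError("bandwidth limit must be 1 MB/s-512 GB/s")
    if iops is not None and not 100 <= iops <= 100_000_000:
        raise ValueError("IOPS limit must be 100-100M")
    return True
```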
Moving a Volume
Volumes can be moved into, out of, and between pods and volume groups.
See also "Moving a Volume when SafeMode is Enabled" on the next page
To move a single volume:
1 Select Storage > Volumes.
2 Select the specific volume you want to move.
3 Click the menu icon and select Move.... The Move Volume dialog box appears.
4 From the Pod or Volume Group field, select the pod or volume group you wish to move the
volume to.
If the volume belongs to protection groups that cannot exist in the target container, the
Remove from Pgroup and Data Protection fields appear. The Remove from Pgroup field
lists the protection groups that must be abandoned for the move to complete. In the
Data Protection field, specify the protection groups the volume should be added to.
5 Click Move.
To move multiple volumes:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Move.... The Move Volumes dialog box
appears.
3 In the Existing Volumes column, select the volumes you want to move. All of the selected
volumes will be moved to the same destination.
4 From the Pod or Volume Group field, select the Pod or Volume group you want to move the
selected volumes to.
If the volumes belong to protection groups that cannot exist in the target container, the
Remove from Protection Group and Add to Protection Group fields appear. The Remove
from Protection Group field lists the protection groups that must be abandoned for the
move to complete. In the Add to Protection Group field, specify the protection groups the
volumes should be added to.
5 Click Move.
Moving a Volume when SafeMode is Enabled
A volume in a SafeMode-enabled protection group can only be moved to a SafeMode-enabled
protection group with equal or better SafeMode protections, as determined by the following con-
figuration characteristics:
l Snapshot schedule frequency and retention.
l Replication schedule frequency and retention.
l Target retention number of days and number retained per day.
l Blackout window (if the current protection group has a blackout window).
If the target protection group does not match or exceed the current protection group on all of
these configurations, the volume cannot be moved.
Notes about volume moves when SafeMode is enabled:
l For volumes in protection groups based on host or hostgroup membership, Purity
does not ensure that the target protection group has equal or better SafeMode pro-
tections.
l Contact Pure Technical Support to move a volume that is currently a member of more
than one protection group.
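The eligibility rule above can be sketched as a comparison of protection characteristics. This is a simplification for illustration only: the dictionary keys are invented, and each characteristic is reduced to a single comparable number, whereas Purity evaluates the actual schedule and retention settings:

```python
def safemode_move_allowed(source, target):
    # Every protection characteristic of the target group must match or
    # exceed the source's for the move to be permitted. Keys here are
    # illustrative, not Purity//FA API names.
    checks = ("snapshot_protection", "replication_protection",
              "target_retention_days", "snapshots_retained_per_day")
    return all(target[k] >= source[k] for k in checks)

src = {"snapshot_protection": 7, "replication_protection": 7,
       "target_retention_days": 14, "snapshots_retained_per_day": 4}
print(safemode_move_allowed(src, dict(src)))  # True: equal protections
```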
Renaming a Volume
To rename a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the rename icon for the volume you want to rename. The
Rename Volume dialog box appears.
3 In the Name field, enter the new name of the volume.
4 Click Rename.
Resizing a Volume
Resizing a volume changes its provisioned (virtual) size.
To change the provisioned size of a volume:
Copying a Volume
Copy a volume to create a new volume or overwrite an existing one. You cannot copy volumes
across pods if the source and target pods are both stretched but on different pairs.
To copy a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume that you want to copy. The volume detail page opens.
3 In the Volume > <volume name> row, click the menu icon and select Copy.... The Copy
Volume dialog box appears.
4 In the Container field, specify the root location, pod, or volume group where the new
volume will be created. The forward slash (/) represents the root location of the array.
5 In the Name field, type the name of the new or existing volume.
6 To overwrite an existing volume, click the Overwrite toggle button to enable (blue) the
overwrite feature.
7 Optionally, click Protection Configuration (Optional) to view or add default protection
groups and to enable or disable default protection for the newly copied volume. The current
default protection group list for the copied volume is shown in the Data Protection field.
aTo add groups to the default protection group list, click the Edit icon to the right of the
Data Protection field. The Select Protection Groups dialog appears, with Available
Protection Groups listed on the left. Protection groups that are already in the current
default protection group list have their check boxes grayed out.
bClick Select.
cTo enable default protection for the copied volume, leave the Use Default Protection
check box enabled (recommended). To disable default protection for the copied volume,
uncheck the Use Default Protection check box.
8 Click Copy.
l In the Volumes panel, click the volume to drill down to its details, and then click the
QoS edit icon in the Details panel.
3 In the DMM Priority Adjustment menus, select +10 to give the volume a higher priority or
-10 for a lower priority, or use the equals sign (=) to set a specific priority: 10 (higher), 0
(default), or -10 (lower).
4 Click an existing volume in the left column to add it to the Selected Volumes column. If the
volume does not exist, click Create New Volume to create a new volume and connect it to
the host.
5 Click Connect.
To establish a private connection from a host to a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Hosts panel, click the menu icon and select Connect.... The Connect
Hosts dialog box appears.
The hosts in the Available Hosts column represent the hosts that are eligible to be
connected to the volume.
4 Click an existing host in the left column to add it to the Selected Hosts column. If the
host does not exist, click Create New Host to create a new host and connect it to the
volume.
5 Optionally assign a LUN to the connection. If the field is left blank, Purity//FA automatically
assigns the next available LUN to the connection.
6 Click Connect.
To connect a host group to a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Host Groups panel, click the menu icon and select Connect.... The
Connect Host Groups dialog box appears.
The host groups in the Available Host Groups column represent the host groups that are
eligible to be connected to the volume.
4 Click an existing host group in the left column to add it to the Selected Host Groups column. If
the host group does not exist, click Create New Host Group to create a new host group and
connect it to the volume.
5 Click Connect.
Breaking a volume-host group connection breaks all connections between the volume and all
hosts affiliated with the host group. Other shared and private connections to the volume are
unaffected.
There are two ways to break shared volume-host group connections: 1) disconnect a volume
from its host group, or 2) disconnect a host group from the volume.
To disconnect a volume from its host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the host group name to drill down to its details.
3 In the Connected Volumes panel, click the disconnect volume (x) icon next to the volume you
want to disconnect. The Disconnect Volume dialog box appears.
4 Click Disconnect.
To disconnect a host group from a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Host Groups panel, click the disconnect host group (x) icon next to the host
group you want to disconnect. The Disconnect Host Group dialog box appears.
4 Click Disconnect.
6 Click Create.
2 In the Destroyed Snapshots panel, click the Eradicate Snapshot icon next to the snapshot you
want to permanently eradicate. The Eradicate Snapshot dialog box appears.
3 Click Eradicate.
l Name: Specify the template used to create the volume group names. Volume group
names cannot consist of all numeric values.
Place the hash (#) symbol where the numeric part of the volume group name should
appear. When Purity//FA creates the volume group names, the hash symbol is
replaced with the volume group number, beginning with the start number specified.
l Start Number: Enter the number used to create the first volume group name.
l Count: Enter the number of volume groups to create.
l Number of Digits: Enter the minimum number of digits in the numeric part of the
volume group name. If this value is greater than the number of digits in the start
number, the number is padded with leading zeros.
l Bandwidth Limit: Optionally set the maximum QoS bandwidth limit. The bandwidth
limit applies to each volume that becomes a member of these groups. Whenever
throughput exceeds the bandwidth limit, throttling occurs. If set, bandwidth limit must
be between 1 MB/s and 512 GB/s.
l IOPS Limit: Optionally set the maximum QoS IOPS limit. The IOPS limit applies to
each volume that becomes a member of these groups. Whenever the number of I/O
operations per second exceeds the IOPS limit, throttling occurs. If set, the IOPS limit
must be between 100 and 100M.
l DMM Priority Adjustment: Optionally select +10 to give the volume a higher priority
or -10 for a lower priority.
5 Click Create.
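The name-template expansion described above (the hash replaced by the group number, zero-padded to the minimum number of digits) can be sketched as follows. This is an illustrative model of the documented behavior, not Purity//FA code, and the function name is hypothetical.

```python
def volume_group_names(template: str, start: int, count: int, digits: int) -> list[str]:
    """Expand a name template in which '#' marks the numeric part.

    Numbering begins at `start`; when `digits` exceeds the number of digits
    in a given group number, the number is padded with leading zeros.
    """
    if "#" not in template:
        raise ValueError("template must contain '#' for the numeric part")
    return [template.replace("#", str(n).zfill(digits))
            for n in range(start, start + count)]
```

For example, the template "vg-#" with a start number of 1, a count of 3, and 3 digits yields vg-001, vg-002, and vg-003; once a group number grows past the minimum width, the padding naturally disappears.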
3 In the Bandwidth Limit field, set the maximum QoS bandwidth limit for the volume group.
Whenever throughput exceeds the bandwidth limit, throttling occurs. If set, the bandwidth
limit must be between 1 MB/s and 512 GB/s.
To give the volume group unlimited throughput, clear the Bandwidth Limit field.
4 In the IOPS Limit field, set the maximum QoS IOPS limit for the volume group. Whenever the
number of I/O operations per second exceeds the IOPS limit, throttling occurs. If set, the
IOPS limit must be between 100 and 100M.
To give the volume group unlimited IOPS, clear the IOPS Limit field.
5 Click Save.
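The documented QoS ranges (1 MB/s to 512 GB/s for bandwidth, 100 to 100M for IOPS, with a cleared field meaning unlimited) can be sketched as a simple validation step. This is an illustrative check only, not Purity//FA code; in particular, whether Purity treats MB/GB as decimal or binary units here is an assumption.

```python
# Hedged sketch of the QoS limit ranges stated above. A cleared field is
# modeled as None, meaning "unlimited". Decimal units are assumed.
MB = 10**6
GB = 10**9

def validate_qos(bandwidth_bps=None, iops=None):
    """Raise ValueError if a configured limit falls outside its documented range."""
    if bandwidth_bps is not None and not (1 * MB <= bandwidth_bps <= 512 * GB):
        raise ValueError("bandwidth limit must be between 1 MB/s and 512 GB/s")
    if iops is not None and not (100 <= iops <= 100_000_000):
        raise ValueError("IOPS limit must be between 100 and 100M")
```

Whenever a configured limit is exceeded at runtime, the array throttles I/O rather than rejecting it; the validation above only models what values the GUI accepts when the limits are set.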
Chapter 6:Storage | Pods
Pods
The Storage > Pods page displays summary information for each pod on the array. See Figure
6-12.
Figure 6-12. Storage > Pods
A pod is a management container that holds a group of volumes and can be stretched or linked
between two FlashArrays. A pod serves as a consistency group that is created for truly active-
active synchronous replication (ActiveCluster) or active-passive continuous replication
(ActiveDR). When a pod is stretched over two FlashArrays, any time there is a failover between
the two FlashArrays, anything contained in that pod will be write-order consistent.
For ActiveCluster, Purity supports multiple connections between FlashArrays in a hub-and-
spoke topology for stretched pods. This way, a single FlashArray can act as a consolidator,
synchronously replicating the desired volumes for FlashArrays dedicated to specific
workloads. IP supports up to five synchronous connections between FlashArrays. Fibre Channel
supports one synchronous connection.
An array can have multiple pods, and each pod can be stretched and unstretched. When
stretching pods for ActiveCluster synchronous replication, make sure not to exceed the limits for
stretched objects like pods, volumes, volume snapshots, and protection group snapshots. For
information about the ActiveCluster synchronous IP or FC replication limits, see one of the
FlashArray Model Limits articles, as applicable to the given model, on the Knowledge site at
https://support.purestorage.com.
Volumes can be moved into and out of pods, but they cannot be moved into or out of stretched
pods. To move volumes into or out of a stretched pod, unstretch the pod before you move the
volumes. A volume cannot be copied across pods if the source and target pods are both
stretched but on different pairs.
Pods can also contain protection groups with volume, host, or host group members.
Additionally, file systems can be created inside pods, and file systems can be moved into and
out of pods. The Storage > Pods > File Systems page allows the creation of file systems within
a pod. See Figure 1-2.
A pod provides a private namespace, so the names of file systems, volumes, and protection
groups in a pod will not conflict with any file systems, volumes, or protection groups with the
same names at the root of the array. The fully qualified name of a volume in a pod is
POD::VOLUME, with double colons (::) separating the pod name and volume name. The fully
qualified name of a protection group in a pod is POD::PGROUP, with double colons (::)
separating the pod name and protection group name.
Name the file system to be created in the pod, then click Create to create a new file system in
the pod.
For example, a volume named vol02 in a pod named pod01 will be named pod01::vol02. A
protection group named pgroup01 in a pod named pod01 will be named pod01::pgroup01.
See Figure 6-14.
Figure 6-14. Configuring a Pod (part 1)
If a protection group in a pod is configured to asynchronously replicate data to a target array, the
fully qualified name of the protection group on the target array is POD:PGROUP, with a single
colon (:) separating the pod name and protection group name. For example, if protection group
pod01::pgroup01 on source array array01 asynchronously replicates data to target array
array02, the fully qualified name of the protection group on target array array02 is
pod01:pgroup01.
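The two naming conventions (double colons locally, a single colon on an asynchronous replication target) can be captured in a small helper. This is an illustrative sketch of the documented naming scheme, not a Purity API; the function name is hypothetical.

```python
def qualified_name(pod: str, obj: str, on_async_target: bool = False) -> str:
    """Build the fully qualified name of a volume or protection group in a pod.

    Locally the separator is '::' (e.g. pod01::vol02). On an asynchronous
    replication target, the protection group name uses a single ':'
    (e.g. pod01:pgroup01).
    """
    sep = ":" if on_async_target else "::"
    return f"{pod}{sep}{obj}"
```

Because the pod namespace is private, two volumes named vol02 in different pods never collide: their fully qualified names differ in the pod component.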
In addition to passive mediation and failover preference, Purity provides the pre-election
behavior to further ensure a stretched pod remains online. With pre-election, an array within a
stretched pod is chosen by Purity to keep a pod online when other failures occur in the
environment.
The pre-election behavior elects one array of the stretched pod to remain online in the rare event
that:
l The mediator is inaccessible on both arrays within the stretched pod, preventing the
arrays from racing to the mediator to determine which one keeps the pod online.
...and then later...
l The arrays within the stretched pod become disconnected from each other.
When the mediator becomes inaccessible on both arrays, Purity pre-elects an array per pod to
keep the pod online. Then, if the arrays lose contact with each other, the pre-elected array for
that pod takes over to keep the pod online while its peer array takes the pod offline.
If either array reconnects to the mediator before they lose contact with each other, the
pre-election result is cancelled. The array with access to the mediator will race to the mediator
and keep the pod online if its peer array fails or the arrays become disconnected from each other.
The pre-election status appears in the form of a heart symbol in the Mediator status column of
the Storage > Pods > Arrays panel; a gray heart means the array was pre-elected by Purity to
keep the pod online, while an empty heart means the array was pre-elected by Purity to take the
pod offline. If a heart does not appear, this means the array is connected to its peer array within
the stretched pod and at least one array in the pod has access to the mediator.
One and only one array within each pod is pre-elected at a given point in time, so while a pre-
elected array is keeping the pod online, the pod on its non-elected peer array remains offline dur-
ing the communication failure.
Users cannot pre-elect arrays. Purity uses various factors, including the following ones (listed in
order of precedence), to determine which array is pre-elected:
l If a pod has a failover preference set, then the array that is preferred will be
pre-elected.
l If one of the arrays has no hosts connected to volumes in the pod, then the other
array will be pre-elected.
l If neither of the above factors applies, one of the arrays is selected by Purity.
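The precedence order above can be sketched as a short selection function. This is an illustrative model only: Purity's actual selection logic is internal, users cannot influence it beyond the failover preference, and the tie-break in the last step is shown as a deterministic placeholder.

```python
# Illustrative sketch of the pre-election precedence for a two-array
# stretched pod, as listed above. Not Purity code; names are hypothetical.
from typing import Optional

def pre_elect(failover_preference: Optional[str],
              hosts_connected: dict[str, bool]) -> str:
    """Pick the pre-elected array for a two-array stretched pod.

    `hosts_connected` maps each array name to whether any host is connected
    to volumes in the pod on that array.
    """
    a, b = sorted(hosts_connected)
    # 1. A configured failover preference wins.
    if failover_preference in (a, b):
        return failover_preference
    # 2. If exactly one array has no connected hosts, pre-elect the other.
    if hosts_connected[a] and not hosts_connected[b]:
        return a
    if hosts_connected[b] and not hosts_connected[a]:
        return b
    # 3. Otherwise Purity picks one; here we just pick deterministically.
    return a
```

Exactly one array per pod holds the pre-election at any point in time, so the function returns a single array name.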
If the pre-elected array goes down while pre-election is in effect, the non-elected peer array will
not bring the pod online.
If the non-elected array reconnects to the mediator while it is still disconnected from the
pre-elected array, it is ignored and will still keep the pod offline. If the data in the non-elected
pod must be accessed, clone it to create a point-in-time consistent copy of the pod and its
contents, including its volumes and snapshot history. After the pod has been cloned, disconnect
the hosts from the original volumes and reconnect the hosts to the volumes within the cloned pod.
If the arrays re-establish contact with each other but the mediator is still inaccessible, the array
that was online throughout the outage starts replicating pod data to its peer array until the pod is
once again in sync and both arrays serve I/O. One array will still be pre-elected (with the
appropriate heart status still displayed) in case both arrays lose contact with each other again.
When the peer arrays re-establish contact with each other and can access the mediator, the
array that was online throughout the outage starts replicating pod data to its peer array until the
pod is once again in sync and both arrays serve I/O, at which time pod activity returns to normal.
ActiveDR Replication
ActiveDR is a Purity//FA data protection feature that enables active-passive, continuous
replication by linking pods across two FlashArrays, providing a low RPO (Recovery Point
Objective).
ActiveDR replication streams pod-to-pod transfer of compressed data from a source FlashArray
at the production site to a target FlashArray at the recovery site. If the source FlashArray
becomes unavailable due to events such as a disaster or workload migration, you can
immediately fail over to the target FlashArray.
A low RPO allows you to recover at the target site with less data loss compared to scheduled
snapshot replication. Because ActiveDR replication constantly replicates data to the target
FlashArray and does not wait for the write acknowledgment from the target FlashArray, no
additional host write latency is incurred when the distance between the two FlashArrays increases.
For information about ActiveDR and how to use ActiveDR replication to provide fast recovery,
see the following topics:
l Key Features
l Setting Up ActiveDR Replication
l Promotion Status of a Pod
l Replica Links
l Adding File Data to the Pod on the Source Array
l Performing a Failover for Fast Recovery
l Performing a Reprotect Process after a Failover
l Performing a Failback Process after a Failover
l Performing a Planned Failover
l Performing a Test Recovery Process
Key Features
ActiveDR replication provides the following key features:
l Pod-based replication - Uses a storage pod as a management container for
replication, failover, and consistency. An active pod on a source array can be linked to a
passive pod on a target array to form a pod-to-pod replication pair.
l Near-zero Recovery Point Objective (RPO) - Achieves near-zero data loss for rapid
disaster recovery at the DR site, enabling you to keep the data on the source and
target FlashArrays almost synchronized.
Note: ActiveDR replication for file systems provides up to one hour RPO.
l Test recovery without disrupting replication - Enables failover testing without
disrupting data replication to the recovery site to maintain the RPO.
l Pre-configured volume and host connection - Allows hosts to be connected to the
volumes on the target FlashArray at the recovery site before a failover to speed up
and simplify the failover process.
l Bidirectional replication - Allows different pods in the same two FlashArrays to link
and replicate in opposite directions across sites.
Note: You cannot move volumes into the source pod after the replica link is created.
To create a pod,
1 Select Storage > Pods.
2 In the Pods pane, click the menu icon and select Create....
3 In the Create Pod pane, enter the name of the pod that you want to set up as the intended
target pod and click Create.
Note: If an undo-demote pod already exists, the demotion process fails with an
error.
For more information, see "Promotion Status of a Pod" on page 146.
Note: Volumes cannot be added to pods with file systems and file systems cannot be
added to pods with volumes.
With the file systems or volumes in place, a replica link can be created to initiate ActiveDR
replication. File systems and volumes cannot be moved into a pod once replication has been
initiated; they can be moved in only after the replica link is deleted. However, you may create a
new volume or file system in the pod without deleting or pausing the link.
A replica link can only be created when the Purity//FA version on the target array is the same
as, or newer than, the version on the source array. Hence, if the source array is being upgraded
to a newer version, a link cannot be created, or recreated after deletion, without first upgrading
the target array. Only file policy or block features that are supported on both the source and
target arrays can be used; unsupported operations will fail. The recommendation is therefore to
run the same version of Purity//FA on both the source and target arrays.
When you link a source pod with a demoted pod using a replica link, the demoted pod becomes
the target pod of the source pod. The target pod serves as a replica pod to track the changes of
the source pod, including volumes, snapshots, protection groups, and protection group
snapshots.
To create a replica link,
1 Log in to the source FlashArray at the production site.
2 Select Protection > ActiveDR.
3 In the Replica Links pane, click the menu icon and select Create....
The Create Replica Link dialog box appears.
4 Provide information for the following fields:
5 Click Create.
The local and remote FlashArrays are connected, and the replica link starts the baseline
process between the source and target pods. When the baseline process is complete, the
source pod starts to replicate data to the target pod, changing the replica-link status to
replicating.
For more information about replica links, see "Replica Links" on page 149.
Note: If an undo-demote pod already exists, the demotion process fails with an
error.
A pod can have only one undo-demote pod named pod_name.undo-demote. You
cannot demote a pod that already has an undo-demote pod. To demote such a pod,
you must first eradicate the undo-demote pod. You cannot rename an undo-demote
pod; however, when you rename a demoted pod, the associated undo-demote pod
automatically inherits the new pod name. For example, renaming a demoted pod
podA to podB automatically changes the undo-demote pod name from podA.undo-
demote to podB.undo-demote.
When you demote a pod that is the source of a replica link, you must restrict the promotion
status transitions by clicking either the Quiesce button or the Skip Quiesce button in the
Demote Pod dialog box.
l The Quiesce setting
Demotes a pod to allow it to become a target pod after the replica-link status changes
to quiesced. Setting this option ensures that all local data has been replicated to the
remote pod before the pod is demoted.
You should set this option when performing a planned failover.
l The Skip Quiesce setting
Demotes a pod to allow it to become a target pod without waiting for the quiesced
status of the replica link. Using this option loses any data that has not been replicated
to the remote pod.
When you promote a pod that was demoted, note the following conditions:
l The promotion status of the pod initially shows the promoting status, indicating the
promotion process is in progress. When the promotion process is complete, the
promotion status transitions to promoted.
Note: You must wait for the promotion status to transition to promoted before access-
ing the data in the pod.
l Promoting a pod is restricted if the replica-link status is quiescing.
To override this restriction, select the Abort Quiesce button to force promotion
without waiting for the quiesce operation to complete replicating data from the source.
Using this option loses any data that has not been replicated and reverts the pod to
its most recent recovery point.
Demoting Pods
By default, the promotion status of a pod is promoted when it is initially created. You can
demote a pod to allow it to become a target pod for ActiveDR replication.
To demote a pod,
1 Log in to the FlashArray in which you want to demote the pod.
2 Select Storage > Pods and select the pod to demote.
3 In the Pods panel, click the menu icon of the pod and select Demote….
The Demote Pod dialog box appears.
4 If the pod is the source of a replica link, configure one of the following settings:
5 Click Demote.
The promotion status of the pod changes to demoted when the demotion process is
complete.
For more information, see "Promotion Status of a Pod" on page 146.
Promoting Pods
You can promote a pod that was previously demoted to allow read/write access to the host. If the
pod is the target of a replica link, the pod will be updated with the latest replicated data from the
journal.
To promote a pod,
1 Log in to the FlashArray in which you want to promote the pod.
2 Select Storage > Pods and select the pod to promote.
3 In the Pods panel, click the menu icon of the pod and select Promote….
The Promote Pod dialog box appears.
4 (Optional) Select the Abort Quiesce check box.
Using the setting promotes the pod while the replica-link status is quiescing without wait-
ing for the quiesce operation to complete.
5 (Optional) Select the Promote From 'pod.undo-demote' check box.
Setting this option promotes the pod using the associated undo-demote pod as the source.
When the promotion process is complete, the pod contains the same configuration and
data as the undo-demote pod. The undo-demote pod will be eradicated.
6 Click Promote.
The promotion status of the pod changes to promoted when the promotion process is
complete. You must wait for the promotion status to transition to promoted before accessing
the data in the pod.
For more information, see "Promotion Status of a Pod" on page 146.
Replica Links
When you associate a source pod with a demoted pod by creating a replica link, the demoted
pod becomes the target pod of the source pod. The direction of the replica link is from the
promoted source pod to the demoted target pod. You can create replica links in either direction
between the same two FlashArrays. The target pod of a replica link cannot be on the same
FlashArray as the source pod.
The target pod of a replica link tracks the data and configuration changes of the source pod,
including changes to volumes, snapshots, protection groups, and protection group snapshots.
Changes to the source pod are continuously replicated to the target FlashArray where they are
stored in the background in a journal. When the target pod is demoted, it is updated with the
latest changes from the journal every few minutes.
This form of replication does not have an impact on front-end write latency because host writes
on the source are not required to wait for acknowledgment from the target FlashArray as they
would with ActiveCluster replication. Therefore, writes on the source are not affected by latency
on the replication network or the distance between the source and target FlashArrays.
Note the following configuration differences between a source pod and the associated target
pod:
l The replicated volumes in the target pod have different serial numbers from the same
volumes in the source pod.
l The target pod has different hosts and host group connections.
To view more detailed information about replica links, see
l "Displaying Replica Links" on page 151
l "Displaying the Lag and Bandwidth Details of Replica Links" on page 152
Replica-Link Status
Replica-link status includes the following values:
l baselining
Indicates that the source pod is sending the initial data set.
Note: During the baseline process, promoting a target pod in the demoted status is
not allowed.
l idle
Indicates that write streams stop because the source pod is being demoted with the
Skip Quiesce setting.
l paused
Indicates that ActiveDR replication between the source and target pods has been
paused.
For information on how to resume the replication, see "Managing Replica Links" on
page 145.
l quiescing
Indicates that the source pod is not accepting new writes and the most recent writes
to the source pod are currently being replicated to the target pod.
l quiesced
Indicates that the source pod is demoted and all the new writes have been replicated
to the target pod.
l replicating
Indicates that the source pod is replicating data to the target pod.
l unhealthy
Indicates that the current replica link is unhealthy. You should check the connection.
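The promotion restrictions described in this section follow from these statuses: a demoted target pod cannot be promoted during the baseline process, and promotion during quiescing requires the Abort Quiesce override. A minimal sketch of that rule follows; this is a hypothetical helper for illustration, not a Purity API.

```python
# Sketch of the target-pod promotion restrictions described above.
# Hypothetical helper, not a Purity API.
def can_promote(link_status: str, abort_quiesce: bool = False) -> bool:
    """Return whether a demoted target pod may be promoted given the
    replica-link status and the Abort Quiesce override."""
    if link_status == "baselining":
        return False          # promotion is not allowed during baselining
    if link_status == "quiescing":
        return abort_quiesce  # requires the Abort Quiesce override
    return True
```

Note that using Abort Quiesce loses any data not yet replicated and reverts the pod to its most recent recovery point, as described earlier.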
Note: The lag and recovery point refer to the data that is successfully replicated to the
journal on the target and can be recovered by promoting the pod. The current contents of
the target pod might not reflect the reported recovery point. The reason is that the target
pod is updated periodically only when it is demoted.
Bandwidth Requirements
There are no bandwidth requirements to maintain the near-zero RPO for ActiveDR replication.
However, when the front-end data transfer rate exceeds the available bandwidth in your
environment, RPO increases and ActiveDR replication automatically transitions to asynchronous
mode to minimize lag.
Unlink Operation
ActiveDR replication associates a source pod with a target pod using a replica link. When you
unlink the two pods by deleting the replica link, the data in the target journal is automatically
transferred to an undo-demote pod. You can retrieve the data using the undo-demote pod;
therefore, the pods may be relinked without transferring a complete baseline of all data. An
undo-demote pod is automatically eradicated after its eradication pending period has elapsed.
Note: Before a failover process, you should configure ActiveDR replication by linking the
source FlashArray at the production site with a target FlashArray at the recovery site to
protect your mission-critical workloads. For more information, see "Setting Up ActiveDR
Replication" on page 141.
Failover Preparation
To speed up and simplify a failover process, you can connect the hosts to the volumes in the
target pod at the recovery site before a disaster occurs. After this connection, these replica
volumes provide only read access while the target pod is in a passive state.
In a disaster event, the source pod fails over to a designated target pod that is promoted to allow
read/write access to the host. Before you start a host application, you should remount the file
systems through the host OS. This refreshing process ensures that the host OS or applications
have been cleared and do not contain stale or invalid data from the previous state of the
volumes.
Note: The capability to access volumes in a read-only state depends on the host operating
system and applications being able to mount and read a read-only volume. This capability
varies by operating system and version.
after a failover so that the original production site becomes the target of replication and the new
production site is protected. See Figure 6-20.
Figure 6-20. Reprotecting Data at the New Production Site after a Failover
3 In the Demote Pod dialog box, demote the pod on the restored FlashArray with the Skip
Quiesce setting.
Using this setting demotes the pod to allow it to become a target pod without waiting for the
quiescing replica-link status. The replica link automatically reverses its direction. Note
that any data that has not been replicated is preserved for at least 24 hours in the undo-
demote pod.
For more information about demoting a pod, see "Demoting Pods" on page 148.
If the network is disconnected during the demotion process, the promotion status of the pod transitions to demoting. The reprotect process is complete when the network is restored and the pod promotion status transitions to demoted.
The promotion status of the pod changes to demoting, and the replica-link status transitions to quiescing.
Setting the Quiesce option demotes the pod to allow it to become a target pod after the replica-link status changes to quiesced. Using this setting ensures that all local data has been replicated to the remote pod before the pod is demoted.
For more information about demoting a pod, see "Demoting Pods" on page 148.
4 Wait for the replication to complete by monitoring the replica-link status.
When no more new writes occur, the replica-link status changes to quiesced and the promotion status of the pod changes to demoted. Alternatively, you can monitor the lag or the recovery point to determine when the last write occurred.
For more information about replica links, see "Replica Links" on page 149.
5 Promote the target pod to be the new source pod by clicking the menu icon of the pod and selecting Promote....
The promotion status of the target pod changes to promoting while the target pod is being updated with the most recent write. When the promotion process is complete, the promotion status changes to promoted and the target pod can now allow write access to the host. As soon as the status transitions to promoted, the replica link reverses its direction.
6 Start your production applications on the new source FlashArray.
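The wait in step 4 can be sketched as a simple polling loop. This is illustrative only: `get_status` is a hypothetical stand-in for however you query the replica-link status (GUI, CLI, or REST), and the interval and timeout values are arbitrary.

```python
import time

def wait_for_quiesced(get_status, poll_interval=5, timeout=3600):
    """Poll a replica-link status callable until it reports 'quiesced'.

    get_status is a placeholder for however you query the link status;
    it must return the current status string. Returns True once the
    link is quiesced, or False if the timeout elapses first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "quiesced":
            return True
        time.sleep(poll_interval)
    return False

# Example with a stand-in status source that quiesces on the third poll:
statuses = iter(["replicating", "quiescing", "quiesced"])
print(wait_for_quiesced(lambda: next(statuses), poll_interval=0))  # True
```

In practice the same loop could watch the lag or recovery point instead of the status string, as the step above notes.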
promotion status of the source pod transitions from demoting to demoted. The replica link reverses its direction and the target pod becomes the new source pod.
After the demotion process of the source pod (status demoted)
l The replica-link status is quiesced, but the replica link has not reversed its direction. If the target pod goes offline, you can promote the source pod by clicking the menu icon of the source pod and selecting Promote….
The replica-link status changes to unhealthy because the target pod is unavailable. When the target pod comes back online, the replica-link status transitions to replicating.
1 Configure ActiveDR replication by associating the source pod with the target pod for data protection, as described in "Setting Up ActiveDR Replication" on page 141.
2 Select Storage > Pods and select the target pod to promote at the test site.
3 Promote the target pod by clicking the menu icon of the pod and selecting Promote....
The promotion status of the target pod changes to promoting. When the promotion process is complete, the promotion status changes to promoted. After being promoted, the target pod can now provide read/write access to the host. The source pod continues replicating data in the background in a journal without periodically updating the promoted target pod.
4 Bring up the host on the target pod.
The data presented to the host reflects the point in time when the last data was replicated, before the target pod was promoted.
5 Perform your tests on the target pod.
In the meantime, replication continues streaming writes in the background in a journal without periodically updating the target pod. Therefore, you maintain the RPO without losing any data.
6 When the test is complete, terminate the test recovery process by demoting the target pod.
When you demote the target pod,
l The test data written to the target pod will be discarded. However, the data will be
saved in an undo-demote pod that is placed in an eradication pending period.
l ActiveDR replication resumes streaming writes to the target pod.
During an actual failover, when the source FlashArray is offline, ActiveDR replication is disrupted so that no new writes are replicated from the source pod to the target pod. However, if both the source and target FlashArrays are still online and connected, as in the test recovery process, ActiveDR replication streams new writes in the background in a journal without periodically updating the target pod to maintain the RPO. You can optionally choose to stop ActiveDR replication from the source pod to the target pod.
Chapter 6:Storage | File Systems
File Systems
The Storage > File Systems page displays file systems, managed directories, file exports, policies, and directory snapshots on the FlashArray. View and manage the storage objects and the connections between them. Click a file system or a directory to go into its details. See Figure 6-23.
Figure 6-23. Storage > File Systems Page
The File Systems panel, which is available from the File Systems view, displays a list of file systems on the array. Click on a file system name for further details.
Creating a Directory
The Directories panel displays a list of all managed directories on the array or on the selected
file system.
To create a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the Directories panel, click the menu icon and select Create, or click the Create Directory
(plus) icon.
4 In the pop-up window, specify the directory as follows:
l File System: If not pre-selected, select a file system from the drop-down list.
l Name: The name to be used for administration.
l Path: The full path for the new directory. The path for a managed directory can be up
to eight levels deep, seven levels below the root.
Click the Create button and the managed directory is created.
Click on a directory name for further details.
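The path rule above can be sketched as a small validation function. This is an illustrative reading of the limit (the root counting as level one, with up to seven levels below it), not the array's actual validation logic.

```python
def managed_directory_depth_ok(path, max_levels=8):
    """Check that a managed-directory path is at most eight levels deep
    (the root plus up to seven levels below it), per the path rule above.

    Assumes a POSIX-style absolute path such as '/dir1/dir2'.
    Returns True when the path is within the limit.
    """
    if not path.startswith("/"):
        raise ValueError("expected an absolute path")
    # '/' alone is the root: depth 1. Each non-empty component adds a level.
    components = [c for c in path.split("/") if c]
    return 1 + len(components) <= max_levels

print(managed_directory_depth_ok("/a/b/c/d/e/f/g"))    # root + 7 levels -> True
print(managed_directory_depth_ok("/a/b/c/d/e/f/g/h"))  # root + 8 levels -> False
```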
Renaming a Directory
Note: Rename a managed directory to change the name by which Purity//FA identifies and displays the managed directory in administrative operations. The new directory name is effective immediately, and the old name is no longer recognized in CLI, GUI, or REST interactions. Note that the root directory cannot be renamed.
To rename a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the Directories panel, click the rename icon for the directory you want to rename. The Rename Directory dialog box appears.
4 In the Name field, enter the new name of the managed directory, and click the Rename button.
l Export Name: The name of the export. This name is used when mounting on the client side.
Click the Create button and the export is created.
By selecting policies for both NFS and SMB, two exports are created in one operation, both with
the same name. This is possible since the two exports reside in different namespaces.
To delete a file export: In the Directory Exports panel, click the Delete Exports icon (garbage) for
the export that you want to remove. Then confirm the action by clicking the Delete button.
To create, manage, enable, or disable export policies and rules, see the Storage > Policies page.
Adding a Policy
Export policies, snapshot policies for scheduled snapshots, and quota policies can be added to
a managed directory. Adding an export policy is equivalent to creating a file export, as described
above.
Export policies and quota policies are created on the Storage > Policies page, and snapshot
policies are created on the Protection > Policies page.
To add a policy to a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 Select a directory by clicking the directory name.
4 In the Policies panel, click the menu icon and select Add Export Policies, Snapshot
Policies, or Quota Policies.
5 In the pop-up window:
l Select one or more policies to be added.
l For exports, set a name that will be used when mounting on the client side.
6 Click the Add button and the policy is added.
l Suffix: Optionally, specify a suffix string to replace the unique number that Purity//FA
creates for the directory snapshot.
Click the Create button and the snapshot is created.
To change snapshot attributes or destroy the snapshot, click the menu icon next to the snapshot
and select Edit, Rename, or Destroy. Only manual snapshots can be renamed.
Protection plans and scheduled snapshots are configured through the Protection > Policies
page.
Directory Details
The Directory Details panel displays additional details for the selected directory. The following
information is available:
l File System: The name of the file system where the directory exists.
l Path: The full path of the directory.
l Created: The date and time when the directory was created.
Chapter 6:Storage | Policies
Policies
The Storage > Policies page displays SMB and NFS export policies, which are used to create file exports. Directory quota policies are used for creating directory quota limits. See Figure 6-24.
Figure 6-24. Storage > Policies Page
Note: Predefined export policies may exist, which can be used to create SMB and NFS exports. These policies should be reviewed and updated according to actual requirements before use.
For export policies or quota policies, click a policy name to go into its details:
l Member panel - displays directories that are members of the policy.
l Rule panel - displays rules that are added to the policy.
l Details panel - displays information about the policy and its rules, for example: type
of export, enabled or disabled, and the supported NFS version for NFS exports.
3 In the Export Policy panel, click the menu icon and select Create, or click the Create Policy
(plus) icon.
4 In the pop-up window, specify the policy as follows:
l Type: SMB or NFS, selected from the drop-down list.
l Name: The name of the policy.
l Enabled: Click the toggle icon to enable (blue) or disable (gray) the policy.
l Access Based Enumeration: SMB only. To enable this feature, click the toggle icon.
l User Mapping Enabled: NFS only. To disable user mapping, click the toggle icon
(gray).
5 Click the Create button and the export policy is created.
The SMB “Access Based Enumeration” option allows directories and files to be hidden from clients that have less than generic read permissions. When enabled, these objects are omitted from the FlashArray's response.
The NFS “User Mapping Enabled” option allows user UIDs and GIDs to be provided by directory services. User mapping is enabled by default. Disabling this option allows file services to operate without directory services. Disabling user mapping for existing files or directories might cause accessibility issues.
Valid Hostname
A valid hostname is one of the following:
l A fully qualified domain name (FQDN), for example mycomputer.mydomain.
l A hostname with wildcard characters, for example mycomputer*, where * matches zero or more characters, or mycomputer.m?domain, where ? matches one character.
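The wildcard rules above correspond to shell-style glob matching, which Python's standard `fnmatch` module implements. This is a sketch for reasoning about the patterns, not the array's own matcher; details such as case handling are assumptions here.

```python
from fnmatch import fnmatchcase

def client_matches(hostname, pattern):
    """Return True when a client hostname matches a policy hostname
    pattern, where '*' matches zero or more characters and '?' matches
    exactly one. Illustrative only; case-insensitive by assumption,
    since DNS hostnames are case-insensitive.
    """
    return fnmatchcase(hostname.lower(), pattern.lower())

print(client_matches("mycomputer.mydomain", "mycomputer*"))          # True
print(client_matches("mycomputer.mydomain", "mycomputer.m?domain"))  # True
print(client_matches("mycomputer.otherdomain", "mycomputer.m?domain"))  # False
```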
The SMB “Anonymous Access Allowed” option allows clients that do not provide credentials to access the export. If the option is disabled, anonymous users are denied access.
With the “SMB Encryption Required” option enabled, data encryption is enabled for the export. This requires the remote client to use SMB encryption; clients that do not support encryption are denied access. By default, when SMB encryption is enabled, only SMB 3.0 clients are allowed access. If the option is disabled, negotiation of encryption is enabled but data encryption is not turned on for this export.
With the NFS “root-squash” option selected, which is the default, client users and groups with
root privileges are prevented from mapping their root privileges to a file system. All users with
UID 0 will have their UID mapped to the anonymous UID (default 65534). All users with GID 0
will have their GID mapped to anonymous GID (default 65534). With the “all-squash” option, all
users are mapped to the anonymous UID/GID. The “no-root-squash” option allows root users
and groups to access the file system with root privileges.
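The three squash options can be summarized in a short mapping function. This is a sketch of the behavior described above (with the default anonymous UID/GID of 65534 from the text), not the array's implementation.

```python
ANON_UID = 65534  # default anonymous UID, per the description above
ANON_GID = 65534  # default anonymous GID

def squash_ids(uid, gid, mode="root-squash"):
    """Sketch of the NFS squash options described above.

    root-squash (the default): only UID 0 / GID 0 are remapped.
    all-squash: every user is remapped to the anonymous IDs.
    no-root-squash: IDs pass through unchanged.
    """
    if mode == "all-squash":
        return ANON_UID, ANON_GID
    if mode == "root-squash":
        return (ANON_UID if uid == 0 else uid,
                ANON_GID if gid == 0 else gid)
    if mode == "no-root-squash":
        return uid, gid
    raise ValueError(f"unknown squash mode: {mode}")

print(squash_ids(0, 0))                      # (65534, 65534)
print(squash_ids(1000, 1000))                # (1000, 1000)
print(squash_ids(1000, 1000, "all-squash"))  # (65534, 65534)
```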
The NFS “rw” option allows both read and write requests, which is the default. With the “ro”
option, the exports that use the policy provide read-only access and any request that changes
the file system is denied. The file access timestamp will be updated for read-only access as well
as for read and write.
Modifying a Quota
Once quota rules are defined, they can be modified or renamed. Members can be added or
removed.
1 Log in to the array.
2 Select Storage > Policies.
3 In the Quota Policies panel, select a quota policy by clicking its name.
To modify a rule, click the edit icon next to the rule. Alternatively, for multiple rules, click the menu icon and select Edit.... Then, in the pop-up window, specify one or more rules to be modified, separated by commas.
When modifying or enforcing an existing quota limit, the “Ignore Usage” option can be used to override directory usage scanning and allow the changes.
Click the Save button and the quota rule is modified.
Editing a Policy
Policies can be temporarily disabled and re-enabled, or selected features can be disabled or
enabled, by editing the policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy
and select Edit....
Click the toggle icon to enable (blue) or disable (gray) each feature and then click the Save button.
Renaming a Policy
To rename a quota policy or an export policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy you
want to rename and select Rename....
4 In the Name field, enter the new name of the policy and click the Rename button.
The policy is renamed.
Chapter 7:
Protection
The Protection page displays snapshots, policies, protection groups, ActiveDR replica links, and
ActiveCluster pods that have been promoted but not linked.
The Protection page includes the following tabs:
l Arrays
l Snapshots
l Policies
l Protection Groups
l ActiveDR
l ActiveCluster
Chapter 7:Protection | Array
Array
The Protection > Array page displays a summary of the protection components on the array, a
list of other arrays that are connected to this array, and a list of offload targets, such as Azure
Blob containers, NFS devices, and S3 buckets, that are connected to this array. See Figure 7-1.
Figure 7-1. Protection – Array
The array summary panel (with the array name in the header bar) contains a series of rectangles (technically known as hero images) representing the protection components, such as snapshots, protection groups, and policies, on the array. The numbers inside each hero image represent the number of objects created for each of the respective components. Click a rectangle to jump to the page containing the details for that particular protection component.
Array attributes, such as array name and array time, are configured through the Settings > System page.
The Connected Arrays panel displays a list of arrays that are connected to the current array. A
connection must be established between two arrays in order for array-based data replication to
occur.
Purity//FA offers three types of replication: asynchronous replication, ActiveDR replication, and
ActiveCluster replication.
Asynchronous replication allows data to be replicated from one array to another. When two
arrays are connected for asynchronous replication, the array where data is being transferred
from is called the local (source) array, and the array where data is being transferred to is called
the remote (target) array. Asynchronous replication is configured through protection groups. For
more information about protection groups, refer to the Protection > Protection Groups section.
ActiveDR replication allows pod-to-pod, continuous replication of compressed data from a source array at the production site to a target array at the recovery site, providing a near-zero Recovery Point Objective (RPO). For more information about ActiveDR replication, see "ActiveDR Replication" on page 140.
ActiveCluster replication allows I/O to be sent to either of two connected arrays and synchronized on the other array. ActiveCluster replication is configured through pods. For more information about pods, refer to Pods.
For information about Purity//FA replication requirements and interoperability details, see the
Purity Replication Requirements and Interoperability Matrix article on the Knowledge site at
https://support.purestorage.com.
Arrays are connected using a connection key, which is supplied from one array and entered into
the other array.
The Connected Arrays panel displays a list of FlashArray arrays that are connected to the current array, and the attributes associated with each connection. See Figure 7-2.
The Status column displays the connectivity status between the current array and each remote array. A status of connected means the current array is connected to the remote array. Network or firewall issues could prevent the current array from establishing a connection to the remote array.
The Type column displays the type of connection that has been established between the two
arrays for asynchronous replication (async-replication) and synchronous replication
(sync-replication) purposes. Array connections set to async-replication support
asynchronous replications only, while array connections set to sync-replication support
both synchronous and asynchronous replications.
The Management Address column displays the virtual IP address or FQDN of the other array. The Replication Address column displays the IP address or FQDN of the interface(s) on the other array that have been configured with the replication service. The management and replication addresses only appear on the array from which the array connection was made. If the array connection was made from its peer array, the Management Address and Replication Address columns are empty.
The Array Connections panel also allows you to create new connections to other FlashArray arrays, view and copy the array connection key, and configure network bandwidth throttling limits for asynchronous replications.
The network bandwidth throttling feature regulates when and how much data should be transferred between the arrays. Once two arrays are connected, optionally configure network bandwidth throttling to set maximum threshold values for outbound traffic.
In the Array Connections panel, the Throttled column indicates whether network bandwidth throttling has been enabled (True) or disabled (False).
Two different network bandwidth limits can be set:
l Set a default maximum network bandwidth threshold for outbound traffic.
and/or
l Set a range (window) of time in which the maximum network bandwidth threshold is in
effect.
If both thresholds are set, the “window” limit overrides the “default” limit.
The limit represents an average data rate, so actual data transfer rates can fluctuate slightly
above the configured limit.
To completely stop the data transfer process, refer to "Managing Replica Links" on page 145
and use the Replica Links pause and resume actions.
In the following example, the current array has been configured to throttle whenever the rate of
data being transferred to array vm-rep exceeds 4 GB/s, except between 10:00am and 3:00pm,
when throttling will occur whenever the data transfer rate exceeds 2 GB/s. See Figure 7-3.
Figure 7-3. Editing Bandwidth Throttling
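The override rule in the Figure 7-3 example can be sketched as a small function that picks the limit in effect at a given time of day. This is illustrative only; it assumes the throttle window does not cross midnight, and the units are whatever you configured (GB/s here).

```python
from datetime import time

def effective_limit(now, default_limit, window_limit=None,
                    window_start=None, window_end=None):
    """Return the bandwidth ceiling in effect at time-of-day `now`.

    Mirrors the rule above: if both limits are set, the window limit
    overrides the default while `now` falls inside the window.
    Assumes the window does not cross midnight.
    """
    in_window = (window_limit is not None
                 and window_start is not None
                 and window_end is not None
                 and window_start <= now < window_end)
    return window_limit if in_window else default_limit

# The example above: 4 GB/s by default, 2 GB/s from 10:00am to 3:00pm.
print(effective_limit(time(12, 0), 4, 2, time(10, 0), time(15, 0)))  # 2
print(effective_limit(time(16, 0), 4, 2, time(10, 0), time(15, 0)))  # 4
```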
Offload Targets
Note: Offload targets are not supported on FlashArray//C.
The offload target feature enables system administrators to replicate point-in-time volume snapshots from the array to an external storage system. Each snapshot is an immutable image of the volume data at that instance in time. The data is transmitted securely and stored unencrypted on the storage system.
Before you can connect to, manage, and replicate to an offload target, the respective Purity//FA
app must be installed. For example, to connect to an NFS offload target, the Snap to NFS app
must be installed. To connect to an Azure Blob container or S3 bucket, the Snap to Cloud app
must be installed. To determine if apps are installed on your array, run the pureapp list command. To install the Snap to NFS or Snap to Cloud app, contact Pure Storage Technical Services.
The Offload Targets panel displays a list of all offload targets that are connected to the array.
See Figure 7-4.
Figure 7-4. Offload Targets Panel
Each offload target represents an external storage system such as an Azure Blob container,
NFS device, or S3 bucket to where Purity//FA volume snapshots (generated via protection group
snapshots) can be replicated.
An array can be connected to one offload target at a time, while multiple arrays can be connected to the same offload target.
An offload target can have one of the following statuses:
l Connected: Array is connected to the offload target and is functioning properly.
l Connecting: Connection between the array and offload target is unhealthy, possibly due to network issues. Check the network connectivity between the interfaces, disconnect the array from the offload target, and then reconnect. If the issue persists, contact Pure Storage Technical Services.
l Not Connected: Offload app is not running. Data cannot be replicated to offload targets. Contact Pure Storage Technical Services.
l Scanning: A connection has been established between the array and offload target, and the system is determining the state of the offload target. Once the scan successfully completes, the status will change to Connected.
Offload targets that are disconnected from the array do not appear in the list of offload targets. Whenever an array is disconnected from an offload target, any data transfer processes that are in progress are suspended. These processes resume when the connection is re-established.
In the Offload Targets panel, click the name of the offload target to view its details.
The Offload Targets detailed view, which is accessed by clicking the name of the offload target
from the Protection > Array > Offload Targets panel, displays a list of protection groups that are
connected to the offload target and the protection group snapshots that have been replicated
and retained on the offload target.
The Protection Groups panel displays a list of all protection groups, both local and remote to the array, that are connected to the offload target. If the protection group exists on the local array, click the name of the protection group to drill down to its protection group details; otherwise, hover over the name of the protection group to view its snapshot retention details.
In the Protection Group Snapshots panel, the details for each snapshot include the snapshot name, source array and protection group, replication start and end times, amount of data transferred, and replication progress. The data transferred amount is calculated as the size difference between the current and previous snapshots after data reduction.
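The data-transferred calculation above amounts to a simple difference of post-reduction snapshot sizes. This sketch is only an illustration of that arithmetic; the sizes, units, and clamping behavior are assumptions, not the array's exact accounting.

```python
def data_transferred(current_size, previous_size):
    """Illustrative sketch: data transferred for a replicated snapshot,
    computed as the size difference between the current and previous
    snapshots after data reduction (per the description above).
    Sizes in bytes; clamped at zero as an assumption.
    """
    return max(current_size - previous_size, 0)

# E.g. a 120 GiB current snapshot following a 100 GiB previous one:
print(data_transferred(120 * 2**30, 100 * 2**30) / 2**30)  # 20.0 (GiB)
```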
In Figure 7-5, an offload target named nfs-target is connected to array pure-001 and is an offload target for protection group pgroup01. Twelve protection group snapshots have been replicated to offload target nfs-target.
Figure 7-5. Connecting an Offload Target to an Array
Click a protection group to further drill down to its details, including the volumes that it protects,
the snapshot and replication schedules, and the offload targets to where the protected volumes
are replicated. In the Protection Group Snapshots panel, the protection group snapshots listed
represent the snapshots that have been taken and retained on the current array in accordance
with the snapshot schedule.
To replicate volume snapshots to an offload target, the array must be able to connect to and
reach the external storage system. Before you configure an offload target on the array, perform
the following steps to verify that the network is set up to support the offload process:
1 Verify that at least one interface with the replication service is configured on the array. Assign an IP address to the port; this will be the interface used to connect to the target device, such as an Azure Blob container, a NAS/NFS device, an NFS storage system, an S3 bucket, or a generic Linux server. For optimum performance, an Ethernet interface of at least 10GbE is recommended.
2 Prepare the offload target.
l For Azure Blob, create a Microsoft Azure Blob container and set the storage account to the hot access tier. Grant basic read and write ACL permissions, and verify that the container contains no blobs. By default, server-side encryption is enabled for the container and cannot be disabled.
l For NFS, create the NFS export, granting read and write access to the array for all
users.
l For S3, create an Amazon S3 bucket. Grant basic read and write ACL permissions,
and enable default (server-side) encryption for the bucket. Also verify that the bucket
is empty of all objects and does not have any lifecycle policies.
3 Verify that the array can reach the offload target.
After you have prepared the network connections on the array to support replication to an offload
target, perform the following high-level steps to configure the offload target on the array:
1 Connect the array to the offload target.
l For Azure Blob, creating the connection to the Microsoft Azure Blob container
requires the Azure Blob account name and the secret access key, both of which are
created through the Microsoft Azure storage website.
l For NFS, creating the connection requires the host name or IP address of the server
(such as the NFS server) and the mount point on the server.
l For S3, creating the connection to the Amazon S3 bucket requires the bucket's
access key ID and secret access key, both of which are created through Amazon
Web Services.
2 Define which volumes are to be replicated to the offload target.
3 Create a protection group.
4 Add the volumes to the protection group.
5 Add the offload target to the protection group.
6 To replicate data to the offload target on a scheduled basis, set the replication schedule for
the protection group, and then enable the schedule to begin replicating the volume snapshot
data to the offload target according to the defined schedule. Skip this step if you only want to
replicate data on demand.
Snapshot data can also be replicated on demand. On-demand snapshots represent single snapshots that are manually generated and retained on the source array at any point in time. By default, an on-demand snapshot is retained indefinitely or until it is manually destroyed. When generating an on-demand snapshot, optionally add a suffix to the snapshot name, apply the scheduled retention policy to the snapshot, and asynchronously replicate the on-demand snapshot to the offload target. See Figure 7-6.
When an on-demand snapshot is replicated and no retention policy is applied, the snapshot is retained on both the source and target arrays. If a retention policy is applied, the snapshot will not be retained on the source after replication, although one snapshot may be kept as a baseline. To keep the snapshot on the source after replication, take another on-demand snapshot without replication.
Figure 7-6. Create Snapshot
The "Optional Suffix" option allows you to add a unique suffix to the on-demand snapshot name.
The suffix name, which can include letters, numbers and dashes (-), replaces the protection
group snapshot number in the protection group snapshot name. Select the “Apply Retention”
option to apply the scheduled snapshot retention policy to the on-demand snapshot. If you do
not enable “Apply Retention”, the on-demand snapshot is saved until you manually destroy it.
Select the “Replicate Now” option to replicate the snapshot to the target arrays.
Restoring a volume brings the volume back to the state it was in when the snapshot was taken. Restoring a volume from an offload target involves getting the volume snapshot from the offload target, and then copying the restored volume snapshot to create a new volume or overwrite an existing one. Volume snapshots that have been replicated to an offload target can only be restored through the Purity//FA system.
Any array that is connected to the offload target can get the volume snapshots. However, only
the array that configured the offload target can modify its protection group replication schedule
and destroy, recover, and eradicate the protection group snapshots on the offload target.
Destroying a protection group implicitly destroys all of its protection group snapshots. Des-
troying a protection group snapshot destroys all of its protection group volume snapshots,
thereby reclaiming the physical storage space occupied by its data.
Protection groups and protection group snapshots created for offload targets follow the same
eradication pending behavior as most other FlashArray storage objects.
Connecting Arrays
Connect two arrays to perform asynchronous and synchronous replication.
To connect two arrays:
1 Log in to one of the arrays.
2 Select Protection > Array.
3 In the Connected Arrays panel, click the menu icon and select Get Connection Key. The
Connection Key pop-up window appears.
4 Copy the connection key string.
5 Log in to the other array.
6 Select Protection > Array.
7 Click the menu icon and select Connect Array. The Connect Array pop-up window appears.
8 Set the following connection details:
l In the Management Address field, enter the virtual IP address or FQDN of the other
array.
l In the Type field, select the connection type. Valid connection types include async-
replication for asynchronous replication, and sync-replication for syn-
chronous replication.
Array connections set to async-replication support asynchronous replications
only, while array connections set to sync-replication support both synchronous
and asynchronous replications.
l In the Connection Key field, paste the connection key string that you copied from the
other array.
l In the Replication Address field, enter the IP address or FQDN of the interface on the
other array.
9 Click Connect. The array appears in the list of connected arrays and a green check mark
appears in the row, indicating that the two arrays are successfully connected.
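The details collected in step 8 can be modeled as a small validation helper. This is an illustrative sketch: the field names mirror the dialog labels and the example values are made up; neither is the actual Purity//FA REST schema.

```python
# Illustrative sketch of the connection details gathered in step 8. The field
# names mirror the GUI labels; they are not the actual Purity//FA REST schema.
VALID_TYPES = {"async-replication", "sync-replication"}

def build_connection(management_address, connection_type,
                     connection_key, replication_address):
    """Validate and bundle the connection details for the other array."""
    if connection_type not in VALID_TYPES:
        raise ValueError(f"invalid connection type: {connection_type}")
    return {
        "management_address": management_address,
        "type": connection_type,
        "connection_key": connection_key,
        "replication_address": replication_address,
    }

# sync-replication connections also support asynchronous replication.
conn = build_connection("array2.example.com", "sync-replication",
                        "example-connection-key", "10.0.0.5")
print(conn["type"])  # sync-replication
```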
Disconnecting Arrays
For asynchronous replication, disconnecting two arrays suspends any in-progress data transfer
processes. The process resumes when the arrays are reconnected.
For synchronous replication, you cannot disconnect the arrays if any pods are stretched
between the two arrays.
To disconnect two arrays:
1 Log in to one of the arrays.
2 Select Protection > Array.
3 In the Connected Arrays panel, click the disconnect icon (X) for the array you want to dis-
connect.
4 Click Disconnect.
The Protection Groups panel displays a list of all protection groups, both local and remote to
the array, that are connected to the offload target. If the protection group exists on the local
array, click the name of the protection group to drill down to its protection group details;
otherwise, hover over the name of the protection group to view its snapshot retention details.
The Protection Group Snapshots panel displays a list of protection group snapshots that
have been replicated to the offload target. To further drill down to see the volume snap-
shots for a protection group snapshot, click the corresponding Get snapshots from offload
targets (download) icon.
Note: Connecting to an NFS offload target is not supported on Cloud Block Store.
l Access Key ID: Type the access key ID of the AWS account. The access key is 20
characters in length.
l Bucket: Type the name of the Amazon S3 bucket.
l Secret Access Key: Type the secret access key of the AWS account to authenticate
requests between the array and S3 bucket. The secret access key is 40 characters in
length.
l If this is the first time a FlashArray array is connecting to this bucket, select the check
box next to Initialize bucket as offload target to prepare the S3 bucket as an offload
target. The array will only initialize the S3 bucket if it is empty.
If other FlashArray arrays have already connected to this bucket, do not select the
check box.
4 Click Connect.
The Summary panel appears with the list of restored volume snapshots. Click OK. Option-
ally click the Go to Volumes page link to view the restored snapshots in the Volume Snap-
shots panel.
Once a volume snapshot has been restored, it can be copied to create a new volume or over-
write an existing one.
2 Click the offload target from where you want to recover the destroyed protection group snap-
shot.
3 At the bottom of the Protection Group Snapshots panel, click Destroyed to expand the win-
dow. The Destroyed Protection Group Snapshots panel appears.
4 In the Destroyed Protection Group Snapshots panel, click the Recover Protection Group
Snapshot icon. The Recover Protection Group Snapshot pop-up window appears.
5 Click Recover.
Snapshots
The Protection > Snapshots page enables you to manage snapshots and contains panels for
volume snapshots and directory snapshots.
The Directory Snapshots panel only contains locally created snapshots. See Figure 7-8.
The details for each snapshot include the snapshot name, date and time created, and amount of
data transferred. The data transferred amount is calculated as the size difference between the
current and previous snapshots after data reduction.
Destroying a Snapshot
To destroy a volume snapshot or directory snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots or Directory Snapshots panel, select the menu icon and then select
Destroy.
4 Select one or more snapshots from the list and then click the Destroy button.
The destroyed snapshot appears in either the volume snapshots or directory snapshots Des-
troyed panel and begins its eradication pending period.
During the eradication pending period, you can recover the snapshot to bring it back to its pre-
vious state, or manually eradicate the destroyed snapshot to reclaim physical storage space.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the snapshots. Once reclamation starts, either because you have manually
eradicated the destroyed snapshot, or because the eradication pending period has elapsed, the
destroyed snapshot can no longer be recovered.
The length of the eradication pending period typically is different for SafeMode-protected objects
and other objects, and is configured in the Settings > System > Eradication Configuration pane.
See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.
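The recover-or-eradicate window described above can be sketched as a simple predicate. This is an illustrative sketch only: the 24-hour default below is an assumption for the example, since the actual period is configured per array in the Eradication Configuration pane.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the eradication pending window. The 24-hour default
# is an assumption for this example; the real period is an array setting.
def is_recoverable(destroyed_at, now, pending=timedelta(hours=24),
                   manually_eradicated=False):
    """A destroyed snapshot is recoverable until it is manually eradicated
    or its eradication pending period elapses."""
    return (not manually_eradicated) and now < destroyed_at + pending

t0 = datetime(2023, 7, 1, 9, 0)
print(is_recoverable(t0, t0 + timedelta(hours=6)))   # True: still pending
print(is_recoverable(t0, t0 + timedelta(hours=30)))  # False: period elapsed
print(is_recoverable(t0, t0 + timedelta(hours=6),
                     manually_eradicated=True))      # False: eradicated
```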
Recovering a Snapshot
To recover a snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots or Directory Snapshots panel, select the Destroyed drop-down
menu.
4 Select the menu icon, select Recover, and then select one or more snapshots to recover.
Alternatively, you can recover individual snapshots by clicking the recover (clock) icon in the
row of a single snapshot that you want to recover.
5 Click Recover.
The recovered volume snapshots or directory snapshots return to the associated list of existing
snapshots.
Eradicating a Snapshot
Eradicating a snapshot permanently deletes it. During the eradication pending period, you can
manually eradicate destroyed snapshots to reclaim physical storage space that they occupy.
Once eradication starts, the destroyed snapshot can no longer be recovered.
To eradicate a snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots or Directory Snapshots panel, select the Destroyed drop-down
menu.
4 Select the menu icon, and then click Eradicate.
5 Click the Eradicate button.
The snapshots are completely eradicated from the array.
Manual eradication is not supported when SafeMode retention lock is enabled.
Policies
The Protection > Policies page enables you to create and update policies for your directory
snapshots. You can assign members and rules to policies. Members are managed directories
for which you want snapshots taken. Rules specify how often snapshots are taken, when they
are taken, how long they are kept, and the client name for each managed directory snapshot.
See Figure 7-9.
Figure 7-9. Protection – Policies
Deleting a Policy
To delete a policy:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, click the menu icon for the policy you want to delete and
select Delete.
4 Click the Delete button to confirm.
The policy is deleted.
Removing a Member
To remove a member:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, select a policy link.
4 In the Members panel, select the Remove Member icon (X) for the member that you want to
remove and then click the Remove button.
The member is removed.
Removing a Rule
To remove a rule:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, select a policy link.
4 In the Rules panel, select the Remove Rule icon (garbage) for the rule that you want to
remove and then click the Remove button.
The rule is removed.
Protection Groups
The Protection > Protection Groups page displays source and target protection groups;
enables you to create, rename, destroy, eradicate, and recover source protection groups; and
enables you to allow and disallow target protection groups. See Figure 7-11.
A protection group represents a collection of members (volumes, hosts, or host groups) on the
FlashArray that are protected together by using snapshots. The members within the protection
group have common data protection requirements and the same snapshot, replication, and
retention schedules.
Creating a protection group snapshot creates snapshots of the volumes within the protection
group, which are then retained on the current array. Protection group snapshots can also be
asynchronously replicated to other arrays and external storage systems, such as Azure Blob
containers, NFS devices, and S3 buckets. When replicating, the array from which a snapshot is
created is called the source array, while the array to which the snapshot is replicated is called
the target.
The Protection > Protection Groups page displays a list of active and destroyed protection
groups and protection group snapshots on the array.
A source protection group represents a protection group that has been created on the current
array to generate and retain snapshots. On the Protection Groups and Snapshots pages, source
protection groups are identified by the protection group name.
A target protection group represents a protection group that has been created on another
(remote) array and has the current array set as one of its replication targets. On the Protection
Groups and Snapshots pages, target protection groups are identified by the remote array name,
followed by a colon (:) and then the protection group name. For example, in Figure 7-11, a pro-
tection group with the name vm-zxia:pg1 represents a protection group named pg1 that has
been created on array vm-zxia. Protection group pg1 has added the current array as a target
array.
Array vm-zxia2 has three protection groups. Two of the protection groups (p and
pg-01-12-01-59) have been created on the current array. A third protection group named
pg1 has been created on remote array vm-zxia.
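The naming convention above can be sketched as a small parser. The helper below is hypothetical and purely illustrative; it is not part of Purity//FA.

```python
# Illustrative sketch of the naming convention above: a target protection
# group is named "<remote-array>:<group>"; a source group has no array prefix.
def parse_pgroup_name(name: str):
    """Return (remote_array, group); remote_array is None for a source group."""
    if ":" in name:
        remote_array, group = name.split(":", 1)
        return remote_array, group
    return None, name

print(parse_pgroup_name("vm-zxia:pg1"))  # ('vm-zxia', 'pg1')
print(parse_pgroup_name("p"))            # (None, 'p')
```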
The Destroyed Groups panel displays a list of destroyed protection groups that are in the erad-
ication pending period.
Click a protection group name to display a detailed view of the protection group.
See Figure 7-12 for a view of protection groups from array vm-zxia. Protection group pg1 was
created on array vm-zxia and has one volume member and one target array named vm-
zxia2. Four protection group snapshots have been created. The snapshot schedule has been
set to create a protection group snapshot once every hour, while the replication schedule has
been set to take a protection group snapshot every four hours and immediately replicate the
snapshot to the specified target array (vm-zxia2).
Figure 7-12. Protection – Protection Group Source
See Figure 7-13 for a view of the same protection group, but from array vm-zxia2. Since the
protection group is created on array vm-zxia, the attributes of protection group pg1 can only be
changed from array vm-zxia.
Figure 7-13. Protection – Protection Group Target
Members
The Members panel displays a list of all storage objects (volumes, hosts, or host groups) that
have been added to the source array. Only members of the same object type can belong to a pro-
tection group. Replication to offload targets is only supported for volumes and not for hosts and
host groups.
If you are viewing member details for a target group, the member name is made up of the array
name and the protection group name.
If you added volumes to the source array, Purity//FA generates snapshots of those specific
volumes. If you added hosts or host groups, Purity//FA generates snapshots of the volumes
within those hosts or host groups. If the same volume appears in multiple hosts or host groups,
only one copy of the volume is kept.
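The deduplication rule above can be sketched as follows. This is an illustrative model of the behavior, not Purity//FA code; the data structure is an assumption for the example.

```python
# Illustrative sketch: when hosts or host groups are protection group members,
# a volume connected through several of them is snapshotted only once.
def volumes_to_snapshot(members):
    """members maps each host or host group to its connected volumes;
    returns each volume once, in first-seen order."""
    seen = []
    for volumes in members.values():
        for volume in volumes:
            if volume not in seen:
                seen.append(volume)
    return seen

members = {"hostA": ["vol1", "vol2"], "hostB": ["vol2", "vol3"]}
print(volumes_to_snapshot(members))  # ['vol1', 'vol2', 'vol3']
```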
Member volumes are typically named in the Members panel. However, volumes protected
through the SafeMode global volume protection feature are represented by an asterisk in the
Members panel, as shown in Figure 7-14.
Figure 7-14. Protection – SafeMode Protection Group Member
Note: Volumes, hosts, and host groups are managed through the Storage tab.
Targets
The Targets panel lists the target arrays and offload targets that have been added to the source
array. You only need to add targets if you plan to asynchronously replicate snapshots to another
array or to an external storage system. The Allowed column indicates whether a target array has
allowed (true) or disallowed (false) asynchronous replication. By default, a target array allows
protection group snapshots to be asynchronously replicated to it from the source array.
Source Arrays
The Source Arrays panel lists the source arrays of the protection group. The Source Arrays
panel only appears if the protection group is in a pod on a remote array, and the protection group
has added the current array as a target for asynchronous replication.
If the protection group is in a stretched pod, both arrays of the stretched pod should be con-
nected to the target array for high availability and therefore be listed in the Source Arrays panel.
If only one of the arrays is connected to the target array, Purity//FA generates an alert notifying
users of this misconfiguration.
On-Demand Snapshots
On-demand snapshots represent single snapshots that are manually generated and retained on
the source array at any point in time. By default, an on-demand snapshot is retained indefinitely
or until it is manually destroyed. When you generate an on-demand snapshot, you can also add
a suffix to the snapshot name, apply the scheduled retention policy to the on-demand snapshot,
and asynchronously replicate the on-demand snapshot to the targets. See Figure 7-15.
When an on-demand snapshot is replicated, and no retention policy is applied, the snapshot is
retained on both the source and target arrays. If a retention policy is applied, the snapshot will
not be retained on the source after replication, although one snapshot may be kept as a
baseline. To keep the snapshot on the source after replication, take another on-demand snap-
shot without replication.
The "Optional Suffix" option allows you to add a unique suffix to the on-demand snapshot name.
The suffix name, which can include letters, numbers, and dashes (-), replaces the protection
group snapshot number in the protection group snapshot name.
Select the “Apply Retention” option to apply the scheduled snapshot retention policy to the on-
demand snapshot. If you do not enable “Apply Retention,” the on-demand snapshot is saved
until you manually destroy it. Select the “Replicate Now” option to replicate the snapshot to the
targets.
Snapshot Schedule
The snapshot schedule displays the snapshot and retention schedule.
Configure the snapshot schedule to determine how often Purity//FA should generate protection
group snapshots and how long Purity//FA should retain the generated snapshots.
For example, a new protection group snapshot schedule may be set to:
l Create a snapshot every hour.
l Retain all snapshots for one day, and then retain four snapshots per day for seven
more days.
This means that Purity//FA generates a snapshot every hour and keeps each generated snap-
shot for 24 hours.
For example, a snapshot that is generated on Saturday at 1:00 p.m. is kept until Sunday 1:00
p.m. See Figure 7-17.
After the one-day retention period, Purity//FA keeps four of the snapshots for an additional
seven days. To determine which four snapshots are retained per day, Purity//FA takes all of the
snapshots generated in the past day and selects the four snapshots that are most evenly spaced
out throughout the day. As the seven-day period for each snapshot elapses, the snapshot is
eradicated.
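One way to approximate the "most evenly spaced" selection described above is to match the day's snapshots against evenly spaced target times. This is an illustrative sketch under that assumption; the actual Purity//FA selection logic is internal to the array.

```python
# Illustrative approximation of the selection above: keep the k snapshots
# whose timestamps best match k evenly spaced target times. The actual
# Purity//FA selection logic is internal to the array.
def pick_evenly_spaced(times, k):
    times = sorted(times)
    if k >= len(times):
        return times
    if k == 1:
        return [times[0]]
    span = times[-1] - times[0]
    targets = [times[0] + span * i / (k - 1) for i in range(k)]
    chosen = []
    for target in targets:
        best = min((t for t in times if t not in chosen),
                   key=lambda t: abs(t - target))
        chosen.append(best)
    return sorted(chosen)

hourly = list(range(24))  # one snapshot per hour, 0:00 through 23:00
print(pick_evenly_spaced(hourly, 4))  # [0, 8, 15, 23]
```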
If the retention schedule is configured to retain one snapshot per day, Purity//FA retains the very
first snapshot taken after the snapshot schedule is enabled, and then retains the next snapshot
taken approximately 24 hours thereafter, and so on.
Once you enable the snapshot schedule, Purity//FA immediately starts the snapshot process.
Replication Schedule
The replication schedule section displays the asynchronous replication and retention schedules.
Configure the replication schedule to determine how often Purity//FA should replicate the pro-
tection group snapshots to the targets and how long Purity//FA should retain the replicated snap-
shots. You can configure a blackout period to specify when replication should not occur.
For example, a new protection group replication schedule may be set as follows:
l Replicate the snapshot every four hours, except between 8:00 a.m. and 5:00 p.m.
l Retain all replicated snapshots for one day, and then retain four snapshots per day
for seven more days.
This means that Purity//FA generates a snapshot on the source array every four hours and
immediately replicates each snapshot to the targets. Purity//FA retains each replicated snapshot
for one day (24 hours).
For example, a snapshot that is generated on the source array and replicated to the targets on
Friday 2:00 a.m. is kept until Saturday 2:00 a.m.
The asynchronous replication process stops during the blackout period between 8:00 a.m. and
5:00 p.m. The start of a blackout period will not impact any snapshot replication sessions that
are already in progress. Instead, Purity//FA will wait until the in-progress snapshot replication is
complete before it observes the blackout period.
Blackout periods only apply to scheduled asynchronous replications. Asynchronous replications
generated by on-demand snapshots (via Protection > Snapshots > Create Snapshot > Replicate
Now) do not observe the blackout period. See Figure 7-18.
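The blackout rule above can be sketched as a simple predicate, using the example window of 8:00 a.m. to 5:00 p.m. The helpers are illustrative assumptions, not Purity//FA code; note that a session already in progress when the window opens is allowed to finish, so only new sessions are held back.

```python
from datetime import time

# Illustrative sketch of the blackout rule above, using the example window
# of 8:00 a.m. to 5:00 p.m. Only new scheduled sessions are held back; a
# session already in progress when the window opens is allowed to finish.
def in_blackout(now, start=time(8, 0), end=time(17, 0)):
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

def may_start_replication(now):
    """New scheduled replication sessions may not start during the blackout."""
    return not in_blackout(now)

print(may_start_replication(time(9, 30)))  # False: inside the blackout window
print(may_start_replication(time(18, 0)))  # True: after 5:00 p.m.
```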
Figure 7-18. Replication Schedule
After the one-day retention period, Purity//FA keeps four of the replicated snapshots for an addi-
tional seven days. The other replicated snapshots are eradicated. To determine which four rep-
licated snapshots are retained per day, Purity//FA takes all of the replicated snapshots
generated in the past day and selects the four that are most evenly spaced out throughout the
day. Purity//FA destroys each replicated snapshot as its seven-day period elapses.
If the retention schedule is configured to retain one replicated snapshot per day, Purity//FA will
retain the very first snapshot taken after the replication schedule is enabled, and then retain the
next snapshot taken approximately 24 hours thereafter, and so on.
Once you enable the replication schedule, Purity//FA immediately starts the asynchronous rep-
lication process, with the following exceptions:
l If you are enabling the replication schedule during the blackout period, Purity//FA
waits for the blackout period to end before it begins the replication process.
l If you are enabling the replication schedule and the "at" time is specified, Purity//FA
starts the replication process at the specified "at" time.
Set the snapshot schedule to specify how often Purity//FA should generate protection group
snapshots and how long Purity//FA should retain the generated snapshots. If the snapshot fre-
quency is set to one or more days, optionally specify the preferred time of day for the snapshot
to occur.
After you have added members and set the snapshot schedule, enable the schedule to start the
snapshot and retention process. You can enable and disable the schedule at any time to manu-
ally start and stop, respectively, the process.
Allowing and disallowing replication on a target array will not impact the replication process
between the source array and other target arrays. If you disallow asynchronous replication while
a replication session is in progress, Purity//FA will wait until the session is complete and then
stop any new replication sessions from being created.
For replication to an offload target, the target is an external storage system, such as an Azure
Blob container, NFS device, or S3 bucket.
Set the Replication and Retention Schedule
Set the replication schedule to specify how often Purity//FA should asynchronously replicate the
protection group snapshots to the targets, and how long Purity//FA should retain the replicated
snapshots. You can configure a blackout period to specify when replication should not occur.
If the replication frequency is set to one or more days, optionally specify the preferred time of
day for the replication to occur.
After you have added the targets and members and set the replication schedule, enable the
schedule to start the replication process. You can enable and disable the schedule at any time to
manually start and stop, respectively, the process.
SafeMode
The SafeMode section indicates whether retention lock is enabled for the protection group.
5 In the Available Members column, click the member you want to add. The member appears
in the Selected Members column.
6 Click Add to confirm the addition of the selected member.
l Set the frequency of the snapshot creation. If the snapshot frequency is set to one or
more days, optionally set the 'at' time to specify the preferred hour of each day when
Purity//FA creates the snapshot. For example, if the snapshot schedule is set to
"Create a snapshot every 2 days at 6pm," Purity//FA creates the snapshots every 2
days at or around 6:00 p.m. If the 'at' option is set to dash (-), Purity//FA chooses the
time of day to create the snapshot.
l Set the snapshot retention schedule to keep the specified number of snapshots for
the specified length of time (as minutes, hours, or days) and then to keep the spe-
cified number of snapshots for the specified additional number of days.
7 Click Save to save the snapshot and retention schedule. If the snapshot schedule is
enabled, the array automatically starts generating and retaining snapshots according to the
configured schedule.
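The 'at' behavior in the example above ("Create a snapshot every 2 days at 6pm") can be sketched as simple date arithmetic. The function below is illustrative only and not part of Purity//FA.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the 'at' option above: with "every 2 days at 6pm",
# the next snapshot lands on the preferred hour two days after the last one.
def next_snapshot_time(previous, every_days=2, at_hour=18):
    nxt = previous + timedelta(days=every_days)
    return nxt.replace(hour=at_hour, minute=0, second=0, microsecond=0)

prev = datetime(2023, 7, 1, 18, 0)
print(next_snapshot_time(prev))  # 2023-07-03 18:00:00
```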
l Set the blackout period, if any. The asynchronous replication process stops during
the blackout period. When the blackout period starts, replication processes that are
still in progress will not be interrupted. Instead, Purity//FA will wait until the in-pro-
gress snapshot replication is complete before it observes the blackout period.
l Set the retention schedule to keep the specified number of replicated snapshots for
the specified length of time (as minutes, hours, or days) and then to keep the spe-
cified number of snapshots for the specified additional number of days.
7 Click Save to save the replication and retention schedule. If the replication schedule is
enabled, the array automatically starts replicating snapshots and retaining the replicated
snapshots according to the configured schedule.
Copying a Snapshot
1 Log in to the source or target array.
2 Select Protection > Protection Groups.
3 In the Source Protection Group Snapshots or Target Protection Group Snapshots panel,
click the Copy (pages) icon of the protection group snapshot that you want to copy.
Once reclamation starts, either because you have manually eradicated the destroyed protection
group, or because the eradication pending period has elapsed, the destroyed protection group
and its snapshot data can no longer be recovered.
(See "Eradication Delays" on page 35 for information about eradication pending periods. Erad-
ication pending periods are configured in the Settings > System > Eradication Configuration
pane. See "Eradication Delay Settings" on page 285.)
3 In the Destroyed Protection Groups panel, click the Eradicate (garbage) icon or the menu
icon and select Eradicate....
The Eradicate Protection Group dialog box appears.
If you selected the menu icon and then selected Eradicate... and there are multiple
destroyed protection groups in the list, select all of the protection groups that you want
to eradicate.
4 Click Eradicate. Purity//FA immediately starts reclaiming the physical storage occupied by
the protection group snapshot or snapshots.
Manual eradication is not supported when SafeMode retention lock is enabled.
2 In the Target Protection Groups column, click the target protection group you want to dis-
allow.
3 Click Disallow.
4 Log in to the target array.
5 Select Protection > Protection Groups.
Enabling SafeMode
By default, retention lock is unlocked. Enabling the retention lock enables ransomware
protection for the protection group. Once the retention lock is ratcheted, it cannot be unlocked
by the user; contact Pure Storage Technical Services for further assistance. Enrollment with at
least two administrators and their PIN codes is required.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 Click the protection group where you want SafeMode enabled.
4 In the SafeMode pane, if the status is “unlocked”, click the edit icon.
5 In the pop-up dialog box, click the Ratcheted toggle button to enable (blue) the SafeMode fea-
ture.
6 Click Save.
ActiveDR
The Protection > ActiveDR page enables you to view, create, and manage replica links. Replica
link management features include the ability to delete, pause, and resume the connection from a
source-array pod to a target-array pod, and the ability to promote and demote local pods. See
Figure 7-19.
Figure 7-19. Protection – ActiveDR
Creating and managing replica links is part of the ActiveDR configuration process, which is
described in "ActiveDR Replication" on page 140.
ActiveCluster
ActiveCluster replication allows I/O to be sent to either of two connected arrays and
synchronizes it with the other array. ActiveCluster replication is configured through pods. The
Protection > ActiveCluster page enables you to clone, rename, and destroy pods that have been
configured for ActiveCluster replication. Pods configured for ActiveCluster have been promoted
but not linked.
Additional resources for ActiveCluster are available through the Pure Storage support website:
l Requirements and Best Practices
l Quick Start Guide
l Active-Active Asynchronous Replication
l Frequently Asked Questions
For more information about pods, see "Pods" on page 133. Also see Figure 7-21.
Figure 7-21. Protection – ActiveCluster
You can create, destroy, rename, or clone ActiveCluster pods, but you cannot promote or
demote them.
Chapter 8:Analysis
The Analysis page displays historical array data, including storage capacity, consumption or
effective used capacity, I/O performance trends across all volumes, hosts, and host groups, and
replication bandwidth activity across all source and target groups on the array. See Figure
8-1.
Figure 8-1. Analysis
The Analysis page displays a series of rolling graphs consisting of real-time capacity, performance, and replication metrics; incoming data appears along the right side of each graph as older data points drop off the left side.
The curves in each graph are composed of a series of individual data points. Hover over any part of a graph to display values for a specific point in time. The values that appear in the point-in-time pop-ups are rounded to two decimal places.
Different graphs display different metrics. Furthermore, selecting all or individual volumes, volume groups, pods, protection groups, or hosts determines the metrics that appear within a graph.
The FlashArray maintains a rolling one-year history of data. The granularity of the historical data decreases with age; older data points are spaced further apart in time than more recent ones.
See Figure 8-2 for an example of performance statistics for the five selected volumes on the array on 7/06/2022 at 14:40:28.
Figure 8-2. Analysis – Volume Performance Statistics
By default, the Analysis charts display data for the past hour. To view historical data over a different time range, click the 1 Hour range button and select the desired time range. To zoom further into a time range, click and drag from the desired start time to the desired end time anywhere inside the chart. Click Reset Zoom to return to the selected time range.
The charts in the Analysis page are grouped into the following areas: Performance, Capacity,
and Replication.
Chapter 8:Analysis | Performance
Performance
The Performance charts display I/O performance metrics in real time. See Figure 8-3.
Figure 8-3. Analysis - Performance
By default, Purity//FA displays the performance details for the entire array.
If a volume is in a pod that is stretched to another array, optionally click the Arrays button to filter the performance details by array. If none of the arrays are selected (the default), the chart displays the overall performance trends for each selected volume. If one or more arrays are selected, the chart displays the performance trends by array for each selected volume.
To analyze the performance details of specific volumes, click the Volumes sub-tab along the top
of the Performance page, select Volumes from the drop-down list, and select the volumes you
want to analyze. If a bandwidth limit has been set for the volume, the limit appears when the
volume is selected. You can analyze up to five volumes at one time. In the Selection drop-down
list, select Clear All to clear the volume selections.
To analyze the performance details of volumes within specific volume groups, click the Volumes
sub-tab along the top of the Performance page, select Volume Groups from the drop-down list,
and select the volume groups you want to analyze. You can analyze up to five volume groups at
one time. Click Clear All to clear the volume group selections.
To analyze the performance details of volumes within specific pods, click the Pods sub-tab
along the top of the Performance page and select the pods you want to analyze. You can
analyze up to five pods at a time. In the Selection drop-down list, select Clear All to clear the pod
selections.
If a pod is stretched to another array, optionally click the Arrays button to filter the performance details by array. If none of the arrays are selected (the default), the chart displays the overall performance trends of all volumes in the selected pod. If one or more arrays are selected, the chart displays the performance trends by array for each selected volume.
To analyze the performance details of managed directories, click the Directories sub-tab along
the top of the Performance page and select the directories you want to analyze. You can ana-
lyze up to five directories at a time. In the Selection drop-down list, select Clear All to clear the
directory selections.
To analyze the performance details of specific hosts and host groups, click the Hosts sub-tab
along the top of the Performance page and select the hosts or host groups you want to analyze.
Click the menu icon in the upper-right corner of the chart to display or hide mirrored data, or to display remote hosts and host groups.
The Performance page includes Latency, IOPS, and Bandwidth charts. The point-in-time pop-
ups in each of the performance charts display the following values:
Latency
The Latency chart displays the average latency times for various operations.
l Read Latency (R) - Average arrival-to-completion time, measured in milliseconds, for
a read operation.
l Write Latency (W) - Average arrival-to-completion time, measured in milliseconds,
for a write operation.
l Mirrored Write Latency (MW) - Average arrival-to-completion time, measured in milliseconds, for a mirrored write operation. Represents the sum of writes from hosts into the volume's pod and from remote arrays that synchronously replicate into the volume's pod.
Latency details are displayed in graphs of one I/O type, such as Read, Write, or Mirrored
Write.
l SAN Time - Average time, measured in milliseconds, required to transfer data
between the initiator and the array.
l QoS Rate Limit Time - Average time, measured in microseconds, that all I/O
requests spend in queue as a result of bandwidth limits reached on one or more
volumes.
l Queue Time - Average time, measured in microseconds, that an I/O request spends
in the array waiting to be served. The time is averaged across all I/Os of the selected
types.
l Service Time - Average time, measured in microseconds, it takes the array to serve a
read, write, or mirrored write I/O request.
l Total Latency - The sum of SAN Time, QoS Rate Limit Time, Queue Time, and Ser-
vice Time, in microseconds.
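The Total Latency definition above is a simple sum of the four component times. The following sketch is illustrative only, not Purity//FA code; the function name and sample values are hypothetical, but the arithmetic matches the definition:

```python
# Illustrative sketch: Total Latency = SAN Time + QoS Rate Limit Time
#                                      + Queue Time + Service Time.
# All values are in microseconds, matching the chart's units.

def total_latency_us(san_time, qos_rate_limit_time, queue_time, service_time):
    """Return the Total Latency for one I/O, in microseconds."""
    return san_time + qos_rate_limit_time + queue_time + service_time

# Hypothetical sample values (microseconds):
print(total_latency_us(120.0, 0.0, 35.5, 210.0))  # 365.5
```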
IOPS
The IOPS (Input/output Operations Per Second) chart displays I/O requests processed
per second by the array. This metric counts requests per second, regardless of how much
or how little data is transferred in each.
l Read IOPS (R) - Number of read requests processed per second.
l Read Average IO Size (R IO Size) - Average read I/O size per request processed.
Calculated as (read bandwidth)/(read IOPS).
l Write IOPS (W) - Number of write requests processed per second.
l Write Average IO Size (W IO Size) - Average write I/O size per request processed.
Calculated as (write bandwidth)/(write IOPS).
l Mirrored Write IOPS (MW) - Number of mirrored write requests processed per second. Represents the sum of writes from hosts into the volume's pod and from remote arrays that synchronously replicate into the volume's pod.
l Mirrored Write Average IO Size (MW IO Size) - Average mirrored write I/O size per
request processed. Calculated as (mirrored write bandwidth)/(mirrored
write IOPS).
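Each Average IO Size metric above is derived by dividing bandwidth by IOPS. As a hedged illustration (not Purity//FA code; the function name and sample values are hypothetical):

```python
# Illustrative sketch: Average IO Size = (bandwidth) / (IOPS),
# e.g. Read Average IO Size = (read bandwidth) / (read IOPS).

def average_io_size(bandwidth_bytes_per_sec, iops):
    """Return average I/O size in bytes per request; None when the volume is idle."""
    if iops == 0:
        return None  # avoid division by zero when no requests are being processed
    return bandwidth_bytes_per_sec / iops

# Hypothetical sample: 32 MiB/s of reads at 1,024 read IOPS -> 32 KiB per request.
print(average_io_size(32 * 1024 * 1024, 1024))  # 32768.0
```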
Bandwidth
The Bandwidth chart displays the number of bytes transferred per second to and from all
file systems. The data is counted in its expanded form rather than the reduced form
stored in the array to truly reflect what is transferred over the storage network. Metadata
bandwidth is not included in these numbers.
l Read Bandwidth (R) - Number of bytes read per second.
l Write Bandwidth (W) - Number of bytes written per second.
l Mirrored Write Bandwidth (MW) - Number of bytes written into the volume's pod per
second. Represents the sum of writes from hosts into the volume's pod and from
remote arrays that synchronously replicate into the volume's pod.
l Other Requests (O) - Number of other requests processed per second.
Chapter 8:Analysis | Capacity
Capacity
The Capacity charts display array-wide effective used capacity or space consumption inform-
ation, including physical storage capacity and the amount of storage occupied by data and
metadata. See Figure 8-4 for the Analysis > Capacity tab on a purchased array.
Figure 8-4. Analysis - Capacity
See Figure 8-5 for the Analysis > Capacity tab on subscription storage.
Figure 8-5. Analysis - Capacity on Subscription Storage
The Array Capacity chart displays the amount of usable physical storage on the array and the
amount of storage occupied by data and metadata. The data point fluctuations represent
changes in physical storage consumed by a volume.
For example, a volume may experience a spike in storage consumption when more data is
being written to it or when other volumes with shared data are eradicated. Conversely, a volume
may experience a dip in storage consumption from trimming or from an increased sharing of
deduplicated data with other volumes.
By default, Purity//FA displays the capacity details for the entire array. To analyze the capacity
details of specific volumes, click the Volumes sub-tab along the top of the Capacity page, select
Volumes from the drop-down list, and select the volumes you want to analyze.
To analyze the capacity details of volumes within specific volume groups, click the Volumes sub-tab along the top of the Capacity page, select Volume Groups from the drop-down list, and select the volume groups you want to analyze. You can analyze up to five volumes and volume groups at a time. Click Clear All to clear the volume or volume group selections.
To analyze the capacity details of volumes within specific pods, click the Pods sub-tab along the
top of the Capacity page and select the pods you want to analyze. You can analyze up to five
pods at a time. Click Clear All to clear the pod selections.
To analyze the capacity details of managed directories, click the Directories sub-tab along the top of the Capacity page and select the directories you want to analyze. You can analyze up to five directories at a time. Click Clear All to clear the directory selections.
In the Capacity chart on a purchased array, the point-in-time pop-up displays the following met-
rics:
Empty Space
Unused space available for allocation.
System
Physical space occupied by internal array metadata.
Replication Space
Physical system space used to accommodate pod-based replication features, includ-
ing failovers, resync, and disaster recovery testing.
Shared Space
Physical space occupied by deduplicated data, meaning that the space is shared with
other volumes and snapshots as a result of data deduplication.
Snapshots
Physical space occupied by data unique to one or more snapshots.
Unique
Physical space that is occupied by data of both volumes and file systems after data
reduction and deduplication, but excluding metadata and snapshots.
Used
Total physical space occupied by system, shared space, volume, file system, and
snapshot data.
Usable Capacity
Total physical usable space on the array. Replacing a drive may result in a dip in usable capacity. This is intended behavior. RAID striping splits data across the array for redundancy, spreading each write across multiple drives. A newly added drive cannot use its full capacity immediately; its allocation must stay in line with the available space on the other drives as writes are spread across them. As a result, usable capacity on the new drive may initially be reported as less than expected, because the array cannot write to the unallocatable space. Usable capacity will fluctuate over time, but as data is written to the drive and spreads across the array, it eventually returns to expected levels.
Data Reduction
Ratio of mapped sectors within a volume versus the amount of physical space the
data occupies after data compression and deduplication. The data reduction ratio
does not include thin provisioning savings.
For example, a data reduction ratio of 5:1 means that for every 5 MB the host writes to
the array, 1 MB is stored on the array's flash modules.
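The 5:1 example works out as follows. This sketch is illustrative only (not Purity//FA code; the function name is hypothetical), and, per the definition above, thin-provisioning savings are excluded:

```python
# Illustrative sketch: the data reduction ratio compares logical data written
# by hosts to the physical space it occupies after compression and deduplication.

def data_reduction_ratio(logical_bytes_written, physical_bytes_stored):
    """Return the reduction ratio, e.g. 5.0 for a 5:1 ratio."""
    if physical_bytes_stored == 0:
        return float("inf")  # nothing stored physically yet
    return logical_bytes_written / physical_bytes_stored

# The 5:1 example from the text: 5 MB written by the host, 1 MB stored on flash.
print(data_reduction_ratio(5_000_000, 1_000_000))  # 5.0
```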
On subscription storage, the point-in-time pop-up displays the following metrics based on effect-
ive used capacity:
Shared
Effective used capacity consumed by cloned data, meaning that the space is shared
with cloned volumes and snapshots as a result of data deduplication.
Snapshots
Effective used capacity consumed by data unique to one or more snapshots.
Unique
Effective used capacity consumed by data unique to volumes and file systems after removing clones, excluding metadata and snapshots.
Total
Total effective used capacity containing user data, including Shared, Snapshots, and
Unique storage.
Usable Capacity
Total usable capacity available from a host’s perspective, including both consumed
and unused storage.
The Host Capacity chart displays the provisioned size of all selected volumes. In the Host Capa-
city chart, the point-in-time pop-up displays the Size metric for a purchased array:
Size
Total provisioned size of all volumes. Represents storage capacity reported to hosts.
On subscription storage, the point-in-time pop-up displays the Provisioned metric:
Provisioned
Total provisioned size of all volumes. Represents the effective used capacity reported to
hosts.
Replication
The Replication charts display historical bandwidth information for asynchronous, synchronous
(ActiveCluster), and continuous (ActiveDR) replication activities on the array. The Bandwidth
chart (not to be confused with the performance Bandwidth chart) displays the number of bytes of
replication snapshot data transferred over the storage network per second between this array
and its source arrays, target arrays, and external storage systems (such as Azure Blob con-
tainers, NFS devices, and S3 buckets), at certain points in time. See Figure 8-6.
By default, Purity//FA displays bandwidth details for the entire array. In the replication Bandwidth
chart for the array, the point-in-time pop-up displays the following metrics:
l Resync (RX + TX) Number of bytes of replication data transmitted and received per second as the array actively retrieves the latest pod data so that it becomes fully synchronized with its peer arrays. This can be due to an initial pod stretch or to an array coming back online after an extended offline event.
l Sync (RX + TX) Number of bytes of synchronous replication data transmitted and
received per second across all pods.
l Async (RX + TX) Number of bytes of asynchronous replication snapshot data trans-
mitted and received per second across all protection groups.
l Continuous (RX + TX) Number of bytes of continuous replication data transmitted
and received per second across all pods.
l Total Total number of bytes of replication data transmitted and received per second across all protection groups.
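The Total metric combines the per-type figures above. The following sketch is illustrative only (not Purity//FA code; the dictionary layout and sample values are hypothetical), showing how each type's RX and TX bytes per second roll up into the total:

```python
# Illustrative sketch: Total replication bandwidth is the sum of RX + TX
# bytes/s across the resync, sync, async, and continuous replication types.

def total_replication_bps(metrics):
    """Sum RX + TX bytes per second across all replication types."""
    return sum(metrics[k]["rx"] + metrics[k]["tx"]
               for k in ("resync", "sync", "async", "continuous"))

# Hypothetical point-in-time sample (bytes per second):
sample = {
    "resync":     {"rx": 0,      "tx": 0},
    "sync":       {"rx": 10_000, "tx": 10_000},
    "async":      {"rx": 0,      "tx": 250_000},
    "continuous": {"rx": 5_000,  "tx": 5_000},
}
print(total_replication_bps(sample))  # 280000
```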
To analyze the details for a specific protection group, click the Protection Groups sub-tab along
the top of the Replication page, and select the protection groups you want to analyze. You can
select up to five protection groups at one time. The names of the selected protection groups
appear at the top of the details pane. Click Clear All to clear the protection group selection. In
the replication Bandwidth chart for protection groups, the point-in-time pop-up displays the fol-
lowing metrics:
l RX + TX Number of bytes of replication snapshot data transmitted and received per
second across all protection groups.
l RX Number of bytes of replication snapshot data received per second by the targets
for the selected protection groups.
l TX Number of bytes of replication snapshot data transmitted per second from the
source array for the selected protection groups.
Replication Bandwidth
You can display the bandwidth information of continuous, synchronous, and resync replication
for the pods on the array by selecting Replication > Pods. The Replication > Pods page con-
tains the following panes (see Figure 8-7):
l Pods Displays the bandwidth information for each pod and the total bandwidth inform-
ation for all pods for continuous, sync, and resync replication types, including the num-
ber of bytes per second transmitted (to remote), received (from remote), and both
transmitted and received (total).
l Continuous Displays the graphical representation of continuous replication history for
individual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.
l Sync Displays the graphical representation of synchronous replication history for indi-
vidual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.
l Resync Displays the graphical representation of resync replication history for indi-
vidual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.
l Select the To remote check box to view the replication bandwidth information to the
remote array.
l Select the From remote check box to view the replication bandwidth information from
the remote array.
l Select both the To remote and From remote check boxes to view the replication
bandwidth information to the remote array, from the remote array, and the total (to
and from the remote array).
l Deselect both the To remote and From remote check boxes to hide the graphs.
5 To view the replication bandwidth information over a different time range, click the 1 Hour range button to select a predefined time range. By default, the charts display the replication bandwidth information for the past hour. For the 1-hour time range, the charts are refreshed every 30 seconds. For the 3-hour time range, the charts are refreshed every minute.
6 (Optional) In any of the Continuous, Sync, and Resync panes, click the graph to update the
bandwidth information of all replication types in the Pods pane for a specific point in time.
Click the menu icon of a chart to export the image of the chart in PNG or CSV format.
Chapter 9:
Health
The Health page enables you to view and manage the state of the array.
Hardware
The Hardware panel graphically displays the status of the FlashArray or Cloud Block Store hard-
ware components. See Figure 9-1 for a schematic representation of a FlashArray with several
component pop-ups displayed.
Figure 9-1. Hardware – FlashArray
See Figure 9-2 for a schematic representation of a Cloud Block Store with one component pop-
up displayed.
Chapter 9:Health | Hardware
The title bar of the Hardware panel includes the array name, the raw capacity value, and parity
information. The raw capacity value represents the total usable capacity of the array, displayed
in bytes in both base 2 (e.g., 98.50 T for 98.50 tebibytes), and base 10 (e.g., 108.30 TB for
108.30 terabytes) formats. The parity value represents the percentage of data that is fully pro-
tected. The parity value will drop below 100% if the data isn't fully protected, such as when a
module is pulled and the array is rebuilding the data to bring it back to full parity.
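The base-2 and base-10 figures in the title bar describe the same number of bytes. As a worked illustration (not Purity//FA code; the variable names are hypothetical), the "98.50 T / 108.30 TB" example converts as follows:

```python
# Illustrative sketch: the same raw capacity expressed in base-2 (tebibytes)
# and base-10 (terabytes), matching the "98.50 T / 108.30 TB" example.

TIB = 2**40    # 1 tebibyte = 1,099,511,627,776 bytes
TB = 10**12    # 1 terabyte = 1,000,000,000,000 bytes

capacity_bytes = 98.50 * TIB
print(round(capacity_bytes / TB, 2))  # 108.3
```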
The image is a schematic representation of the array with colored indicators of each com-
ponent's status. The colored squares within each hardware component represent the com-
ponent status:
l Green: Healthy and functioning properly at full capacity.
l Yellow: At risk, outside of normal operating range, or unrecognized.
l Red: Failed, installed but not functioning, or not installed (but required).
l Black: Not installed. With FlashArray//M, used for NVRAM bays and storage bays
that are allowed to be empty.
l Gray: Disconnected. Also used for components that are temporarily offline while
undergoing a firmware update.
Hover the mouse over a hardware component to display its status and details. For example,
hover over the Temperature component to display the following details: name of the shelf or con-
troller that is being monitored, physical location of the temperature sensors, and current tem-
perature readings.
Hardware components that can be actively managed from the Purity//FA GUI include buttons
that perform certain functions, such as turning ID lights on and off, and changing shelf ID num-
bers. For example, hover over the Shelf component to display its health status and shelf ID num-
ber. Click the Turn On ID Light button to turn on the LED light on the physical shelf for easy
identification. Click the Change ID button to change the ID number that appears on the physical
shelf.
Hover over a flash module component to display its health status, physical location in the shelf,
and capacity. If the module has been added to the array and is waiting to be admitted, click the
Admit all unadmitted drives button to admit all of the unadmitted modules, including the current
one.
The following tables list the //XL, //X, and //M hardware components that report status, grouped
by their location on the array. The hardware component names are used throughout Purity//FA,
for instance in the GUI Health > Hardware page, and with CLI commands such as puredrive
and purehw. See Table 9-3 for chassis components, Table 9-4 for controller components, and
Table 9-5 for storage shelf components.
The Identify Light column shows which components have an LED light on the physical com-
ponent that can be turned on and off.
Table 9-3. Chassis (CH0)
Component Name    Identify Light    Component Type
CH0               Yes               Chassis
CH0.BAYn          Yes               Storage bay
CH0.NVBn          Yes               NVRAM bay
CH0.PWRn          —                 Power module
3 Hover over the newly added modules to verify that they are in unadmitted status, indicating
that the modules have been successfully connected but not yet admitted to the array.
4 Hover over any one of the unadmitted modules and click Admit all unadmitted modules to
admit all modules that have been added (connected) but not yet admitted to the array.
5 Hover over the newly admitted modules to verify that all of the shelves and drives are in
healthy status, indicating that the modules have been successfully admitted and are in use
by the system. This completes the drive admission process.
Alerts
Purity//FA generates an alert when there is a change to the array or to one of its hardware or soft-
ware components.
The Alerts panel displays the list of alerts that have been generated on the array. See Figure 9-
3.
Figure 9-3. Alerts
To conserve space, Purity//FA stores a limited number of alert records on the array. Older entries are deleted from the log as new entries are added. To access the complete list of messages, contact Pure Storage Technical Services.
Purity//FA assigns a unique numeric ID to each alert as it is created. By default, alerts are sorted
in chronological descending order by "Last Seen" date.
The icons that appear along the left side of each alert in the list output represent the alert sever-
ity level:
l Blue (INFO) icons represent informational messages generated due to a change in
state. INFO messages can be used for reporting and analysis purposes. No action is
required.
l Yellow (WARNING) icons represent important messages warning of an impending
error if action is not taken.
l Red (CRITICAL) icons represent urgent messages that require immediate attention.
Click any of the column headings in the Alerts panel to change the sort order, and click any-
where in an alert row to display additional alert details.
Each alert in the list output includes the following information:
l Flag: Alert that has been flagged by Purity//FA or the user. Purity//FA automatically
flags all warning and critical alerts. An alert remains flagged until you have manually
cleared the flag to indicate that the alert has been addressed. If there are further
changes to the condition that caused the alert (for example, a temperature of a con-
troller or shelf has changed), Purity//FA will set the flag again.
l Sev: Alert severity, categorized as critical, warning, or info.
Critical (red) alerts are typically triggered by service interruptions, major performance
issues, or risk of data loss, and require immediate attention. For example, the array
triggers a critical alert if a module has been removed from the chassis.
Warning (yellow) alerts are of low to medium severity and require attention, though
not as urgently as critical alerts. For example, the array triggers a warning alert if it
detects an unhealthy module.
Informational (blue) alerts inform users of a general behavior change and require no
action. For example, the array triggers an informational alert if the NFS service is
unhealthy.
By default, alerts of all severity levels are displayed. To filter the list to display only
alerts of a certain minimum severity level, click the All Severity Levels drop-down but-
ton and select the desired minimum severity level from the list.
l ID: Unique number assigned by the array to the alert. ID numbers are assigned to
alerts in chronological ascending order.
l Code: Alert code number that Pure Storage uses to identify the type of alert event.
l State: Current state of the alert. Possible states include: open and closed.
An alert goes from open state to closed state when the issue is completely
resolved.
By default, both open and closed alerts are displayed. To filter the list to display only open alerts, click the Open and Closed drop-down button and select Open Only.
l Created: Date and time the alert was first generated and initial alert email noti-
fications were sent to alert watchers.
By default, all alert records on the array are displayed. To display a list of alerts that
were created within a certain time range, click the All Time drop-down button and
select the desired time range from the list.
l Updated: Most recent date and time the array saw the issue that generated the alert.
Note that alerts that have been updated within the last 24 hours and are still open also
appear in the Dashboard > Recent Alerts panel.
l Category: Group to which the alert belongs. Categories include Array Alerts, Hard-
ware Alerts, Software Alerts.
By default, alerts from all categories are displayed. To filter the list to display only
alerts from a certain category, click the All Categories drop-down button and select
the category from the list.
l Component: Specific array, software, or hardware component that triggered the alert.
l Subject: Alert details.
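The default Alerts view described above (sorted by most recent date, newest first, with optional severity filtering) can be sketched as follows. This is illustrative only, not a Purity//FA API; the record fields mirror the list columns but are hypothetical:

```python
# Illustrative sketch of the Alerts list: records sorted by "updated" date,
# newest first, optionally filtered by a minimum severity level.

SEVERITY_RANK = {"info": 0, "warning": 1, "critical": 2}

def view_alerts(alerts, min_severity="info"):
    """Return alerts at or above min_severity, newest 'updated' first."""
    floor = SEVERITY_RANK[min_severity]
    shown = [a for a in alerts if SEVERITY_RANK[a["severity"]] >= floor]
    # ISO-8601 timestamps sort correctly as strings.
    return sorted(shown, key=lambda a: a["updated"], reverse=True)

# Hypothetical alert records:
alerts = [
    {"id": 101, "severity": "info",     "updated": "2023-05-01T08:00:00Z"},
    {"id": 102, "severity": "critical", "updated": "2023-05-02T09:30:00Z"},
    {"id": 103, "severity": "warning",  "updated": "2023-05-01T12:00:00Z"},
]
print([a["id"] for a in view_alerts(alerts, "warning")])  # [102, 103]
```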
Alert information also appears in other sections of the Purity//FA GUI. From any page of the Purity//FA GUI, the alert icons in the upper-right corner of the page display the number of open alerts for the respective alert severity. For example, a "1" next to a yellow warning icon indicates one open warning alert.
In the Dashboard page, the Recent Alerts pane displays a list of open alerts that have been
updated within the last 24 hours.
Chapter 9:Health | Connections
Connections
The Connections page displays connectivity details between the Purity//FA hosts and the array
ports.
The Host Connections panel displays a list of hosts, the connectivity status of each host, and the
number of initiator ports associated with each host. See Figure 9-4.
The Paths column displays the connectivity status between the host and controllers in a highly
available environment, where the colored value indicates one of the following connection health
statuses:
l Green: Fully redundant and highly available. No issues detected.
l Yellow: Not fully redundant. Issues detected that may impact high availability.
l Red: Single controller connectivity only.
l Gray: No connectivity.
Possible connection statuses include:
Redundant
All paths between the host and each of the controllers in a highly available array are con-
nected.
Uneven
The number of paths between the host and each controller is uneven. This may impact
high availability. Make sure that there are the same number of paths from the host to
each controller.
Unused Port
The host has unused initiators. This may impact high availability. Make sure that all of the
initiators have at least one path to the array.
Single Controller
The host has paths to only one of the controllers. No paths exist to the other controller.
This impacts high availability. Make sure that there are redundant paths from the host to
both controllers.
Single Controller - Failover
The host has paths to one controller, but one or more of those paths has failed over.
None
The host is not connected to any of the controllers.
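The path-count statuses above follow a simple precedence. The following sketch is a simplified illustration, not the GUI's actual implementation; it assumes a two-controller array and omits the Unused Port and failover cases:

```python
# Illustrative sketch: classify a host's connectivity from its path counts
# to each of the two controllers (ct0 and ct1).

def path_status(paths_ct0, paths_ct1):
    """Return a simplified connection status for a two-controller array."""
    if paths_ct0 == 0 and paths_ct1 == 0:
        return "None"               # no connectivity at all
    if paths_ct0 == 0 or paths_ct1 == 0:
        return "Single Controller"  # paths to only one controller
    if paths_ct0 != paths_ct1:
        return "Uneven"             # uneven path counts may impact HA
    return "Redundant"              # equal paths to both controllers

print(path_status(2, 2))  # Redundant
print(path_status(2, 1))  # Uneven
print(path_status(2, 0))  # Single Controller
print(path_status(0, 0))  # None
```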
Select the check boxes along the top of the Host Connections list to filter the hosts by con-
nection status.
The Array Ports panel displays the connection mappings between each array port and initiator
port. Each array port includes the following connectivity details: associated iSCSI Qualified
Name (IQN), NVMe Qualified Name (NQN), or Fibre Channel World Wide Name (WWN)
address, communication speed, and failover status. A check mark in the Failover column means the port has failed over to the corresponding port pair on the primary controller.
Optionally click the menu icon and select Download CSV to save the ports.csv file to your
local machine.
Network
View network statistics, bandwidth, and errors for the network interfaces on the array by select-
ing Health > Network. See Figure 9-5.
Figure 9-5. Network
l CRC Errors/s (RX): Indicates the number of received packets per second
with incorrect checksums. A cyclic redundancy check (CRC) is an error-
detecting code for data transmission.
l Frame Errors/s (RX): Indicates the number of received packets per
second with misaligned Ethernet frames.
l Carrier Errors/s (TX): Indicates the number of transmitted packets per
second with errors caused by duplex mismatch or faulty hardware.
l Dropped Errors/s (TX): Indicates the number of transmitted packets per
second that were dropped.
l Other Errors/s: Indicates the number of packets per second with all other
types of receive and transmit errors.
l Total Errors/s
Displays the graphical representation of the error history over the selected range of
time, annotated with the number of total errors per second for individual (or the sum of
all) interfaces at a specific point in time.
l Bandwidth
Displays the graphical representation of the bandwidth history over the selected
range of time, annotated with the numbers of the transmitted bytes, received bytes,
and total bytes per second for individual (or the sum of all) interfaces at a specific
point in time.
l Packets/s
Displays the graphical representation of the historical packet information over the
selected range of time, annotated with the numbers of transmitted packets, received
packets, and total packets per second for individual (or the sum of all) interfaces at a
specific point in time.
l Summary
Displays the network statistics information. This is the default.
l Errors
Displays the error statistics including CRC, frame, carrier, dropped, and other errors.
6 (Optional) In the Total Errors/s, Bandwidth, or Packets/s pane, click the graph to update the
total errors, bandwidth, and packets information in the Ports pane for a specific point in time.
Click the menu icon of a chart to export the image of the chart in PNG or CSV format.
Chapter 10: Settings
The Settings page displays and manages the general attributes and network settings of an
array.
System
The Settings > System page displays and manages the general attributes of the FlashArray.
See Figure 10-1.
Array Name
At the top of the System page is the name of the array. The array name is used for various admin-
istration and configuration purposes.
The array name appears in audit and alert messages. The array name also represents the send-
ing account name for Purity//FA email alert messages.
The name is used to identify the array when connecting it to other arrays. For asynchronous rep-
lication, the array name appears as part of the snapshot name when viewing replicated snap-
shots on a target array. For ActiveCluster (synchronous replication), the array name is used to
identify the arrays over which pods are stretched and unstretched.
The array can be renamed at any time, and the name change takes effect immediately. Note that
Purity//FA does not register array names with the DNS, so if you change the array name, you
must re-register the name before the array can be addressed by name in browser address bars,
ICMP ping commands, and so on.
Alert Watchers
Purity//FA generates an alert whenever the health of a component degrades or a capacity
threshold is reached. Alerts can also be sent as email notifications to designated alert watchers.
The Alert Watchers panel displays the email addresses of designated alert watchers and the
alert status of each watcher. The sending account name for Purity//FA alert email notifications is
the array name at the configured sender domain.
The list of alert watchers includes the built-in [email protected]
address, which cannot be deleted.
Once added, an alert watcher starts receiving alert email notifications.
Alert watchers can be in enabled or disabled status. Alert watchers who are in enabled status
receive alert email notifications. When an alert watcher is created, its watcher status is
automatically set to enabled status. Alert watchers who are in disabled status do not receive
alert email notifications. Disabling an alert watcher does not delete the recipient's email address; it only stops the watcher from receiving alert notifications. Alert watchers can be enabled and disabled at any time. The current alert watcher status is indicated by the color of the toggle button that appears next to the alert watcher email address: blue represents an enabled alert watcher and gray represents a disabled alert watcher.
Deleting an alert watcher completely removes the watcher from the list. Once an email address
has been deleted, the corresponding alert watcher will no longer receive alert notifications.
Alert Routing
The Alert Routing panel displays the ways in which alerts and logs are managed.
Relay Host
The relay host represents the hostname or IP address of the email relay server currently being
used as a forwarding point for alert email notifications generated by the array.
For SMTP servers that require authentication, also specify the username and password. The
username represents the SMTP account name used to authenticate into the relay host SMTP
server. The password represents the SMTP password used to authenticate into the relay host
SMTP server.
If a relay host is not configured, Purity//FA sends all alert email notifications directly to the recip-
ient addresses rather than route them via the relay (mail forwarding) server.
Sender Domain
The sender domain determines how logs are parsed and treated by Pure Storage Technical Ser-
vices. The domain name is also used in the "from" address of outgoing alert email notifications.
By default, the sender domain is set to the domain name please-configure.me.
It is crucial that you set the sender domain to the correct domain name. If the array is not a Pure
Storage test array, set the sender domain to the actual customer domain name. For example,
mycompany.com.
The email address that Purity//FA uses to send alert messages includes the sender domain name and is composed of the following components:
<Array_Name>-<Controller_Name>@<Sender_Domain_Name>
For example, [email protected].
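As a hedged illustration of how this address is assembled (the helper and the example values are hypothetical, not a Purity//FA API):

```python
# Hypothetical helper illustrating the "from" address composition above.
def alert_sender(array_name: str, controller_name: str, sender_domain: str) -> str:
    # <Array_Name>-<Controller_Name>@<Sender_Domain_Name>
    return f"{array_name}-{controller_name}@{sender_domain}"

# e.g. alert_sender("array1", "ct0", "mycompany.com")
```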
Important: The sender domain determines how Purity//FA logs are parsed and
treated by Pure Storage Technical Services, so it is crucial that you set the sender
domain to the correct domain name.
4 Click the check mark icon to confirm the change.
UI
The UI panel displays and manages general user interface details, including banner text and idle
timeout.
Login Banner
The Login Banner section enables you to create a message that Purity//FA users see in the Pur-
ity//FA GUI login screen when logging into the Purity//FA GUI, and before the password prompt
when logging into the CLI.
2 In the Idle Timeout text box, enter the number of minutes that a session can remain idle before the user is logged out. The idle time can be any length between 5 and 180 minutes. To disable the idle timeout setting, set the idle time to 0 minutes.
3 Click the check mark to confirm the change. The idle timeout setting takes effect the next
time you log in to the Purity//FA GUI.
Syslog Servers
The Syslog Servers feature enables you to forward syslog messages to remote servers.
The Purity//FA syslog logging facility generates messages of major events within the FlashArray
and forwards the messages to remote servers. Purity//FA generates syslog messages for three
types of events:
l Alerts (purity.alert)
l Audit Trails (purity.audit)
l Tests (purity.test)
Purity//FA generates alerts when there is a change to the array or to one of the Purity//FA hard-
ware or software components. There are three alert severity levels:
l INFO: Informational messages that are generated due to a change in state. INFO
messages can be used for reporting and analysis purposes. No action is required.
l WARNING: Important messages that warn of an impending error if action is not
taken.
l CRITICAL: Urgent messages that require immediate attention.
Syslog alerts are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.alert <Alert Severity>
<Alert Details>
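As an illustration, a line in this format can be split back into its fields. The helper and the sample line are invented for the sketch and assume the facility token appears exactly once:

```python
# Hypothetical parser for a purity.alert syslog line in the format above:
# <Event Timestamp> <Array IP Address> purity.alert <Alert Severity> <Alert Details>
def parse_alert(line: str) -> dict:
    prefix, rest = line.split(" purity.alert ", 1)   # facility token as delimiter
    timestamp, _, array_ip = prefix.rpartition(" ")  # IP is the last prefix token
    severity, _, details = rest.partition(" ")       # first token after facility
    return {"timestamp": timestamp, "array_ip": array_ip,
            "severity": severity, "details": details}
```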
In Figure 10-2, Purity//FA generated a WARNING alert because space consumption on the array exceeded 90%:
Figure 10-2. Syslog Server – Alerts
Alerts are also sent via the phone home facility to the Pure Storage Technical Services team. If
configured, alerts can also be sent to designated email recipients and SNMP trap managers.
You can also view alerts through the GUI (Health > Alerts) and CLI (puremessage list com-
mand).
An audit trail represents a chronological history of the GUI, CLI, or REST API operations that a
user has performed to modify the configuration of the array. Each message within an audit trail
includes the name of the Purity//FA user who performed the operation and the Purity//FA oper-
ation that was performed.
Syslog audit trail messages are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.audit <Purity//FA Username> <Purity//FA Command> <Audit Trail Message Details>
In Figure 10-3, pureuser performed various GUI, CLI, or REST API operations:
Figure 10-3. Syslog Server – Audit Trails
You can also view audit messages through the GUI (Settings > Access) and CLI (pureaudit
list command).
Test messages represent a history of all tests generated by users to verify that the array can
send messages to email recipients. The message does not indicate whether or not the test mes-
sage successfully reached the recipients.
Syslog test messages are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.test <Purity//FA Username> <Test Message Details>
In Figure 10-4, pureuser performed a test to determine if the array could send messages to email addresses:
Figure 10-4. Syslog Server – Tests
SMI-S
The SMI-S panel manages the Pure Storage Storage Management Initiative Specification (SMI-
S) provider.
Enable the SMI-S provider to administer the array through an SMI-S client. The SMI-S provider
is optional and must be enabled before its first use.
For more information about the SMI-S provider, refer to the Pure Storage SMI-S Provider Guide
on the Knowledge site at https://support.purestorage.com.
Array Time
The Array Time panel displays the array’s current time, and the IP addresses or fully qualified
hostnames of the Network Time Protocol (NTP) servers with which array time is synchronized.
Pure Storage technicians set the array time zone during installation. By default, the array time is
synchronized to an NTP server operated by Pure Storage. Alternate NTP servers can be des-
ignated.
Time
The displayed time is based on the time zone of the array, which is set during the FlashArray
installation.
NTP Servers
The NTP Servers section displays the hostnames or IP addresses of the Network Time Protocol
(NTP) servers that are currently being used by the array to maintain reference time. The install-
ation technician sets the proper time zone for an array when it is installed. During operation,
arrays maintain time synchronization by interacting with the NTP server.
Designating an Alternate NTP Server
To designate an alternate NTP server:
1 Select Settings > System.
2 In the NTP Servers section of the Time panel, perform one of the following tasks:
l To add an NTP server, in the New NTP Server(s) text box, type the hostname or IP
address of the NTP server used by the array to maintain reference time, and then
click the Add button. You can add up to four NTP servers. Enter multiple servers as
comma-separated values.
If specifying an IP address, for IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a number ranging from 0 to 255 representing a group of 8 bits. For IPv6, specify the IP address in the form xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where xxxx is a hexadecimal number representing a group of 16 bits. When specifying an IPv6 address, consecutive fields of zeros can be shortened by replacing the zeros with a double colon (::).
l To remove an NTP server, select the check box of the server you want to remove,
and then click the delete icon.
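The input rules above (up to four servers, comma-separated values, IPv4/IPv6 literal forms) can be sketched as a validation helper. The function is hypothetical, not part of Purity//FA, and hostnames get only a minimal syntax check:

```python
import ipaddress
import re

# Hypothetical validator for a comma-separated New NTP Server(s) value.
def parse_ntp_servers(value: str, existing: int = 0) -> list[str]:
    servers = [s.strip() for s in value.split(",") if s.strip()]
    if existing + len(servers) > 4:
        raise ValueError("an array may have at most four NTP servers")
    for s in servers:
        try:
            # Accepts IPv4 dotted-decimal and IPv6 forms, including '::' shortening.
            ipaddress.ip_address(s)
        except ValueError:
            # Not an IP literal: apply a minimal hostname syntax check.
            if not re.fullmatch(r"[A-Za-z0-9.-]+", s):
                raise ValueError(f"not a hostname or IP address: {s}")
    return servers
```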
Cloud Features
Cloud Features displays and manages features associated with cloud applications.
Single Sign-On
The single sign-on (SSO) facility enables users to configure secure access to cloud applications.
To change the setting, click the edit icon and then click the toggle button to switch between
enabled (blue) and disabled (gray) status. Then click Save.
Pure1 Support
The Pure1 Support panel displays and manages the features used to communicate with Pure
Storage Technical Services.
Phone Home
The phone home facility provides a secure direct link between the array and Pure Storage Tech-
nical Services to transmit log and diagnostic information.
This information provides Pure Storage Technical Services with complete recent history about
array performance and significant events in case diagnosis or remedial actions are required.
Alerts are reported immediately when they occur so that timely action can be taken.
The phone home facility can be enabled (blue) or disabled (gray) at any time. By default, the
phone home facility is enabled. Log and diagnostic information is only transmitted when the fea-
ture is enabled. If the phone home facility is disabled, historical log contents are delivered when
the facility is next enabled; Purity//FA will continue to send alerts to designated email recipients and
SNMP trap managers if those features are configured.
Enabling and disabling phone home
Enable the phone home facility to automatically transmit log files on an hourly basis to Pure Stor-
age Technical Services via the phone home channel.
l All Log History: Sends log information from the previous day (in the array’s time
zone)
3 Click Send Now to send the log files to Pure Storage Technical Services.
Remote Assist
In some cases, the most efficient way for Pure Storage Technical Services to service a FlashArray or diagnose problems is through direct access to the array. A remote assistance (RA) session grants Pure Storage Technical Services direct and secure access to the array through a reverse tunnel that you, the administrator, open. This is a two-way communication.
Opening an RA session gives Pure Storage Technical Services the ability to log into the array,
effectively establishing an administrative session. Once the RA session is successfully estab-
lished, the array returns connection details, including the date and time when the session was
opened, the date and time when the session expires, and the proxy status (true, if configured).
After the Pure Storage Technical Services team has performed all of the necessary diagnostic
or maintenance functions, close the RA session to terminate the connection.
RA sessions can be opened/connected (blue) and closed/disconnected (gray) at any time. By
default, the RA session is closed/disconnected.
Opening and closing a remote assist session does not affect the current administrative session.
An open RA session automatically terminates (disconnects) after two days have elapsed.
Opening and Closing a Remote Assistance (RA) Session
To open and close an RA session:
1 Select Settings > System.
2 In the Remote Assistance section of the Pure1 Support panel, click the toggle button to open
(blue) and close (gray) an RA session. Opening an RA session gives Pure Storage Technical
Services direct and secure access to the array. After the Pure Storage Technical Services
team has performed all of the necessary diagnostic functions, close the RA session.
Support Logs
Purity//FA continuously logs a variety of array activity, including performance metrics, hardware
and software operations, and administrative actions. Array activity is time stamped and organ-
ized in chronological order. The Support Logs panel allows you to download the Purity//FA log
contents of the specified controller to the current administrative workstation.
If Phone Home is enabled, the logs are periodically transmitted to Pure Storage. The logs are
also saved to the array, available for manual download.
When the support logs are manually downloaded, the array generates a password-protected
.zip file containing all of the logs and saves it to your local machine.
Downloading Support Logs
1 Select Settings > System.
2 In the Support Logs section of the Pure1 Support panel, select the time range representing
the approximate array time the activity of interest occurred.
3 In the "Download from" section, click the button corresponding to the controller from which
you want to download the support logs. For example, click CT0 to download the logs for the
primary controller. The password-protected .zip file is saved to your local machine. The file
can only be opened by Pure Storage Technical Services.
Event Logs
The Purity//FA event log continuously logs array events and administrative actions with time-
stamped entries. The logging detail level is customizable for audit, security monitoring,
forensics, timeline, troubleshooting, or other purposes.
Complete event log content is not displayed directly through the GUI or CLI. Instead, event logs
are available for manual download through the GUI (Settings > System > Pure1 Support
> Event Logs). A portion of the event log content consists of alert, audit, and session entries that are displayed (separately from the event log) in the GUI (Health > Alerts, Settings > Access > Audit Trail, and Settings > Access > Session Log) and by the CLI commands (purealert list, pureaudit list, and puresession list).
Use the CLI purelog global setattr --logging-severity command to customize
the event logging level.
The severity of events that are collected in the event log is customizable to the following levels:
l notice. Events that are unusual or require attention, including warnings and errors.
l info. Normal operations that require no action. Default.
l debug. Verbose information useful for debugging and auditing.
The event log retains entries for 90 days or up to 10 GB of logs, whichever limit is reached first. In addition, if remote syslog is configured, the contents of the event log are sent to the remote syslog server.
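The "whichever occurs first" retention rule can be illustrated with a short sketch. This is conceptual only, not the actual Purity//FA implementation; entries are (age in days, size in bytes), newest first:

```python
# Conceptual sketch of the retention rule: keep at most 90 days and at most
# 10 GB of event log entries, stopping at whichever limit is hit first.
def retained(entries, max_days=90, max_bytes=10 * 1024**3):
    kept, total = [], 0
    for age_days, size in entries:          # entries ordered newest first
        if age_days > max_days or total + size > max_bytes:
            break                           # first limit reached ends retention
        kept.append((age_days, size))
        total += size
    return kept
```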
2 In the Event Logs section of the Pure1 Support panel, select the time range of logs to down-
load: today's logs or the last 3, 7, 30, or 90 days of logs.
3 Click Download. Navigate to the location to save the file. Optionally rename the file.
The .zip file contains .gz files for the selected time range and an .md5 checksum file for each
.gz file.
Proxy Server
The Proxy section manages the proxy hostname for https log transmission. The proxy host-
name, if set, represents the server to be used as the HTTP or HTTPS proxy. The format for the
proxy hostname is http(s)://hostname:port, where hostname is the name of the proxy
host, and port is the TCP/IP port number used by the proxy host.
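A minimal sketch of parsing this format with the standard library (the helper name is ours, not part of Purity//FA):

```python
from urllib.parse import urlsplit

# Hypothetical helper: validate the http(s)://hostname:port proxy format.
def check_proxy(value: str) -> tuple[str, str, int]:
    parts = urlsplit(value)
    if parts.scheme not in ("http", "https"):
        raise ValueError("proxy must use http:// or https://")
    if not parts.hostname or parts.port is None:
        raise ValueError("proxy must include hostname and port")
    return parts.scheme, parts.hostname, parts.port
```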
SSL Certificate
Purity//FA creates a self-signed certificate and private key when you start the system for the first
time. The SSL Certificate panel allows you to view and change certificate attributes, create a
new self-signed certificate, construct certificate signing requests, import certificates and private
keys, and export certificates.
Self-Signed Certificate
Creating a self-signed certificate replaces the current certificate. When you create a self-signed
certificate, include any attribute changes, specify the validity period of the new certificate, and
optionally generate a new private key. See Figure 10-5.
Figure 10-5. SSL Certificate – Create Self-Signed Certificate
When you create the self-signed certificate, you can generate a private key and specify a dif-
ferent key size. If you do not generate a private key, the new certificate uses the existing key.
You can change the validity period of the new self-signed certificate. By default, self-signed cer-
tificates are valid for 3650 days.
CA-Signed Certificate
Certificate authorities (CA) are third party entities outside the organization that issue certificates.
To obtain a CA certificate, you must first construct a certificate signing request (CSR) on the
array. See Figure 10-6.
Figure 10-6. SSL Certificate – Construct Certificate Signing Request
The CSR represents a block of encrypted data specific to your organization. You can change the
certificate attributes when you construct the CSR; otherwise, Purity//FA will reuse the attributes
of the current certificate (self-signed or imported) to construct the new one. Note that the cer-
tificate attribute changes will only be visible after you import the signed certificate from the CA.
Send the CSR to a certificate authority for signing. The certificate authority returns the SSL cer-
tificate for you to import. Verify that the signed certificate is PEM formatted (Base64 encoded),
includes the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines, and does not exceed 3000 characters in total length. When you import the certificate,
also import the intermediate certificate if it is not bundled with the CA certificate. See Figure 10-
7.
If the certificate is signed with the CSR that was constructed on the current array and you did not
change the private key, you do not need to import the key. However, if the CSR was not con-
structed on the current array or if the private key has changed since you constructed the CSR,
you must import the private key. If the private key is encrypted, also specify the passphrase.
Certificate Administration
The attributes of a self-signed certificate can only be changed by creating a new certificate. Cer-
tificate attributes include organization-specific information, such as country, state, locality, organ-
ization, organizational unit, common name, and email address.
The export feature allows you to view and export the certificate and intermediate certificates for
backup purposes.
Note: When you change the certificate attributes, Purity//FA replaces the existing cer-
tificate with the new certificate and its specified attributes.
1 Select Settings > System.
2 In the SSL Certificate panel, click the menu icon and select Create Self-Signed Certificate.
The Create Self-Signed Certificate pop-up window appears.
3 Complete or modify the following fields:
l Generate new key: Click the toggle button to generate (blue) or not generate (gray) a
new private key with the self-signed certificate. If you do not generate a new private
key, the certificate uses the existing key.
l Key Size: If you generate a new private key, specify the key size. The default key size
is 2048 bits. A key size smaller than 2048 is considered insecure.
l Country: Enter the two-letter ISO code for the country where your organization is loc-
ated.
l State/Province: Enter the full name of the state or province where your organization
is located.
l Locality: Enter the full name of the city where your organization is located.
l Organization: Enter the full and exact legal name of your organization. The organ-
ization name should not be abbreviated and should include suffixes such as Inc,
Corp, or LLC.
l Organizational Unit: Enter the department within your organization that is managing
the certificate.
l Common Name: Enter the fully qualified domain name (FQDN) of the current array.
For example, the common name for https://purearray.example.com is pur-
earray.example.com, or *.example.com for a wildcard certificate. The common name
can also be the management IP address of the array or the short name of the current
array. Common names cannot have more than 64 characters.
l Email: Enter the email address used to contact your organization.
l Days: Specify the number of valid days for the self-signed certificate being gen-
erated. If not specified, the self-signed certificate expires after 3650 days.
4 Click Create. Purity//FA restarts the GUI and signs you in using the self-signed certificate.
Note: When you change the certificate attributes, Purity//FA replaces the existing cer-
tificate with the new certificate and its specified attributes.
1 Select Settings > System.
2 In the SSL Certificate panel, click the menu icon and select Construct Certificate Signing
Request. The Construct Certificate Signing Request pop-up window appears.
3 Complete or modify the following fields:
l Country: Enter the two-letter ISO code for the country where your organization is loc-
ated.
l State/Province: Enter the full name of the state or province where your organization
is located.
l Locality: Enter the full name of the city where your organization is located.
l Organization: Enter the full and exact legal name of your organization. The organ-
ization name should not be abbreviated and should include suffixes such as Inc,
Corp, or LLC.
l Organizational Unit: Enter the department within your organization that is managing
the certificate.
l Common Name: Enter the fully qualified domain name (FQDN) of the current array.
For example, the common name for https://purearray.example.com is pur-
earray.example.com, or *.example.com for a wildcard certificate. The common name
can also be the management IP address of the array or the short name of the current
array. Common names cannot have more than 64 characters.
l Email: Enter the email address used to contact your organization.
4 Click Create to construct the CSR. The CSR pop-up window appears, displaying the CSR as
a block of encrypted data.
5 Click Download to download the CSR, which you can send to a certificate authority (CA) for
signing.
Importing a CA Certificate
After you receive the signed certificate from the CA, you are ready to import it to replace the
existing certificate.
1 Verify that the signed certificate is PEM formatted (Base64 encoded), includes the "-----
BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines, and does
not exceed 3000 characters in length.
2 Select Settings > System.
3 In the SSL Certificate panel, click the menu icon and select Import Certificate. The Import
Certificate pop-up window appears.
4 Complete or modify the following fields:
l Intermediate Certificate: If you also received an intermediate certificate from the CA,
click Choose File and select the intermediate certificate.
l Key: If the CSR was not constructed on the current array or the private key has
changed since you constructed the CSR, click Choose File and select the private
key.
l Key Passphrase: If the private key is encrypted with a passphrase, enter the pass-
phrase.
l Certificate: Click Choose File and select the signed certificate you received from the
CA.
5 Click Import.
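The format checks from step 1 of this procedure can be sketched with the standard library. The helper is hypothetical and only approximates the stated constraints (PEM framing lines, Base64 body, at most 3000 characters):

```python
import base64

# Hypothetical check: PEM framing present, total length within 3000
# characters, and the body decodable as Base64.
def looks_like_pem_cert(pem: str) -> bool:
    text = pem.strip()
    begin, end = "-----BEGIN CERTIFICATE-----", "-----END CERTIFICATE-----"
    if len(text) > 3000 or not (text.startswith(begin) and text.endswith(end)):
        return False
    body = text[len(begin):-len(end)]
    try:
        base64.b64decode("".join(body.split()), validate=True)
        return True
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
```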
Maintenance Windows
The Maintenance Windows panel displays whether the array is undergoing maintenance. If the
array is being maintained, the Enabled field indicates True, the message "The system is cur-
rently undergoing maintenance" appears in the panel, and the name, time it was created, and
expiration time are listed in a table below that message.
Note: The Rapid Data Locking feature is not supported on Cloud Block Store.
SNMP
The Simple Network Management Protocol (SNMP) is used by SNMP agents and SNMP man-
agers to send and retrieve information. FlashArray supports SNMP versions v2c and v3.
The SNMP panel displays the SNMP agent and the list of SNMP managers running in hosts with
which the array communicates.
In the FlashArray, the built-in SNMP agent has local knowledge of the array. The agent collects
and organizes this array information and translates it via SNMP to or from the SNMP managers.
The agent, named localhost, cannot be deleted or renamed. The managers are defined by
creating SNMP manager objects on the array. The managers communicate with the agent via the standard UDP port 161, and they receive notifications on UDP port 162.
In the FlashArray, the localhost SNMP agent has two functions, namely, responding to GET-
type SNMP requests and transmitting alert messages.
The agent responds to GET-type SNMP requests made by the SNMP managers, returning val-
ues for an information block, such as purePerformance, or individual variables within the block,
depending on the type of request issued. The variables supported are:
pureArrayReadBandwidth: Current array-to-host data transfer rate
pureArrayWriteBandwidth: Current host-to-array data transfer rate
pureArrayReadIOPS: Current read request execution rate
pureArrayWriteIOPS: Current write request execution rate
pureArrayReadLatency: Current average read request latency
pureArrayWriteLatency: Current average write request latency
The FlashArray Management Information Base (MIB) describes the purePerformance variables
and can be downloaded from the array to your local machine.
SNMP managers are added to the array through the creation of SNMP manager objects. When
creating an SNMP manager object, enter the Host, which represents the DNS hostname or IP
address of the computer that hosts the SNMP manager. Also specify the SNMP version from the
Version drop-down list. Valid versions are v2c and v3.
The SNMP agent generates and transmits messages to the SNMP manager as traps or inform
requests (informs), depending on the notification type that is configured on the manager. An
SNMP trap is an unacknowledged SNMP message, meaning the SNMP manager does not
acknowledge receipt of the message. An SNMP inform is an acknowledged trap.
If the SNMP manager notification type is set to trap, the agent sends the SNMP message (trap)
without expecting a response. If the SNMP manager is set to inform, the agent sends the
SNMP message (inform) and waits for a reply from the manager confirming message retrieval. If
the agent does not receive a response within a certain timeframe, it will retry until the inform has
passed through successfully. If the notification type is not set, the manager defaults to trap.
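The difference between the two notification types can be illustrated with a small simulation. This is a sketch only; the retry count and timeout handling are illustrative, not Purity//FA's actual internal values:

```python
def send_inform(transmit, max_retries=3):
    """Inform-style delivery: retry until the manager acknowledges
    receipt, up to max_retries attempts.

    `transmit` is a callable that returns True when the manager replies
    with an acknowledgement, and False on timeout.
    """
    for attempt in range(1, max_retries + 1):
        if transmit():
            return attempt  # acknowledged on this attempt
    return None  # gave up; delivery is not confirmed


def send_trap(transmit):
    """Trap-style delivery: fire and forget, no acknowledgement, no retry."""
    transmit()
```

The essential contrast is that only `send_inform` ever learns whether the message arrived.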
SNMPv2c uses a type of password called a community string to authenticate the messages that
are passed between the agent and manager. The community string is sent in clear text, which is
considered an unsecured form of communication. SNMPv3, on the other hand, supports secure
communication between the agent and manager through the use of authentication and privacy
encryption methods. As such, SNMPv2c and SNMPv3 have different security attributes.
To configure the SNMPv2c agent and managers, set the Community field to the community
string under which the agent is to communicate with the managers. The agent and manager
must belong to the same community; otherwise, the agent will not accept requests from the man-
ager. When setting the community, Purity prompts twice for the community string. To remove the
agent or manager from the community, leave the field blank.
To configure the SNMPv3 agent and managers, in the User field, specify the user ID that Purity
uses to communicate with the SNMP manager. Also set the authentication and privacy encryp-
tion security levels for the agent and managers. SNMPv3 supports the following security levels:
l noAuthNoPriv. Authentication and privacy encryption are not set. Similar to SNMPv2c,
communication between the SNMP agent and managers is neither authenticated nor
encrypted. noAuthNoPriv security requires no configuration.
l authNoPriv. Authentication is set, but privacy encryption is not set. Communication
between the SNMP agent and managers is authenticated but not encrypted. Pass-
word authentication is based on MD5 or SHA hash authentication.
To configure authNoPriv security, in the Auth Protocol field, set the authentication pro-
tocol to MD5 or SHA, and in the Auth Passphrase field, enter an authentication pass-
phrase.
l authPriv. Communication between the SNMP agent and managers is authenticated
and encrypted. Password authentication is based on MD5 or SHA hash authentication.
Traffic between the FlashArray and SNMP manager is encrypted using encryption
protocol AES or DES.
To configure authPriv security, in the Auth Protocol field, set the authentication pro-
tocol to MD5 or SHA, and in the Auth Passphrase field, enter an authentication pass-
phrase. Also, in the Privacy Protocol field, set the privacy protocol to AES or DES,
and in the Privacy Passphrase field, enter a privacy passphrase.
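The relationship between the configured fields and the resulting security level can be summarized in a short sketch. The field names follow the GUI labels described above; the function itself is illustrative, not part of any Purity//FA API:

```python
def snmpv3_security_level(auth_protocol=None, priv_protocol=None):
    """Derive the SNMPv3 security level from which protocols are set.

    auth_protocol: "MD5" or "SHA", or None if authentication is not set.
    priv_protocol: "AES" or "DES", or None if privacy encryption is not set.
    """
    if priv_protocol is not None:
        if auth_protocol is None:
            # authPriv layers encryption on top of authentication.
            raise ValueError("privacy encryption requires authentication")
        return "authPriv"
    if auth_protocol is not None:
        return "authNoPriv"
    return "noAuthNoPriv"
```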
Network
The Network page displays the network connection attributes of the array. See Figure 10-10.
Figure 10-10. Settings – Network Page
The Network page panels manage the Fibre Channel (physical), Ethernet (physical), subnets
and virtual, bond, VLAN, and app interfaces used to connect the array to a network.
Fibre Channel
The Fibre Channel panel manages the Fibre Channel interfaces used to connect the array to a
network. The panel displays the Fibre Channel interfaces on the array along with the following
network connection attributes: interface status (enabled or disabled), World Wide Name
(WWN), speed, and network service (scsi-fc, replication, or nvme-fc) that is attached to the inter-
face.
A value of True in the Enabled column indicates that an interface is enabled.
Ethernet
The Ethernet panel manages the Ethernet interfaces used to connect the array to a network.
The panel displays the Ethernet interfaces on the array along with the following network con-
nection attributes: interface status (enabled or disabled), type of connection (physical, bond,
LACP bond, or virtual interface), subnet, IP address, netmask, gateway, maximum transmission
units (MTU), MAC address, speed, network service (file, iscsi, management, nvme-roce, or
nvme-tcp) that is attached to the interface, and subinterfaces.
A value of True in the Enabled column indicates that an interface is enabled. If an interface
belongs to a subnet, the subnet name appears in the Subnet column, and all of its interfaces are
grouped with the subnet. A dash (-) in the Subnet column means the interface does not belong
to a subnet.
Subnets
Note: Subnets can only be configured on Ethernet ports.
Interfaces with common attributes can be organized into subnetworks, or subnets, to enhance
the efficiency of data (file, iSCSI, NVMe-RoCE, or NVMe-TCP), management, and replication
traffic.
In Purity//FA, subnets can include physical, virtual, bond, and VLAN interfaces. Physical, virtual,
and bond interfaces can belong to the same subnet. VLAN interfaces can only belong to subnets
with other VLAN interfaces.
Once a subnet with a valid IP address is created, all of its enabled interfaces are
immediately available for connection. The subnet inherits the services from all of its interfaces.
Likewise, the interfaces contained in the subnet inherit the gateway, MTU, and VLAN ID (if
applicable) attributes from the subnet.
Physical, virtual, and bond interfaces in a subnet share common address and MTU attributes.
The subnet can contain a mix of physical, virtual, and bond interfaces, and the interface services
can be of any type, such as file, iSCSI, management, NVMe-RoCE, NVMe-TCP, or replication
services.
Adding physical, virtual, and bond interfaces to a subnet involves the following steps:
1 Create a subnet.
2 Add the physical, virtual, and bond interfaces to the subnet.
A VLAN interface is a dedicated virtual network interface that is designed to be used with an
organization’s virtual local area network (VLAN). Through VLAN interfaces, Purity//FA employs
VLAN tags to ensure the data passing between the array and VLANs is securely isolated and
routed properly.
VLAN Tagging
VLAN tagging allows customers to isolate traffic through multiple virtual local area networks
(VLANs), ensuring data routes to and from the appropriate networks. The array performs the
work of tagging and untagging the data that passes between the VLAN and array.
VLAN tagging is supported for the following service types: file, iSCSI, NVMe-RoCE, and NVMe-
TCP. Before creating a VLAN interface, verify that one or more of these are configured on the
physical interface.
Creating and adding VLAN interfaces to a subnet involves the following steps:
1 Create a subnet, assigning a VLAN ID to the subnet.
2 Add one VLAN interface to the subnet for each corresponding physical network interface to
be associated with the VLAN. All of the VLAN interfaces within the subnet must be in the
same VLAN.
In Purity//FA, VLAN interfaces have the naming structure CTx.ETHy.z, where x denotes the
controller (0 or 1), y denotes the Ethernet interface number, and z denotes the VLAN ID number.
For example, ct0.eth1.500.
When VLAN tagging is used for file, VLAN IDs must be mirrored across both controllers. For
example, if a subnet with VLAN ID 50 is assigned to ct0.eth5, the same subnet must be
assigned to ct1.eth5.
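The naming structure and the mirroring rule for file services can be sketched in a few lines. The helper names here are hypothetical, used only to make the convention concrete:

```python
def vlan_interface_name(controller: int, eth: int, vlan_id: int) -> str:
    """Build a Purity//FA VLAN interface name of the form ctX.ethY.Z."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return f"ct{controller}.eth{eth}.{vlan_id}"


def mirrored_pair(eth: int, vlan_id: int) -> tuple:
    """For file services, the same VLAN subnet must be assigned to the
    matching port on both controllers."""
    return (vlan_interface_name(0, eth, vlan_id),
            vlan_interface_name(1, eth, vlan_id))
```

For instance, `mirrored_pair(5, 50)` yields the `ct0.eth5.50`/`ct1.eth5.50` pair described above.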
The new subnet details appear in the Subnets panel. See Figure 10-12.
Click Add interface to add interfaces to the subnet.
Figure 10-12. Network – Subnets Panel
LACP
Link Aggregation Control Protocol (LACP) is an IEEE standard that allows individual Ethernet
links to be aggregated into a single logical Ethernet link. Depending on your scenario, it can be
used to increase bandwidth utilization, increase availability, or simplify network configurations.
In order for LACP to work with the FlashArray, the network switch must be configured for LACP
as well.
LACP (IEEE 802.3ad) is supported on the following FlashArray Ethernet ports:
l iSCSI
l File VIFs
l NVMe-TCP
l Replication (ActiveCluster only)
Prior to configuring LACP on the FlashArray, LACP must be configured on the network switch
according to the network switch vendor’s best practices. LACP can only be configured between
Ethernet ports on the same controller. LACP is not supported on ports across controllers. Subin-
terfaces added to an LACP interface must have the same speed, MTU, and service.
l Type: Indicates the interface type which is physical for Ethernet interfaces. The
type cannot be changed.
l Address: IP address to be associated with the specified Ethernet interface.
Creating a Subnet
Creating the subnet involves setting the subnet attributes, and then adding the interfaces to the
subnet.
A subnet can contain physical, virtual, and bond interfaces (for non-VLAN tagging purposes) or
VLAN interfaces (for VLAN tagging purposes).
To create a subnet:
1 Select Settings > Network.
2 In the Subnets panel, click the Create Subnet icon in the upper-right corner of the panel. The
Create Subnet dialog box appears.
3 Complete the following fields:
l Name: Name of the subnet.
l Enabled: Indicates whether the subnet is enabled (blue) or disabled (gray).
l Prefix: IP address of the subnet prefix and prefix length (defaults to 24).
l For IPv4, specify the prefix in the form ddd.ddd.ddd.ddd/dd.
l For IPv6, specify the prefix in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xxx. Consecutive
fields of zeros can be shortened by replacing the zeros with a double colon
(::).
l VLAN: For VLAN tagging, specify the VLAN ID, between 1 and 4094, to which the
subnet is associated. If you specify the VLAN ID number, Purity//FA filters the
available physical interfaces to only those set to iSCSI services. The physical inter-
face name with the appended VLAN ID number becomes the VLAN interface name.
If the interface is not part of a VLAN, leave this field blank.
l Gateway: IP address of the gateway through which the specified interface is to com-
municate with the network.
l For IPv4, specify the gateway IP address in the form ddd.ddd.ddd.ddd.
l For IPv6, specify the gateway IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx. Consecutive fields
of zeros can be shortened by replacing the zeros with a double colon (::).
l MTU: Maximum transmission unit (MTU) of the subnet. If not specified, the MTU
value defaults to 1500. Interfaces inherit their MTU values from the subnet. Note that
the MTU of a VLAN interface cannot exceed the MTU of the corresponding physical
interface.
4 Click Create. After the subnet has been created, add interfaces to it.
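The prefix formats the dialog accepts map directly onto Python's standard `ipaddress` module, which also performs the IPv6 zero compression described above. This is a sketch of client-side validation, not part of Purity//FA itself:

```python
import ipaddress


def validate_subnet_prefix(prefix: str) -> str:
    """Validate a subnet prefix as entered in the Prefix field and return
    it in canonical form. Accepts both address families; for IPv6,
    consecutive fields of zeros are compressed with '::'.

    strict=True rejects prefixes with host bits set (e.g. 10.0.0.5/24).
    """
    return str(ipaddress.ip_network(prefix, strict=True))


def with_default_length(prefix: str, default: int = 24) -> str:
    """Apply the dialog's default prefix length of 24 when none is given.
    (The documented default is IPv4-oriented; adjust for IPv6 use.)"""
    return prefix if "/" in prefix else f"{prefix}/{default}"
```

For example, `validate_subnet_prefix("2001:0db8:0000:0000:0000:0000:0000:0000/64")` returns the shortened form `2001:db8::/64`.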
Deleting a Subnet
Deleting a subnet automatically removes all of the interfaces for the subnet and deletes the sub-
net. Any current connections through the subnet are disconnected.
To delete a subnet:
1 Select Settings > Network.
2 In the Subnets & Interfaces panel, click the Delete Subnet icon for the subnet you want to
delete.
The Delete Subnet dialog box appears notifying you that all interfaces in the subnet will be
removed and the subnet will be deleted. When Purity//FA removes the interfaces, any cur-
rent connections through the subnet are disconnected.
3 Click Delete.
DNS Settings
The DNS Settings panel manages the DNS attributes for an array's administrative and, option-
ally, file services network. DHCP mode is not supported. DNS server settings can be added,
edited, or deleted.
Access
The buttons at the top of the page allow you to switch between the Array page and the File Sys-
tem page. The Array page manages the Purity//FA user accounts and their attributes. This page
also displays user details, such as audit trails and login activity. The File System page manages
the Purity//FA file system local users and groups. See Figure 10-13.
Figure 10-13. Settings – Access Page
Array Accounts
The Array page displays a list of Purity//FA user accounts and their attributes.
Users Panel
In the Array section, the Users panel displays the following types of users:
l pureuser administrative account.
l Users that have been created on the array.
l LDAP users with a public key and/or API token. LDAP users that do not have a public
key or API token do not appear in the list.
The FlashArray is delivered with a single administrative account named pureuser. The
account is password protected and may alternatively be accessed using a public-private key
pair. The pureuser account is set to the array administrator role, which has array-wide
permissions. The pureuser account cannot be renamed or deleted.
Users can be added to the array either locally by creating and configuring a local user directly on
the array, or through Lightweight Directory Access Protocol (LDAP) by integrating the array with
a directory service, such as Active Directory or OpenLDAP. For more information about integ-
rating the array with a directory service, refer to the Settings > Access > Directory Service sec-
tion.
Locally, on the array, users can only be created by array administrators. The name of the local
user must be unique. The local user name cannot be the same name as an LDAP user. If an
LDAP user appears with the same name as a local user, the local user always has priority. The
Type column of the Users panel identifies the way in which a user is added to the array as Local
or LDAP.
Role-based access control (RBAC) restricts system access and capabilities to each user based
on their assigned role in the array.
All users in the array, whether created locally or added to the array through LDAP integration,
are assigned one of the following roles in the array:
l Read-Only. Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same oper-
ations as Read Only users plus enable and disable remote assistance sessions. Ops
Admin users cannot alter the state of the array.
l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same
operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
For local users, the role is set during user creation. For LDAP users, the role is set by configuring
groups in the directory that correspond to the FlashArray user roles.
Each local user account on the array is password protected. The password is assigned during
user creation and can be modified by array administrators. All local users can manage their own
passwords, but only array administrators can manage the passwords of other users. Changing a
local user's password requires knowledge of the current password. If the password of a local
user is unknown, delete the account and recreate it with the desired password. Note that delet-
ing a local user's account means deleting any public key associated with the user. If the pass-
word of the pureuser account is unknown, contact Pure Storage Technical Services to reset
the account to the default pureuser password. Passwords of LDAP users are managed in the
directory service.
Note: For arrays with optional multi-factor authentication enabled, passwords are not
used. Instead, a third-party application, such as Microsoft® Active Directory Federation
Services (AD FS) authentication identity management system or RSA SecurID® soft-
ware, manages array authentication. For AD FS, see "Multi-factor Authentication with
SAML2 SSO" on page 316. Multi-factor authentication with RSA SecurID® software is
managed only with the CLI puremultifactor command. The Purity//FA GUI does not
configure or show the status of RSA SecurID® software multi-factor authentication on an
array.
If a public key has been created for the user, it appears masked in the Public Key column. All
users can manage their own public keys, but only array administrators can manage the public
keys associated with other users.
If an API token has been created for the user, it appears masked in the API Token column. API
tokens are used to securely create REST API sessions. After creating an API token, users can
create REST API sessions and start sending requests. For more information about the Pure Stor-
age REST API, refer to the REST API Reference Guide on the Knowledge site at https://sup-
port.purestorage.com.
An API token is unique to the Purity//FA user for whom it was created. Once created, an API
token is valid until it is deleted or recreated.
API token management does not affect Purity//FA user names and passwords. For example,
deleting an API token does not invalidate the Purity//FA user name or password that was used to
create the token. Likewise, changing the Purity//FA password does not affect the API token.
Single sign-on (SSO) gives LDAP users the ability to navigate seamlessly from Pure1 Manage
to the current array through a single login. If single sign-on is not enabled on an array, users
must manually log in with their credentials each time they navigate from Pure1 Manage to the
array. Enabling and disabling single sign-on takes effect immediately. By default, single sign-on
is not enabled.
Enabling single sign-on is a two-step process: first, configure single sign-on and LDAP integ-
ration through Pure1 Manage, and second, enable single sign-on on the array through Pur-
ity//FA. For more information about SSO and LDAP integration with Pure1 Manage, refer to the
Pure1 Manage - SSO Integration article on the Knowledge site at https://sup-
port.purestorage.com.
Creating a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon in the upper-right corner of the panel and select Create
User… The Create User pop-up window appears.
3 In the User field, type the name of the new user. The name must be between 1 and 32 char-
acters (alphanumeric and '-') in length and begin and end with a letter or number. The name
must include at least one letter or '-'. All letters must be in lowercase.
4 In the Role field, select the role for the new user. Options include:
l Read-Only. Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same oper-
ations as Read Only users plus enable and disable remote assistance sessions. Ops
Admin users cannot alter the state of the array.
l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same
operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
5 In the Password field, type a password for the new user. The password must be between 1
and 100 characters in length, and can include any character that can be entered from a US
keyboard.
6 In the Confirm Password field, type the password again.
7 Click Create.
Changing the Login Password of a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Current Password field, type the user's current password.
4 In the New Password field, type the user's new password. The password must be between 1
and 100 characters in length, and can include any character that can be entered from a US
keyboard.
5 In the Confirm New Password field, type the new password again.
6 Click Save. The new password is required the next time the user logs in to Purity//FA.
Changing the Role of a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Role field, select the role.
4 Click Save.
Deleting a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Delete
User…. The Delete User pop-up window appears.
3 Click Delete.
Adding a Public Key
1 Select Settings > Access.
2 In the Users panel, click the Edit icon in the panel heading and select Update Public Key….
The Update Public Key pop-up window appears.
3 In the User field, type the name of the local or LDAP user for which you want to create the
public key.
4 If the user does not have an existing public key, enter the public key in the Public Key field. If
the user already has a public key, select Overwrite and enter the public key.
5 Click Save.
Updating a Public Key
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Public Key field, select Overwrite and enter the public key.
4 Click Save.
Deleting a Public Key
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Public Key field, select Remove.
4 Click Save.
5 Click Remove.
Creating an API Token
API Clients
An API client represents an identity type that is created on the array. The user name and identity
tokens of an API client are used as claims for the JSON Web Token that you create to authen-
ticate into the REST API.
To create an API client:
1 Select Settings > Access.
2 In the API Clients panel, click the Create API Client icon (+).
3 In the Create API Client window, enter the API client name, OAuth issuer, the maximum role
(Array Admin, Storage Admin, Ops Admin, or Read-Only), the time to live (one day is the
default), and RSA public key in PEM format.
4 Click Create.
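The shape of the token those claims feed into can be sketched with the standard library. The claim names shown here (`iss`, `sub`, `iat`, `exp`) are standard JWT registered claims used for illustration; consult the REST API Reference Guide for the exact claims Purity//FA expects, and note that a real token must be signed (RS256) with the private half of the RSA key whose public half was registered with the API client. Signing is omitted in this sketch:

```python
import base64
import json
import time


def b64url(data: bytes) -> str:
    """Unpadded base64url encoding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def build_jwt_claims(issuer: str, username: str, ttl_seconds: int = 86400) -> str:
    """Assemble the header and claim set of a JWT (unsigned sketch).

    issuer   -- the OAuth issuer configured on the API client
    username -- the user the token authenticates as
    """
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {"iss": issuer, "sub": username, "iat": now, "exp": now + ttl_seconds}
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```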
Directory Service
The Directory Service panel manages the integration of FlashArray arrays with an existing dir-
ectory service.
The Purity//FA release comes with a single local administrative account named pureuser with
array-wide (array_admin) permissions. The account is password protected and may altern-
atively be accessed using a public-private key pair.
Additional users can be added to the array by creating and configuring local users directly on the
array. For more information about local users, refer to the Settings > Access > Users section.
Users can also be added to the array through Lightweight Directory Access Protocol (LDAP) by
integrating the array with an existing directory service. If a user is not found locally, the directory
servers are queried. OpenLDAP and Microsoft's Active Directory (AD) are two implementations
of LDAP that Purity//FA supports.
With LDAP integration, the array leverages the directory for authentication (validate user's pass-
word) and authorization (determine user's role in the array).
The Directory Service panel displays the settings for the directory service to be used for role-
based access control.
The Configuration section of the Directory Service panel displays the details for the base con-
figuration of the directory service, including its URLs, base DN, bind user name, and bind pass-
word. Configuring and then enabling the directory service allows users in the LDAP directory to
log in to the array. If Check Peer is enabled, server authenticity using the CA certificate is
enforced during the bind and query test. Note that you must set the CA certificate before you can
enable Check Peer.
The Roles section of the Directory Service panel displays the current role-to-group con-
figurations for the directory service. In order to log in to the array, a user must belong to a con-
figured group in the LDAP directory, and that group must be mapped to an RBAC role in the
array. The Group field represents the common name (CN) of the configured group that maps to
the role in the array. The group name excludes the "CN=" specifier. For example,
purereadonly. The Group Base field represents the common organizational unit (OU) under which
to search for the group. The order of OUs gets smaller in scope from right to left. Multiple OUs
are listed in comma-separated format.
The Test button in the upper-right corner of the Directory Service panel, when clicked, runs a
series of tests to verify that the URIs can be resolved and that the array can bind and query the
tree using the bind user credentials. The test also verifies that the array can find all the con-
figured groups to ensure the common names and group base are correctly configured. The test
can be run at any time.
Users
For IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a number ran-
ging from 0 to 255 representing a group of 8 bits.
For IPv6, specify the IP address in the form
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx], where xxxx is a hexadecimal number
representing a group of 16 bits. Enclose the entire address in square brackets ([]). Consecutive
fields of zeros can be shortened by replacing the zeros with a double colon (::).
For directory service enabled accounts, user passwords to the array are managed through the
directory service, while public keys are configured through Purity//FA.
Accounts with user names that conflict with local accounts will not be authenticated against the
directory. These account names include, but are not limited to: pureuser, os76, root,
daemon, sys, man, mail, news, proxy, backup, nobody, syslog, mysql, ntp, avahi,
postfix, sshd, snmp.
If an LDAP user has the same name as a locally created user, the locally created user always
has priority.
Users with disabled accounts will not have access to the array.
Groups
A group in the LDAP directory consists of users who share a common purpose.
Each configured group in the directory has a unique distinguished name (DN) representing the
entire path of the object's location in the directory tree. The DN is comprised of the following
attribute-value pairs:
l DC - Domain component base of the DN. For example, DC=mycompany,DC=com.
l OU - Organizational unit base of the group. For example,
OU=PureGroups,OU=SAN,OU=IT.
l CN - Common name of the groups themselves. For example, CN=purereadonly.
For example,
CN=purereadonly,OU=PureGroups,OU=SAN,OU=IT,DC=mycompany,DC=com is the DN
for configured group purereadonly at group base OU=PureGroups,OU=SAN,OU=IT and
with base DN DC=mycompany,DC=com.
The DN can contain multiple DC and OU attributes.
OUs are nested, getting more specific in purpose with each nested OU.
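Assembling a group's DN from the three pieces described above is mechanical, and can be sketched as follows (the helper name is hypothetical):

```python
def build_group_dn(cn: str, group_base: str, base_dn: str) -> str:
    """Assemble the full distinguished name of a configured group from
    its common name, its group base OUs, and the base DN.

    cn         -- group name without the "CN=" specifier, e.g. "purereadonly"
    group_base -- comma-separated OUs, e.g. "OU=PureGroups,OU=SAN,OU=IT"
    base_dn    -- domain components, e.g. "DC=mycompany,DC=com"
    """
    return f"CN={cn},{group_base},{base_dn}"
```

Applied to the example above, this reproduces the full DN for purereadonly.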
For OpenLDAP, for group configurations based on the non-posixAccount class, groups must
have the full DN of members in the member attribute (groupOfNames). For group con-
figurations based on the posixAccount class, groups must have the uid of members in the
memberUid attribute.
When a user who is a member of a configured group logs in to the array, only the CLI actions
that the user has permission to execute will be visible. Similarly, in the GUI, actions the user
does not have permission to execute will be grayed out or disabled.
For Active Directory, two types of groups are supported: security groups and distribution groups.
Distribution groups are used only with email applications to distribute messages to collections of
users. Distribution groups are not security enabled. Security groups assign access to resources
on your network. All groups configured on the array must be security groups.
Role-Based Access Control
Role-based access control (RBAC) restricts the system access and capabilities of each user
based on their assigned role in the array.
All users in the array, whether created locally or added to the array through LDAP integration,
are assigned one of the following roles in the array:
l Read Only. Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same
operations as Read Only users plus enable and disable remote assistance sessions.
Ops Admin users cannot alter the state of the array.
l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same
operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
For LDAP users, role-based access control is achieved by configuring the groups in the LDAP
directory to correspond to the different roles in the array. For example, a group named
"purereadonly" in the directory might correspond to the readonly role in the array.
For security purposes, each user should be assigned to only one role in the array. If a user
belongs to multiple configured groups that map to different roles in the array, modify the LDAP
directory to ensure that the user belongs to only one group. If a user has multiple roles, one of
which includes the ops_admin role, the user will be locked out of the system and an alert will be
sent to all alert recipients. Modify the LDAP directory to ensure that the user has only one user
role assigned. If a user has multiple roles, none of which include the ops_admin role, the user
will have privileges corresponding to the least privileged group. For example, a user who has
both the readonly and array_admin roles will have read-only privileges.
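The resolution rules in this paragraph can be sketched as a small model; illustrative of the described behavior, not Purity//FA code:

```python
# Roles ordered from least to most privileged, per the descriptions above.
PRIVILEGE_ORDER = ["readonly", "ops_admin", "storage_admin", "array_admin"]

def effective_role(roles):
    """Return the effective role for a set of mapped roles, or None on lockout."""
    if len(roles) > 1 and "ops_admin" in roles:
        # Multiple roles including ops_admin: the user is locked out.
        return None
    # Otherwise the least privileged of the mapped roles applies.
    return min(roles, key=PRIVILEGE_ORDER.index)
```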
Enter the comma-separated list of up to 30 URIs of the directory servers. Each URI
must include the scheme ldap:// or ldaps:// (for LDAP over SSL) and a hostname,
domain name, or IP address. For example, ldap://ad.company.com configures the
directory service with the hostname "ad" in the domain "company.com" while
specifying the unencrypted LDAP protocol.
If specifying a domain name, it should be resolvable by the configured DNS servers.
If specifying an IPv4 address, specify the IP address in the form ddd.ddd.ddd.ddd,
where ddd is a number ranging from 0 to 255 representing a group of 8 bits.
For IPv6, specify the IP address in the form
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx], where xxxx is a hexadecimal
number representing a group of 16 bits. Enclose the entire address in square
brackets ([]). Consecutive fields of zeros can be shortened by replacing the zeros
with a double colon (::).
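Python's standard ipaddress module can produce both address forms, including the bracketed, zero-compressed IPv6 form; the helper name here is an assumption of this sketch:

```python
import ipaddress

def ldap_host(address):
    """Format an IP address for use as the host part of an LDAP URI."""
    ip = ipaddress.ip_address(address)
    if ip.version == 6:
        # Bracket IPv6 and let the module compress zero runs with "::".
        return "[{}]".format(ip.compressed)
    return str(ip)
```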
If the scheme of the URIs is ldaps://, SSL is enabled. SSL is either enabled or
disabled globally, so the scheme of all supplied URIs must be the same. They must
also all have the same domain.
If base DN is not configured and a URI is provided, the base DN will automatically
default to the domain components of the URIs.
Optionally specify a port. Append the port number after the end of the entire address.
Default ports are 389 for ldap, and 636 for ldaps. Non-standard ports can be specified
in the URI if they are in use.
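The URI rules above (allowed schemes, one shared scheme, the 30-URI limit, and the default ports) can be sketched as a validation helper; the names are illustrative, and this is not how Purity//FA itself validates input:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"ldap": 389, "ldaps": 636}

def check_uris(uris):
    """Return (hostname, port) pairs, applying the default port per scheme."""
    if len(uris) > 30:
        raise ValueError("at most 30 directory server URIs are allowed")
    schemes = {urlsplit(u).scheme for u in uris}
    if not schemes.issubset(DEFAULT_PORTS):
        raise ValueError("scheme must be ldap:// or ldaps://")
    if len(schemes) > 1:
        raise ValueError("all URIs must use the same scheme")
    parts = [urlsplit(u) for u in uris]
    return [(p.hostname, p.port or DEFAULT_PORTS[p.scheme]) for p in parts]
```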
Base DN:
Enter the base distinguished name (DN) of the directory service. The Base DN is built
from the domain and must be in valid DN syntax. For example, for
ldap://ad.storage.company.com, the Base DN would be:
"DC=storage,DC=company,DC=com".
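The default derivation of the base DN from a URI can be sketched as follows, assuming (consistent with the example above) that the first label of the hostname is the server name and the remaining labels are the domain components:

```python
from urllib.parse import urlsplit

def default_base_dn(uri):
    """Derive a base DN from the domain components of a directory server URI."""
    host = urlsplit(uri).hostname            # e.g. "ad.storage.company.com"
    domain_labels = host.split(".")[1:]      # drop the server label "ad"
    return ",".join("DC={}".format(label) for label in domain_labels)
```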
Bind User:
Enter the username for the account that is used to perform directory lookups.
For Active Directory, enter the user name (often referred to as sAMAccountName or
User Logon Name) for the account that is used to perform directory lookups. The user
name cannot contain the characters " [ ] : ; | = + * ? < > / \ , and
cannot exceed 20 characters in length.
For OpenLDAP, enter the full DN of the user. For example,
"CN=John,OU=Users,DC=example,DC=com".
The bind account must be configured to allow the array to read the directory. It is good
practice for this account not to be tied to any actual person and to have different
password restrictions, such as "password never expires". The bind account should also
not be a privileged account, since only read access to the directory is required.
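The Active Directory user name restrictions above can be expressed as a quick check; a sketch for illustration, not a complete sAMAccountName validator:

```python
# Characters disallowed in the bind user name, per the list above.
FORBIDDEN = set('"[]:;|=+*?<>/\\,')

def valid_bind_username(name):
    """True if the name avoids forbidden characters and fits in 20 characters."""
    return len(name) <= 20 and not (set(name) & FORBIDDEN)
```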
Bind Password:
Enter the password for the bind user account. The password appears in masked form.
Check Peer:
Optionally click the toggle button to enable (blue) Check Peer. If Check Peer is
enabled, Purity//FA validates the authenticity of the directory servers using the CA
Certificate. If you enable Check Peer, you must provide a CA Certificate.
4 Click Save.
Configuring the CA Certificate
1 Select Settings > Access.
2 In the Directory Service panel, click Edit next to CA Certificate. The Edit CA Certificate dialog
box appears.
3 In the Edit CA Certificate dialog box, enter the certificate of the issuing certificate authority.
Only one certificate can be configured at a time, so the same certificate authority should be
the issuer of all directory server certificates.
4 The certificate must be PEM formatted (Base64 encoded) and include the "-----BEGIN
CERTIFICATE-----" and "-----END CERTIFICATE-----" lines. The certificate
cannot exceed 3000 characters in total length.
5 Click Save.
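The format requirements can be sketched as a quick pre-check; this is not a full PEM parser, and the function name is illustrative:

```python
def looks_like_pem_cert(text):
    """Check the BEGIN/END markers and the 3000-character limit."""
    body = text.strip()
    return (body.startswith("-----BEGIN CERTIFICATE-----")
            and body.endswith("-----END CERTIFICATE-----")
            and len(body) <= 3000)
```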
Configuring the Directory Service Roles
1 Select Settings > Access.
2 In the Directory Service panel, click the Roles edit icon. The Edit Directory Service Roles
dialog box appears.
3 In the Edit Directory Service Roles dialog box, complete or modify the following fields:
Group:
Enter the common name (CN) of the configured group that maps to the role in the
array. The group name should be just the common name of the group without the
"CN=" specifier. For example, purereadonly.
Group Base:
Enter the common organizational unit (OU) under which to search for the group.
Specify "OU=" for each organizational unit. The order of OUs should get smaller in
scope from right to left. List multiple OUs in comma-separated format.
4 Click Save.
The IdP metadata file contains this information and having the IdP metadata file URL
is sufficient.
l The directory service used with the array must be the same directory service instance
as used by the identity provider in the relying party trust configuration for this array.
Purity//FA does not support multiple or federated directory services.
l If both Pure1 Manage SSO and FlashArray SAML2 SSO are enabled, both must use
the same directory service.
l The AD FS server must be configured to support TLS 1.2 or 1.3 with strong
authentication, if AD FS monitoring is to be configured for the array relying party trust.
Important: Before configuring SSO, create a strong password for the pureuser and
other array administrator accounts and save those passwords according to your
organization's security policies.
This list summarizes the steps to configure and enable SAML2 SSO authentication on a
FlashArray. Configuration is required on both the service provider side (Purity//FA) and the
identity provider side to complete the SSO setup.
1 In Purity//FA, configure SAML2 SSO on the array.
a Obtain IdP information from AD FS, Okta, Azure AD, or Duo Security, or from an
administrator.
b Configure the service provider (SP) with array information and IdP information.
c Test the basic SP Configuration.
2 In the identity provider, set up SSO using SP information from the array.
3 In Purity//FA, run the end-to-end test of the SAML2 SSO configuration.
4 In Purity//FA, enable SAML2 SSO Authentication.
The service provider configuration, basic test, end-to-end test, and enabling SAML2 SSO are
performed in the Settings > Access tab > SAML2 SSO pane.
For directory service configuration, see the Settings > Access tab > Directory Service pane. Use
the AD FS Management Tool on AD FS to configure the AD FS IdP for SSO and optionally MFA
with the array.
Important: Tests are required at two different steps. Do not bypass either test.
Configuration Notes
l The verification certificate (the AD FS primary token-signing certificate) must be an
X.509 certificate in PEM format.
l The array name portion of the URL in the browser used to configure the service
provider must be consistent with the URL entered into the Array URL field in the SAML2
SSO pane. Whether the URLs are based on an FQDN (such as
pure01.mycompany.com) or on a hostname (such as pure01), the browser URL and the
Array URL field configured in the SAML2 SSO pane must be consistent in the way the
array name is specified.
When there is a mismatch, the browser used to configure SAML2 SSO cannot find
the results of the required end-to-end test.
l Due to a difference in the treatment of IP addresses, the directory service test and the
SAML2 SSO configuration tests may fail on Cloud Block Store arrays.
Contact Pure Storage Technical Services to run these tests on Cloud Block Store
arrays.
Group to Role Mapping
Group to Role Mapping on Purity//FA
Group to role mapping on Purity//FA is only certified with the AD FS server. To configure the
group to role mapping, see the Settings > Access tab > Directory Service pane. To configure
the AD FS server to send the user group in the SAML response, see Mapping attributes
from AD with AD FS and SAML.
Group to Role Mapping on IdPs
Group to role mapping on IdPs is certified with AD FS, Okta, Duo Security, and Azure AD. Each
IdP has unique configuration steps to map the user group(s) to the Purity//FA roles (array_
admin, storage_admin, ops_admin, and readonly). The specified role is then sent in the
SAML response with attribute name purity_roles. Contact Pure Storage Technical Services
to create an SSO integration application on the IdP side and to configure group to role mapping
for the specified IdP.
Note: Directory service configuration is no longer necessary for role mapping on IdPs.
This step is required only if the directory service has not yet been configured for use with the
FlashArray or is not the same directory service instance as used by the IdP.
Notes about the directory service:
l The directory service configured with the array must be the same directory service
instance that is configured in the IdP relying party trust for this array.
l The directory service configuration must include groups and roles. (See the
"Directory Service" on page 309 section, especially "Groups" on page 311 and
"Configuring the Directory Service Roles" on page 315.)
l If you configure the directory service, also run its test of the array management
configuration. The Test button is near the top right of the Settings > Access > Directory
Service pane.
Configure SAML2 SSO in Purity//FA
Figure 10-14 shows a sample Edit SAML2 SSO Configuration page completed for the initial
configuration. This page can optionally be completed with only a configuration display name,
array URL, and URL for the IdP metadata file. Purity//FA fills in the shaded SP ID and URL
fields based on the configuration Name and Array URL fields.
We recommend not using the optional SP credentials, IdP request signing, or assertion
encryption until after the initial configuration passes the end-to-end test. Note also that the
Enable toggle remains in the off position.
Note: These instructions require IdP information available either from the
AD FS Management Tool or from an AD FS administrator.
1 Open the Settings > Access tab and scroll to the SAML2 SSO pane. Click the Create SAML2
SSO icon.
2 In the Name field, enter a local display name for the SAML2 SSO configuration on the array.
3 Leave the Enabled toggle off. The toggle cannot be set on at this time.
4 Review the Array URL discussion in "Configuration Notes" on page 317.
In the Array URL field, enter the FlashArray URL. The URL must use HTTPS.
5 In the Service Provider (SP) section, Purity//FA fills in the ID and URL fields based on the
configuration Name and Array URL information provided in the previous steps.
The information in these service provider fields is required later to create a relying party for
the array in the AD FS identity provider.
6 Leave the optional signing credential and decryption credential fields empty for the initial
configuration.
7 Obtain the IdP information from your IdP administrator or from the AD FS Management Tool.
The IdP Entity ID is found in the AD FS Management Tool under AD FS/Service/Federated
Service Properties, and other URLs are under AD FS Ser-
9 If "Testing directory service correctness" does not pass, click Test on the Directory Service
pane for detailed error messages. Rerun the SAML2 SSO test after correcting the directory
service configuration.
10 Click Close on the test results pop-up.
11 On the Edit SAML2 SSO Configuration page, click Save.
Configure the Active Directory Federation Services IdP
This step registers Purity//FA as a relying party trust on AD FS and requires administrator
access to the AD FS Management Tool.
Optionally use the copy icons to the right of the SP ID and URL fields in the Purity//FA SAML2
SSO pane to copy and paste during the AD FS configuration.
1 On the machine running AD FS, open Control Panel > System and Security > Admin-
istrative Tools and select AD FS Management.
2 In the left panel, select Relying Party Trusts. In the right panel, select Add Relying Party
Trust....
3 The Add Relying Party Trust Wizard opens.
Table 10-6. Sample Configuration in the Add Relying Party Trust Wizard
Welcome: Ensure Claims aware is selected.
Select Data Source: Select Enter data about the relying party manually.
Specify Display Name: Enter a display name for the new relying party. (For convenience, this
name could match the SP configuration display name, as set in the Purity//FA Settings >
Access > SAML2 SSO pane.)
Configure Certificate: No action.
Configure URL: Select Enable support for the SAML 2.0 Web SSO protocol. In the Relying
party SAML 2.0 SSO service URL field, enter the Assertion Consumer URL from the
Purity//FA SAML2 SSO pane.
Configure Identifiers: In the Relying party trust identifier field, enter the SP Entity ID from the
Purity//FA SAML2 SSO pane.
Choose Access Control Policy: Select Permit everyone.
Ready to Add Trust: No action.
Finish: Select Configure claims issuance policy for this application.
Table 10-7. Actions in the Add Transform Claim Rule Wizard for Users
Claim rule name: Enter a descriptive name for the rule, such as map name id.
Attribute store: Select Active Directory.
Mapping table, LDAP Attribute column: Select SAM-Account-Name.
Mapping table, Outgoing Claim Type column: Select Name ID.
Click Finish.
The new rule for name mapping appears in the Edit Claim Issuance Policy page, in the
Issuance Transform Rules table.
a Again click Add Rule..., this time for the group information rule.
b The Add Transform Claim Rule Wizard opens. In the Select Rule Template page, in the
Claim rule template field, select Send LDAP Attributes as Claims.
Table 10-8. Actions in the Add Transform Claim Rule Wizard for Groups
Claim rule name: Enter a descriptive name for the rule, such as pass group info.
Attribute store: Select Active Directory.
Mapping table, LDAP Attribute column: Select Is-Member-Of-DL.
Mapping table, Outgoing Claim Type column: Select Group.
Click Finish.
The new rule for group mapping appears in the Edit Claim Issuance Policy page, in the
Issuance Transform Rules table.
Perform the SAML2 SSO End-to-end Test
Do not bypass this test. SAML2 SSO configuration is complicated, requiring correct
configuration on both the SP and IdP. All SSO user authentication can fail if SSO is enabled
prematurely. Perform the end-to-end test before enabling SSO and also after future
configuration changes.
This test does not affect current user sessions or login attempts.
1 Open the Settings > Access tab and scroll to the SAML2 SSO pane.
2 Click Test.
3 The Test SAML2 SSO Configuration dialog opens. The "Basic test results" section reports on
an array URL test, connectivity to the AD FS server and the directory service, and basic
directory service configuration. If any failures appear in the "Basic test results" section,
resolve those issues and redo the test before proceeding.
4 When the basic tests all pass, click E2E Test.
5 The AD FS login page opens in a new browser tab. See Figure 10-16.
As this page is customizable, the page for each organization will have different text and
appearance.
Figure 10-16. AD FS Login Screen
If the test reports any error, check that the service provider and AD FS configurations are
consistent and correct, make any corrections if necessary, and rerun the test. See "Perform
the SAML2 SSO End-to-end Test" on page 324.
Note: If you cannot complete the test, whether because it takes too long or for another reason,
the SAML2 SSO configuration may be incorrect and may not function properly if enabled.
Note: If the browser does not open the End To End Test Results pop-up, confirm that the
current browser matches (in terms of FQDN or hostname) the URL specified in the Array
URL field. Log into Purity//FA using the URL specified in the Array URL field, and retry the
test.
7 When the E2E test passes, click Close and return to the configuration.
8 The Test SAML2 SSO Configuration dialog opens again but still shows "End-to-end test has
not been started yet". Click Check E2E Test Result to update the display.
See Figure 10-18 for an example of a successful test.
9 Click Close.
10 In the SAML2 SSO configuration screen, click Close.
Optionally Enable Multi-factor Authentication
The types of multi-factor authentication available depend on the IdP. The AD FS IdP supports
certificate authentication, authentication with Microsoft® Azure™ software, and others.
If the GUI end-to-end test is successful with password authentication, optionally enable MFA:
1 In the AD FS Management tool Relying Party Trust page, select the relying party you cre-
ated. In the right panel, select Edit Access Control Policy....
2 On the "Choose an access control policy" page, select Permit everyone and require MFA
and then click Next.
3 Next steps, such as configuring a certificate, depend on the type of MFA selected.
Enable SSO Authentication
After the SAML2 SSO configuration is enabled, all new login attempts to the Purity//FA GUI
by SAML users are referred to the identity provider for authentication. Existing user sessions
are not affected.
Important: Only enable SAML2 SSO authentication if the end-to-end test passes!
Otherwise, all SAML users could be locked out of the Purity//FA management GUI, and the
only access to the management GUI is through the Local Access link on the login page.
1 Optionally log into Purity//FA in a second browser as well, for use if login access is interrupted.
2 Open the Settings > Access tab and scroll to the SAML2 SSO pane. Click the Configuration
edit icon.
3 Click the right side of the Enable toggle to slide the toggle to the right. When enabled, the
toggle is on the right side and changes to blue.
4 Click Save.
5 The SAML2 SSO pane shows that SSO authentication is enabled. See Figure 10-19.
With SAML2 SSO properly configured and enabled, the Purity//FA login screen no longer
prompts for a password, as shown in Figure 10-20.
On a user's first login, and also after an SSO session expires, the AD FS login page opens to
collect the user's AD FS credentials.
The appearance of the login page varies based on organizational customizations. "GUI Login"
on page 78 describes login steps.
2 Enter the signing credential in the Service Provider Signing Credential field. This credential
must match the signature verification certificate configured for the relying party in the IdP.
Enable Encrypt Assertion
SSO Session Timeout
When SAML2 SSO is enabled, Purity//FA GUI session timeouts are based on AD FS timeouts.
By default, an AD FS SSO session times out after eight hours.
Optionally see the following Microsoft articles for information on customizing the timeout setting.
l AD FS Single Sign-On Settings
l Set-AdfsProperties -- discusses the PowerShell cmdlets Get-AdfsProperties and
Set-AdfsProperties and the SsoLifetime property.
For instructions on how to enable TLS 1.2/1.3 and strong authentication in AD FS, see the
sections Enable and Disable TLS 1.2 and Enabling Strong Authentication for .NET Applications
in the Microsoft article Managing SSL/TLS Protocols and Cipher Suites for AD FS.
Runtime Notes
l If a user is deactivated in AD FS but is currently logged in, the current login session is
not affected. The user is denied access at the next login attempt.
l In case the SAML2 SSO service is temporarily unavailable, an array administrator
(such as pureuser) can access the array through the Local Access link on the login
page. This link provides emergency administrator access when GUI logins are
unavailable.
l By default, an SSO session times out after eight hours. A different timeout length can
be configured in the identity provider. See "SSO Session Timeout" on page 330.
l When a user logs in through SAML2 SSO, the browser URL field reports its URL
based on what is configured in the Array URL field, regardless of whether the user
entered the array hostname or FQDN when directing the browser to the array.
Limitations
The following considerations apply to this release:
l Only one SAML2 SSO configuration can be created at a time on an array.
l Only one AD FS identity provider instance is supported with an array.
l SSO authentication applies only to GUI logins. SSH logins continue to use their
existing password authentication or other authentication mechanism, including LDAP
authentication or multi-factor authentication with RSA SecurID® software.
SAML2 SSO Troubleshooting
This section lists common error messages and suggestions to resolve the error.
After making any configuration change, rerun the end-to-end configuration test from the SAML2
SSO pane under Settings > Access (see "Perform the SAML2 SSO End-to-end Test" on
page 324).
End-to-end Test Not Completed
Assertion is Missing a Subject
No User Group Information is Provided
Failed to Load Metadata from Metadata URL
No Assertions Found in Response
Failed to Read Signing Credential
Failed to Read Decryption Credential
To recover:
1 Expand the Error details link in the AD FS login page, as shown below, or go to the IdP for
more information.
To recover:
1 Ensure that a claim rule for Name ID is configured in AD FS. See "Configure the Active Dir-
ectory Federation Services IdP" on page 322.
2 After changing the IdP configuration, rerun the end-to-end test. Check the test results and
examine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.
To recover:
1 Ensure that a claim rule for Group is configured in AD FS. See "Configure the Active Dir-
ectory Federation Services IdP" on page 322.
2 After changing the IdP configuration, rerun the end-to-end test. Check the test results and
examine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.
To recover:
1 Confirm that the Metadata URL field in the Purity//FA SAML2 SSO pane matches the
Metadata URL under AD FS/Service/Endpoints.
2 After changing the configuration, rerun the end-to-end test. Check the test results and exam-
ine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.
To recover:
1 If the Sign Request feature is not required, disable Sign Request in the Purity//FA SAML2
SSO pane. Then save the configuration and run the end-to-end test.
2 If the Sign Request feature is required:
a Enable Sign Request in the Purity//FA SAML2 SSO pane and save the configuration.
b Ensure the signature verification certificates on the IdP are correct and not expired.
c If the encryption certificate is configured on the IdP, ensure that certificate is correct and
not expired.
d Rerun the end-to-end test. Check the test results and examine any error messages.
Make configuration changes if necessary. Repeat until the end-to-end test passes.
To recover:
1 Ensure that the signing credential exists on the IdP.
2 After an IdP configuration change, rerun the end-to-end test. Check the test results and
examine any error messages. Make additional configuration changes if necessary.
3 Repeat until the end-to-end test passes.
To recover:
1 Ensure that the decryption credential exists on the IdP.
2 After an IdP configuration change, rerun the end-to-end test. Check the test results and
examine any error messages. Make additional configuration changes if necessary.
3 Repeat until the end-to-end test passes.
To recover:
1 Ensure the verification certificate (primary token-signing certificate) on the IdP has not
expired.
2 Ensure that the verification certificate is correctly entered in the Purity//FA SAML2 SSO
pane.
3 After a configuration change, rerun the end-to-end test. Check the test results and examine
any error messages. Make additional configuration changes if necessary.
4 Repeat until the end-to-end test passes.
To recover:
1 Ensure that the IdP EntityID is correctly entered in the Purity//FA SAML2 SSO pane.
2 Rerun the end-to-end test. Check the test results and examine any error messages. Make
additional configuration changes if necessary.
Repeat until the end-to-end test passes.
To recover:
1 If the Encrypt Assertion feature is not required, disable Encrypt Assertion in the Purity//FA
SAML2 SSO pane and remove the encryption certificate on the IdP. Then save the con-
figuration and run the end-to-end test.
2 If the Encrypt Assertion feature is required:
a Enable Encrypt Assertion in the Purity//FA SAML2 SSO pane and save the con-
figuration.
b Ensure the encryption certificate on the IdP is correct and not expired.
c Save the configuration and rerun the end-to-end test. Check the test results and exam-
ine any error messages. Make additional configuration changes if necessary.
Repeat until the end-to-end test passes.
To recover:
1 For the AD FS credentials entered on the AD FS login page, ensure that the user is a member
of only one valid directory service group that is mapped to a role.
2 After correcting the user account in the directory service, have the user retry the login.
To recover:
1 For the AD FS credentials entered on the AD FS login page, ensure that the user is added to
a valid directory service group that is mapped to a role.
2 After correcting the user account in the directory service, have the user retry the login.
To recover:
This error is seen when the AD FS server is not available or when there is an issue with the
SAML2 SSO configuration.
To recover:
1 Confirm with the AD FS administrator whether the AD FS server is operational and reachable
by the array.
2 Use the Local Access link to log into the array and rerun the end-to-end test.
3 Check the test results and resolve any configuration issues.
4 Repeat the end-to-end test and configuration changes until the test passes.
Purity//FA also offers multi-factor authentication (MFA) with SAML2 Single Sign-On (SSO); see
"Multi-factor Authentication with SAML2 SSO" on page 316. When SAML2 SSO authentication
is configured for the array, MFA is an option available through the identity provider, such as the
Microsoft® Active Directory Federation Services (AD FS) identity management system.
l Logging in to the Purity//FA GUI or Purity//FA CLI. This includes remote logins to the
Purity//FA CLI via SSH.
l Logging out of the Purity//FA GUI or Purity//FA CLI
l Opening a Pure Storage REST API session
l Pure Storage REST API session timeouts
Authentication actions include:
l Generating an API token through the REST API
l Submitting a REST API request in a closed REST session
l Attempting to log in to the Purity//FA GUI or Purity//FA CLI using an invalid password
or multi-factor passcode
l Attempting to open a Pure Storage REST API session using an invalid API token
l Attempting to obtain a REST API token using an invalid user name and/or password
The Location column displays the IP address of the user client connecting to the array.
The Method column displays the authentication method by which the user attempted to log in,
log out, or authenticate. Authentication methods include API token, password, public
key, and saml2_sso. saml2_sso indicates a session authenticated by an identity provider
through SAML2 SSO.
By default, all user session events on the array are displayed. To display a list of user session
events that were performed within a certain time range, click the All Time drop-down button and
select the desired time range from the list.
In addition to the Sessions panel, user session messages are also logged and transmitted to
Pure Storage Technical Services via the phone home facility. If configured, Purity//FA can also
send user session messages as syslog messages to remote servers.
Example 1
The pureuser user logs in to the Purity//FA GUI with a valid password. See Figure 10-37.
Figure 10-37. User Session Logs – Login and Logout Example
Example 2
The root user logs in to the Purity//FA CLI with a valid public key and then logs out. See Figure
10-38.
Figure 10-38. User Session Logs - Login and Logout Example
Example 3
The pureuser user opens a REST API session with a valid API token, and then the session
times out. See Figure 10-39.
Figure 10-39. User Session Logs - Login and Logout Example
Example 4
A user logs into the Purity//FA GUI through SAML2 SSO. See Figure 10-40.
Authentication Events
Users can log into Purity//FA through various authentication methods, including passwords,
public keys, and API tokens.
Purity//FA creates a “failed authentication” event when a user performs any of the following
actions: log in to the Purity//FA GUI with an incorrect password, log in to the Purity//FA CLI with
an invalid password or public key, or open a REST API session with an invalid API token.
Purity//FA creates an “API token obtained” event when a user attempts to create an API token
via any of the Purity//FA interfaces.
Purity//FA creates a “request without session” event when a user attempts to submit a REST API
request as an unauthenticated user.
In the Sessions panel, repeated failed authentication attempts are grouped into pre-configured
time periods. By default, failed authentication attempts are grouped into 15-minute time periods.
The Repeat value represents the number of attempts, beyond the initial attempt, that a user
has performed for an authentication action within the 15-minute time period.
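The grouping described above can be sketched as follows. The exact windowing Purity//FA uses is not documented here, so this is a minimal illustration that assumes each window is anchored at its first failed attempt:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def repeat_counts(timestamps):
    """Group failed authentication attempts into 15-minute windows.

    Each window is anchored at its first attempt; the Repeat value is the
    number of attempts in the window beyond that initial attempt.
    """
    windows = []                       # list of (window_start, repeat_count)
    window_start = None
    for ts in sorted(timestamps):
        if window_start is None or ts - window_start >= WINDOW:
            window_start = ts          # a new window opens with this attempt
            windows.append((ts, 0))
        else:
            start, repeats = windows[-1]
            windows[-1] = (start, repeats + 1)
    return windows
```

For example, five failed logins within ten minutes would appear as one event with Repeat 4, while another attempt twenty minutes later would start a new window with Repeat 0.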
A local user account has the following attributes:
• Enabled: The user account can be enabled (true) or disabled (false). Enabling a user is only allowed when the password is set.
• Password: A password is only required when the user account is enabled. The password must be between 1 and 100 characters in length, and can include any character that can be entered from a US keyboard.
• UID: The unique user ID, set automatically or manually.
• SID: The security identifier of the user, set automatically.
• Email: An optional email address, used, for example, for quota notifications.
Creating a Local User
1 Select Settings > Access and click the File System button to display the File System section.
2 In the Local Users panel, click the plus icon in the upper-right corner of the panel, or click the
menu icon and select Create... The Create Local User window appears. Fill in the following
information:
• Name: Type the name of the new local user.
• Primary Group: Click and select one of the groups to act as the primary group of the user.
• Enabled: Toggle the button to enable (blue) the user. Note that if enabled, a password is required.
• Password: Type a password for the new user; only required when the user account is enabled.
• Confirm Password: Type the password again.
• Uid: Optionally, enter the user ID to override the automatically set UID.
• Email: Optionally, enter the email address for the user.
3 Click Create.
Managing a Local User
A local user can be managed as follows:
1 Select Settings > Access and click the File System button to display the File System section.
2 In the Local Users panel, click the menu icon and select one of the following operations:
• Edit... enables you to change the primary group of a user, enable or disable the user, set a new password, set an optional UID, or set the optional email address.
• Rename... to change the user name.
• Add to local group... to add the user to one or more secondary groups.
• Remove from local group... to remove the user from one or more secondary groups.
• Delete... to delete the local user.
3 Confirm the changes.
Deleting Local Users
1 Select Settings > Access and click the File System button to display the File System section.
2 In the Local Users panel:
• To delete one local user: Click the menu icon next to the user and select Delete...
• To delete multiple local users: Click the menu icon in the upper-right corner of the panel, select Delete..., and then select the users to delete.
3 Click Delete to confirm.
Chapter 10:Settings | Software
Software
The Software page manages software, apps, and third-party plug-ins associated with the array.
See Figure 10-42.
Figure 10-42. Settings – Software Page
Updates
The Updates panel displays a list of software updates. Software updates add or enhance Purity
features and functionality. Perform periodic software updates to get the most out of your Purity
system.
An interactive software upgrade process is supported with the puresw upgrade CLI command
but is not available in the GUI. This section describes the GUI's one-click non-interactive update.
The Auto Download toggle icon enables (blue) or disables (gray) the Auto Download feature. If
Auto Download is enabled, any software installation files that Pure Storage Technical Services
send to the array will be automatically downloaded and ready to install. If Auto Download is
disabled, the software installation files that Pure Storage Technical Services send to the array
will only be downloaded during the software update process. Auto Download is disabled by default.
Note that the Auto Download feature impacts software updates only. The Auto Download feature
does not impact Purity apps or third-party plug-ins.
A software version that is available for update will have one of the following statuses:
• available: A software update for this version is available, but the installation files have not been downloaded to the array. Instead, the files will be downloaded to the array during the installation process. When scheduling the software update, make sure to factor in enough time for the download process.
• downloaded: The installation files for this software version have been successfully downloaded to the array.
Click Install to start the software update process. As the software update process progresses,
the following statuses will appear:
• downloading: Purity is downloading the installation files to the array for this software version.
• installing: Purity is updating the software. Be prepared to be logged out of the software during the update process.
During the update process, you will be logged out of the software. After you have been logged
out, log back in to continue monitoring the process. The software update process is complete
when the software update no longer appears in the Updates panel.
If the software update fails, the software reverts to the previous version. If you encounter any
problems during the update process, contact Pure Storage Technical Services.
To enable or disable Auto Download:
• Click the Auto Download toggle button to enable (blue) automatic download. When the software update files are available, they will be automatically downloaded to the array.
• To disable Auto Download, click the Auto Download toggle button (gray).
vSphere Plugin
The Pure Storage Management Plugin for vSphere extends the vSphere Web Client, enabling
users to manage Pure Storage FlashArray volumes and snapshots in a vCenter context.
The vSphere Plugin panel displays the connection details for the vSphere Web Client. Once a
connection has been established, users can open Purity//FA GUI sessions via the vSphere Web
Client.
For more information about the vSphere plugin, refer to the Pure Storage Management Plugin
for vSphere User Guide on the Knowledge site at https://support.purestorage.com.
App Catalog
The Purity Run platform extends array functionality by integrating add-on services into the
Purity//FA operating system. Each service that runs on the platform is provided by an app.
The App Catalog panel displays a list of apps that are available to be installed on the array,
along with the following attributes for each app:
• Name: App name. The app name is pre-assigned and cannot be changed.
• Version: App version that is ready to be installed on the array.
• Status: Status of the app installation. Possible app statuses include:
Note: The App Catalog panel is not supported on Cloud Block Store.
Apps require CPU, memory, network, and storage resources. For this reason, apps are not
installed by default.
To install an app, click the menu icon next to the app and select Install. After an app has been
installed, it appears in the Installed Apps panel.
Installing an App
1 Select Settings > Software.
2 In the App Catalog panel, click the menu icon and select Install. The Install App dialog box
appears.
3 Click Install.
Installed Apps
The Installed Apps panel displays a list of apps that are installed on the array, along with the
following attributes for each app:
• Name: App name. The app name is pre-assigned and cannot be changed.
• Enabled: App enable/disable status. An app must be enabled so the array can reach the app service. Apps are disabled by default.
• Version: App version that is currently installed on the array.
• Status: App status. A status of healthy means the app is running. A status of unhealthy means the app is not running.
Various factors can contribute to an unhealthy app. In most cases, the unhealthy status is temporary, such as when the app is being restarted; upon successful restart, the app returns to healthy status. The app might also be unhealthy if, upon enabling the app, Purity//FA determines that there are insufficient resources to run it. An accompanying message appears in the Details column stating that there are insufficient resources to operate the app. Disable any apps that are currently not in use to free up resources, and then try to enable the app again.
If the app is in an unhealthy status for a longer than expected period of time, contact Pure Storage Technical Services.
• VNC Enabled: Indicates whether VNC access is enabled (true) or disabled (false) for each installed app. The default is false. When VNC Enabled is true, a port is open to allow VNC connections.
App Volumes
For each app that is installed, a boot volume is created. For some apps, a data volume is also
created. Boot and data volumes are known as app volumes.
Select Storage > Volumes to see a list of volumes, including app volumes.
Boot and data app volume names begin with a distinctive @ symbol. The naming convention for
app volumes is @APP_boot for boot volumes and @APP_data for data volumes, where APP
denotes the app name.
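As a minimal sketch, the naming convention can be expressed as follows (hypothetical helper functions, not a Purity//FA API):

```python
def app_volume_names(app):
    """Return the boot and data volume names for an app, following the
    @APP_boot / @APP_data naming convention."""
    return f"@{app}_boot", f"@{app}_data"

def app_volumes(volume_names):
    """Filter a volume listing down to app volumes, which are the names
    beginning with the distinctive @ symbol."""
    return [name for name in volume_names if name.startswith("@")]

boot, data = app_volume_names("linux")
# boot is "@linux_boot" and data is "@linux_data"
```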
App volumes are connected to their associated app host. For example, the linux boot and data
volumes are connected to the linux app host. From the list of volumes, click an app volume to
see its associated app host.
The boot volume represents a copy of the app's boot drive. Do not modify or save data to the
boot volume: when an app is upgraded, the boot volume is overwritten, completely destroying
its contents, including any data saved to it. The data volume is used by the app to store data.
The following example shows that the drives were correctly mounted inside the linux app.
pureuser@linux:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 8198768 0 8198768 0% /dev
tmpfs 1643272 8756 1634516 1% /run
/dev/sda1 15348720 1721392 12824616 12% /
/dev/sdb 17177782208 33608 17177748600 1% /data
Disk device /dev/sdb, which corresponds to the app data volume, is mounted on /data,
meaning the data will be saved to the data volume (and not the boot volume), and disk device
/dev/sda1, which corresponds to the app boot volume, is mounted on /.
App Hosts
Each app has a dedicated host, known as an app host. The app host is connected to the
associated boot and data volumes. The app host is also used to connect FlashArray volumes
to the app.
Select Storage > Hosts to see a list of hosts, including app hosts.
Unlike regular FlashArray hosts, app hosts cannot be deleted, renamed, or modified in any way.
Furthermore, app hosts cannot be added to host groups or protection groups.
App host names begin with a distinctive @ symbol. The naming convention for app hosts is
@APP, where APP denotes the app name.
App Interfaces
For each app that is installed, one app management interface is created per array management
interface. An app data interface may also be created for high-speed data transfers.
Select Settings > Network to view and configure app interfaces.
The naming convention for app interfaces is APP.datay for the app data interface and
APP.mgmty for the app management interface, where APP denotes the app name and y
denotes the interface number.
Configure an app interface to give pureuser the ability to log into the app or transfer data
through a separate interface. Configuring an app interface involves assigning an IP address to
the interface and then enabling the interface.
Optionally set the gateway. Note that only one of the app interfaces of a particular app can have
a gateway set.
Before you configure an app interface, make sure the corresponding external interface is
physically connected.
Configure one or more of the following app interfaces:
• App Management Interface
Configure the app management interface to give pureuser the ability to log into the app with the same Purity//FA login credentials. If a public key has been created for the user, it can be used to log into the app. Purity//FA password changes are automatically applied to the app.
To configure the app management interface, assign an IP address to one of the app management interfaces, and then enable the interface.
• App Data Interface
Configure the app data interface to use a separate interface for high-speed data transfers.
To configure the app data interface, assign an IP address to the app data interface, and then enable the interface.
Nodes of an App
A node of an app is a dedicated instance running the app. Some apps are made up of multiple
nodes. For easy identification, nodes are indexed starting at 0.
Uninstalling an App
1 Select Settings > Software.
2 In the Installed Apps panel, verify the app is disabled.
3 Click the menu icon and select Uninstall. The Uninstall App dialog box appears.
4 Click Uninstall.
Enabling an App
1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Enable.
Disabling an App
1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Disable.
Chapter 11:
Cloud Block Store
Pure Cloud Block Store™ is Pure's state-of-the-art software-defined storage solution running
Purity//FA and delivered natively in the cloud. Pure Cloud Block Store provides seamless data
mobility across on-premises and cloud environments with a consistent experience, regardless of
whether your data lives on premises, in the cloud, in a hybrid cloud, or in a multicloud environment.
To learn more about Pure Cloud Block Store, refer to the Knowledge site at Pure Cloud Block
Store.
The following information can be found on the Knowledge site:
• General design, use, and interoperability of Pure Cloud Block Store™.
• Requirements, procurement, and deployment of Pure Cloud Block Store™.
• Operations and capabilities of Pure Cloud Block Store™.
• General troubleshooting information.
Chapter 12:
FlashArray Storage Capacity and
Utilization
The discussion of array administrators in this chapter does not apply to administrators of
Evergreen//One™ subscription storage, as Pure Storage is responsible for managing the physical
capacity of arrays that supply subscription storage.
The two keys to FlashArray cost-effectiveness are highly efficient provisioning and data
reduction. One of an array administrator's primary tasks is understanding and managing physical
and virtual storage capacity. This chapter describes the ways in which physical storage and
virtual capacity are used and measured.
Chapter 12:FlashArray Storage Capacity and Utilization | Array Capacity and Storage Consumption
At any moment, an array's physical storage holds a combination of the following:
• Unique data. Reduced host-written data that is not duplicated elsewhere in the array, and descriptive metadata.
• Shared data. Deduplicated data: data that comprises the contents of two or more sector addresses in the same or different volumes (FlashArray deduplication is array-wide).
• Stale data. Overwritten or deleted data: data representing the contents of virtual sectors that have been overwritten or deleted by a host or by an array administrator. Such storage is deallocated and made available for future use by the continuous storage reclamation process, but because the process runs asynchronously in the background, deallocation is not immediate.
• Unallocated storage. Available for storing incoming data.
Chapter 12:FlashArray Storage Capacity and Utilization | Volume and Snapshot Storage Consumption
Effective used capacity (EUC), reflecting billable capacity, is displayed through the CLI
(purearray list --effective-used).
Provisioning
The provisioned size of a volume is its capacity as reported to hosts. As with conventional disks,
the size presented by a FlashArray volume is nominally fixed, although it can be increased or
decreased by an administrator. To optimize physical storage utilization, however, FlashArray
volumes are thin and micro provisioned.
• Thin provisioning. Like conventional arrays that support thin provisioning, FlashArrays do not allocate physical storage for volume sectors that no host has ever written, or for trimmed (expressly deallocated by host or array administrator command) sector addresses.
• Micro provisioning. Unlike conventional thin provisioning arrays, FlashArrays allocate only the exact amount of physical storage required by each host-written block.
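Thin and micro provisioning can be illustrated with a toy model (an illustration of the concept, not Purity//FA internals): physical space is consumed only for sectors a host has actually written, and trimming a sector releases its space.

```python
SECTOR = 512

class ThinVolume:
    """Toy model of a thin-provisioned volume: the provisioned size is what
    hosts see, but space is allocated only for host-written sectors."""

    def __init__(self, provisioned_sectors):
        self.provisioned_sectors = provisioned_sectors  # size reported to hosts
        self.blocks = {}                                # sector addr -> bytes

    def write(self, addr, data):
        self.blocks[addr] = data

    def trim(self, addr):
        # expressly deallocated sectors consume no space
        self.blocks.pop(addr, None)

    def read(self, addr):
        # never-written (or trimmed) sectors read back as zeros
        return self.blocks.get(addr, bytes(SECTOR))

    def allocated_bytes(self):
        return sum(len(b) for b in self.blocks.values())

vol = ThinVolume(provisioned_sectors=1 << 20)  # large size as seen by hosts
vol.write(7, b"x" * SECTOR)                    # only this sector consumes space
```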
Data Reduction
The second key to FlashArray cost-effectiveness is data reduction, which is the elimination of
redundant data through pattern elimination, duplicate elimination, and compression.
• Pattern elimination. When Purity//FA detects sequences of incoming sectors whose contents consist entirely of repeating patterns, it stores a description of the pattern and the sectors that contain it rather than the data itself. The software treats zero-filled sectors as if they had been trimmed: no space is allocated for them.
• Duplicate elimination. Purity//FA computes a hash value for each incoming sector and attempts to determine whether another sector with the same hash value is stored in the array. If so, the sector is read and compared with the incoming one to avoid the possibility of aliasing. Instead of storing the incoming sector redundantly, Purity//FA stores an additional reference to the single data representation. Purity//FA deduplicates data globally (across an entire array), so if an identical sector is stored in an array, it is a deduplication candidate, regardless of the volume(s) with which it is associated.
• Compression. Purity//FA attempts to compress the data in incoming sectors, cursorily upon entry, and more exhaustively during its continuous storage reclamation background process.
Purity//FA applies pattern elimination, duplicate elimination, and compression techniques to
data as it enters an array, as well as throughout the data's lifetime.
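A minimal sketch of the three techniques, assuming SHA-256 hashing and zlib compression purely for illustration (Purity//FA's actual algorithms are not described here):

```python
import hashlib
import zlib

SECTOR = 512

def reduce_sector(data, store):
    """Classify one incoming sector, storing new unique data compressed.

    `store` maps a content hash to a compressed payload; returns a tag
    describing how the sector was reduced.
    """
    if data == bytes(SECTOR):
        return ("trimmed", None)            # zero-filled: treated as trimmed
    if len(set(data)) == 1:
        return ("pattern", data[0])         # repeating pattern: metadata only
    digest = hashlib.sha256(data).hexdigest()
    if digest in store and zlib.decompress(store[digest]) == data:
        # duplicate: the byte-compare guards against hash aliasing,
        # and only a reference is recorded
        return ("ref", digest)
    store[digest] = zlib.compress(data)     # unique: compress before storing
    return ("stored", digest)

store = {}
tags = [reduce_sector(s, store) for s in
        (bytes(SECTOR),                     # zeros
         b"\xab" * SECTOR,                  # single-byte repeating pattern
         bytes(range(256)) * 2,             # unique data
         bytes(range(256)) * 2)]            # duplicate of the previous sector
# tags classify the four sectors as trimmed, pattern, stored, and ref;
# only one compressed copy is kept in the store
```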
See Figure 12-2 for a hypothetical example of the cumulative effect of FlashArray data reduction
on physical storage consumption.
In the example, hosts have written data to a total of 1,000 unique sector addresses. Reduction
proceeds through:
• Pattern elimination. 100 blocks contain repeated patterns, for which Purity//FA stores metadata descriptors rather than the actual data.
• Duplicate elimination. 200 blocks are duplicates of blocks already stored in the array; Purity//FA stores references to these rather than duplicating stored data.
• Compression. The remaining 70% of blocks compress to half their host-written size; Purity//FA compresses them before storing them, and again during continuous storage reclamation.
Therefore, the net physical storage consumed by host-written data in this example is 35% of the
number of unique volume sector addresses to which hosts have written data.
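The arithmetic behind the 35% figure can be checked directly:

```python
total_blocks = 1000
pattern_blocks = 100           # stored as metadata descriptors: ~0 space
duplicate_blocks = 200         # stored as references: ~0 space
remaining = total_blocks - pattern_blocks - duplicate_blocks  # 700 (70%)
stored = remaining * 0.5       # each remaining block compresses to half size
print(stored / total_blocks)   # 0.35, i.e., 35% of host-written capacity
```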
The data reduction example is hypothetical; each data set reduces differently, and unrelated
data stored in an array can influence reduction. Nevertheless, administrators can use the array
and volume measures reported by Purity//FA to estimate the amount of physical storage likely to
be consumed by data sets similar to those already stored in an array.
In Figure 12-3, two snapshots of a volume, S1 and S2, are taken at times t1 and t2 (t1 prior to t2).
If a host writes data to the volume after t1 but before t2, Purity//FA preserves the overwritten
sectors' original contents and associates them with S1 (i.e., space accounting charges them to S1).
If in the interval between t1 and t2 a host reads sectors from snapshot S1, Purity//FA delivers:
• For sectors not modified since t1, current sector contents associated with the volume.
• For sectors modified since t1, preserved volume sector contents associated with S1.
Similarly, if a host writes volume sectors after t2, Purity//FA preserves the overwritten sectors'
previous contents and associates them with S2 for space accounting purposes. If a host reads
sectors from S2, Purity//FA delivers:
• For sectors not modified since t2, current sector contents associated with the volume.
• For sectors modified since t2, preserved volume sector contents associated with S2.
If, however, a host reads sectors from S1 after t2, Purity//FA delivers:
• For sectors not modified since t1, current sector contents associated with the volume.
• For sectors modified between t1 and t2, preserved volume sector contents associated with S1.
• For sectors modified since t2, preserved volume sector contents associated with S2.
If S1 is destroyed, storage associated with it is reclaimed because there is no longer a need to
preserve pre-update content for updates made prior to t2.
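The preserve-on-write accounting described above can be sketched with a toy model (an illustration, not Purity//FA internals): a write charges the sector's prior contents to the most recent snapshot, and a snapshot read returns the oldest preservation made at or after that snapshot, falling back to the live volume contents.

```python
class SnapVolume:
    """Toy model of snapshot space accounting: overwritten sector contents
    are preserved with (charged to) the most recent snapshot."""

    def __init__(self):
        self.current = {}   # sector addr -> live contents
        self.snaps = []     # per-snapshot dict: addr -> preserved contents

    def snapshot(self):
        self.snaps.append({})
        return len(self.snaps) - 1       # S1 -> index 0, S2 -> index 1, ...

    def write(self, addr, data):
        if self.snaps:
            # charge the pre-update contents to the newest snapshot, but only
            # for the first overwrite since that snapshot was taken
            self.snaps[-1].setdefault(addr, self.current.get(addr))
        self.current[addr] = data

    def read_snapshot(self, snap, addr):
        # the oldest preservation at or after this snapshot wins; sectors
        # never modified since the snapshot read from the live volume
        for preserved in self.snaps[snap:]:
            if addr in preserved:
                return preserved[addr]
        return self.current.get(addr)
```

With S1 taken at t1 and S2 at t2, a sector overwritten between t1 and t2 reads from S1's preserved copy, while a sector first overwritten after t2 reads, from either snapshot, from the copy charged to S2.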
Chapter 12:FlashArray Storage Capacity and Utilization | FlashArray Data Lifecycle
Pure Storage, Inc.
Twitter: @purestorage
2555 Augustine Drive
Santa Clara, CA 95054
T: 650-290-6088
F: 650-625-9667
Sales: [email protected]
Support: [email protected]
Media: [email protected]
General: [email protected]