
Purity//FA

Administration Guide
Version 6.4.10
Copyright Statement
© 2023 Pure Storage, Inc. (“Pure”). Portworx and its associated trademarks can be found here, and its
virtual patent marking program can be found here. Third party names may be trademarks of their
respective owners.
The Pure Storage products and programs described in this documentation are distributed under
a license agreement restricting the use, copying, distribution, and decompilation/reverse engin-
eering of the products. No part of this documentation may be reproduced in any form by any
means without prior written authorization from Pure Storage, Inc. and its licensors, if any. Pure
Storage may make improvements and/or changes in the Pure Storage products and/or the pro-
grams described in this documentation at any time without notice.
THIS DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-
INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS
ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR
INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
Pure Storage, Inc. 2555 Augustine Drive, Santa Clara, CA 95054 http://www.purestorage.com
Direct comments to [email protected].
Version 1

Table of Contents
Chapter 1:About this Guide 20
What's New? 21
Organization of the Guide 24
A Note on Format and Content 25
Related Documentation 25
Contact Us 26
Documentation Feedback 26
Product Support 26
General Feedback 26
Chapter 2:FlashArray Concepts and Features 27
Arrays 27
Array Service Type 28
Connected Arrays 28
Hardware Components 29
Network Interface 30
Block Storage 32
Volumes 32
Volume Groups 33
Volume Snapshots 33
Volume Snapshots vs. Protection Group Snapshots 33
Eradication Delays 35
Eradication Delay Settings after an Upgrade 36
Extending or Decreasing an Eradication Pending Period 36
SafeMode 37
Automatic Protection Group Assignment for Volumes 38
SafeMode Status 39
Always-On Quality of Service 39
Hosts 39


Host Guidelines 41
Host Groups 41
Host Group Guidelines 42
Host-Volume Connections 42
Private Connections 43
Shared Connections 43
Breaking Private and Shared Connections 43
Logical Unit Number (LUN) Guidelines 44
Connection Guidelines 44
Connections 44
Protection Groups and Protection Group Snapshots 45
Protection Groups 45
Space Consumption Considerations 48
Protection Group Snapshots 48
File Storage 49
File Systems 50
Managed Directories 50
Exports 51
Auto Managed Policies 52
NFS Datastore 52
Local Users 52
NFSv3 and File Locking 53
NFS User Mapping 54
Directory Quotas 54
Snapshots 56
Previous Versions 56
Protection Plan 57
Hard Links and Symbolic Links (Symlinks) 59
Object Names 59
File and Directory Names 59


Virtual Interfaces 60
Authentication and Authorization 60
ACL and Mode_t Interoperability 60
Users and Security 61
Directory Service 61
Multi-factor Authentication 62
Multi-factor Authentication through SAML2 Single Sign-on 62
Multi-factor Authentication with RSA SecurID® Authentication 62
SSL Certificate 63
Industry Standards 63
Troubleshooting and Logging 64
Alerts 64
Audit Trail 65
User Session Logs 65
SNMP Agent and SNMP Managers 66
Remote Assist Facility 66
Event Logging Facility 67
Syslog Logging Facility 67
Chapter 3:Conventions 68
Object Names 68
Volume Sizes 69
IP Addresses 69
Storage Network Addresses 70
Chapter 4:GUI Overview 72
GUI Navigation 73
End User Agreement (EULA) 77
GUI Login 78
Logging in to the Purity//FA GUI 79
Logging in with Password Authentication 79
Logging in with SAML2 SSO Authentication 80


Logging in with RSA SecurID® Authentication 81


Accepting the Terms of the End User Agreement (EULA) 82
Chapter 5:Dashboard 83
Capacity 84
Purchased Arrays 84
Subscription Storage 85
Recent Alerts 86
Hardware Health 86
Performance Charts 87
Note About the Performance Charts 88
Chapter 6:Storage 90
Array 92
Hosts and Host Groups 93
Hosts 93
Host Groups 95
Creating Hosts 96
Creating Host Groups 97
Configuring Host Ports 99
Adding Hosts to Host Groups 100
Configuring CHAP Authentication 100
Configuring Host Personalities 101
Adding Preferred Arrays 101
Removing Preferred Arrays 102
Renaming a Host 102
Deleting a Host 102
Renaming a Host Group 103
Deleting a Host Group 103
Removing a Host from a Host Group 103
Removing a Host Port 104
Downloading Host Details 104


Downloading Host Group Details 104


Volumes 106
Volumes Overview 107
Storage Containers 107
Virtual Volumes 109
Volume Details 110
Volume Groups 112
Quality of Service Limits and DMM Priority Adjustments 113
Working with Volumes 115
Creating a Volume 115
Moving a Volume 118
Moving a Volume when SafeMode is Enabled 119
Renaming a Volume 119
Resizing a Volume 119
Copying a Volume 120
Downloading Volume Details 121
Configuring the Maximum QoS Bandwidth and IOPS Limits of a Volume 121
Configuring the Priority or Priority Adjustment of a Volume 121
Destroying and Eradicating Volumes 122
Destroying a Volume 122
Recovering a Destroyed Volume 123
Eradicating a Destroyed Volume 123
Working with Volume-Host Connections 123
Establishing Private Volume-Host Connections 123
Establishing Shared Volume-Host Group Connections 124
Breaking Volume-Host Connections 125
Breaking Volume-Host Group Connections 125
Working with Volume Snapshots 126
Creating a Volume Snapshot 126
Restoring a Volume from a Volume Snapshot 127


Copying a Volume Snapshot 127


Renaming a Volume Snapshot Suffix 127
Destroying a Volume Snapshot 128
Recovering a Volume Snapshot 128
Eradicating a Volume Snapshot 128
Working with Volume Groups 129
Creating a Volume Group 129
Configuring the Maximum QoS Bandwidth and IOPS Limits of a Volume Group 130
Configuring the DMM Priority Adjustment for a Volume Group 131
Renaming a Volume Group 131
Destroying and Eradicating Volume Groups 132
Destroying a Volume Group 132
Recovering a Destroyed Volume Group 132
Eradicating a Destroyed Volume Group 132
Pods 133
Configuring Failover Preference 139
Automatic Default Protection for Volumes in a Pod 139
ActiveDR Replication 140
Key Features 140
Setting Up ActiveDR Replication 141
Connecting the Source and Target FlashArrays 142
Setting Up a Source Pod 142
Setting Up a Pod on the Target FlashArray 143
Demoting the Pod on the Target FlashArray 143
Adding Data to a Pod on a Source FlashArray 144
Creating a Replica Link to Initiate ActiveDR Replication 144
Managing Replica Links 145
Promotion Status of a Pod 146
Demoting Pods 148
Promoting Pods 148


Replica Links 149


Replica-Link Status 149
Lag and Recovery Point 150
Bandwidth Requirements 151
Unlink Operation 151
Displaying Replica Links 151
Displaying the Lag and Bandwidth Details of Replica Links 152
Performing a Failover for Fast Recovery 153
Failover Preparation 155
Performing a Reprotect Process after a Failover 155
Performing a Failback Process after a Failover 158
Performing a Planned Failover 159
Recovery Strategies for Planned Failovers 160
Performing a Test Recovery Process 161
File Systems 163
Creating a File System 164
Renaming a File System 164
Destroying a File System 164
Creating a Directory 165
Renaming a Directory 166
Creating a File Export 166
Adding a Policy 167
Creating a Directory Snapshot 167
Directory Details 168
Policies 169
Creating an Export Policy 169
Adding Rules to an Export Policy 170
Creating a File Export by Member 173
Creating a Quota Limit 173
Modifying a Quota 174


Editing a Policy 175


Enabling or Disabling a Policy 175
Enabling SMB Access Based Enumeration 176
Changing NFS Version 176
Renaming a Policy 176
Deleting an Export Policy 177
Storage Policy Based Management 177
Chapter 7:Protection 178
Array 179
Offload Targets 182
Connecting Arrays 188
Configuring Network Bandwidth Throttling 189
Getting the Array Connection Key 189
Disconnecting Arrays 190
Displaying Offload Targets Connected to the Array 190
Displaying Protection Group and Volume Snapshot Details for an Offload Target 190
Connecting the Array to an Azure Blob Container 191
Connecting the Array to an NFS Offload Target 192
Connecting the Array to an S3 Bucket 192
Disconnecting the Array from an Offload Target 193
Restoring a Volume Snapshot from an Offload Target to the Array 193
Destroying an Offloaded Protection Group Snapshot 194
Recovering a Destroyed Offloaded Protection Group Snapshot 194
Eradicating a Destroyed Offloaded Protection Group Snapshot 195
Default Protection for Volumes 195
Customizing a Default Protection Group List 196
Disabling Default Protection 197
Snapshots 197
Destroying a Snapshot 198
Recovering a Snapshot 199


Eradicating a Snapshot 199


Download a CSV File 200
Copy a Volume Snapshot 200
Policies 201
Creating a Snapshot Policy 201
Setting Policy Members and Rules 202
Enabling or Disabling a Snapshot Policy 204
Renaming a Snapshot Policy 204
Deleting a Policy 204
Removing a Member 205
Removing a Rule 205
Protection Groups 205
Default Protection Groups 209
Members 210
Targets 210
Source Arrays 211
Protection Group Snapshots 211
On-Demand Snapshots 211
Snapshot and Replication Schedules 212
Snapshot Schedule 213
Replication Schedule 214
Protection Group Configuration 216
Snapshot Schedule Configuration 216
Create and Configure the Protection Group 216
Set the Snapshot and Retention Schedule 216
Replication Schedule Configuration 217
Connect the Source and Targets 217
Create and Configure the Protection Group 217
Set the Replication and Retention Schedule 218
SafeMode 218


Creating a Protection Group 219


Adding a Member (volume, host, or host group) to a Protection Group 219
Adding a Target to a Protection Group 220
Configuring the Snapshot and Retention Schedule for a Protection Group 220
Configuring the Replication and Retention Schedule for a Protection Group 221
Enabling the Snapshot and Replication Schedules 222
Generating an On-demand Snapshot 222
Disabling the Snapshot and Asynchronous Replication Schedules 223
Copying a Snapshot 223
Renaming a Protection Group 224
Destroying a Protection Group 224
Recovering a Destroyed Protection Group 225
Eradicating a Destroyed Protection Group 225
Allowing Protection Group Replication 226
Disallowing Protection Group Replication 226
Enabling SafeMode 227
ActiveDR 228
Creating a replica link 228
ActiveCluster 229
Chapter 8:Analysis 231
Performance 233
Note about the Performance Charts 236
Exporting Array-Wide Performance Metrics 236
Capacity 237
Exporting Array-Wide Capacity Metrics 240
Replication 240
Exporting Array-Wide Replication Metrics 242
Replication Bandwidth 242
Viewing Replication Bandwidth 245
Viewing Replication Bandwidth in Graphical Representations 245


Chapter 9:Health 247


Hardware 247
FlashArray Hardware Components 249
Hardware Components in FlashArray//XL, FlashArray//X, and FlashArray//M 249
Capacity Upgrade and Drive Admission 251
Upgrading Array Capacity 251
Alerts 252
Flagging an Alert Message 254
Clearing an Alert Flag 255
Connections 255
Viewing Host Connection Details 257
Viewing Array Port Details 257
Network 259
Viewing Network Statistics 260
Viewing Network Statistics in Graphical Representations 261
Chapter 10:Settings 263
System 263
Array Name 265
Renaming the Array 265
Alert Watchers 265
Adding an alert watcher 266
Enabling and Disabling an Alert Watcher 266
Deleting an Alert Watcher 266
Alert Routing 266
Relay Host 267
Configuring the SMTP Relay Host 267
Deleting the SMTP Relay Host 268
Sender Domain 268
Configuring the Sender Domain 268
UI 269


Login Banner 269


Creating a Banner Message 269
GUI Idle Timeout 269
Setting the Idle Timeout Value 269
Disabling the Idle Timeout Setting 270
Syslog Servers 270
Setting the Syslog Server Output Location 272
SMI-S 273
Array Time 273
Time 273
NTP Servers 273
Designating an Alternate NTP server 273
Cloud Features 274
Single Sign-On 274
Pure1 Support 274
Phone Home 274
Enabling and disabling phone home 275
Manual Phone Home 275
Sending Phone Home Logs to Pure Storage Technical Services 275
Remote Assist 276
Opening and Closing a Remote Assistance (RA) Session 276
Support Logs 276
Downloading Support Logs 277
Event Logs 277
Downloading Event Logs 277
Proxy Server 278
Configuring the Proxy Host 278
Deleting the Proxy Host 278
SSL Certificate 278
Self-Signed Certificate 279


CA-Signed Certificate 279


Certificate Administration 281
Creating or Changing the Attributes of a Self-Signed Certificate 281
Constructing a Certificate Signing Request to Obtain a CA Certificate 282
Importing a CA Certificate 283
Viewing and Exporting Certificate Details 284
Maintenance Windows 284
Initiating a Maintenance Window 284
Eradication Delay Settings 285
Changing an Eradication Delay Setting 286
Rapid Data Locking 286
SNMP 287
Downloading the Management Information Base (MIB) File 289
Specifying the SNMP Community String (Applies to SNMPv2c Only) 289
Creating an SNMP Manager Object 289
Configuring the SNMP Manager Object 291
Deleting an SNMP Manager Object 292
Sending a Test SNMP Message to a Manager 292
Network 293
Fibre Channel 294
Ethernet 294
Subnets 294
VLAN Tagging 295
Networking – Creating a Subnet with VLAN Interfaces 296
LACP 296
Changing the Attributes of a Network Interface 297
Enabling or Disabling a Network Interface 299
Creating a Subnet 299
Enabling or Disabling a Subnet 300
Deleting a Subnet 300


DNS Settings 301


Configuring Domain Name System (DNS) Server IP Addresses 301
Access 302
Array Accounts 303
Users Panel 303
Creating a User 305
Changing the Login Password of a User 306
Changing the Role of a User 306
Deleting a User 306
Adding a Public Key 306
Updating a Public Key 307
Deleting a Public Key 307
Creating an API Token 307
Recreating an API Token 308
Removing an API Token 308
Displaying the Details of an API Token 308
API Clients 308
Active Directory Accounts 308
Directory Service 309
Users 310
Groups 311
Role-Based Access Control 312
Directory Service Configuration 313
Configuring the Directory Service 313
Configuring the CA Certificate 315
Configuring the Directory Service Roles 315
Testing the Directory Service Settings 316
Multi-factor Authentication with SAML2 SSO 316
Overview 316
Prerequisites 316


SAML2 SSO Configuration 317


Configuration Notes 317
Group to Role Mapping 318
Group to Role Mapping on Purity//FA 318
Group to Role Mapping on IdPs 318
Typical Configuration Run 318
Configure the Directory Service 318
Configure SAML2 SSO in Purity//FA 319
Configure the Active Directory Federation Services IdP 322
Perform the SAML2 SSO End-to-end Test 324
Optionally Enable Multi-factor Authentication 327
Enable SSO Authentication 327
Other Configuration Steps 329
Enable Sign Request 329
Enable Encrypt Assertion 330
SSO Session Timeout 330
TLS 1.2 or 1.3 Support 330
Runtime Notes 332
Limitations 332
SAML2 SSO Troubleshooting 332
Multi-factor Authentication with RSA 343
Audit and Session Logs 344
Audit Trail 344
Session Log 344
Login and Logout Events 345
Login and Logout Event Examples 346
Example 1 346
Example 2 346
Example 3 346
Example 4 346


Authentication Events 347


File System Local Users and Groups 347
Local Users Panel 348
Creating a Local User 349
Managing a Local User 349
Deleting Local Users 350
Local Groups Panel 350
Creating a Local Group 351
Modifying a Local Group 351
Deleting Local Groups 351
Software 352
Updates 352
Enabling and Disabling Auto Download 353
Performing a Software Update 354
vSphere Plugin 354
App Catalog 354
Installing an App 355
Installed Apps 355
App Volumes 356
App Hosts 357
Connecting FlashArray Volumes to an App 357
App Interfaces 358
VNC Access for Apps 359
Nodes of an App 359
Uninstalling an App 359
Enabling an App 359
Disabling an App 359
Enabling VNC access for an app 360
Disabling VNC access for an app 360
Displaying the Node Details of an App 360


Establishing Connections Between FlashArray Volumes and Apps 360


Chapter 11:Cloud Block Store 361
Chapter 12:FlashArray Storage Capacity and Utilization 362
Array Capacity and Storage Consumption 362
Physical Storage States 362
Reporting Array Capacity and Storage Consumption 363
Volume and Snapshot Storage Consumption 364
Provisioning 364
Data Reduction 365
Snapshots and Physical Storage 366
Reporting Volume and Snapshot Storage Consumption 368
FlashArray Data Lifecycle 369

Chapter 1:
About this Guide
The Pure Storage® FlashArray User Guide is written for array administrators who view and man-
age the Pure Storage FlashArray storage system.
FlashArrays are administered through the Purity for FlashArray (Purity//FA) graphical user inter-
face (GUI) or command line interface (CLI). Users should be familiar with system, storage, and
networking concepts, and have a working knowledge of Windows or UNIX.


What's New?
The Purity//FA 6.4.x release line introduces new features and enhancements. The following have
been implemented in 6.4.x releases.

6.4.10:
l Adds SafeMode™ Default Protection to the vVol SPBM interface. vVol Storage
Policy Based Management (SPBM) introduces a new capability "Default Protection"
that allows users to influence the placement of volumes in default protection groups
upon creation. To learn more on virtual volumes, including configuration steps, refer
to the Pure Storage vSphere Web Client Plugin for vSphere User Guide on the Know-
ledge Base site at https://support.purestorage.com.
l NFS v4.1 for file services. FlashArray file services now support version 4.1 of the
NFS protocol, which introduces interoperability and ease-of-use improvements over
version 3 of NFS, already available to many FlashArray users.
l Capacity metrics for subscription storage. For Evergreen//One customers, capacity
metrics are now based on effective used capacity, a metric closer to host-written capa-
city, in line with the storage consumption billing model. With Evergreen//One™ sub-
scription-based storage, “Evergreen//One” appears in the top left corner of the
Purity//FA GUI and in the purearray list --service CLI command output.
Introduces the concept of array service type to distinguish subscription storage from
purchased arrays. The purearray list --service CLI command returns
FlashArray on a purchased array and returns Evergreen//One on subscription
storage.
l New SafeMode eradication pending period. Introduces a separate eradication
pending period with a default of 8 days for array objects protected by SafeMode, in
addition to the eradication pending period with a default of one day for other array
objects. See "Eradication Delays" on page 35.
6.4.9:
l Introduces new CLI options to prevent scheduled protection group snapshot creation
from overloading the array. Adds support for throttling of snapshots to lessen the
impact on array performance. Use the --allow-throttle and --dry-run options
with the purepgroup snap and purevol snap commands during manual snapshot
creation (see the example after this list).


l Offload and ransomware protection in the vVol SPBM interface. SPBM introduces
a new SPBM placement capability “Offload” that allows VMware users to use offload
targets to store their snapshots. Additionally, a new SPBM placement capability
"Ransomware Protection” has been added. To learn more on virtual volumes, includ-
ing configuration steps, refer to the Pure Storage vSphere Web Client Plugin for
vSphere User Guide on the Knowledge Base site at https://support.purestorage.com.
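For example, a minimal CLI sketch of manual snapshot creation with throttling allowed (the
protection group and volume names are illustrative; refer to the Purity//FA CLI Reference Guide
for the exact syntax):
    # Take an on-demand protection group snapshot, allowing Purity//FA to throttle it if the array is busy
    purepgroup snap --allow-throttle pgroup01
    # Preview the operation first (assumed behavior of --dry-run) before creating the volume snapshot
    purevol snap --allow-throttle --dry-run vol01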
6.4.7:
l GUI support for FA File local users. Adds support for adding, removing, and renam-
ing local users and groups through the Purity//FA GUI. Customers using Purity//FA
File services can create and manage local users and groups in the Local Users and
Local Groups panes in the Settings > Access > File System tab.
6.4.5:
l NFS datastore for VMware vSphere. The FlashArray File can now serve as a
VMware NFS datastore, using vSphere 7.0 or later. NFS datastores can be created
using NFS exports on FlashArray through the NFS version 3 protocol. To get started,
refer to the VMware NFS Datastores on FlashArray Quick Start Guide on the Know-
ledge site at https://support.purestorage.com, or the purepolicy command in the
Purity//FA CLI Reference Guide.
l Auto managed policies (autodir) enable use cases such as NFS datastores, for
example, per-VM manageability when using FlashArray File as a VMware NFS
datastore. With the new autodir policy type, subdirectories created by the host
application below a managed directory automatically appear as managed directories.
Use the CLI purepolicy autodir command to manage autodir policies. For more
information, refer to the Purity//FA CLI Reference Guide.
6.4.4:
l Moving a volume when SafeMode is enabled. A volume can now be moved to a
different pod if it is moved to a protection group with at least equal SafeMode protections. Two new
fields in the Move Volume dialog, "Remove from Protection Group" and "Add to Pro-
tection Group", are used for the current protection group and the destination pro-
tection group, respectively. These moves do not require the involvement of Pure
Storage Technical Services.
l File system, network, and protocol GUI enhancements. Multiple GUI enhancements
provide Purity//FA users access to features in the areas of file systems, networking,
and protocols. NFS policies are extended with selectable user mapping for use
with or without LDAP/Kerberos environments. Enhancements include modification of
existing quota rules, options for managing virtual interfaces, VLAN tagging, DNS
configurations, and Active Directory accounts.
l Pod Quota Limits via CLI and Volume Move UX Improvements via CLI and GUI
added.
l Custom alert rules. Customers can now create custom alert rules with specific para-
meters using the purealert rule command. For more information, refer to the Pur-
ity//FA CLI Reference Guide.
6.4.2:
l Network Lock Manager (NLM) and Network Status Monitor (NSM) for file
services. The NLM and NSM protocols work with the NFS version 3 protocol to
enable file locking on NFSv3 exports and to prevent loss of locks during client/server
restarts. These protocols are enabled by default with the NFS file service and allow all
clients mounting the same NFS shared file system to see file locks set by other cli-
ents. The puredir lock nlm-reclamation create command can be used to
release all NLM locks for the array.
l File snapshot retention up to five years. FlashArray supporting file services now sup-
ports snapshot retention for up to five years. For more information, see "Snapshots"
on page 56, "Creating a Directory Snapshot" on page 167, "Setting Policy Members
and Rules" on page 202, or puredir snapshot and purepolicy snapshot
rule in the Purity//FA CLI Reference Guide.
6.4.1:
l Local Users for File. FlashArray supporting file services now have support for local
users, which allows a locally stored directory of users and groups, internal to the
array, in place of an external authentication solution such as Active Directory or
LDAP. Clients connect to the FlashArray File domain through SMB or NFS protocols
and authenticate with the respective credentials. For more information, see "Local
Users" on page 52, and pureds local in the Purity//FA CLI Reference Guide.
l Multiple VASA storage containers. Purity//FA now enables multi-tenancy in vSphere
environments through VASA storage containers in pods. Array administrators can
deploy these storage containers in the Volumes pane under Storage > Volumes or
Storage > Pods or with the purevol create --protocol-endpoint CLI com-
mand.
6.4.0:


l Automatic SafeMode protection for volumes on new arrays. When Purity//FA 6.4.0
is installed on a new array, by default all newly created and copied volumes auto-
matically become members of a ratcheted protection group. Purity//FA automatically
creates a protection group at the root of the array and in each new pod. Default pro-
tection groups are managed through the GUI (Protection > Array > Default Protection
pane). For more information, see "Default Protection for Volumes" on page 195.
l ActiveWorkload. Multiple Synchronous Connections supports up to five synchronous
connections between Purity//FA arrays to support a hub-and-spoke topology for stretched
pods. For more information, see "SafeMode" in "Pods" on page 133.
l Security Patches Mechanism. FlashArray users are now able to install critical secur-
ity patches on their arrays without assistance from Pure Storage Technical Services.
Pure1 users can install critical security patches through Pure1 Edge Service. For more
information, see puresw in the Purity//FA CLI Reference Guide.

Organization of the Guide


The guide is organized into the following major sections:
FlashArray Overviews and Concepts
Provides a brief introduction to FlashArray hardware, networking, and storage com-
ponents and describes how they are managed in Purity//FA.
Using the GUI to Administer a FlashArray
Describes the use of the browser-based Purity//FA graphical user interface (GUI) to
administer, configure, and monitor FlashArrays, both physical arrays and Cloud Block
Store arrays. These chapters describe configuring volumes, hosts, file systems, rep-
lication, snapshots, network connections, users, access, apps, and auditing, as well as
monitoring array performance, capacity, latency, replication, and array component
health.
FlashArray Storage Capacity and Utilization
Describes how FlashArray physical storage and virtual capacity are used and measured.
These concepts help an administrator understand FlashArray's highly efficient pro-
visioning and data reduction.


A Note on Format and Content


FlashArray technology is evolving rapidly. As with all advanced information technologies, once
basic architecture is in place, implementation develops at different rates in its different facets,
each preceded by feature-by-feature detailed design.
This edition of the guide describes the properties and behavior of arrays that run the Purity//FA
6.4.10 release. It may include information about planned capabilities whose external form has
been specified at the time of the release. Such material is included to provide users of this
release with information for design planning purposes, and is subject to change as new func-
tionality is implemented. Material relating to not-yet-implemented functionality is identified as
such in the text.

Related Documentation
Refer to the following related guides to learn more about the FlashArray:
l Purity//FA CLI Reference Guide. The Purity//FA command line interface (CLI) is a
non-graphical, command-driven interface used to query and administer the FlashAr-
ray. The Purity//FA CLI is comprised of built-in commands specific to the Purity//FA
operating environment. Refer to the Purity//FA CLI Reference Guide for a description
of the CLI and a detailed description of each command.
l Pure Storage REST API Guide. The Pure Storage REpresentational State Transfer
(REST) API uses HTTP requests to interact with the FlashArray resources. The Pure
Storage REST API Guide provides an overview of the REST API and a list of all avail-
able resources.
l Pure Storage SMI-S Provider Guide. Purity//FA includes the Pure Storage Storage
Management Initiative Specification (SMI-S) provider, which allows FlashArray admin-
istrators to manage the array using an SMI-S client over HTTPS. The Pure Storage
SMI-S Provider Guide describes functionality the provider supports and information
on connecting to the provider.
l Third-party plugin guides. Pure Storage packages and plug-ins extend the func-
tionality of the FlashArray. Available packages and plug-ins include, but are not lim-
ited to, VSS Hardware Provider, FlashArray OpenStack Cinder Volume Driver,
FlashArray Storage Replication Adapter, Management Plugin for vSphere, and vReal-
ize Operations Management Pack.
l Pure1 Manage User Guide. Pure1 Manage is an integrated cloud-based, mobile-
friendly platform that lets you monitor and manage your Pure Storage arrays from any-
where with just a web browser. Pure1 Manage provides full-stack monitoring with
visual summaries of array conditions, predictive analysis, capacity and workload plan-
ning, and support cases. Pure1 Manage includes the Pure1 Digital Marketplace,
where you can directly purchase, manage, and renew Pure products and services.
All related guides are available on the Knowledge site at https://support.purestorage.com.

Contact Us
Pure Storage is always eager to hear from you.

Documentation Feedback
We welcome your feedback about Pure Storage documentation and encourage you to send
your questions and comments to <[email protected]>.

Product Support
If you are a registered Pure Storage user, log in to the Pure Storage Technical Services website
at https://support.purestorage.com to browse our knowledge base, view the status of your open
support cases, and view the details of past support cases.
You can also contact Pure Storage Technical Services at <[email protected]>.

General Feedback
For all other questions and comments about Pure Storage, including products, sales, service,
and just about anything that interests you about data storage, email <[email protected]>.

Chapter 2:
FlashArray Concepts and
Features
This chapter provides a brief introduction to the FlashArray hardware, networking, and storage
components and describes where they are managed in Purity//FA.

Purity//FA is the operating environment that manages the FlashArray. Purity//FA, which comes
bundled with the FlashArray, can be administered through a graphical user interface (Purity//FA
GUI) or command line interface (Purity//FA CLI).
The FlashArray can also be managed through the Pure Storage® REpresentational State Trans-
fer (REST) API, which uses HTTP requests to interact with resources within Pure Storage. For
more information about the Pure Storage REST API, refer to the Pure Storage REST API Refer-
ence Guide on the Knowledge site at https://support.purestorage.com.

Arrays
A FlashArray controller contains the processor and memory complex that runs the Purity//FA
software, buffers incoming data, and interfaces to storage shelves, other controllers, and hosts.
FlashArray controllers are stateless, meaning that all metadata related to the data stored in a
FlashArray is contained in storage-shelf storage. Therefore, it is possible to replace the con-
troller of an array at any time with no data loss.
The following are some array-specific tasks that can be performed through the Purity//FA GUI:
l Display array health through the Health > Hardware page.
l Monitor capacity, storage consumption, performance (latency, IOPS, bandwidth)
metrics, and replication through the Analysis page.
l Change the array name and other configuration settings through the Settings > Sys-
tem page.
The same tasks can also be performed through the CLI purearray command.
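For example, a minimal CLI sketch (the new array name is illustrative; refer to the Purity//FA CLI
Reference Guide for the full purearray syntax):
    # Display basic array information
    purearray list
    # Rename the array
    purearray rename array02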


Array Service Type


Two service types are available with FlashArray: the traditional purchased array and the
Evergreen//One™ subscription-based service. Evergreen//One is a storage-as-a-service plat-
form with a consumption-based subscription model. On Evergreen//One arrays, reported capa-
city metrics reflect storage consumption based on effective used capacity (EUC) rather than
array size.
Service type is reflected in the top left corner of the Purity//FA GUI. An Evergreen//One sub-
scription array has the Evergreen//One name and logo in the top left corner. An Evergreen//One
array is shown on the left and a purchased array on the right.
Figure 2-1. Evergreen//One Logo

Service type is also seen through the purearray list --service CLI command.
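For example, a minimal sketch:
    purearray list --service
On an Evergreen//One subscription array the command reports Evergreen//One as the service
type; on a purchased array it reports FlashArray.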

Connected Arrays
A connection must be established between two arrays in order for data transfer to occur.


For example, two arrays must be connected in order to perform asynchronous replications.
When two arrays are connected to replicate data from one array to another, the array where data
is being transferred from is called the source array, and the array where data is being transferred
to is called the target array.
As another example, two arrays must be connected to perform ActiveCluster replication or Act-
iveDR replication.
Arrays are connected using a connection key, which is supplied from one array and entered into
the other array.
For asynchronous replication, once two arrays are connected, optionally configure network
bandwidth throttling to set maximum threshold values for outbound traffic.
Connected arrays are managed through the GUI (Storage > Array) and CLI (purearray con-
nect command).
Network bandwidth throttling is configured through the GUI (Storage > Array) and CLI (pur-
earray throttle command).
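For example, a minimal CLI sketch of connecting to a replication target and limiting its outbound
bandwidth (the address, connection key placeholder, option names, and limit value are
illustrative assumptions; refer to the Purity//FA CLI Reference Guide for the exact purearray
connect and purearray throttle syntax):
    # Connect this array to a target array, supplying the connection key obtained from the target
    purearray connect --management-address 203.0.113.10 --connection-key <key-from-target>
    # Optionally throttle outbound replication traffic to the connected array
    purearray throttle setattr --default-limit 100M target-array01
    # Verify the connection
    purearray list --connect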

Hardware Components
Purity//FA displays the operational status of most FlashArray hardware components. The dis-
play is primarily useful for diagnosing hardware-related problems.
Status information for each component includes the functioning status, index numbers, speed at
which a component is operating, and reported temperature.
In addition to general hardware component operational status, Purity//FA also displays status
information for each flash module and NVRAM module on the array. Status information includes
module status, physical storage capacity, module health, and time at which a module became
non-responsive.
FlashArray hardware names are fixed. When they are powered on, FlashArray controllers and
storage shelves automatically discover each other and self-configure to optimize I/O per-
formance, data integrity, availability, and fault recoverability, all without administrator inter-
vention.
Purity//FA visually identifies certain hardware components through LED lights and numbers.
Controllers, flash module bays, NVRAM bays, and storage shelves contain LED lights that can
be turned on and off with Purity//FA. Furthermore, storage shelves contain LED integers to
uniquely identify shelves in multi-shelf arrays.


Hardware components are displayed and administered through the GUI (Health > Hardware)
and CLI (purehw command).
Flash modules and NVRAM modules are displayed through the GUI (Health > Hardware) and
CLI (puredrive command).
Each hardware component in a FlashArray has a unique name that identifies its location in the
array for service purposes.
The hardware component names are used throughout Purity//FA, for instance in the GUI Health
> Hardware page, and with CLI commands such as puredrive and purehw.

Network Interface
View and configure network interface, subnet, and DNS attributes through Purity//FA.
The Purity//FA network interfaces manage the bond, Ethernet, virtual, and VLAN interfaces used
to connect the array to an administrative network. See Figure 2-2.
Figure 2-2. Settings > Network


Each FlashArray controller is equipped with two Ethernet interfaces that connect to a data cen-
ter network for array administration.
A bond interface combines two or more similar Ethernet interfaces to form a single virtual "bon-
ded" interface with optional child devices. A bond interface provides higher data transfer rates,
load balancing, and link redundancy. A default bond interface, named replbond, is created
when Purity//FA starts for the first time.
Array administrators cannot create or delete bond interfaces. To create or delete a bond inter-
face, contact Pure Storage Technical Services.
Apply a service to an interface to specify the type of network traffic the device serves. Each
interface must have at least one service applied. Supported services include ds, file,
iscsi, management, nvme-roce, nvme-tcp, and replication. For example, apply the rep-
lication service to the replbond bond interface to channel all replication traffic through that
device.
View the network connection attributes, including interface, netmask, and gateway IP
addresses, maximum transmission units (MTUs), and the network services attached to each net-
work interface.
Enable or disable an interface through Purity//FA at any time. Disabling an interface while an
administrative session is being conducted causes the session to lose its SSH connection and no
longer be able to connect to the controller.
Configure the network connection attributes, including the interface, netmask, and gateway IP
addresses, and the MTU. Ethernet and bond interface IP addresses are set explicitly, along with
the corresponding netmasks. DHCP mode is not supported.
Manage the domain name system (DNS) domains that are configured for the array. Each DNS
domain can include up to three static DNS server IP addresses. DHCP mode is not supported.
Network interfaces and DNS settings are configured through the GUI (Settings > Network) and
CLI (purenetwork command for network interfaces, and puredns for DNS settings).
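For example, a minimal CLI sketch (the interface names, addresses, command forms, and option
names are illustrative assumptions; refer to the Purity//FA CLI Reference Guide for the exact
syntax):
    # Assign static addressing to an Ethernet interface and enable it
    purenetwork setattr ct0.eth2 --address 192.0.2.10 --netmask 255.255.255.0 --gateway 192.0.2.1
    purenetwork enable ct0.eth2
    # Configure the DNS domain and up to three static DNS server addresses
    puredns setattr --domain example.com --nameservers 192.0.2.53,192.0.2.54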

Note: Editing the following attributes is not supported on Cloud Block Store:
l Network interfaces, including bond, Ethernet, and VLAN interfaces
l Subnet netmasks
l DNS settings


Block Storage

Volumes
FlashArrays eliminate drive-oriented concepts such as RAID groups and spare drives that are
common with disk arrays. Purity//FA treats the entire storage capacity of all flash modules in an
array as a single homogeneous pool from which it allocates storage only when hosts write data
to volumes created by administrators. Therefore, creating a FlashArray volume only requires a
volume name, to be used in administrative operations and displays, and a provisioned size.
FlashArray volumes are virtual, so creating, renaming, resizing, and destroying a volume has no
meaning outside the array.
Create a single volume or multiple volumes at one time. Purity//FA administrative operations rely
on volume names, so they must be unique within an array.
Creating a volume creates persistent data structures in the array, but does not allocate any phys-
ical storage. Purity//FA allocates physical storage only when hosts write data. Volume creation
is therefore nearly instantaneous. Volumes do not consume physical storage until data is actu-
ally written to them, so volume creation has no immediate effect on an array's physical storage
consumption.
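For example, a minimal CLI sketch (the volume names and sizes are illustrative):
    # Create a single 1 TB volume
    purevol create --size 1T vol01
    # Create two 500 GB volumes in one command
    purevol create --size 500G vol02 vol03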
Rename a volume to change the name by which Purity//FA identifies the volume in admin-
istrative operations and displays. The new volume name is effective immediately and the old
name is no longer recognized in CLI, GUI, or REST interactions.
Resize an existing volume to change the virtual capacity of the volume as perceived by the
hosts. The volume size changes are immediately visible to connected hosts. If you decrease
(truncate) the volume size, Purity//FA automatically takes an undo snapshot of the volume. The
undo snapshot enters an eradication pending period, after which time the snapshot is destroyed.
During the eradication pending period, the undo snapshot can be viewed, recovered, or per-
manently eradicated through the Destroyed Volumes folder. Increasing the size of a truncated
volume does not restore any data that is lost when the volume was first truncated.
Eradication pending periods are configured in the Settings > System > Eradication Configuration
pane. See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.
Copy a volume to create a new volume or overwrite an existing one. After you copy a volume,
the source of the new or overwritten volume is set to the name of the originating volume.
Destroy a volume if it is no longer needed. When you destroy a volume, Purity//FA automatically
takes an undo snapshot of the volume. The undo snapshot enters an eradication pending
period. During the eradication pending period, the undo snapshot can be viewed, recovered, or
permanently eradicated through the Destroyed Volumes folder. Eradicating a volume com-
pletely obliterates the data within the volume, allowing Purity//FA to reclaim the storage space
occupied by the data. After the eradication pending period, the undo snapshot is completely
eradicated and can no longer be recovered.
Limits and priority adjustments can be set on volumes to reflect the relative importance of their
workloads. The bandwidth limit enforces the maximum allowable throughput and the IOPS limit
enforces the maximum I/O operations processed per second. A priority adjustment increases or
decreases the performance priority of a volume relative to other volumes, when supported by
the FlashArray hardware.
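For example, a minimal CLI sketch of setting volume limits (the values and option names are
illustrative assumptions; refer to the Purity//FA CLI Reference Guide for the exact syntax):
    # Cap the volume at 200 MB/s of bandwidth and 10,000 IOPS
    purevol setattr --bandwidth-limit 200M vol01
    purevol setattr --iops-limit 10000 vol01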

Volume Groups
Volume groups organize FlashArray volumes into logical groupings. An action taken on the
volume group, such as connecting to a host, applying a policy, configuring bandwidth or IOPS
limits, or setting priority adjustments, acts on all volumes within the group.
Volume group tasks are performed through the GUI (Storage > Volumes) or CLI (puregroup
command).
A volume can belong to only one volume group.

Volume Snapshots
Volume snapshots are immutable, point-in-time images of the contents of one or more volumes.

Volume Snapshots vs. Protection Group Snapshots


There are two types of volume snapshots:
Volume Snapshot
A volume snapshot is a snapshot that captures the contents of a single volume. Volume
snapshot tasks include creating, renaming, destroying, restoring, and copying volume
snapshots.
Volume snapshot tasks are performed through the GUI (Storage > Volumes) or CLI
(purevol command).
Protection Group Volume Snapshot
A protection group volume snapshot is a volume snapshot that is created from a group of
volumes that are part of the same protection group. All of the volume snapshots created
from a protection group snapshot are point-in-time consistent with each other.
Protection group snapshots can be manually generated on demand or enabled to auto-
matically generate at scheduled intervals. After a protection group snapshot has been
taken, it is either stored on the local array or replicated over to a remote (target) array.
Protection group volume snapshot tasks performed through the Storage > Volumes
page of the GUI or purevol command of the CLI are limited to copying snapshots. All
other protection group snapshot tasks are performed through the Storage > Protection
Groups page of the GUI or purepgroup command of the CLI.
For more information about protection groups and protection group snapshots, refer to
the Protection Groups and Protection Group Snapshots section. See Figure 2-3.
All volume snapshots are visible through the Storage > Volumes page.
Figure 2-3. Storage - Details Pane - Volumes - Snapshots

Create a volume snapshot to generate a point-in-time image of the contents of the specified
volume(s). Volume snapshot names append a unique number assigned by Purity//FA to the
name of the snapped volume. For example, vol01.4166. Optionally specify a suffix to replace the
unique number.
The volume snapshot naming convention is VOL.NNN, where:
l VOL is the name of the volume.
l NNN is a unique monotonically increasing number or a manually-assigned volume
snapshot suffix name.
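For example, a minimal CLI sketch (the volume name and suffix are illustrative):
    # Create a snapshot with an automatically assigned numeric suffix, such as vol01.4166
    purevol snap vol01
    # Create a snapshot with an explicit suffix, producing vol01.nightly
    purevol snap --suffix nightly vol01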
Rename a volume snapshot suffix to change the name by which Purity//FA identifies the snap-
shot in administrative operations and displays. The new snapshot suffix name is effective imme-
diately and the old name is no longer recognized in CLI, GUI, or REST interactions.
Destroy a volume snapshot if it is no longer needed. If you destroy a volume snapshot, Pur-
ity//FA automatically takes an undo snapshot. The undo snapshot enters an eradication pending
period, after which time the snapshot is eradicated. During the eradication pending period, the
undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed
Volumes folder.
Restore a volume from a volume snapshot to bring the volume back to the state it was when the
snapshot was taken. When a volume is restored from a volume snapshot, Purity//FA overwrites
the entire volume with the snapshot contents. After you restore a volume snapshot, the created
date of the overwritten volume is set to the snapshot created date. Purity//FA automatically
takes an undo snapshot of the overwritten volume. The undo snapshot enters an eradication
pending period, after which time the snapshot is destroyed. During the pending period, the undo
snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes
folder.
Copy a volume snapshot or protection group volume snapshot to create a new volume or over-
write an existing one. After you copy a snapshot, the source of the new or overwritten volume is
set to the name of the originating volume, and the created date of the volume is set to the snap-
shot created date. If the copy overwrites an existing volume, Purity//FA automatically takes an
undo snapshot of the existing volume. The undo snapshot enters an eradication pending period,
after which time the snapshot is destroyed. During the pending period, the undo snapshot can
be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder.

Eradication Delays
The eradication delays protect against the accidental deletion of data in a destroyed object.
When an object is destroyed, it enters an eradication pending period of between 1 and 30 days,
after which the object is automatically eradicated. This applies to all individual data and con-
figuration objects. An object in the eradication pending period can be manually eradicated prior
to the end of the eradication pending period (unless the SafeMode manual eradication pre-
vention feature is enabled).


Purity supports two types of eradication delays, one for SafeMode-protected objects and one for
other objects:
l Disabled delay: The eradication delay for SafeMode-protected objects on the array.
Sets the length of the eradication pending period for SafeMode-protected objects.
Only takes effect when SafeMode is enabled. Known as the "disabled" eradication
delay because manual eradication is disabled on those objects. The default is 8 days;
14 days is recommended.
l Enabled delay: The eradication delay for objects for which eradication is enabled,
that is, objects not protected by SafeMode. Sets the length of the eradication pending
period for array objects not protected by SafeMode. Default 1 day.
The eradication delays support both ActiveCluster and ActiveDR. In ActiveCluster, the destroy
time is stored within an object and is consistent for objects that are replicated across two arrays.
The eradication delays are displayed and configured through the GUI (Settings > System >
Eradication Delay). In addition, the user may contact Pure Storage Technical Services to con-
figure eradication delays.
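The delays can also be set from the CLI; a minimal sketch, assuming illustrative option names
and values (refer to the Purity//FA CLI Reference Guide for the exact purearray
eradication-config syntax):
    # Set the SafeMode (disabled) eradication delay to the recommended 14 days
    purearray eradication-config setattr --disabled-delay 14d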

Eradication Delay Settings after an Upgrade


When upgrading from an earlier version of Purity that has only one eradication delay setting, not
both an enabled delay and a disabled delay, the eradication delay value after the upgrade
depends on whether SafeMode is enabled or not.
If SafeMode is enabled at the time of the upgrade, then after the upgrade, both the enabled
delay and the disabled delay are set to the pre-upgrade eradication delay setting.
If SafeMode is not enabled before the upgrade, then after the upgrade, the enabled delay is set
to the pre-upgrade eradication delay setting and the disabled delay is set to the higher of the
pre-upgrade eradication delay setting and the post-upgrade default of 8 days.

Extending or Decreasing an Eradication Pending Period


Except for file systems, if the eradication pending period is increased, items already pending
eradication immediately inherit the new pending period.
Except for file systems, if the eradication pending period is decreased, items pending erad-
ication keep their higher pending period.
File systems are an exception. The eradication pending period for a destroyed file system is not
affected by a later increase or decrease of an enabled delay or disabled delay setting.
The disabled eradication delay cannot be decreased when SafeMode is enabled, as a pro-
tection against accidental or other deletion of important data.


SafeMode
SafeMode for Purity//FA is a family of features that adds additional security to provide ransom-
ware protection for storage objects through the following means:
l Manual eradication prevention. Disables the ability to manually eradicate destroyed
objects. Only the expiration of a destroyed object’s eradication pending period can
cause eradication.
l Snapshot and replication protection. Prevents snapshot and replication schedules
from being disabled and retention period from being reduced.
l Volume protection. Ensures that volume data is protected by protection group snap-
shots, providing per protection group ransomware protection.
l Automatic volume protection for new arrays. Provides automatic protection group
membership for newly created or copied volumes. A default protection group is auto-
matically created for each pod and also for volumes that are not in a pod. Default pro-
tection is configured in the Protection > Array > Default Protection pane. See
"Automatic Protection Group Assignment for Volumes" on the next page.
For best protection, Pure recommends enabling the retention lock feature in addition to extend-
ing the eradication pending period to seven days or more, which is configured through the GUI
(Settings > System > Eradication Configuration) and CLI (purearray eradication-con-
fig setattr command).
To allow granular and flexible control of the SafeMode feature, FlashArray supports retention
lock per protection group. The protection group retention lock is configured through the GUI (Pro-
tection > Protection Groups) and CLI (purepgroup retention-lock ratchet com-
mand).
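For example, a minimal CLI sketch of ratcheting the retention lock on a protection group (the
group name and argument form are illustrative assumptions; as described below, a ratcheted
lock cannot be unlocked without Pure Storage Technical Services):
    # Ratchet-enable the retention lock for a protection group
    purepgroup retention-lock ratchet pgroup01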
For protection groups, the retention lock is unlocked by default. When the retention lock is
ratcheted, all of the following are disallowed for a non-empty protection group:
l Destroying the protection group
l Manual eradication of the protection group and its container
l Member and target removal
l Decreasing the eradication delay
l Disabling snapshot or replication schedule
l Decreasing snapshot or replication retention or frequency
l Changing the blackout period (only clearing the blackout period is allowed)
l Disallowing protection group replication on the target side
Once the protection group retention lock is ratcheted, it cannot be unlocked by the user. Contact
Pure Storage Technical Services for further assistance. Enrollment is required with at least two
administrators and pin codes.
Retention lock is not supported for and cannot be ratcheted on protection groups with host or
host group members, or groups with offload targets. To use retention lock in one of these situ-
ations, create another protection group that includes the volumes to be protected, not including
the offload targets, on which retention lock can be enabled. The same applies when trying to add
host or host group members, or adding offload targets, to a protection group that is ratcheted.
Alternatively, SafeMode can be enabled globally on the array. Global SafeMode supports
FlashArray volumes, volume groups, volume snapshots, pods, protection groups, protection
group snapshots, and files and directories and their snapshots. Contact Pure Storage Technical
Services for more information.

Note: Volumes protected through the SafeMode global volume protection feature are rep-
resented by an asterisk in Protection > Members panel and by the purepgroup list
CLI command and are not listed by name.

Automatic Protection Group Assignment for Volumes


Purity//FA provides automatic protection group membership for all newly created volumes.
Automatic protection is implemented with configurable lists of one or more default protection
groups. Each pod has its own separate list of default protection groups, and an additional list
applies to volumes that are not members of a pod. Newly created volumes, including copied
volumes, are automatically placed in each of the protection groups in the appropriate list. Each
list as well as its protection groups and the protection configuration of the groups can be cus-
tomized as needed. Default protection is enabled and customized in Protection > Array >
Default Protection. Protection groups must be created before they can be included in a default
protection group list.
Initially, the root array default protection group list contains one protection group, named
pgroup-auto. The initial configuration of the default protection groups list for a new pod contains
one protection group, named <pod-name>::pgroup-auto.
When a pod is created, the initial default protection groups list for the pod is a copy of the current
root default protection group list, modified with the pod name. For example, if the root array
default protection groups list contains pgroup-auto,pgroup-B, the pod default protection groups
list is created with <pod-name>::pgroup-auto,<pod-name>::pgroup-B.
Purity//FA automatically creates each protection group in the pod default protection groups list.
When a volume is created in or copied into a pod, the new volume is given membership in all pro-
tection groups contained in the pod default protection group list.
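For example, a second group could be prepared for a pod's default protection list with a command along these lines; the pod and group names are hypothetical, and the default protection list itself is then edited in Protection > Array > Default Protection:
purepgroup create pod1::pgroup-B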

SafeMode Status
The protection group SafeMode pane indicates whether retention lock is enabled for the pro-
tection group. Retention Lock displays one of the following values:
l Ratcheted - The protection group is ratcheted; if the protection group is not empty, manual eradication is disabled and retention reduction is disallowed.
l Unlocked - The protection group is not ratcheted.
Similarly, the SafeMode status appears in the lower section of the left navigation pane and dis-
plays one of the following values:
l Enabled - Either global SafeMode is enabled, or at least one non-empty protection group is ratcheted.
l Disabled - Global SafeMode is not enabled and no non-empty protection groups are
ratcheted.

Always-On Quality of Service


Always-On Quality of Service (QoS) balances I/O response when array resources are saturated by reducing the
impact of noisy volumes on the response times of less-busy volumes. It limits I/O execution
concurrency when it detects that an array is near saturation, and then reorders its backlog of
queued requests to give less-active volumes access to array resources. QoS is enabled by default
and is always active on FlashArrays, but only takes effect if an array becomes saturated.
Contact Pure Storage Technical Services to disable QoS.

Hosts
The host organizes the storage network addresses - the iSCSI Qualified Names (IQNs), NVMe
Qualified Names (NQNs), and Fibre Channel World Wide Names (WWNs) - that identify the host
computer initiators. The host communicates with the array through the Ethernet or Fibre Chan-

nel ports. The array accepts and responds to commands received on any of its ports from any of
the IQNs, NQNs, and WWNs associated with a host.

Note: Cloud Block Store accepts and responds only to the iSCSI Qualified Names
(IQNs); the NVMe Qualified Names (NQNs) and Fibre Channel World Wide Names
(WWNs) are not supported.
Purity//FA hosts are virtual, so creating, renaming, and deleting a host has no meaning outside
the array.
Create hosts to access volumes on the array. A Purity//FA host consists of a host name and
one or more IQNs, NQNs, or WWNs. Host names must be unique within an array.
Associate one or more IQNs, NQNs, or WWNs with the host after it has been created. The host
cannot communicate with the array until at least one IQN, NQN, or WWN has been associated
with it.
iSCSI Qualified Names (IQNs) follow the naming standards set by the Internet Engineering Task
Force (see RFC 3720). For example, iqn.2016-01.com.example:flasharray.491b30d0efd97f25.
NVMe Qualified Names (NQNs) follow the naming standards set by NVM Express. For example,
nqn.2016-01.com.example:flasharray.491b30d0efd97f25.
Fibre Channel World Wide Names (WWNs) follow the naming standards set by the IEEE Standards
Association. WWNs consist of eight pairs of case-insensitive hexadecimal digits,
optionally separated by colons. For example, 21:00:00:24:FF:4C:C5:49.
Like hosts, IQNs, NQNs, and WWNs must be unique in an array. A host can be associated with
multiple storage network addresses, but a storage network address can only be associated with
one host.
Host IQNs, NQNs, and WWNs can be added or removed at any time.
Rename a host to change the name by which Purity//FA identifies the host in administrative oper-
ations and displays. Host names are used solely for FlashArray administration and have no sig-
nificance outside the array, so renaming a host does not change its relationship with host groups
and volumes. The new host name is effective immediately and the old name is no longer recog-
nized in CLI, GUI, or REST interactions.
Optionally, configure the Challenge-Handshake Authentication Protocol (CHAP) to verify the
identity of the iSCSI initiators and targets to each other when they establish a connection. By
default, the CHAP credentials are not set.
To ensure the array works optimally with the host, set the host personality to the name of the
host operating or virtual memory system. The host personality setting determines how the

Purity//FA system tunes the protocol used between the array and the initiator. For example, if
the host is running the HP-UX operating system, set the host personality to HP-UX. By default,
the host personality is not set. If your system is not listed as one of the valid host personalities,
do not set it.
Delete a host if it is no longer required. Purity//FA will not delete a host while it has connections
to volumes, either private or shared. You cannot recover a host after it has been deleted.
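As a hedged sketch, a host might be created with an IQN, renamed, and eventually deleted using commands along these lines; the names are hypothetical and option spellings such as --iqnlist should be verified against the Purity//FA CLI Reference Guide:
purehost create --iqnlist iqn.2016-01.com.example:host01 host01
purehost rename host01 host01-new
purehost delete host01-new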

Host Guidelines
Purity//FA will not create a host if:
l The specified name is already associated with another host in the array.
l Any of the specified IQNs, NQNs, or WWNs are already associated with an existing
host in the array.
l The creation of the host would exceed the limit of concurrent hosts, or the creation of
the IQN, NQN, or WWN would exceed the limit of concurrent initiators.
Purity//FA will not delete a host if:
l The host has private connections to one or more volumes.
Purity//FA will not associate an IQN, NQN, or WWN with a host if:
l The creation of the IQN, NQN, or WWN would exceed the maximum number of con-
current initiators.
l The specified IQN, NQN, or WWN is already associated with another host on the
array.
Hosts are configured through the GUI (Storage > Hosts) and CLI (purehost command).

Host Groups
A host group represents a collection of hosts with common connectivity to volumes.
Purity//FA host groups are virtual, so creating, renaming, and deleting a host group has no mean-
ing outside the array.
Create a host group if several hosts share access to the same volume(s). Host group names
must be unique within an array.
After you create a host group, add hosts to the host group and then establish connections
between the volumes and the host group.

When a volume is connected to a host group, it is assigned a logical unit number (LUN), which
all hosts in the group use to communicate with the volume. If a LUN is not manually specified
when the connection is first established, Purity//FA automatically assigns a LUN to the con-
nection.
Once a connection has been established between a host group and a volume, all of the hosts
within the host group are able to access the volume through the connection. These connections
are called shared connections because the connection is shared between all of the hosts within
the host group.
Rename a host group to change the name by which Purity//FA identifies the host group in admin-
istrative operations and displays. Renaming a host group does not change its relationship with
hosts and volumes. The new host group name is effective immediately and the old name is no
longer recognized in CLI, GUI, or REST interactions.
Delete a host group if it is no longer required. You cannot recover a host group after it has been
deleted.
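For example, a host group might be created and populated roughly as follows; the names are hypothetical and the --hostlist option is an assumption to verify in the CLI Reference Guide:
purehgroup create hgroup01
purehgroup setattr --hostlist host01,host02 hgroup01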

Host Group Guidelines


Purity//FA will not create a host group if:
l A host group with the specified name already exists in the array.
l The creation of the host group would exceed the limit of concurrent host groups.
Purity//FA will not delete a host group if:
l Any hosts are associated with the host group or any volumes are connected to it.
A host cannot be added to a host group if:
l The host is associated with another host group. A host can only be associated with
one host group at a time.
l The host has a private connection to a volume associated with the host group.
Host groups are configured through the GUI (Storage > Hosts) and CLI (purehgroup com-
mand).

Host-Volume Connections
For a host to read and write data on a FlashArray volume, the two must be connected. Purity//FA
only responds to I/O commands from hosts to which the volume addressed by the command is

connected; it ignores commands from unconnected hosts.


Hosts are connected to volumes through private or shared connections. Private and shared con-
nections are functionally identical: both make it possible for hosts to read and write data on
volumes. They differ in how administrators create and delete them.

Private Connections
Connecting a volume to a host establishes a private connection between the volume and the
host. You can connect multiple volumes to a host. Likewise, a volume can be connected to mul-
tiple hosts.
Disconnecting a volume from a host, or vice versa, breaks the private connection between the
volume and host. Other shared and private connections are unaffected.

Shared Connections
Connecting a volume to a host group establishes a shared connection between the volume and
all of the hosts within that host group. You can connect multiple volumes to a host group. Like-
wise, a volume can be connected to multiple host groups.
Disconnecting a volume-host group connection breaks the shared connection between the
volume and all of the hosts within the host group. Other shared and private connections are unaf-
fected.

Breaking Private and Shared Connections


Breaking a connection between a host and volume causes the host to lose access to the
volume. There are three ways in which a host-volume connection can be broken:
l Break the private connection between a volume and a host, which causes the host to
lose access to the volume. Volume-host connections are broken when you dis-
connect a volume from its host, or disconnect a host from the volume.
l Break the shared connection between a volume and a host group, which disconnects
the volume and all of the host group’s member hosts. Other shared and private con-
nections to the volume are unaffected. Volume-host group connections are broken
when you disconnect a volume from its host group, or disconnect a host group from
the volume.
l Remove a host from a host group, which breaks the connections between the host
and all volumes with shared connections to the host group. The removed host’s
private connections are unaffected.


Logical Unit Number (LUN) Guidelines


Each host-volume connection has three components: a host, a volume, and a logical unit num-
ber (LUN) used by the host to address the volume. Purity//FA supports LUNs in the [1...4095]
range.
Hosts establish connections to volumes either through private or shared (via host groups) con-
nections. A host can have only one connection, private or shared, to a given volume at a time.
Purity//FA follows these guidelines to automatically assign a LUN to the connection:
l For private connections, Purity//FA starts at LUN 1 and counts up to the maximum
LUN 4095, assigning the first available LUN to the connection.
l For shared connections, Purity//FA starts at LUN 254 and counts down to the min-
imum LUN 1, assigning the first available LUN to the connection. If all LUNs in the
[1...254] range are taken, Purity//FA starts at LUN 255 and counts up to the maximum
LUN 4095, assigning the first available LUN to the connection.
A host cannot be associated with a host group if it has a private connection to a volume asso-
ciated with the same host group.
The LUN can be changed after the connection has been created. If you change a LUN, the
volume may become temporarily disconnected from the host to which it is connected.

Connection Guidelines
Purity//FA will not establish a (private) connection between a volume and a host if:
l An unavailable LUN was specified.
l The volume is already connected to the host, either through a private or shared connection.
Purity//FA will not establish a (shared) connection between a volume and a host group if:
l An unavailable LUN was specified.
l The volume is already connected to the host group.
l The volume is already connected to a host associated with the host group.
Host-volume connections are performed through the GUI (Storage > Hosts and Storage >
Volumes) and CLI (purehgroup connect, purehost connect and purevol connect commands).
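As an illustrative sketch, a private connection, a shared connection, and a private connection with an explicit LUN might be established along these lines; object names are hypothetical and option spellings are assumptions:
purevol connect --host host01 vol01
purehgroup connect --vol vol02 hgroup01
purevol connect --host host01 --lun 12 vol03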

Connections
The Connections page displays connectivity details between the Purity//FA hosts and the array
ports.

The Host Connections pane displays a list of hosts, the connectivity status of each host, and the
number of initiator ports associated with each host. Connectivity statuses range from "None",
where the host does not have any paths to any target ports, to "Redundant", where the host has
the same number of paths from every initiator to every target port on both controllers.
The Target Ports pane displays the connection mappings between each array port and initiator
port. Each array port includes the following connectivity details: associated iSCSI Qualified
Name (IQN), NVMe Qualified Name (NQN), or Fibre Channel World Wide Name (WWN)
address, failover status, and communication speed. A check mark in the Failover column indic-
ates that the port has failed over to the corresponding port pair on the primary controller.
Host connections and target ports are displayed through the GUI (select Health > Connections)
and CLI (pureport list, purehost list --all, and purevol list --all commands).

Protection Groups and Protection Group Snapshots


Protection groups support several Purity//FA features:
l Purity//FA FlashRecover is a policy-based data protection and disaster recovery solu-
tion. Through FlashRecover, generate protection group snapshots and retain them on
the array and/or asynchronously replicate them to target arrays.
l Purity//FA Snap to NFS and Purity//FA Snap to Cloud are policy-based solutions that
manage portable snapshots through the offload of volume snapshots to a target, such
as an Azure Blob container, a NAS/NFS device, an NFS storage system, an S3
bucket, or a generic Linux server, for long-term retention.
Protection groups and protection group snapshots are configured and managed through the GUI
(Storage > Protection Groups and Storage > Volumes) and CLI (purepgroup and purevol com-
mands). Offload targets are further configured and managed through the GUI (Storage > Array
> Offload Targets) and CLI (pureoffload command).

Protection Groups
A protection group defines a set of volumes, hosts, or host groups (called members) that are pro-
tected together through snapshots with point-in-time consistency across the member volumes.
The members within the protection group have common data protection requirements and the
same snapshot, replication, and retention schedules.
Each protection group includes the following components:
l Source array. An array from which Purity//FA generates a point-in-time snapshot of
its protection group volumes. Depending on the protection group schedule settings,
the snapshot data is either retained on the source array or replicated over to and

retained on one or more target arrays.


l Targets. One or more arrays or storage systems that receive snapshot data from the
source array. Targets are only required if snapshot data needs to be replicated over
to remote arrays or storage systems.
l Members. Volumes, hosts, or host groups that have common data protection require-
ments and the same snapshot/replication frequency and retention policies. Only mem-
bers of the same object type can belong to a protection group.
Replication to offload targets only supports volumes; hosts and host groups are not
supported.
For asynchronous replication, a single protection group can consist of multiple hosts,
host groups, and volumes. Likewise, hosts, host groups, and volumes can be asso-
ciated with multiple protection groups. Protection groups can also contain over-
lapping volumes, hosts, and host groups. In such cases, Purity//FA counts the volume
once and ignores all other occurrences of the same member.
l Schedules. Each protection group includes a snapshot schedule and a replication
schedule.
Configure and enable the snapshot schedule to generate snapshots and retain them
on the source array.
Configure and enable the replication schedule to generate snapshots on the source
array, immediately replicate the snapshots to the targets, and retain those snapshots
on the targets. When replicating to an array or offload target, Purity//FA only transfers
the incremental data between two snapshots. Furthermore, during the replication
data transfer process, data deduplicated on the source array is not sent again if the
same data was previously sent to the same target array. See Figure 2-4.


Figure 2-4. Protection Group Schedules

Create a protection group to add members (volumes, hosts, or host groups) that have common
data protection requirements. Pure Storage protection groups are virtual, so creating, renaming,
and destroying a protection group has no meaning outside the array. Protection group names
must be unique within an array.
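For instance, a protection group with two volume members might be created along these lines; the names are hypothetical and the --vollist option is an assumption to verify in the CLI Reference Guide:
purepgroup create --vollist vol01,vol02 pgroup-db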
Copy a protection group to restore the state of the volumes within a protection group to a pre-
vious protection group snapshot. The restored volumes are added as real volumes to a new or
existing protection group. Note that restoring volumes from a protection group snapshot does
not automatically expose the restored volumes to hosts and host groups.
Rename a protection group to change the name by which Purity//FA identifies the protection
group in administrative operations and displays. When you rename a protection group, the
name change is effective immediately and the old name is no longer recognized by Purity//FA.
Destroy a protection group if it is no longer needed.
Destroying a protection group implicitly destroys all of its snapshots. Once a protection group
has been destroyed, all snapshot and replication processes for the protection group stop and
the destroyed protection group begins its eradication pending period, which lasts from 1 to 30 days.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the protection group snapshots.

During the eradication pending period, you can recover the protection group to bring the group
and its content back to its original state, or manually eradicate the destroyed protection group to
reclaim physical storage space occupied by the destroyed protection group snapshots.
Once reclamation starts, either because you have manually eradicated the destroyed protection
group, or because the eradication pending period has elapsed, the destroyed protection group
and its snapshot data can no longer be recovered.
The Time Remaining column displays the remaining eradication pending period in hh:mm format. The
counter begins at the full length of the eradication pending period and counts down to 00:00. When
Time Remaining reaches 00:00, Purity//FA starts the reclamation process. The
Time Remaining value stays at 00:00 until the protection group or snapshot is completely
eradicated.

Space Consumption Considerations


Consider space consumption when you configure the snapshot, replication, and retention sched-
ules.
The amount of space consumed on the source array depends on how many snapshots you want
to generate, how frequently you want to generate the snapshots, how many snapshots you want
to retain, and how long you want to retain the snapshots.
Likewise, the amount of space consumed on the target depends on how many snapshots you want
to replicate, how frequently you want to replicate the snapshots, how many replicated snapshots
you want to retain, and how long you want to retain them.

Protection Group Snapshots


Protection group snapshots capture the content of all volumes on the array for the specified pro-
tection group at a single point in time. The snapshot is an immutable image of the volume data at
that instant in time. The volumes are either direct members of the protection group or connected
to any of the hosts or host groups that are members of the protection group.
Generate a protection group snapshot to create snapshots of the volumes within the protection
group.
Protection group snapshots can be generated automatically (using schedules) or on-demand.
The volumes within a protection group snapshot can be copied as-needed to create live, host-
accessible volumes.
The protection group snapshot naming convention is PGROUP.NNN, where:

l PGROUP is the name of the protection group.


l NNN is a unique monotonically increasing number or a manually-assigned protection
group snapshot suffix name.
The protection group volume snapshot naming convention is PGROUP.NNN.VOL, where:
l PGROUP is the name of the protection group.
l NNN is a unique monotonically increasing number or a manually-assigned protection
group snapshot suffix name.
l VOL is the name of the volume member.
If you are viewing replicated snapshots on a target array, the snapshot name begins with the
name of the source array from where the snapshot was taken.
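To illustrate the naming convention, an on-demand snapshot of a protection group named pgroup-db taken with a manually assigned suffix (the --suffix option is an assumption) would produce names of this form:
purepgroup snap --suffix nightly pgroup-db
The resulting protection group snapshot is pgroup-db.nightly, and the volume snapshot for a member volume vol01 is pgroup-db.nightly.vol01.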
Destroy a protection group snapshot if it is no longer required. Destroying a protection group
snapshot destroys all of its protection group volume snapshots, thereby reclaiming the physical
storage space occupied by its data.
Destroyed protection group snapshots follow the same eradication pending behavior as des-
troyed protection groups. If you destroy a protection group snapshot, Purity//FA automatically
takes an undo snapshot. The undo snapshot enters an eradication pending period, after which
time the snapshot is eradicated. During the eradication pending period, the undo snapshot can
be viewed, recovered, or permanently eradicated.
Protection group volume snapshots cannot be destroyed individually. A protection group volume
snapshot can only be destroyed by destroying the protection group snapshot to which it belongs.

File Storage
File services are supported on FlashArray//C and FlashArray//X.
File services are administered through the Purity for FlashArray (Purity//FA) graphical user inter-
face (GUI) or command line interface (CLI). Users should be familiar with file system, storage,
and networking concepts, and have a working knowledge of Windows or UNIX.
Before you begin, contact Pure Storage Technical Services to have file services activated on the
FlashArray.

Note: The FlashArray//X50R2 model does not support both block storage and file storage
at the same time.


File Systems
A FlashArray can contain up to 50 separate file systems, each with a number of directories
which can be exported via supported protocols. Clients, using Active Directory or LDAP, can con-
nect and access these exports using SMB or NFS:
l SMB version 1.0 / 2.0 / 2.1 / 3.0 / 3.02 / 3.11
l NFS version 3 / 4.1

Note: SMB version 1.0 is deprecated and disabled by default due to security reasons.
Because it lacks encryption and protection, the best practice is to avoid the use of this ver-
sion. For more information, contact Pure Storage Technical Services.
During ActiveDR replication, a FlashArray can contain up to 50 separate file systems. Arrays
can replicate file systems to a target array; however, if the number of file systems on the target
array exceeds 50, no additional file systems can be created until the number of file systems on
the target array is reduced to fewer than 50.
A managed directory is a directory that allows attaching exports, quotas, and snapshot policies.
In addition, for these directories, metrics and space information are available. Managed dir-
ectories are created by an administrator and are limited to the top eight levels of directories,
counting the root directory as the first level.
Since managed directories and exports should be placed in useful places and clients only see
their own part of the file system, there is rarely a need for a massive number of separate file sys-
tems. Most of the time, one or a few file systems are sufficient.
File systems, directories, and files are dynamically allocated and do not require you to allocate
or partition any of the storage prior to use. Storage space is allocated when used and given back
to the combined pool of block and file storage when content is eradicated.
Creation, destruction, and eradication of a file system is a management-only operation through
the Storage > File Systems page. Alternatively, refer to the purefs command in the Purity//FA
CLI Reference Guide.
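For example, a file system might be created and, when no longer needed, destroyed and eradicated roughly as follows; the name is hypothetical and the exact syntax should be checked in the CLI Reference Guide:
purefs create FS1
purefs destroy FS1
purefs eradicate FS1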

Managed Directories
Not every directory in a file system matters to an administrator. Define the ones that matter by
using managed directories. Only these directories can have policies attached. They also provide
space reporting and metrics.

When a new file system is created, the root directory is automatically created. This is a managed
directory named “root”. This directory can only be destroyed together with the entire file system.
All directories created through management (GUI or CLI) are managed. Directories created by
protocol clients are not managed, except when using the auto managed directory feature.
Managed directories can be created, up to eight levels deep, only as children of other managed
directories. Since the root directory is a managed directory, all directories in a file system will
have at least one managed directory as an ancestor that can provide access points, protection,
space reporting, metrics, and quota notifications and enforcement. Export, quota and snapshot
policies can be added to any managed directory.
A managed directory can only be deleted through management. To avoid accidental eradication
of content, the managed directory can only be deleted when no content exists and all shares are
either removed or disabled.
Client directories can be moved within the scope (tree) of a parent managed directory. However,
directories cannot be moved out of the scope of a managed directory or into the scope of
another managed directory.
Managed directories are managed through the Storage > File Systems page of the GUI, or the
CLI puredir command.
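As a hedged example, a managed directory named data might be created directly below the root of file system FS1 with a command along these lines; the --path option and the exact form of the full name are assumptions:
puredir create --path /data FS1:data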

Exports
Exports (that is, shares) are entry points for clients to connect to the file system, using local users
for file, Active Directory, or LDAP for authentication and authorization. With NFS User Mapping
Disabled, exports can be accessed without directory services. Clients connect by using the file
service IP address or URL and the export name. For each protocol, export names must be
unique for the entire FlashArray. Exports and files can be made accessible for clients that use
SMB and, at the same time, clients that use NFS, using the same export name. When granted
access, clients only see the part of the file system that the export exposes, meaning the target
directory and its subdirectories. With Access Based Enumeration (ABE) enabled, SMB policies
allow directories and files to be hidden from clients that do not have sufficient permissions.
Exports are created by using SMB or NFS policies with rules, adding each policy to one or more
managed directories. A policy can be created, modified, temporarily disabled, and when no
longer needed, permanently removed.
Export policies can be reused to create many exports. Modifying a policy by adding or removing
rules affects the exports for all directories where the policy is used.

SMB and NFS policies are managed through the Storage > Policies page of the GUI, or the CLI
purepolicy smb and nfs commands. Exports are managed through the Storage > File Sys-
tem page or the CLI puredir export command.

Auto Managed Policies


Given the correct permissions, connected clients create subdirectories on their mounted share
through the SMB or NFS protocols. These non-managed directories cannot be managed granularly
within Purity//FA. With auto managed directory policies (autodir policies), sub-
directories automatically become managed directories, one level below the first managed
directory. Any subdirectories nested below this first level of managed subdirectories will become
non-managed directories.
Auto managed directory policies are only allowed on managed directories that allow (that is, have
room for) at least one more level of nested managed directories below them.
Auto managed policies are created and managed through the CLI purepolicy autodir com-
mand or through the REST API.

NFS Datastore
FlashArray File can serve as a VMware NFS datastore. Using vSphere 7.0 or later, NFS data-
stores can be created using NFS exports on FlashArray through the NFS protocol version 3 or
4.1.
For getting started with NFS datastores on the FlashArray, refer to the VMware NFS Datastores
on FlashArray Quick Start Guide on the Knowledge site at https://support.purestorage.com, or
refer to the purepolicy command in the Purity//FA CLI Reference Guide.

Local Users
Local Users is a file services feature that allows you to use a locally stored directory of users and
groups, internal to the FlashArray, in place of an external authentication solution such as Active
Directory (AD) or LDAP. After users and groups are created on the array, clients are allowed to
connect to the FlashArray File domain and authenticate with their respective credentials.

Local Users for file is a separate concept from FlashArray local users. The main purpose of local
users for file is to access file systems via SMB or NFS protocols, while the purpose of FlashAr-
ray local users is to manage the array.
A user is a local user account which includes a username and password. Each user is a member
of one primary group. Before creating a user, its primary group must be created if it does not
already exist. A user can also be a member of other groups, denoted as secondary groups.
A group is a local group account under which one or more users can be gathered for simplified
management of permissions. For example, accounting, development, sales, and so on. A group
can have many members. Only users can be members of a group, not other groups. Before
deleting a group, all members must be removed from the group.
External members, user accounts or groups that reside on external AD or LDAP servers, can be
added to local groups as well. The purpose of this is to authenticate external users through the
local group, similar to local users, and authenticate the user within the array, rather than the
entire domain.
There are two built-in local user accounts: Administrator and Guest, and three built-in group
accounts: Administrators, Guests and Backup Operators. These built-in users and groups can-
not be removed or modified.
Permissions are managed from the client side, for example through Windows Explorer or Com-
puter Management, by adding and removing permissions to users or groups.
Local users for file are managed through the Settings > Access > File System tab of the GUI, or
the CLI pureds local command.

NFSv3 and File Locking


File services enable users or applications to lock a file so that other users cannot perform oper-
ations on the same file. File locks are interoperable across the NFS and SMB protocols.
For NFS version 3, file locking is handled through the Network Lock Manager (NLM) protocol.
NLM locks are considered advisory locks in that, if a client has access to the file, the NFS ser-
vice itself does not automatically prevent the client from accessing the file. The service does not
check for the existence of or try to obtain an NLM lock on a file ahead of time.
The NLM protocol works with the NFS version 3 protocol to ensure NLM file locks are visible
across all clients for I/O coordination and to help clients coordinate access to files. This allows
all clients mounting the same NFS shared file system to see file locks set by other clients.

The NLM and Network Status Monitor (NSM) services are enabled by default with the NFS pro-
tocol service.
The NLM protocol depends on the NSM protocol to solve cases where the client or server
restarts, which would otherwise leave hanging locks. In cases where files are unintentionally left
in a locked state, run the puredir lock nlm-reclamation create command to release
all NLM locks for the entire array. By doing this, client applications are notified, allowing them to
reclaim the lock.
NFS version 4.1 introduces a number of new features and enhancements for interoperability and
ease of use over version 3 of NFS. The NFS 4.1 and SMB protocols do not use the NLM protocol
for file locking. Instead, file locking is handled internally on the array.

NFS User Mapping


The storage environment typically makes use of directory services such as Active Directory or
LDAP for user authentication and user mapping. With user mapping, the user UID is looked up
and, if the user is found, the file request is performed by the found user. If the user is not found or
not authorized, the request is denied. This is the default behavior.
By disabling user mapping on NFS, the exports can be accessed without directory services and
the array instead relies upon AUTH_SYS (previously known as AUTH_UNIX) RPC authentication.
The array then trusts the UID and GIDs provided by the host.
The recommendation is to not disable user mapping for exports when user mapping has been in
use, since existing files and directories could become inaccessible. Do not remove, delete, or
unjoin existing directory services; the existing files and directories have ACLs based on Active
Directory or LDAP user IDs and memberships, and without these services, the files will become
inaccessible. Similar accessibility issues might occur with SMB exports if applied on the same
directory as user mapping disabled NFS exports.

Directory Quotas
Directory quotas allow restriction of storage space for each managed directory, including all sub-
directories below. It is an always-on feature which, once a one-time initial scan of the file system
is complete, allows for instant quota enablement upon attaching a quota policy.
There are two types of directory quota limits: unenforced (soft quota) and enforced (hard quota).
The unenforced quota will be informative to the user or administrator and can be used to better

plan for future system resource upgrades but will not affect operations. The enforced quota will
result in all future space increasing operations being prevented with ENOSPC errors, meaning
there is no storage space left, until the quota overage has been mitigated. The enforced quota
size also provides information to the client about disk space.
Directory quotas are implemented by creating and attaching quota policies to managed dir-
ectories. Each managed directory can have no more than one quota policy attached, but each
policy can include multiple rules for quota limits.
Limits are defined so that there can be zero, one, or more unenforced limits per policy, and
optionally one enforced limit. When the enforced limit is used, all unenforced limits must be 80%
of the enforced limit, or lower.
Note that clients might be able to briefly exceed the set limit by continuing to write for a short
time after the hard quota has been reached. This is by design to avoid impacting I/O
performance, as directory quota on FlashArray runs as a background process. Normally, clients
can continue writing for no more than 15 seconds beyond the set quota. Under heavy load, this
window may extend up to three minutes.
When applying a quota limit to a managed directory already in use, the current usage must not
exceed the new enforced limit. This is to avoid unexpected ENOSPC errors. The “ignore usage”
option can be used to override this, and the quota will then be applied.
Quota limits can also be nested so that directories with individual quota limits exist below
another directory with a quota limit. The limits then become dynamic, so that the directories
below, while having their own quota limits, may also be limited by the quota that exists above, if
this limit were to be reached first.
Directory quota is unaware of data reduction and deduplication. Logical file sizes are accounted
for, which means that sparse files, or empty space within files, are also counted. Space used for
snapshots is not counted towards the quota limit.
When a quota limit threshold is exceeded, an email notification will be sent to the owner of the
managed directory, either the user, the group, or both the user and the group, according to the
corresponding quota rule settings. That is, if the notification parameter is set to “group”, the
email will be sent to the email address associated with the group, and for “user”, to the email
address associated with the user.
There are three email severity levels:
l Informational: Exceeding a soft quota threshold, or 80% of a hard quota threshold,
generates a notification with informational severity.
l Warning: An important message is generated when exceeding 90% of a hard quota

threshold.
l Critical: Urgent message when a hard quota threshold is reached.
The owner of a directory can be viewed and changed from a connected client with a
chown/chgrp type operation. For example, with Windows directory properties, the directory owner can
be viewed or changed in the advanced part of the security view.
For email notifications to be sent, SMTP must be correctly configured in the Alert Routing panel
on the Settings > System page. Furthermore, the groups and users, found in directory services
such as Active Directory or LDAP, or through FlashArray File Local Users, must be populated
with their associated email addresses. The attribute for email is typically found in the
preferences for each user and group.
Quota policies are managed through the Storage > Policies page or the CLI purepolicy
quota command. The relationship between quota policies and managed directories can be
managed through the Storage > File Systems page or with the CLI puredir quota command.
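For illustration, a quota policy with one enforced limit and one unenforced limit might be created and attached roughly as follows; the policy name, directory name, rule options, and subcommand spellings are assumptions, with only the purepolicy quota and puredir quota commands taken from this guide:
purepolicy quota create quota-100g
purepolicy quota rule add --limit 100G --enforced true quota-100g
purepolicy quota rule add --limit 80G --enforced false quota-100g
puredir quota add --policy quota-100g FS1:data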

Snapshots
Snapshots give you the ability to retrieve earlier versions of folders and files in case of unwanted
changes or deletion of content.
A snapshot is a copy of the underlying file structure with files and content that are consistent to a
single point in time. When a snapshot is accessed, it appears to be a full copy at the time the
snapshot was taken, but with read-only access. The copy includes the directory with sub-
directories and files below.
Each snapshot is located in a separate subdirectory within the .snapshot directory, which is a
hidden directory. Snapshots are immutable and cannot be altered. Thus, files or directories must
be copied out of the snapshot directory before they can be used (for example, to restore con-
tent). Alternatively, with SMB, use the Previous Versions feature to access snapshot content.
Scheduled snapshots are managed via snapshot policies. In addition, snapshots can be created
manually by an administrator. In any case, a retention period can be set which defines when the
snapshot is eradicated. For scheduled snapshots, the retention period is required so that the
number of snapshots is kept within reasonable limits.

Previous Versions
Previous Versions is an SMB feature that allows the user to access previous versions of files
and directories based on snapshots of the respective data. Using software that supports the fea-
ture, for example Windows File Explorer, the user can select a previous version and then

choose to open or restore the selected content. Restoring files or directories overwrites the exist-
ing files and cannot be undone. Before restoring a file or directory, the user can select open to
make sure that it is the correct version.
Similarly, previous versions are accessible through SMB shares (exports) by adding a UTC
timestamp to the export name when accessing the share. This is the timestamp of the chosen
snapshot. For example, if a snapshot was taken July 10th of 2020 at 10:18:28 UTC, on the root
level of the share, the following path provides access to that version: \\server\share\@GMT-
2020.07.10-10.18.28
For snapshots that are taken on a sub-directory, the directory follows after the timestamp, such
as in the following example: \\server\share\@GMT-2020.07.10-10.18.28\folder4
The feature is available via the SMB protocol version 2.0 or later.

Protection Plan
A protection plan can be defined by creating a snapshot policy with the addition of one or more
rules. Attach the snapshot policy to the managed directory to be safeguarded with scheduled
snapshots. When the policy is attached (and enabled by default), the scheduler creates, des-
troys, and eradicates snapshots automatically in order to fulfill the protection plan at any given
time.
The name of each snapshot consists of the client name of the rule that triggered the snapshot,
with a counter added. For example: hourly.1.
Policies can be reused and attached to other managed directories. Modifying the rules for one
policy affects the scheduling of snapshots for all directories in which the policy is used. However,
snapshots already taken are not altered by modifying rules or policies.

Note: Modifying a policy may lead to additional snapshots being taken to fulfill a partially
complete protection plan.
Thinning Rules: At any specific point in time, a snapshot policy produces no more than one
snapshot for each attached directory, even in the presence of multiple rules. For each policy, the
first snapshot to be taken is the one with the longest keep-for time and thus gives full data pro-
tection. All other snapshots are, at that point in time, postponed.
The rule with the highest frequency (that is, the shortest "every") is the base rule that determines
the scheduled time slots. The minimum value is 5 minutes.
The scheduler determines the next scheduled snapshot using the time that a snapshot was
scheduled for, not the time that it was created. This prevents the scheduler from drifting in case
of delayed snapshots due to system load.

With the "at" parameter, snapshots can be taken at a selected time of day. When used, the
scheduled time "every" must be a multiple of a day (24-hour period).
Protection plans are defined as follows:
1 Create a snapshot policy and give it a name.
2 Add one or more rules to the policy, each rule specifying the following:
l Every: The elapsed time until a new snapshot should be created.
l Keep for: The time to keep the snapshot before automatic eradication.
l Client name: The client visible name for snapshots.
l At: Optionally, the time of day to create a snapshot (every must then be a multiple of
one day).
3 Attach the policy to one or more directories.
If you manually destroy a scheduled snapshot, it will no longer be managed by the scheduler. If
you recover this snapshot, it will be considered a manual snapshot, not a scheduled one. After
recovery, the snapshot is kept until it is manually destroyed or the optional keep-for period
expires.
If a manually destroyed snapshot results in a protection plan not being fulfilled, a new snapshot
is created to replace the destroyed one. This happens as soon as possible, usually within the
next thirty seconds. Since the goal is to satisfy the protection plan rather than the schedule, the
schedule intentionally becomes skewed in the following way: the newly created snapshot exists
for the defined keep-for period, starting at this point in time, and the schedule for the following
snapshots is calculated from this new point in time.
For example, a protection plan with three rules:
l Hourly snapshots: Every one hour, keep for 24 hours, client name "hourly"
l Daily snapshots: Every one day, keep for 30 days, client name "daily"
l Weekly snapshots: Every one week, keep for 52 weeks, client name "weekly"
The scheduler fulfills the maximum data protection by choosing the rule with the longest keep
time first. In this example, the weekly rule (keep for 52 weeks) creates a snapshot named
weekly.1. The presence of the weekly snapshot covers the need for any other snapshot for
one hour. The next rule to create a snapshot is the daily rule (keep for 30 days), which produces
daily.2. The presence of the daily snapshot covers the need for a snapshot for one hour. The
hourly rule, having been postponed twice, then produces hourly.3.
Directory snapshots are managed through the Protection > Snapshots page, or the CLI
puredir snapshot command. Snapshot policies are managed through the Protection >
Policies page, or the CLI purepolicy snapshot command.
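To sketch the example protection plan above in CLI terms, a snapshot policy and its rules might be defined roughly as follows before being attached to a managed directory; the policy name, rule subcommand, and option spellings are assumptions, with only the purepolicy snapshot command taken from this guide:
purepolicy snapshot create pp-standard
purepolicy snapshot rule add --every 1h --keep-for 24h --client-name hourly pp-standard
purepolicy snapshot rule add --every 1d --keep-for 30d --client-name daily pp-standard
purepolicy snapshot rule add --every 7d --keep-for 52w --client-name weekly pp-standard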


Hard Links and Symbolic Links (Symlinks)


Hard links and symlinks are supported for both SMB and NFS. While both are links to directories
or files, they are used differently. Hard links are pointers to file content and thus behave similarly
to ordinary files, while symlinks are references to directories or files.

Object Names
File systems, managed directories, and policies can, like most objects in Purity//FA, be named.
For managed directories, the name does not have to be the same as the path directory name.
The full name of a managed directory consists of the file system name and managed directory
name (not path), separated by a colon (:). For example, FS1:Managed1.
The object names can be 1-63 characters in length. Valid characters are letters (A-Z and a-z),
digits (0-9), and the hyphen (-) character. The first and last characters of the name must be
alphanumeric, and the name must contain at least one letter or '-'. Names are case-insensitive
on input. For example, fs1, Fs1, and FS1 all represent the same file system. Purity//FA displays
names in the case in which they were specified when created or renamed.

File and Directory Names


Contrary to object names, file and directory names have a maximum length of 259 characters, or
1,036 bytes of Unicode UTF-8. The name must not include any of the first 32 ASCII characters
(control codes), and the following characters are not allowed: " \ / : | < > * ?
File and directory names are case sensitive only through NFS. Thus, using case sensitive file or
directory names that only differ by letter casing should be avoided if these are to be accessed
through the SMB protocol.
Paths must be unique; therefore, you cannot create a path that already exists in the same file
system. The maximum length of a single usable path string is 244 characters for SMB (not includ-
ing drive and root path) and 255 characters for NFS. The .snapshot directory name is
reserved for use by the directory snapshot feature.


Virtual Interfaces
Clients communicate with one or more file systems through virtual interfaces. The virtual inter-
face service named "File" is used for this purpose. Built-in failover functionality allows clients to
be automatically moved to another interface if one interface fails during operation or in the pro-
cess of a non-disruptive system update.

Authentication and Authorization


Authentication services provide a layer of security by verifying users' credentials or applications
before allowing access to read or modify data.
The following authentication services are supported:
l Kerberos 5 - authentication and login
l Kerberos 5i - authentication with checksum verification
l Kerberos 5p - authentication with integrity checksum and encryption
l NTLMv2 authentication with passthrough
Use the GUI (Settings > Access) or the CLI puread account command to manage the
access to an Active Directory server.
For LDAP, use the GUI (Settings > Access) or the CLI pureds command.
Once the file system is joined to a directory service, clients can connect using their user cre-
dentials.

ACL and Mode_t Interoperability


On the FlashArray, files and directories are interoperable between the NFS version 3, NFS ver-
sion 4.1, and SMB protocols. Depending on the protocols used, ACL or mode_t are used for own-
ership and access control. Translation between ACL and mode_t is automatic so that items
appear with matching permissions through SMB and NFS.
NFS version 3 uses only mode_t which is translated to ACL for use with NFS version 4.1 or
SMB:
l The owner ACE will have the SID of the owner of the file.
l The group ACE will have the SID of the group of the file.

l The everyone ACE will have the SID everyone.


Only the non-inherited part of the ACL is stored and the full ACLs are then calculated in memory
upon usage.
When ACL is set, and the client is accessing the file using NFS version 3, it will be translated into
mode_t. ACL to mode_t translation is based on the owner of the node, the owner’s primary
group of the node, and whether Everyone has access.

Users and Security


Purity//FA comes with a single local administrative account named pureuser. The
account is password-protected, and may alternatively be accessed using a public-private key
pair.
Users can be added to the array either locally by creating and configuring a local user directly on
the array, or through Lightweight Directory Access Protocol (LDAP) by integrating the array with
a directory service, such as Active Directory or OpenLDAP.
The Pure Storage REST API uses authentication tokens to create sessions. All Purity//FA users
can generate their own API token and view only their own token.
Local user configuration and API token generation is performed through the GUI (select Set-
tings > Users) and CLI (pureadmin command).
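For example, a user might generate and then display their own API token from the CLI roughly as follows; the --api-token and --expose options reflect common pureadmin usage but should be treated as assumptions here:
pureadmin create --api-token
pureadmin list --api-token --expose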

Directory Service
Additional Purity//FA accounts can be enabled by integrating the array with an existing directory
service, such as Microsoft Active Directory or OpenLDAP, allowing multiple users to log in and
use the array and providing role-based access control.
Configuring and enabling the Pure Storage directory service changes the array to use the dir-
ectory when performing user account and permission level searches. If a user is not found loc-
ally, the directory servers are queried.
Directory service configuration is performed through the GUI (Settings > Users) and CLI
(pureds command).


Multi-factor Authentication
Multi-factor authentication (MFA) provides an additional layer of security used to verify users'
identities during login attempts.
For arrays with optional multi-factor authentication enabled, a third-party software package veri-
fies authentication requests for the array and also administers the array's authentication
policies.
Purity//FA supports MFA through the RSA SecurID® Authentication Manager and through
SAML2 single sign-on (SSO) with Microsoft® Active Directory Federation Services (AD FS),
Okta, Azure Active Directory (Azure AD), and Duo Security authentication identity management
systems.

Multi-factor Authentication through SAML2 Single Sign-on


Multi-factor authentication with identity providers requires that SAML2 single sign-on be con-
figured in both the Purity//FA service provider and the identity provider and also be enabled. The
identity provider supports MFA with certificates, with Microsoft Azure™ authentication, and with
other authentication methods.
See the SAML2 SSO section in the "Settings" chapter on page 263 for more information about
single sign-on and multi-factor authentication.

Multi-factor Authentication with RSA SecurID® Authentication


When multi-factor authentication is enabled on an array through RSA SecurID® Authentication
Manager, the following requirements apply to all user logins:
l Password authentication is suspended on the array.
l Users log into the array with a passcode obtained from the third-party authentication
management software (possibly combined with a personal PIN).
l Only local array users (not directory service users) are supported.
The third-party authentication management software does not define or configure user roles.
User roles are assigned in the configuration of local users on the array.
The puremultifactor command (see the Purity//FA CLI Reference Guide) configures multi-
factor authentication on the array and manages enabling and disabling multi-factor authentication.
Multi-factor authentication configuration and management are not supported in the GUI.
Configuration steps are also required on the RSA SecurID® Authentication Manager.


SSL Certificate
Purity//FA creates a self-signed certificate and private key when the system is started for the first
time.
SSL certificate configuration includes changing certificate attributes, creating new self-signed
certificates to replace existing ones, constructing certificate signing requests, importing cer-
tificates and private keys, and exporting certificates.
SSL certificate configuration is performed through the GUI (Settings > System) and CLI (pure-
cert command).

Industry Standards
Purity//FA includes the Pure Storage Storage Management Initiative Specification (SMI-S) pro-
vider.
The SMI-S initiative was launched by the Storage Networking Industry Association (SNIA) to
provide a unifying interface for storage management systems to administer multi-vendor
resources in a storage area network. The SMI-S provider in Purity//FA allows FlashArray admin-
istrators to manage the array using an SMI-S client over HTTPS.
SMI-S client applications optionally use the Service Location Protocol (SLP) as a directory ser-
vice to locate resources.
The SMI-S provider is optional and must be enabled before its first use. The SMI-S provider is
enabled and disabled through the GUI (Settings > System) and CLI (puresmis command).
For detailed information on the Pure Storage SMI-S provider, refer to the Pure Storage SMI-S
Provider Guide on the Knowledge site at https://support.purestorage.com.
For general information on SMI-S, refer to the Storage Networking Industry Association (SNIA)
website at https://www.snia.org.


Troubleshooting and Logging


Purity//FA continuously logs a variety of array activities, including performance summaries, hardware and operating status reports, and administrative actions that modify the array. For certain array state changes and events that are potentially significant to array operation, Purity//FA immediately generates alert messages and transmits them to one or more user-specified destinations for immediate action.
The FlashArray troubleshooting mechanisms assume that Pure Storage Technical Services can actively participate in helping organizations maintain "healthy" arrays. However, for organizations where operating procedures do not permit outside connections to equipment, troubleshooting reports can be directed to internal email addresses or displayed on a GUI or CLI console.

Alerts
Alert, audit record, and user session messages are retrieved from a list of log entries that are
stored on the array.
To conserve space, Purity//FA stores a reasonable number of log entries on the array. Older
entries are deleted from the log as new entries are added. To access the complete list of messages, configure the Syslog Server feature to forward all messages to your remote server.
An alert is triggered when there is an unexpected change to the array or to one of the Purity//FA
hardware or software components. Alerts are categorized by severity level as critical, warning,
or informational.
Alerts are displayed in the GUI and CLI. Alerts are also logged and transmitted to Pure Storage
Technical Services via the phone home facility. Furthermore, alerts can be sent as messages to
designated email addresses and as Simple Network Management Protocol-based (SNMP) traps
and informs to SNMP managers.
Phone Home Facility
The phone home facility provides a secure direct link between the array and the Pure Storage Technical Services team. The link is used to transmit log contents and alert messages to the Pure Storage Technical Services team.
If the phone home facility is disabled, the log contents are delivered when the facility is
next enabled or when the user manually sends the logs through the GUI or CLI.
Optionally configure the proxy host for HTTPS communication.


The phone home facility is managed through the GUI (Settings > System) and CLI (purearray command).
Proxies are configured through the GUI (Settings > System) and CLI (purearray setattr --proxy command).
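For example, a phone home proxy might be set from the CLI along the following lines; the proxy host name and port shown here are placeholders, not values taken from this guide:

purearray setattr --proxy https://proxy.example.com:8080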
Email
Alerts can be sent to designated email recipients. The list includes the built-in flashar-
[email protected] address, which cannot be deleted. Individual email
addresses can be added to and removed from the list, and transmission of alert messages to specific addresses can be temporarily enabled or disabled without removing them from the list.
The list of email alert recipients is managed through the GUI (Settings > System) and CLI (purealert command).
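As an illustration, an additional recipient could be added and later temporarily disabled from the CLI roughly as follows; the address is a placeholder, and the exact purealert subcommand syntax should be verified against the Purity//FA CLI Reference Guide:

purealert create storage-admins@example.com
purealert disable storage-admins@example.com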
SNMP Managers
If SNMP manager objects are configured on the array, each alert is transmitted to the
SNMP managers.
The SNMP manager objects are configured through the GUI (Settings > System) and CLI (puresnmp command).

Alerts are displayed through the GUI (Health > Alerts) and the CLI (puremessage command).

Audit Trail
The audit trail represents a chronological history of the Purity//FA GUI, Purity//FA CLI, or REST
API operations that a user has performed to modify the configuration of the array. For example, changing the size of a volume, deleting a host, changing the replication frequency of a protection group, or associating a WWN with a host generates an audit record.
Audit trails are displayed through the GUI (Settings > Access) and the CLI (pureaudit command).
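From the CLI, the audit trail can be reviewed with the standard list form of the command, for example:

pureaudit list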

User Session Logs


User session logs represent user login and authentication events performed in the Purity//FA
GUI, Purity//FA CLI, and REST API. For example, logging in to and out of the Purity//FA GUI,
attempting to log in to the Purity//FA CLI with an invalid password, or opening a Pure Storage
REST API session generates a user session log entry.


User sessions are displayed through the GUI (Settings > Users) and the CLI (puremessage
command).

SNMP Agent and SNMP Managers


The Simple Network Management Protocol (SNMP) is used by SNMP agents and SNMP managers to send and retrieve information. FlashArray supports SNMP versions v2c and v3.
In the FlashArray, the built-in SNMP agent has local knowledge of the array. The agent collects and organizes this array information and translates it via SNMP to or from the SNMP managers. The agent, named localhost, cannot be deleted or renamed. The managers are defined by creating SNMP manager objects on the array. The managers communicate with the agent via the standard port 161, and they receive notifications on port 162.
In the FlashArray, the localhost SNMP agent has two functions, namely, responding to GET-
type SNMP requests and transmitting alert messages.
The SNMP agent generates and transmits messages to the SNMP manager as traps or inform
requests (informs), depending on the notification type that is configured on the manager. An
SNMP trap is an unacknowledged SNMP message, meaning the SNMP manager does not
acknowledge receipt of the message. An SNMP inform is an acknowledged trap.
SNMPv2 uses a type of password called a community string to authenticate the messages that
are passed between the agent and manager. The community string is sent in clear text, which is
considered an unsecured form of communication. SNMPv3, on the other hand, supports secure
communication between the agent and manager through the use of authentication and privacy
encryption methods.
The SNMP agent and list of SNMP managers are managed through the GUI (Settings >
System) and CLI (puresnmp command).
Download the MIB through the GUI (Settings > System).

Remote Assist Facility


In many cases, the most efficient way to service an array or diagnose problems is through direct
intervention by a Pure Storage Technical Services representative.
The Remote Assist facility enables Pure Storage Technical Services to communicate with an
array, effectively establishing an administrative session for service and diagnosis. Optionally
configure the proxy host for HTTPS communication.


Remote assist sessions are controlled by the array administrator, who opens a secure channel
between the array and Pure Storage Technical Services, making it possible for a technician to
log in to the array. The administrator can check session status and close the channel at any
time.
Remote assist sessions are opened and closed through the GUI (Settings > System) and CLI
(purearray remoteassist command).
Proxies are configured through the GUI (Settings > System) and CLI (purearray setattr --proxy command).

Event Logging Facility


The Purity//FA event logging facility offers an always-on, accessible list of array events providing up to 90 days of array history. Event logs can be downloaded and used for audit, security monitoring, forensics, timeline, troubleshooting, or other purposes.
Event logs are downloaded through the GUI (Settings > System > Pure1 Support > Event Logs). The event logging level is configured through the CLI (purelog global setattr --logging-severity).
Event log entries are also sent to the remote syslog.
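As a hedged example, the logging level might be raised or lowered with a command along these lines; the severity value shown is an assumption, so confirm the accepted values in the Purity//FA CLI Reference Guide:

purelog global setattr --logging-severity info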

Syslog Logging Facility


The Purity//FA syslog logging facility generates messages deemed major events within the
FlashArray and forwards the messages to remote servers via TCP or UDP protocol. Purity//FA
generates syslog messages for three types of events:
l Alerts (purity.alert)
l Audit Trails (purity.audit)
l Tests (purity.test)
The syslog server output location is configured through the GUI (Settings > System) and CLI
(purearray setattr command).
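A remote syslog target can also be defined with the purelog create form shown in the IP address examples in the Conventions chapter; as a sketch (the server address, port, and target name below are placeholders):

purelog create --uri tcp://192.0.2.100:514 SYSLOG1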

Chapter 3:
Conventions
Purity//FA is the operating environment that queries and manages the FlashArray hardware, networking, and storage components. The Purity//FA software is distributed with the FlashArray.
Purity//FA provides two ways to administer the FlashArray: through the browser-based graphical user interface (Purity//FA GUI) and the command-driven interface (Purity//FA CLI).
Purity//FA follows certain naming and numbering conventions.

Object Names
Valid characters are letters (A-Z and a-z), digits (0-9), and the hyphen (-) character. The first and
last characters of the name must be alphanumeric, and the name must contain at least one letter
or '-'.
Most objects in Purity//FA that can be named, including host groups, hosts, volumes, protection groups, volume and protection group suffixes, SNMP managers, and subnets, can be 1-63 characters in length.
Array names can be 1-56 characters in length. The array name length is limited to 56 characters
so that the names of the individual controllers, which are assigned by Purity//FA based on the
array name, do not exceed the maximum allowed by DNS.
Names are case-insensitive on input. For example, vol1, Vol1, and VOL1 all represent the
same volume. Purity//FA displays names in the case in which they were specified when created
or renamed.
Pods and volume groups provide a namespace with unique naming conventions.
All objects in a pod have a fully qualified name that includes the pod name and object name. The
fully qualified name of a volume in a pod is POD::VOLUME, with double colons (::) separating
the pod name and volume name. The fully qualified name of a protection group in a pod is
POD::PGROUP, with double colons (::) separating the pod name and protection group name.
For example, the fully qualified name of a volume named vol01 in a pod named pod01 is
pod01::vol01, and the fully qualified name of a protection group named pgroup01 in a pod
named pod01 is pod01::pgroup01.


If a protection group in a pod is configured to asynchronously replicate data to a target array, the
fully qualified name of the protection group on the target array is POD:PGROUP, with single
colons (:) separating the pod name and protection group name. For example, if protection group
pod01::pgroup01 on source array array01 asynchronously replicates data to target array
array02, the fully qualified name of the protection group on target array array02 is pod01:pgroup01.
All objects in a volume group have a fully qualified name that includes the volume group name
and the object name, separated by a forward slash (/). For example, the fully qualified name of
a volume named vol01 in a volume group named vgroup01 is vgroup01/vol01.
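Fully qualified names are used directly on the CLI when creating or addressing objects. As an illustrative sketch (the pod, volume group, volume names, and sizes here are placeholders, not objects from this guide):

purevol create --size 1T pod01::vol01
purevol create --size 1T vgroup01/vol01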

Volume Sizes
Volume sizes are specified as an integer, optionally followed by one of the suffix letters K, M, G, T, or P, denoting KiB, MiB, GiB, TiB, and PiB, respectively, where "Ki" denotes 2^10, "Mi" denotes 2^20, and so on. If a suffix letter is not specified, the size is expressed in 512-byte sectors.
Volumes must be between one megabyte and four petabytes in size. If a volume size of less
than one megabyte is specified, Purity//FA adjusts the volume size to one megabyte. If a volume
size of more than four petabytes is specified, the Purity//FA command fails.
Volume sizes cannot contain digit separators. For example, 1000g is valid, but 1,000g is not.
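For example, a 500 GiB volume could be created with a command along these lines (the volume name is a placeholder):

purevol create --size 500G vol01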

IP Addresses
FlashArray supports two versions of the Internet Protocol: IP Version 4 (IPv4) and IP Version 6
(IPv6). IPv4 and IPv6 addresses follow the addressing architecture set by the Internet Engineering Task Force.
An IPv4 address consists of 32 bits and is entered in the form ddd.ddd.ddd.ddd, where ddd
is a number ranging from 0 to 255 representing a group of 8 bits. Here are some examples:

puredns setattr --domain mydomain.com --nameservers 192.0.2.10


purelog create --uri tcp://192.0.2.100:614 LOGSERVER2
purenetwork eth setattr --address 192.0.2.0/24 ct0.eth1


purenetwork eth setattr --address 192.0.2.0 --netmask 255.255.255.0 ct0.eth1


puresnmp create --host 192.0.2.255 --community SNMPMANAGER1
puresubnet create --prefix 192.0.2.0/24 --vlan 100 ESXHost001

An IPv6 address consists of 128 bits and is written in the form xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where xxxx is a hexadecimal number representing a group of 16 bits. Colons separate each 16-bit field. Leading zeros can be omitted in each field. Furthermore, consecutive fields of zeros can be shortened by replacing the zeros with a double colon (::). For example, IPv6 address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 becomes 2001:db8:85a3::8a2e:370:7334.
To use an IPv6 address in a URL or URI, enclose the entire address in square brackets ([]). When specifying a URL or URI with a port number, append the port number after the closing bracket. Here are some examples:

puredns setattr --domain mydomain.com --nameservers 2001:db8:85a3::8a2e:370:7334


purelog create --uri tcp://[2001:db8:85a3::8a2e:370:7334]:614 LOGSERVER2
purenetwork eth setattr --address 2001:db8:85a3::8a2e:370:7334/64 ct0.eth1
purenetwork eth setattr --address 2001:db8:85a3::8a2e:370:7334 --netmask 64 ct0.eth1
puresnmp create --host [2001:db8:85a3::8a2e:370:7334] --community SNMPMANAGER1
puresubnet create --prefix 2001:db8:85a3::/64 --vlan 100 ESXHost001

Storage Network Addresses


A Purity//FA host consists of a host name and one or more IQNs, NQNs, or WWNs. The host cannot communicate with the array until at least one IQN, NQN, or WWN has been associated with it.
iSCSI Qualified Names (IQNs) follow the naming standards set by the Internet Engineering Task Force (see RFC 3720). For example, iqn.2016-01.com.example:flasharray.491b30d0efd97f25.
NVMe Qualified Names (NQNs) follow the naming standards set by NVM Express. For example, nqn.2016-01.com.example:flasharray.491b30d0efd97f25.
Fibre Channel World Wide Names (WWNs) follow the naming standards set by the IEEE Standards Association. WWNs consist of eight pairs of case-insensitive hexadecimal numbers, optionally separated by colons. For example, 21:00:00:24:FF:4C:C5:49.


Like hosts, IQNs, NQNs, and WWNs must be unique in an array. A host can be associated with
multiple storage network addresses, but a storage network address can only be associated with
one host.
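As a hedged sketch, a storage network address might be associated with a host from the CLI roughly as follows; the host name is a placeholder, and the --iqnlist and --wwnlist flag names are assumptions that should be checked against the Purity//FA CLI Reference Guide:

purehost setattr --iqnlist iqn.2016-01.com.example:flasharray.491b30d0efd97f25 host01
purehost setattr --wwnlist 21:00:00:24:FF:4C:C5:49 host01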

Chapter 4:
GUI Overview
The Purity//FA graphical user interface (GUI) is a browser-based system used to view and
administer the FlashArray. See Figure 4-1.
Figure 4-1. Purity//FA GUI

The Purity//FA GUI contains the following pages:


Dashboard
Represents a graphical overview of the array, including storage capacity, recent alerts,
hardware status, and performance metrics.
Storage
Displays storage objects on the array, including hosts, host groups, volumes, protection
groups, volume groups, pods, file systems, and managed directories. View and manage
the storage objects and the connections between them.
Protection
Displays the configuration of replication and data protection features. View and manage
snapshots, policies, protection groups, ActiveCluster, and ActiveDR.
Analysis


Displays historical array information, including storage capacity and I/O performance metrics, from various viewpoints.
Health
Displays array health, including hardware status, parity, alerts, and connections.
Settings
Displays array-wide system and network settings. Manage array-wide components,
including network interfaces, system time, connectivity and connection configurations,
and alert settings. Also display user accounts, audit trails, user session logs, and software details.

GUI Navigation
The dark gray navigation pane that appears along the left side of the Purity//FA GUI contains
links to the GUI pages. See Figure 4-2.


Figure 4-2. Purity//FA GUI - Navigation Pane

Click the Pure Storage® logo at the top of the navigation pane to toggle between the expanded
and collapsed views of the pane. Just below the Pure Storage logo are links to the GUI pages.


Click a link to analyze or configure the information that appears in the page to its right. For
example, click the Storage link to view information about the FlashArray storage objects, such as hosts, host groups, volumes, protection groups, volume groups, pods, file systems, and directories. The navigation pane includes links to the following external sites:
Help
Accesses the FlashArray user guides and launches the Pure1® community and Pure
Storage Technical Services portals.
End User Agreement
Displays the terms of the Pure Storage End User Agreement (EULA). For more information about the Pure Storage End User Agreement, refer to End User Agreement (EULA).
Terms
Launches the Pure Storage Product End User Information page, which includes a link to
the Pure Storage End User Agreement.
Log Out
Logs the current user out of the Purity//FA GUI.


The lower portion of the navigation pane displays FlashArray and Purity//FA version information,
the SafeMode status, and the name of the user who is currently logged into the Purity//FA GUI.
The pane to the right of the navigation pane displays the information and configuration options
for the selected GUI link. The information on each page is organized into panels, charts, and
lists. See Figure 4-3.
Figure 4-3. Purity//FA GUI - Page and Buttons

The alert icons that appear to the far right of the title bar indicate the number of recent Warning
and Critical alerts, respectively. A recent alert represents one that Purity//FA saw within the past
24 hours and still considers an open issue that requires attention. Click anywhere in an alert row
to display additional alert details. To analyze the alerts in more detail, select Health > Alerts.
The Search field (magnifying glass) in the upper-right corner of the screen allows you to quickly
search for existing hosts, host groups, volumes, protection groups, volume groups, and pods on


the array. Type any part of the name (case-insensitive) in the field to display all matches, and
then click the name in the list of results to view its details in the Storage page. See Figure 4-4.
Figure 4-4. Purity//FA GUI - Navigation - Quick Search

Various panels, such as Storage > Volumes and Health > Alerts, contain lists of information.
The total number of rows in a list output is displayed in the upper-right corner of the list. Some
lists can be very large, extending beyond hundreds of rows. See Figure 4-5.
Figure 4-5. Purity//FA GUI - Navigation - List Output

Pagination divides a large list output into discrete pages. Pagination is enabled by default and is only in effect if the number of lines in the list output exceeds 10 rows. To move through a paginated list, click < to go to the previous page, or click > to go to the next page.

End User Agreement (EULA)


The Pure Storage End User Agreement (EULA) represents a contract between Pure Storage
and users of Pure Storage software. The most recent version of the agreement governs use of
the Purity//FA software and can be found at http://www.purestorage.com/legal/productenduserinfo.html. To view the terms of the Pure Storage End User Agreement through the Purity//FA GUI, click End User Agreement. The name and title of the
individual who accepted the terms of the agreement appear at the bottom of the End User


Agreement pop-up window. Click Download Agreement to download a copy of the End User
Agreement from the array to your local machine. Accept the terms of the agreement by completing the fields at the bottom of the agreement and clicking Accept. Only array administrators
(i.e., users with the Array Admin role) have the necessary permissions to complete the fields at
the bottom of the agreement and click Accept.
Accepting the agreement requires the following information:
l Name - Full legal name of the individual at the company who has the authority to
accept the terms of the agreement.
l Title - Individual's job title at the company.
l Company - Full legal name of the entity.
The name, title, and company name must each be between 1 and 64 characters in length.
If the agreement is not accepted, Purity//FA generates an alert notifying all Purity//FA alert watchers that the agreement is pending acceptance. A warning alert also appears in the Purity//FA GUI. Pure Storage is not notified of the alert. The alert remains open until the agreement is accepted. Furthermore, whenever a user logs in to the Purity//FA GUI, the End User Agreement window pops up as a reminder that the agreement is pending acceptance.
Once the terms of the agreement have been accepted, Purity//FA closes the alert and stops generating the End User Agreement pop-up window.

GUI Login
Logging in to the Purity//FA GUI requires a virtual IP address or fully-qualified domain name (FQDN) and a Purity//FA login username and password; this information is provided during the FlashArray installation. FlashArray is installed with one administrative account with the username pureuser. The initial password for the account is pureuser. For security purposes, Pure Storage recommends that the password for the account be changed immediately upon first login through the pureadmin CLI command. Pure Storage tests the Purity//FA GUI with the two most recent versions of the following web browsers:
l Apple Safari
l Google Chrome
l Microsoft Edge


l Microsoft Internet Explorer (IE)


l Mozilla Firefox
To launch the Purity//FA GUI login screen, open a browser and type the FlashArray virtual IP
address or FQDN into the address bar, and press Enter. See Figure 4-6.
Figure 4-6. Purity//FA GUI - Login Screen

Logging in to the Purity//FA GUI


Logging in with Password Authentication
Log in to the Purity//FA GUI to view and administer the FlashArray.
1 Open a web browser.
2 Type the virtual IP address or fully-qualified domain name of the FlashArray in the address
bar and press Enter. The Purity//FA GUI login screen appears.
3 In the Username field, type the FlashArray user name. For example, pureuser.
4 In the Password field, type the password for the FlashArray user. For example, pureuser.
5 Click Log In to log in to the Purity//FA GUI.
6 If the Pure Storage End User Agreement (EULA) has not been accepted, the End User
Agreement pop-up window appears. Accept the terms of the agreement by completing the
fields at the bottom of the agreement and clicking Accept. Note that only array administrators
have the necessary permissions to complete the fields at the bottom of the agreement and
click Accept. The agreement should be signed by individuals at the company who have the
authority to accept the terms of the agreement. For more information about the Pure Storage
End User Agreement, refer to End User Agreement (EULA).


Logging in with SAML2 SSO Authentication


See Figure 4-7 for the login screen for an array with SAML2 single sign-on (SSO) authentication
enabled. Login credentials are authenticated by a third-party identity provider, such as
Microsoft® Active Directory Federation Services.
Figure 4-7. Purity//FA GUI - Login Screen with SAML2 SSO

1 Open a new web browser tab.


2 Type the virtual IP address or fully-qualified domain name of the FlashArray in the address
bar and press Enter. The Purity//FA GUI login screen appears.
3 Click Click for Single-Sign-On.
4 The first time a user logs in, and after each SSO session expires, the user must provide their AD FS credentials on the AD FS login page.
5 If your organization requires a second authentication method, such as Microsoft® Azure™
authentication, a certificate, or other method, you are prompted for that also.
Follow your organization's instructions. This step depends on customizations your organization has applied to the authentication server.

Note: In case the SAML2 SSO service is temporarily unavailable, an array administrator
(such as pureuser) can access the array through the Local Access link on the login
page. This link is for emergency use only.


Logging in with RSA SecurID® Authentication


For arrays with RSA SecurID® multi-factor authentication enabled, log in with the passcode
obtained from the multi-factor authentication software. When multi-factor authentication is
enabled, the login prompt is passcode, instead of password. See Figure 4-8.
Figure 4-8. Purity//FA GUI - Login Screen with Multi-factor Authentication

To log in, you supply your passcode, which is based on an RSA SecurID tokencode. See Figure
4-9.
Figure 4-9. Example RSA SecurID Tokencode

Contact your RSA SecurID administrators for your organization's passcode instructions. To log
into a FlashArray with multi-factor authentication enabled:
1 Open a web browser.
2 Type the virtual IP address or fully-qualified domain name of the FlashArray in the address
bar and press Enter. The Purity//FA GUI login screen appears.
3 In the Username field, type the FlashArray user name. For example, pureuser.
4 In the Passcode field, type your passcode obtained from the third-party authentication software.


5 Click Log In to proceed.

Accepting the Terms of the End User Agreement (EULA)


Only array administrators have the necessary permissions to complete the fields at the bottom of the agreement and click Accept. The agreement should be signed by individuals at the company who have the authority to accept the terms of the agreement.
1 From the Purity//FA GUI, click End User Agreement.
2 Read the terms of the agreement.
3 At the bottom of the agreement, complete the following fields:
l Name - Full legal name of the individual at the company who has the authority to
accept the terms of the agreement.
l Title - Individual's job title at the company.
l Company - Full legal name of the entity.
The name, title, and company name must each be between 1 and 64 characters in length.

4 Click Accept to accept the terms of the agreement.

Chapter 5:
Dashboard
The Dashboard page displays a running graphical overview of the array's storage capacity or
effective used capacity (EUC), performance, and hardware status.
Figure 5-1. Dashboard

On subscription storage, the Capacity panel displays metrics based on effective used capacity.
Figure 5-2. Dashboard Capacity Pane on Subscription Storage

The Dashboard page contains the following panels and charts:


l Capacity
l Recent Alerts


l Hardware Health
l Performance Charts

Capacity
The Capacity panel displays array size and storage consumption or effective used capacity
details. The percentage value in the center of the wheel is calculated as Used/Total. All capa-
city values are rounded to two decimal places.

Purchased Arrays
On a purchased array, the capacity wheel displays the percentage of array space occupied by data and metadata. The wheel is broken down into the following components:
System
Physical space occupied by internal array metadata.
Replication Space
Physical system space used to accommodate pod-based replication features, including failovers, resync, and disaster recovery testing.
Shared Space
Physical space occupied by deduplicated data, meaning that the space is shared with
other volumes and snapshots as a result of data deduplication.
Snapshots
Physical space occupied by data unique to one or more snapshots.
Unique
Physical space that is occupied by data of both volumes and file systems after data
reduction and deduplication, but excluding metadata and snapshots.
Empty
Unused space available for allocation.
The capacity panel also displays the following information for a purchased array:
Data Reduction


Ratio of mapped sectors within a volume versus the amount of physical space the
data occupies after data compression and deduplication. The data reduction ratio
does not include thin provisioning savings.
For example, a data reduction ratio of 5:1 means that for every 5 MB the host writes to
the array, 1 MB is stored on the array's flash modules.
Total Reduction
Ratio of provisioned sectors within a volume versus the amount of physical space the
data occupies after reduction via data compression and deduplication and with thin
provisioning savings. Total reduction is data reduction with thin provisioning savings.
For example, a total reduction ratio of 10:1 means that for every 10 MB of provisioned
space, 1 MB is stored on the array's flash modules.
Used
Physical storage space occupied by volume, snapshot, shared space, and system
data.
Total
Total physical usable space on the array.
Replacing a drive may result in a dip in usable space. This is intended behavior. RAID
striping splits data across an array for redundancy purposes, spreading a write across
multiple drives. A newly added drive cannot use its full capacity immediately but must
stay in line with the available space on the other drives as writes are spread across
them. As a result, usable capacity on the new drive may initially be reported as less
than the amount expected because the array will not be able to write to the unal-
locatable space. Over time, usable capacity fluctuations will occur, but as data is writ-
ten to the drive and spreads across the array, usable capacity will eventually return to
expected levels.
Size
Total provisioned size of all volumes. Represents storage capacity reported to hosts.

Subscription Storage
The capacity panel displays the following information for subscription storage:
Unique
Effective used capacity occupied by data of both volumes and file systems after removing clones, but excluding metadata and snapshots.
Snapshots
Effective used capacity consumed by data unique to one or more snapshots.
Shared


Effective used capacity consumed by cloned data, meaning that the space is
shared with cloned volumes and snapshots as a result of data deduplication.
Used
Total effective used capacity containing user data, including Shared, Snapshots,
and Unique storage.
Estimated Total
Estimated total effective used capacity available from a host's perspective, including both consumed and unused storage.
Provisioned Size
The sum of the sizes of all volumes on the array.
Displays a '-' sign for arrays when a file system on the array has unlimited provisioned size.
Virtual
The amount of data that the host has written to the volume as perceived by the
array, before any data deduplication or compression.

Recent Alerts
The Recent Alerts panel displays a list of alerts that Purity//FA saw within the past 24 hours and
considers open issues that require attention. The list contains recent alerts of all severity levels.
To view the details of an alert, click the alert message.
To view a list of all alerts including ones that are no longer open, go to the Health > Alerts page.

Hardware Health
The Hardware Health panel displays the operational state of the array controllers, flash modules, and NVRAM modules.
To analyze the hardware components in more detail, click the image or go to the Health > Hardware page.


Performance Charts
The performance charts display I/O performance metrics in real time.
Figure 5-3. Dashboard - Performance Graphs

The performance metrics are displayed along a scrolling graph; incoming data appear along the right side of each graph every few minutes as older numbers drop off the left side. Each performance chart includes Read (R), Write (W), and Mirrored Write (MW), representing the most recent data samples rounded to two decimal places.
Hover over any of the charts to display metrics for a specific point in time.
The performance panel includes Latency, IOPS, and Bandwidth charts.
Latency
The Latency chart displays the average latency times for various operations.
l Read Latency (R) - Average arrival-to-completion time, measured in milliseconds, for
a read operation.
l Write Latency (W) - Average arrival-to-completion time, measured in milliseconds,
for a write operation.
l Mirrored Write Latency (MW) - Average arrival-to-completion time, measured in milliseconds, for a write operation. Represents the sum of writes from hosts into the


volume's pod and from remote arrays that synchronously replicate into the volume's
pod. The MW value only appears if there are writes through ActiveCluster replication
being processed.
IOPS
The IOPS (Input/output Operations Per Second) chart displays I/O requests processed
per second by the array. This metric counts requests per second, regardless of how much
or how little data is transferred in each.
l Read IOPS (R) - Number of read requests processed per second.
l Write IOPS (W) - Number of write requests processed per second.
l Mirrored Write IOPS (MW) - Number of write requests processed per second. Represents the sum of writes from hosts into the volume's pod and from remote arrays
that synchronously replicate into the volume's pod. The MW value only appears if
there are writes through ActiveCluster replication being processed.
Bandwidth
The Bandwidth chart displays the number of bytes transferred per second to and from all
file systems. The data is counted in its expanded form rather than the reduced form
stored in the array to truly reflect what is transferred over the storage network. Metadata
bandwidth is not included in these numbers.
l Read Bandwidth (R) - Number of bytes read per second.
l Write Bandwidth (W) - Number of bytes written per second.
l Mirrored Write Bandwidth (MW) - Number of bytes written into the volume's pod per
second. Represents the sum of writes from hosts into the volume's pod and from
remote arrays that synchronously replicate into the volume's pod.
By default, the performance charts display performance metrics for the past 1 minute. To display
more than 1 minute of historical data, select Analysis > Performance.

Note About the Performance Charts


The Dashboard and Analysis pages display the same latency, IOPS, and bandwidth performance charts, but the information is presented differently between the two pages.
In the Dashboard page:
l The performance charts are updated once every 30 seconds.
l The performance charts display up to 30 days' worth of historical data.
l The Latency chart displays only internal latency times. SAN times are not included.


In the Analysis page:


l The performance charts are updated once every minute.
l The performance charts display up to one year's worth of historical data.
l The performance charts can be further dissected by I/O type.
l The Latency chart displays both internal latency times and SAN times.

Chapter 6:
Storage
The Storage page displays configuration, space, and snapshot details for all types of FlashArray
storage objects.
The metrics that appear near the top of each page represent the capacity and consumption
details for the selected storage object. For example, the Storage > Array page displays array-
wide capacity usage. Likewise, the Storage > Pods page displays the capacity and consumption
details for all volumes within all pods in the FlashArray. Drill down to a specific storage object to
view additional details. For example, drill down to a specific volume to see its creation date and
unique serial number. See Figure 6-1 for the Storage tab on a purchased array.
Figure 6-1. Storage

On subscription storage, the reported metrics are based on effective used capacity (EUC). See
Figure 6-2 for the Storage tab on a subscription storage and "Subscription Storage" on page 85
for information on subscription capacity metrics.


Figure 6-2. Evergreen//One Storage

FlashArray storage objects include:


l Array
l Hosts
l Volumes
l Pods
l File Systems
l Policies


Array
The Storage > Array page displays a summary of all storage components on the array. See Fig-
ure 6-3.
Figure 6-3. Storage > Array

The array summary panel (with the array name in the header bar) contains a series of rectangles
(technically known as hero images) representing the storage components of the array. The numbers inside each hero image represent the number of objects created for each of the respective components. Click a rectangle to jump to the page containing the details for that particular storage component.
The "Connecting Arrays" on page 188 and "Offload Targets" on page 182 panes now are under
the Protection > Arrays tab.


Hosts and Host Groups


The Storage > Hosts page displays summary information for each host and host group on the
array.

Hosts
The Hosts panel displays summary information, including host group association, interface, connected volumes (both shared and private), provisioned size, and either storage consumption or effective used capacity for each host on the array.
Host names that begin with an @ symbol represent app hosts. For more information about app
hosts, see "Installed Apps" on page 355. See Figure 6-4.
Figure 6-4. Storage > Hosts


From the Hosts page, click a host name to display its details. Figure 6-5 displays the details for
host ESXi-GRP-Cluster02-H0001, which is connected to one host group (ESXi-GRP-
Cluster02-HG003) and four volumes, and is a member of protection group PG002.
Figure 6-5. Hosts Page

The Host details page contains the following panes:


Connected Volumes
Displays a list of volumes that have private connections to the host or shared connections
through a host group.
Host Ports
Displays a list of the iSCSI Qualified Names (IQNs), NVMe Qualified Names (NQNs), or
Fibre Channel World Wide Names (WWNs) of the ports associated with the host.
Protection Groups
Displays any protection groups to which the host belongs.
Details


Displays additional details specific to the selected host, including CHAP credentials and
host personality.

Host Groups
In the Storage > Hosts page, the Host Groups panel displays summary information, including host associations, connected (shared) volumes, provisioned size, and either storage consumption or effective used capacity, for each host group on the array.
From the Hosts page, click a host group name to display its details. Figure 6-6 displays the
details for host group ESXi-GRP-Cluster02-HG003, which is connected to two hosts and
three volumes.
Figure 6-6. Host Group Connected to Two Hosts and Three Volumes

The Host Group details page contains the following panes:


Member Hosts
Displays a list of hosts that have been added to the host group.


Connected Volumes
Displays a list of volumes that have shared connections to the host group.
Protection Groups
Displays any protection groups to which the host group belongs.

Creating Hosts
Create hosts to access volumes on the array. Create a single host or multiple hosts at one time.
To create a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the menu icon and select Create... . The Create Host dialog box
appears.
3 In the Name field, type the name of the new host.
4 In the Personality field, select the name of the host operating system or virtual memory system. If your host personality does not appear in the list, select None.
5 To add the new host to a protection group, leave the Add to protection group after hosts are
created box checked (default; recommended).
6 Click Create.
7 If you selected the Add to protection group after hosts are created box, the Add to Protection Group dialog opens.
a To add the new host to an existing protection group, select that protection group in the Add host(s) to field.
b To add the new host to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new protection group.
c Click Create.
To create multiple hosts:
1 Select Storage > Hosts.
2 In the Hosts panel, click the menu icon and select Create... . The Create Host dialog box
appears.
3 Click Create Multiple…. The Create Multiple Hosts dialog box appears.
4 Complete the following fields:


l Name: Specify the template used to create the host names. Host names cannot consist of all numeric values.
Place the hash (#) symbol where the numeric part of the host name should appear.
When Purity//FA creates the host names, the hash symbol is replaced with the host
number, beginning with the start number specified.
l In the Personality field, select the name of the host operating system or virtual memory system. If your host personality does not appear in the list, select None.
l Start Number: Enter the host number used to create the first host name.
l Count: Enter the number of hosts to create.
l Number of Digits: Enter the minimum number of numeric digits of the host number. If the number of digits is greater than the number of digits in the start number, the host number is padded with leading zeros.
l Add to protection group after hosts are created: To add the new hosts to a protection group, leave this box checked (default; recommended).
5 Click Create.
6 If you selected the Add to protection group after hosts are created box, the Add to Protection Group dialog opens.
a To add the new hosts to an existing protection group, select that protection group in the Add host(s) to field.
b To add the new hosts to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new protection group.
c Click Create.

Creating Host Groups


Create a host group if several hosts share access to the same volume(s). After you create the
host group, add the hosts to the host group and then establish a shared connection between the
volumes and the host group. Once a shared connection is established, all of the hosts within the
host group share access to the volumes.
Create a single host group or multiple host groups at one time.
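For comparison, the same workflow can be scripted from the Purity//FA CLI. The following is a hedged sketch only; the object names are placeholders and the --hostlist and connect --vol options are assumptions to verify against the Purity//FA CLI Reference Guide:

purehgroup create --hostlist host01,host02 hgroup01
purehgroup connect --vol vol01 hgroup01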
To create a host group:
1 Select Storage > Hosts.


2 In the Host Groups panel, click the menu icon and select Create... . The Create Host Group
dialog box appears.
3 In the Name field, type the name of the new host group.
4 To add the new host group to a protection group, leave the Add to protection group after
host groups are created: box checked (default; recommended).
5 Click Create.
6 If you selected the Add to protection group after host groups are created box, the Add to
Protection Group dialog opens.
a To add the new host group to an existing protection group, select that protection group in the Add host group(s) to field.
b To add the new host group to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new protection group.
c Click Create.
To create multiple host groups:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the menu icon and select Create... . The Create Host Groups
dialog box appears.
3 Click Create Multiple…. The Create Multiple Host Groups dialog box appears.
4 Complete the following fields:
l Name: Specify the template used to create the host group names. Host group names
cannot consist of all numeric values.
Place the hash (#) symbol where the numeric part of the host group name should
appear. When Purity//FA creates the host group names, the hash symbol is replaced
with the host group number, beginning with the start number specified.
l Start Number: Enter the number used to create the first host group name.
l Count: Enter the number of host groups to create.
l Number of Digits: Enter the minimum number of numeric digits of the host group number. If the number of digits is greater than the number of digits in the start number, the host group number is padded with leading zeros.
l Add to protection group after host groups are created: To add the new host groups
to a protection group, leave this box checked (default; recommended).
5 Click Create.


6 If you selected the Add to protection group after host groups are created box, the Add to
Protection Group dialog opens.
a To add the new host groups to an existing protection group, select that protection group in the Add host group(s) to field.
b To add the new host groups to a new protection group, click Create Protection Group.
In the Create Protection Group dialog, enter the pod name and the name of the new protection group.
c Click Create.

Configuring Host Ports


Hosts communicate with the array via one or more IQNs, NQNs, or WWNs. The initiators are
either discovered by Purity//FA or manually assigned.
To associate iSCSI IQNs with a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host name to drill down to its details.
3 In the Host Ports panel, click the menu icon and select Configure IQNs.... The Configure
iSCSI IQNs dialog box appears.
4 In the Port IQNs field, type the IQNs in comma-separated format.
5 Click Add.
To associate NVMe-oF NQNs with a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host name to drill down to its details.
3 In the Host Ports panel, click the menu icon and select Configure NQNs.... The Configure
NVMe-oF NQNs dialog box appears.
4 In the Port NQNs field, type the NQNs in comma-separated format.
5 Click Add.

Note: Configuring NVMe-oF NQNs is not supported on Cloud Block Store.


To associate Fibre Channel WWNs with a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host name to drill down to its details.


3 In the Host Ports panel, click the menu icon and select Configure Fibre Channel WWNs....
The Configure Fibre Channel WWNs dialog box appears.
The WWNs in the Existing WWNs column of the dialog box represent the WWNs that
have been discovered by Purity//FA (i.e., the WWNs of computers whose initiators have
"logged in" to the array).
4 Click an existing WWN in the left column to add it to the Selected WWNs column.
Alternatively, to manually add a WWN, click Enter WWNs Manually and type the WWNs,
in comma-separated format, in the Port WWNs field.
5 Click Add.

Note: Configuring Fibre Channel WWNs is not supported on Cloud Block Store.

Adding Hosts to Host Groups


Adding a host to a host group automatically establishes connections between it and all volumes
with shared connections to the host group.
To add a host to a host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the host group to drill down to its details.
3 In the Member Hosts panel, click the menu icon and select Add.... The Add Hosts to Host Group dialog box appears. The hosts in the Existing Hosts column of the dialog box represent the hosts that are not associated with any host groups and are eligible to be connected to the host group.
4 Click an existing host in the left column to add it to the Selected Hosts column.
5 Click Add.

Configuring CHAP Authentication


To configure iSCSI CHAP authentication:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host for which you want to configure CHAP authentication.


3 In the Details panel, click the menu icon and select Configure CHAP.... The Configure
CHAP dialog box appears.
4 Complete the following fields:
l Host User: Set the host user name for CHAP authentication.
l Host Password: Enter the host password for CHAP authentication. The password
must be between 12 and 255 characters (inclusive) and cannot be the same as the
target password.
l Target User: Set the target user name for CHAP authentication.
l Target Password: Enter the target password for CHAP authentication. The password must be between 12 and 255 characters (inclusive) and cannot be the same as the host password.
5 Click Save. To disable CHAP, clear the fields in the Configure CHAP dialog box and click
Save.

Configuring Host Personalities


To configure host personalities:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host for which you want to configure the host personality.
3 In the Details panel, click the menu icon and select Set Personality.... The Configure Personality dialog box appears.
4 Select the name of the host operating system or virtual memory system. If your host personality does not appear in the list, select None.
5 Click Save.

Adding Preferred Arrays


The preferred array must already be connected for synchronous replication.
To add a preferred array:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host for which you want to add a preferred array.


3 In the Details panel, click the menu icon and select Add Preferred Arrays.... The Add Preferred Arrays dialog box appears.
4 From the Available Arrays column, click the arrays you want to add as preferred arrays for
the host.
5 Click Add.

Removing Preferred Arrays


To remove a preferred array:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host from which you want to remove a preferred array.
3 In the Details panel, click the X for the array you want to remove as a preferred array. The
Remove Preferred Array dialog box appears.
4 Click Remove.

Renaming a Host
To rename a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the rename icon for the host you want to rename. The Rename Host
dialog box appears.
3 In the Name field, enter the new name of the host.
4 Click Rename.

Deleting a Host
You cannot delete a host if it is connected to volumes, either through private or shared connections. Before deleting a host, break all of its connections.
To delete a host:
1 Select Storage > Hosts.


2 In the Hosts panel, click the delete icon for the host you want to delete. The Delete Host dialog box appears.
3 Click Delete. Any volumes that were connected to the host are disconnected, and the
deleted host no longer appears in the Host panel.

Renaming a Host Group


To rename a host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the rename icon for the host group you want to rename. The
Rename Host Group dialog box appears.
3 In the Name field, enter the new name of the host group.
4 Click Rename.

Deleting a Host Group


You cannot delete a host group if it is connected to volumes or hosts. Before deleting a host group, disconnect all hosts and volumes from the host group.
To delete a host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the delete icon for the host group you want to delete. The
Delete Host Group dialog box appears.
3 Click Delete.

Removing a Host from a Host Group


Removing a host from a host group disconnects all volumes with shared connections from the
removed host. The removed host’s private connections are unaffected.
1 Select Storage > Hosts.
2 In the Host Groups panel, click the host group to drill down to its details.


3 From the Member Hosts panel, click the remove host (x) icon next to the host you want to disconnect. The Remove Host dialog box appears.
4 Click Remove.

Removing a Host Port


Caution: Removing a host port may disrupt connectivity to some volumes.
To disassociate an IQN, NQN, or WWN from a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host from which you want to remove the host ports.
3 From the Host Ports panel, click the remove port (x) icon next to the IQN, NQN, or WWN you
want to remove. The Remove Selected Port dialog box appears.
4 Click Remove. Purity//FA immediately breaks the association and any communication with
the array via the initiator associated with the removed IQN, NQN, or WWN.

Downloading Host Details


Downloading host details generates a comma-separated value text file containing host summary
information.
To download host details:
1 Select Storage > Hosts.
2 In the Hosts panel, click the menu icon and select Download CSV to save the hosts.csv
file to your local machine.

Downloading Host Group Details


Downloading host group details generates a comma-separated value text file containing host
group summary information.
To download host group details:
1 Select Storage > Hosts.


2 In the Host Groups panel, click the menu icon and select Download CSV to save the host_groups.csv file to your local machine.


Volumes
The Storage > Volumes page displays summary information for all volumes on the array. See
Figure 6-7.
Figure 6-7. Storage > Volumes Page

The Volumes chapter contains the following sections:


l Volumes Overview
l Working with Volumes
l Destroying and Eradicating Volumes
l Working with Volume-Host Connections
l Working with Volume Snapshots
l Working with Volume Groups
l Destroying and Eradicating Volume Groups

Volumes Overview
The Volumes panel displays a list of all volumes on the array.
Volume names that include a double colon (::) represent volumes inside pods. Volume names
that include a forward slash (/) represent volumes inside volume groups.
Volume names that begin with an @ symbol represent app volumes. For more information about
app volumes, see "Installed Apps" on page 355.
The Volumes page also displays volumes, volume snapshots, and volume groups that have
been destroyed and are pending eradication.
The Volumes and Volume Groups panels are organized into the following three tabs:
l Space - Displays information about the provisioned (virtual) size, snapshots, and
either physical storage consumption or effective used capacity for each volume or
volume group.
l QoS - Displays the bandwidth limit, the IOPS limit, the last priority adjustment, and pri-
ority of each volume or volume group (the priority field appears only in the Volume
panel). If the bandwidth limit is not set, the value appears as a dash (-), representing
unlimited throughput. If the IOPS limit is not set, the value appears as a dash (-), rep-
resenting unlimited IOPS.
l Details - Displays general information about each volume, including the number of
hosts to which the volume is connected either through private or shared connections,
and the unique serial number of the volume.

Storage Containers
A volume can reside in one of the following types of storage containers: root of the array (""),
pod, or volume group. The simplest array configuration is one that contains volumes at the root
of the array. Each pod and volume group is a separate namespace for the volumes it
contains.
Pods are created and configured to store volumes and protection groups that need to be fully
synchronized with other arrays.
Each volume in a pod consists of the pod namespace identifier and the volume name, separated
by a double colon (::). The naming convention for a volume inside a pod is POD::VOL, where:
l POD is the name of the container pod.
l VOL is the name of the volume inside the pod.
For example, the fully qualified name of a volume named vol01 inside a pod named pod01 is
pod01::vol01.
For more information about pods, see "Pods" on page 133.
Volume groups organize volumes into logical groupings. If virtual volumes are configured, a
volume group is automatically created for each virtual machine that is created.
Each volume in a volume group consists of the volume group namespace identifier and the
volume name, separated by a forward slash (/). The naming convention for a volume inside a
volume group is VGROUP/VOL, where:
l VGROUP is the name of the container volume group.
l VOL is the name of the volume in the volume group.
For example, the fully qualified name of a volume named vol01 inside a volume group named
vgroup01 is vgroup01/vol01.
For more information about volume groups, see "Volume Groups" on page 112.
Volumes that reside in one storage container are independent of the volumes that reside in other
containers. For example, a volume named vol01 is completely unrelated to a volume named
vgroup01/vol01.
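
To make the container naming convention above concrete, the following minimal Python sketch
(illustrative only, not part of Purity//FA) builds a fully qualified volume name from its container:

    def qualified_name(volume, pod=None, vgroup=None):
        """Return the fully qualified name of a volume.

        Pods use a double colon (POD::VOL), volume groups use a forward
        slash (VGROUP/VOL), and a volume at the root of the array is just VOL.
        """
        if pod is not None:
            return "{}::{}".format(pod, volume)
        if vgroup is not None:
            return "{}/{}".format(vgroup, volume)
        return volume

    # qualified_name("vol01", pod="pod01")       -> "pod01::vol01"
    # qualified_name("vol01", vgroup="vgroup01") -> "vgroup01/vol01"
    # qualified_name("vol01")                    -> "vol01"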


In Figure 6-8, volume vol05 represents a volume named vol05 that resides on the root of the array,
volume vgroup01/vol05 represents a volume named vol05 that resides in volume group
vgroup01, volume vgroup02/vol05 represents a volume named vol05 that resides in volume
group vgroup02, and volume pod01::vol05 represents a volume named vol05 that resides in
pod pod01. Though all four volumes have "vol05" in their names, they are completely inde-
pendent of one another.
Figure 6-8. Volumes

Virtual Volumes
VMware Virtual Volumes (vVols) storage architecture is designed to give VMware administrators
the ability to perform volume operations and apply protection group snapshot and replication
policies to FlashArray volumes directly through vSphere.
On the FlashArray side, virtual volumes are created and then connected to VMware ESXi hosts
or host groups via a protocol endpoint (also known as a conglomerate volume). The protocol
endpoint itself does not serve I/Os; instead, its job is to form connections between FlashArray
volumes and ESXi hosts and host groups.
Each protocol endpoint can connect multiple virtual volumes to a single host or host group, and
each host or host group can have multiple protocol-endpoint connections.
LUN IDs are automatically assigned to each protocol endpoint connection and each virtual
volume connection. Specifically, each protocol endpoint connection to a host or host group cre-
ates a LUN (PE LUN), while each virtual volume connection to a host or host group creates a
sub-LUN. The sub-LUN is in the format x:y, where x represents the LUN of the protocol endpoint
through which the virtual volume is connected to the host or host group, and y represents
the sub-LUN assigned to the virtual volume.
In Figure 6-9, one virtual volume named pure_VVol and one protocol endpoint named pure_
PE are connected to host host01. The virtual volume is identified by the sub-LUN (7:1).
Figure 6-9. Virtual Volumes
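
The x:y sub-LUN format can also be split programmatically; the following is a small illustrative
Python sketch (a hypothetical helper, not a Purity//FA interface):

    def parse_sublun(address):
        """Split a vVol sub-LUN address such as "7:1" into (PE LUN, sub-LUN)."""
        pe_lun, sub_lun = address.split(":")
        return int(pe_lun), int(sub_lun)

    # parse_sublun("7:1") -> (7, 1): protocol endpoint LUN 7, sub-LUN 1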

Note: Virtual volumes are primarily configured through the vSphere Web Client plugin.
For more information about virtual volumes, including configuration steps, refer to the Pure Stor-
age vSphere Web Client Plugin for vSphere User Guide on the Knowledge site at
https://support.purestorage.com.

Volume Details
From the Volumes page, click a volume name to display details, including connected hosts
(through both shared and private connections), provisioned size, either storage consumption or
effective used capacity, and serial number, for the specified volume on the array.


Figure 6-10 displays the details for volume ESXi-Cluster02-vol001, which is connected to
three hosts and two host groups, and is a member of protection group PG003.
Figure 6-10. Volume Details

The Volume details page contains the following panes:


Connected Hosts
Displays a list of private host connections to the volume, and the LUNs they use to
address the volume.
Connected Host Groups
Displays a list of shared host connections to the volume, and the LUNs they use to
address the volume.
Protection Groups
Displays any protection groups to which the volume belongs.


Volume Snapshots
Displays a list of volume snapshots. A volume snapshot is a point-in-time image of the
contents of a volume. There are various ways to create volume snapshots: as a single
volume or multiple volumes at the same time (atomically) through the Storage > Volumes
page, or as part of protection group snapshots through the Protection > Protection
Groups page.
Details
Displays the unique details for the volume, such as volume creation date, unique serial
number, and QoS information including bandwidth limit, IOPS limit, and DMM priority
adjustment. If the volume was created from another source, such as a volume snapshot,
the Source field displays the name of the source from where the volume was created.

Volume Groups
The Volume Groups panel displays a list of volume groups that have been created on the array.
Volume groups organize FlashArray volumes into logical groupings.
If virtual volumes are configured, each volume group on the array represents its associated vir-
tual machine, and inside each of those volume groups are the FlashArray volumes that are
assigned to the virtual machine. Volume groups that are associated with virtual machines have
names that begin with "vvol-" and end with the virtual machine name. For more information
about virtual volumes, including configuration steps, refer to the Pure Storage vSphere Web Cli-
ent Plugin for vSphere User Guide on the Knowledge site at https://sup-
port.purestorage.com.
Volume groups can also be created through the Volumes page. Once a volume group has been
created, create new volumes directly in the volume group or move existing ones into the volume
group.
In the Volume Groups panel, click the name of a volume group to display its details, such as pro-
visioned size, storage consumption or effective used capacity, and a list of volumes that reside
in the volume group.


Figure 6-11 displays the details for volume group vgroup01, which contains five volumes.
Figure 6-11. Volume Groups

Quality of Service Limits and DMM Priority Adjustments


Quality of service (QoS) limits define the maximum level of throughput and the maximum num-
ber of I/O operations per second for a volume or volume group. The QoS limits of a volume can
be set when you create or configure the volume, but the QoS limits of a volume group can be set
only when you configure the volume group. Note that you cannot set the QoS limits or priority
adjustments on individual volumes in a volume group.
Quality of service limits include:
l QoS Bandwidth Limit
The bandwidth limit can be set on volumes or volume groups to enforce maximum
allowable throughput. Whenever throughput exceeds the bandwidth limit, throttling
occurs. If set, the bandwidth limit must be between 1 MB/s and 512 GB/s. By default,
the QoS bandwidth limit is unlimited.
The bandwidth limit of a volume group represents the aggregate bandwidth for all the
volumes in the volume group.
QoS bandwidth limits are not enforced on volumes or volume groups that do not have
the bandwidth limit set.
l QoS IOPS Limit
The IOPS limit can be set on volumes or volume groups to enforce maximum I/O oper-
ations processed per second. Whenever the number of I/O operations per second
exceeds the IOPS limit, throttling occurs. If set, the IOPS limit must be between 100
and 100M. By default, the QoS IOPS limit is unlimited.
The IOPS limit of a volume group represents the aggregate IOPS for all the volumes
in the volume group.
QoS IOPS limits are not enforced on volumes or volume groups that do not have the
IOPS limit set.
l DMM Priority Adjustment
A DMM priority adjustment can be applied to volumes or volume groups to increase
or decrease their relative performance priority, when supported by FlashArray hard-
ware such as Direct Memory Modules. For example, use a DMM priority adjustment
to configure a higher performance priority for volumes that run critical, latency-sens-
itive workloads or to configure a lower priority for volumes that run workloads with
less latency sensitivity.
To apply a priority adjustment, use the optional QoS Configuration > DMM Priority
Adjustment fields when creating the volume or use the Configure QoS dialog for an
existing volume. Priority values are 10 (high), 0 (default), and -10 (low). By default,
all volumes have the same priority value of 0. Adjustment values are +10 (higher pri-
ority), 0 (no change or default priority), and -10 (lower priority). Volumes can also be
set to a specific priority with the equals sign, = 10 for high priority, = 0 for default pri-
ority, and = -10 for low priority.
In general, volumes that are members of a volume group inherit the priority adjust-
ment of their volume group. However, if a volume has a priority value set with the '='
operator (for example, =+10), it retains that value and is unaffected by any volume
group priority adjustment settings.
Notes:
l If all volumes are set to the same priority, even the higher priority (10),
then all volumes have the same relative priority and no volume receives a
performance priority.
l If various volumes have priority values of 10, 0, and -10, then the volumes
with a value of 10 receive performance priority. Those volumes with values
0 and -10 are treated equally (and do not receive priority).
l If various volumes have priority values of 0 and -10, then the volumes with
a value of 0 receive performance priority.


l If various volumes have priority values of 10 and -10, then the volumes
with a value of 10 receive performance priority.
l If various volumes have priority values of 10 and 0, then the volumes with
a value of 10 receive performance priority.
l +10 and -10 are maximum and minimum priority values, respectively.
Applying a +10 adjustment to a volume that already has a priority value of
10 has no effect. Similarly, applying a -10 adjustment to a volume that
already has a priority value of -10 has no effect.
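
The relative-priority rules in the notes above can be summarized in a short sketch. The
following Python fragment is illustrative only (it is not how Purity//FA implements DMM
priority) and assumes a volume setting is either an adjustment (+10, 0, -10) or an explicit
"=" value such as "=10":

    def effective_priority(volume_setting, vgroup_adjustment=0):
        """Return a volume's effective priority under the documented rules."""
        if isinstance(volume_setting, str) and volume_setting.startswith("="):
            return int(volume_setting[1:])        # "=" values ignore the group adjustment
        # Otherwise add the volume group adjustment, clamped to the -10..10 range,
        # so a +10 adjustment on a volume already at 10 has no effect.
        return max(-10, min(10, volume_setting + vgroup_adjustment))

    def prioritized(volumes):
        """Return the volume names that receive performance priority.

        Only the volumes with the highest priority value are prioritized; if
        every volume shares the same value, no volume receives priority.
        """
        values = set(volumes.values())
        if len(values) == 1:
            return []
        top = max(values)
        return [name for name, value in volumes.items() if value == top]

    # prioritized({"a": 10, "b": 0, "c": -10}) -> ["a"]   (0 and -10 treated equally)
    # prioritized({"a": 0, "b": -10})          -> ["a"]
    # prioritized({"a": 10, "b": 10})          -> []      (all volumes equal)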

Working with Volumes


Creating a Volume
Create a single volume or multiple volumes at one time.
To create a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Create... . The Create Volume dialog
box appears.
3 In the Pod or Volume Group field, select the pod or volume group to where the volume will be
created.
4 In the Name field, type the name of the new volume.
5 In the Provisioned Size field, specify the provisioned (virtual) size number and size unit. The
volume size must be between one megabyte and four petabytes. The provisioned size is
reported to hosts.
6 Optionally click QoS Configuration (Optional) to set quality of service (QoS) limits.
l In the Bandwidth Limit field, set the maximum QoS bandwidth limit for the volume.
Whenever throughput exceeds the bandwidth limit, throttling occurs. If set, bandwidth
limit must be between 1 MB/s and 512 GB/s.
l In the IOPS Limit field, set the maximum QoS IOPS limit for the volume. Whenever
the number of I/O operations per second exceeds the IOPS limit, throttling occurs. If
set, IOPS limit must be between 100 and 100M.
l In the DMM Priority Adjustment menus, optionally select +10 to give the volume a
higher priority or -10 for a lower priority.


7 Optionally click Protection Configuration (Optional) to view, add, or remove default pro-
tection groups and to enable or disable default protection for the new volume. The current
default protection group list for the new volume is shown in the Data Protection field.
a To add additional protection groups for the new volume, click the Edit icon on the right of
the Data Protection field. The Select Protection Groups dialog appears, with Available
Protection Groups listed on the left. Protection groups that are already listed in the cur-
rent default protection group list have their check boxes grayed out. The Selected Pro-
tection Groups column lists the protection groups to which the new volume will be
assigned.
l To add the new volume to an additional protection group, in the Available Pro-
tection Groups column, select the check box for that protection group. The pro-
tection group is then listed in the Selected Protection Groups column on the
right.
l To remove a protection group, click the 'x' icon on the right of the protection
group row.
l To remove all protection groups from the Selected Protection Groups column,
click Clear all.
b When the Selected Protection Groups column is correct, click Select.
c To enable default protection for the new volume, leave the Use Default Protection
check box enabled (recommended). To disable default protection for the new volume,
uncheck the Use Default Protection check box.
8 Click Create.
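
Volume creation can also be scripted against the FlashArray REST API. The sketch below uses
the open-source purestorage Python client; it assumes the client is installed and an API token
has been created, and the array address and token are placeholders:

    import purestorage

    # Placeholder management address and API token for the target array.
    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Create a 500 GB volume named "vol01"; the provisioned size is what hosts see.
    array.create_volume("vol01", "500G")
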
To create multiple volumes:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Create... . The Create Volumes dialog
box appears.
3 Click Create Multiple…. The Create Multiple Volumes dialog box appears.
4 Complete the following fields:
l In the Pod or Volume Group field, select the pod or volume group to where the
volumes will be created.
l Name: Specify the template used to create the volume names. Volume names can-
not consist of all numeric values.


Place the hash (#) symbol where the numeric part of the volume name should
appear. When Purity//FA creates the volume names, the hash symbol is replaced
with the volume number, beginning with the start number specified.
l Provisioned Size: Specify the provisioned (virtual) size of the volume. The volume
size must be between one megabyte and four petabytes. The provisioned size is
reported to hosts.
l Start Number: Enter the volume number used to create the first volume name.
l Count: Enter the number of volumes to create.
l Number of Digits: Enter the minimum number of numeric digits of the volume num-
ber. If the number of digits is greater than the number of digits in the start number, the
volume number is padded with leading zeros (see the naming example after this procedure).
l Bandwidth Limit: Optionally set the maximum QoS bandwidth limit. The bandwidth
limit applies to each of the volumes created in this set of volumes. Whenever through-
put exceeds the bandwidth limit, throttling occurs. If set, bandwidth limit must be
between 1 MB/s and 512 GB/s.
l IOPS Limit: Optionally set the maximum QoS IOPS limit. The IOPS limit applies to
each of the volumes created in this set of volumes. Whenever the number of I/O oper-
ations per second exceeds the IOPS limit, throttling occurs. If set, the IOPS limit must
be between 100 and 100M.
l DMM Priority Adjustment: Optionally select +10 to give the volumes a higher priority
or -10 for a lower priority.
5 Optionally click Protection Configuration (Optional) to view, add, or remove default pro-
tection groups and to enable or disable default protection for the new volumes. The current
default protection group list for the new volumes is shown in the Data Protection field.
a To add additional protection groups for the new volumes, click the Edit icon on the right
of the Data Protection field. The Select Protection Groups dialog appears, with Available
Protection Groups listed on the left. Protection groups that are already listed in the cur-
rent default protection group list have their check boxes grayed out. The Selected Pro-
tection Groups column lists the protection groups to which the new volumes will be
assigned.
l To add the new volumes to an additional protection group, in the Available Pro-
tection Groups column, select the check box for that protection group. The pro-
tection group is then listed in the Selected Protection Groups column on the
right.


l To remove a protection group, in the Selected Protection Groups column, click


the 'x' icon on the right of the protection group row.
l To remove all protection groups from the Selected Protection Groups column,
click Clear all.
b When the Selected Protection Groups column is correct, click Select.
c To enable default protection for the new volumes, leave the Use Default Protection
check box enabled (recommended). To disable default protection for the new volumes,
uncheck the Use Default Protection check box.
6 Click Create.
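
To illustrate how the Name, Start Number, Count, and Number of Digits fields combine into
volume names, here is a small Python sketch (illustrative only):

    def expand_names(template, start, count, digits):
        """Replace the hash (#) in the template with zero-padded volume numbers."""
        return [template.replace("#", str(n).zfill(digits))
                for n in range(start, start + count)]

    # expand_names("vol#", start=5, count=3, digits=3) -> ['vol005', 'vol006', 'vol007']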

Moving a Volume
Volumes can be moved into, out of, and between pods and volume groups.
See also "Moving a Volume when SafeMode is Enabled" on the next page
To move a single volume:
1 Select Storage > Volumes.
2 Select the specific volume you want to move.
3 Click the menu icon and select Move.... The Move Volume dialog box appears.
4 From the Pod or Volume Group field, select the pod or volume group you wish to move the
volume to.
If the protection groups in the source container cannot be in the target container, the
Remove from Pgroup and Data Protection fields appear. The Remove from Pgroup
field lists the protection groups that must be abandoned for the move to complete. In the
Data Protection field, specify the protection groups the volume should be added to.
5 Click Move.
To move multiple volumes:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Move.... The Move Volumes dialog box
appears.
3 In the Existing Volumes column, select the volumes you want to move. All of the selected
volumes will be moved to the same destination.
4 From the Pod or Volume Group field, select the Pod or Volume group you want to move the
selected volumes to.


If the protection groups in the source container cannot be in the target container, the
Remove from Pgroup and Data Protection fields appear. The Remove from Protection
Group field lists the protection groups that must be abandoned for the move to complete.
From the Add to Protection Group field, specify the protection groups the volumes should
be added to.
5 Click Move.
Moving a Volume when SafeMode is Enabled
A volume in a SafeMode-enabled protection group can only be moved to a SafeMode-enabled
protection group with equal or better SafeMode protections, as determined by the following con-
figuration characteristics:
l Snapshot schedule frequency and retention.
l Replication schedule frequency and retention.
l Target retention number of days and number retained per day.
l Blackout window (if the current protection group has a blackout window).
If the target protection group does not match or exceed the current protection group on all of
these configurations, the volume cannot be moved.
Notes about volume moves when SafeMode is enabled:
l For volumes in protection groups based on host or hostgroup membership, Purity
does not ensure that the target protection group has equal or better SafeMode pro-
tections.
l Contact Pure Technical Support to move a volume that is currently a member of more
than one protection group.

Renaming a Volume
To rename a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the rename icon for the volume you want to rename. The
Rename Volume dialog box appears.
3 In the Name field, enter the new name of the volume.
4 Click Rename.

Resizing a Volume
Resizing a volume changes its provisioned (virtual) size.
To change the provisioned size of a volume:


1 Select Storage > Volumes.


2 From the Volumes panel, click the volume name to drill down to its details.
3 Click the menu icon and select Resize.... The Resize Volume dialog box appears.
4 In the Provisioned Size field, enter the new volume size. The volume size must be between
one megabyte and four petabytes.
5 Click Resize.
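
Growing a volume can also be scripted; a minimal sketch with the purestorage Python client
(placeholders as before; extend_volume only increases the provisioned size):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Grow vol01 to a new provisioned size of 2 TB.
    array.extend_volume("vol01", "2T")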

Copying a Volume
Copy a volume to create a new volume or overwrite an existing one. You cannot copy volumes
across pods if the source and target pods are both stretched but on different pairs.
To copy a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume that you want to copy. The volume detail page opens.
3 In the Volume > <volume name> row, click the menu icon and select Copy....
4 Click the menu icon and select Copy Volume. The Copy Volume dialog box appears.
5 In the Container field, specify the root location, pod, or volume group to where the new
volume will be created. The forward slash (/) represents the root location of the array.
6 In the Name field, type the name of the new or existing volume.
7 To overwrite an existing volume, click the Overwrite toggle button to enable (blue) the over-
write feature.
8 Optionally click Protection Configuration (Optional) to view or add default protection groups
and to enable or disable default protection for the newly copied volumes. The current default
protection group list for the copied volumes is shown in the Data Protection field.
a To add groups to the default protection group list, click the Edit icon on the right of the
Data Protection field. The Select Protection Groups dialog appears, with Available Pro-
tection Groups listed on the left. Protection groups that are already listed in the current
default protection group list have their check boxes grayed out.
b Click Select.
c To enable default protection for the copied volume, leave the Use Default Protection
check box enabled (recommended). To disable default protection for the copied volume,
uncheck the Use Default Protection check box.
9 Click Copy.
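
Volume copies can also be made through the REST API; a minimal sketch with the purestorage
Python client (the overwrite keyword is assumed to behave like the Overwrite toggle above):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Copy vol01 to a new volume named vol01-copy.
    array.copy_volume("vol01", "vol01-copy")

    # Copy vol01 over an existing volume, replacing its contents.
    array.copy_volume("vol01", "vol02", overwrite=True)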


Downloading Volume Details


Downloading volume details generates a comma-separated value text file containing volume
summary information.
To download volume details:
1 Select Storage > Volumes.
2 In the Volumes panel, click the menu icon and select Download CSV to save the
volumes.csv file to your local machine.

Configuring the Maximum QoS Bandwidth and IOPS Limits of a Volume


To configure the maximum QoS bandwidth and IOPS limits of a volume:
1 Select Storage > Volumes.
2 Select one of the following options to edit the bandwidth and IOPS limits:
l In the Volumes panel, click the menu icon for the volume you want to set the QoS
bandwidth and IOPS limits, and then select Configure QoS....
l In the Volumes panel, click the volume to drill down to its details, and then click the
QoS edit icon in the Details panel.
3 In the Bandwidth Limit field, set the maximum QoS bandwidth limit for the volume. Whenever
throughput exceeds the bandwidth limit, throttling occurs. If set, the bandwidth limit must be
between 1 MB/s and 512 GB/s.
To give the volume unlimited throughput, clear the Bandwidth Limit field.
4 In the IOPS Limit field, set the maximum QoS IOPS limit for the volume. Whenever the num-
ber of I/O operations per second exceeds the IOPS limit, throttling occurs. If set, the IOPS
limit must be between 100 and 100M.
To give the volume unlimited IOPS, clear the IOPS Limit field.
5 Click Save.
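
QoS limits can also be set programmatically with the purestorage Python client's set_volume
call. The bandwidth_limit and iops_limit attribute names below are assumptions based on the
limits described above and may differ by REST API version:

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Assumed attribute names: cap vol01 at 512 MB/s and 10000 IOPS.
    array.set_volume("vol01", bandwidth_limit="512M", iops_limit=10000)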

Configuring the Priority or Priority Adjustment of a Volume


To configure the priority or priority adjustment of a volume:
1 Select Storage > Volumes.
2 Select one of the following options to edit the priority adjustment of a volume:
l In the Volumes panel, click the menu icon for the volume, and then select Configure
QoS....


l In the Volumes panel, click the volume to drill down to its details, and then click the
QoS edit icon in the Details panel.
3 In the DMM Priority Adjustment menus select +10 to give the volume a higher priority or -
10 for a lower priority, or use the equals sign (=) to set a specific priority: 10 (higher), 0
(default), or -10 (lower).

Destroying and Eradicating Volumes


Destroying a Volume
Destroying a volume will also destroy its snapshots.
You cannot destroy a volume if it is connected to hosts, either through private or shared con-
nections. Before destroying a volume, disconnect all hosts and host groups from the volume.
To destroy a volume:
1 Select Storage > Volumes.
2 From the Volumes panel, click the garbage icon for the volume you want to destroy. The Des-
troy Volume dialog box appears.
3 Click Destroy.
The destroyed volume appears in the Destroyed Volumes folder and begins its erad-
ication pending period.
During the eradication pending period, you can recover the volume to bring it and its snapshots
back to their previous states, or manually eradicate the destroyed volume to reclaim physical
storage space occupied by the volume snapshots.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the volume snapshots.
Once reclamation starts, either because you have manually eradicated the destroyed volume, or
because the eradication pending period has elapsed, the destroyed volume and its snapshots
can no longer be recovered.
The length of the eradication pending period typically is different for SafeMode-protected objects
and other objects, and is configured in the Settings > System > Eradication Configuration pane.
See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.


Recovering a Destroyed Volume


To recover a destroyed volume:
1 Select Storage > Volumes.
2 From the Destroyed Volumes panel, click the recover icon for the volume you want to
recover. The Recover Volume dialog box appears.
3 Click Recover. The recovered volume and its snapshots appear in the list of volumes.

Eradicating a Destroyed Volume


Eradicating a volume destroys the volume and its snapshots. During the eradication pending
period, you can manually eradicate the destroyed volume to reclaim physical storage space
occupied by the destroyed volume's snapshots.
Once reclamation starts, the destroyed volume and its snapshots can no longer be recovered.
To eradicate destroyed volumes:
1 Select Storage > Volumes.
2 From the Destroyed Volumes panel, click the eradicate (garbage) icon for the volume you
want to eradicate. The Eradicate Volume dialog box appears.
3 Click Eradicate. The volume and its snapshots are completely eradicated from the array.
Manual eradication is not supported when SafeMode retention lock is enabled.
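
The destroy, recover, and eradicate lifecycle described above can also be driven through the
purestorage Python client; a minimal sketch (placeholders as before):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    array.destroy_volume("vol01")    # the volume enters its eradication pending period
    array.recover_volume("vol01")    # recover it while the pending period lasts...

    array.destroy_volume("vol01")
    array.eradicate_volume("vol01")  # ...or reclaim its space immediately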

Working with Volume-Host Connections


Establishing Private Volume-Host Connections
Creating a volume-host connection establishes a private connection between the volume and
host. There are two ways to establish private volume-host connections: 1) establish a private
connection from a volume to a host, or 2) establish a private connection from a host to a volume.
To establish a private connection from a volume to a host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host to drill down to its details.
3 In the Connected Volumes panel, click the menu icon and select Connect.... The Connect
Volumes to Host dialog box appears.
The volumes in the Existing Volumes column represent the volumes that are eligible to be
connected to the host.


4 Click an existing volume in the left column to add it to the Selected Volumes column. If the
volume does not exist, click Create New Volume to create a new volume and connect it to
the host.
5 Click Connect.
To establish a private connection from a host to a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Hosts panel, click the menu icon and select Connect.... The Connect
Hosts dialog box appears.
The hosts in the Available Hosts column represent the hosts that are eligible to be con-
nected to the volume.
4 Click an existing host in the left column to add it to the Selected Hosts column.
5 Optionally assign a LUN to the connection. If the field is left blank, Purity//FA automatically
assigns the next available LUN to the connection.
6 Click Connect.
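
Private volume-host connections can also be created through the REST API; a minimal sketch
with the purestorage Python client (the lun keyword mirrors the optional LUN in step 5 above):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Connect vol01 to host01; Purity//FA picks the next available LUN if none is given.
    array.connect_host("host01", "vol01")

    # Or request a specific LUN for the connection.
    array.connect_host("host02", "vol01", lun=12)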

Establishing Shared Volume-Host Group Connections


Creating a shared volume-host group connection automatically establishes connections
between the volume and all hosts affiliated with the host group. There are two ways to establish
shared volume-host group connections: 1) establish a shared connection from a volume to a host
group, or 2) establish a shared connection from a host group to a volume.
To establish a shared connection from a volume to a host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the host group to drill down to its details.
3 In the Connected Volumes panel, click the menu icon and select Connect.... The Connect
Shared Volumes to Host Group dialog box appears.
The volumes in the Existing Volumes column represent the volumes that are eligible to be
connected to the host group.
4 Click an existing volume in the left column to add it to the Selected Volumes column.
5 Click Connect.
To establish a shared connection from a host group to a volume:
1 Select Storage > Volumes.


2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Host Groups panel, click the menu icon and select Connect.... The Con-
nect Host Groups dialog box appears.
The host groups in the Available Host Groups column represent the host groups that are eli-
gible to be connected to the volume.
4 Click an existing host group in the left column to add it to the Selected Host Groups column. If
the host group does not exist, click Create New Host Group to create a new host group and
connect it to the volume.
5 Click Connect.

Breaking Volume-Host Connections


Break a volume-host connection when there is no longer a need for the two to communicate.
Breaking a private volume-host connection causes the host to lose access to the volume. Other
shared and private connections to the volume are unaffected.
There are two ways to break private volume-host connections: 1) disconnect a volume from its
host, or 2) disconnect a host from the volume.
To disconnect a volume from its host:
1 Select Storage > Hosts.
2 In the Hosts panel, click the host name to drill down to its details.
3 In the Connected Volumes panel, click the disconnect volume (x) icon next to the volume you
want to disconnect. The Disconnect Volume dialog box appears.
4 Click Disconnect.
To disconnect a host from a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Hosts panel, click the disconnect host (x) icon next to the host you want to
disconnect. The Disconnect Host dialog box appears.
4 Click Disconnect.

Breaking Volume-Host Group Connections


Break a volume-host group connection when there is no longer a need for the volume to com-
municate to the hosts within the host group.


Breaking a volume-host group connection breaks all connections between the volume and all
hosts affiliated with the host group. Other shared and private connections to the volume are unaf-
fected.
There are two ways to break shared volume-host group connections: 1) disconnect a volume
from its host group, or 2) disconnect a host group from the volume.
To disconnect a volume from its host group:
1 Select Storage > Hosts.
2 In the Host Groups panel, click the host group name to drill down to its details.
3 In the Connected Volumes panel, click the disconnect volume (x) icon next to the volume you
want to disconnect. The Disconnect Volume dialog box appears.
4 Click Disconnect.
To disconnect a host group from a volume:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the Connected Host Groups panel, click the disconnect host group (x) icon next to the host
group you want to disconnect. The Disconnect Host Group dialog box appears.
4 Click Disconnect.

Working with Volume Snapshots


Creating a Volume Snapshot
Volume snapshots are created through the Storage > Volumes page. Protection group snap-
shots are created through the Protection > Protection Groups page.
To create a volume snapshot:
1 Select Storage > Volumes.
2 In the Volumes panel, click the volume to drill down to its details.
3 In the volume details page, scroll to the Volume Snapshots panel, which displays a list of
volume and protection group snapshots that have been created for the selected volume.
4 In the Volume Snapshots panel, click the menu icon and select Create.... The Create Snap-
shot dialog box appears.
5 Optionally, specify a suffix to replace the unique number that Purity//FA creates for the
volume snapshot. The suffix cannot consist of all numeric values.


6 Click Create.
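
Snapshot creation can also be scripted; a minimal sketch with the purestorage Python client
(the suffix argument matches the optional suffix in step 5):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Snapshot vol01 with an explicit suffix, producing the snapshot vol01.before-upgrade.
    array.create_snapshot("vol01", suffix="before-upgrade")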

Restoring a Volume from a Volume Snapshot


Restoring a volume from a snapshot overwrites the contents of the volume with data from the
snapshot. To restore a volume from a volume snapshot:
1 Select Storage > Volumes.
2 In the Snapshots panel, click the menu icon for the snapshot you want to restore and
select Restore Volume.... The Restore Volume from Snapshot dialog box appears.
3 Click Restore. Optionally view the volume Details to verify that the created date of the over-
written volume is set to the snapshot creation date, indicating that the volume snapshot has
been successfully restored.

Copying a Volume Snapshot


Copy a volume snapshot to create a new volume or overwrite an existing one.
To copy a volume snapshot:
1 Select Storage > Volumes.
2 In the Snapshots panel, click the menu icon for the snapshot you want to copy and select
Create Volume.... The Create Volume from Snapshot dialog box appears.
3 In the Container field, specify the root location, pod, or volume group to where the new
volume will be created. The forward slash (/) represents the root location of the array.
4 In the Volume Name field, type the name of the new or existing volume.
5 To overwrite an existing volume, click the Overwrite toggle button to enable (blue) the over-
write feature.
6 Click Copy. Optionally click the Details sub-heading and verify that the created date of the
new or overwritten volume is set to the snapshot created date.
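
Creating or restoring a volume from a snapshot can likewise be scripted; a minimal sketch with
the purestorage Python client (snapshot names take the form VOL.SUFFIX, and overwrite is
assumed to behave like the Overwrite toggle):

    import purestorage

    array = purestorage.FlashArray("array01.example.com", api_token="<API-TOKEN>")

    # Create a new volume from the snapshot vol01.before-upgrade.
    array.copy_volume("vol01.before-upgrade", "vol01-clone")

    # Restore vol01 itself by overwriting it with the snapshot contents.
    array.copy_volume("vol01.before-upgrade", "vol01", overwrite=True)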

Renaming a Volume Snapshot Suffix


To rename a volume snapshot suffix:
1 Select Storage > Volumes.
2 In the Snapshots panel, click the menu icon for the snapshot you want to rename and
select Rename.... The Rename Snapshot dialog box appears.
3 In the Name field, enter the new name of the volume snapshot suffix.
4 Click Rename.


Destroying a Volume Snapshot


You can only destroy a volume snapshot from its originating container. For example, to destroy
a volume snapshot that was created on the root of the array, go to Storage > Volumes. Likewise,
to destroy a volume snapshot that was created as part of a protection group snapshot, go to
Protection > Protection Groups to destroy the protection group snapshot. To destroy a volume snap-
shot that was created on the root of the array:
1 Select Storage > Volumes.
2 In the Snapshots panel, click the menu icon for the snapshot you want to destroy and
select Destroy.... The Destroy Snapshot dialog box appears.
3 Click Destroy. The destroyed snapshot appears in the Destroyed Snapshots panel and
begins its eradication pending period.
During the eradication pending period, you can recover the volume snapshot to bring it back to
its previous state, or manually eradicate the destroyed volume snapshot to reclaim physical stor-
age space occupied by the snapshot.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the volume snapshot.
Once reclamation starts, either because you have manually eradicated the destroyed volume
snapshot, or because the eradication pending period has elapsed, the destroyed volume snap-
shot can no longer be recovered.

Recovering a Volume Snapshot


To recover a destroyed volume snapshot:
1 Select Storage > Volumes.
2 In the Destroyed Snapshots panel, click the Recover Snapshot icon for the snapshot you
want to recover. The Recover Snapshot dialog box appears.
3 Click Recover. The snapshot is recovered to the volume from which it was destroyed.

Eradicating a Volume Snapshot


During the eradication pending period, you can manually eradicate the destroyed volume snap-
shot to reclaim physical storage space occupied by the destroyed snapshot.
Once reclamation starts, the destroyed volume snapshot can no longer be recovered.
To eradicate a destroyed volume snapshot:
1 Select Storage > Volumes.


2 In the Destroyed Snapshots panel, click the Eradicate Snapshot icon for the snapshot you
want to permanently eradicate. The Eradicate Snapshot dialog box appears.
3 Click Eradicate.

Manual eradication is not supported when SafeMode retention lock is enabled.

Working with Volume Groups


Creating a Volume Group
Create a single volume group or multiple volume groups at one time.
To create a volume group:
1 Select Storage > Volumes.
2 In the Volume Groups panel, click the menu icon and select Create... . The Create Volume
Group dialog box appears.
3 In the Name field, type the name of the new volume group.
4 Optionally click QoS Configuration (Optional) to set quality of service (QoS) limits.
l In the Bandwidth Limit field, set the maximum QoS bandwidth limit for volumes in this
group. Whenever throughput exceeds the bandwidth limit, throttling occurs. If set,
bandwidth limit must be between 1 MB/s and 512 GB/s.
l In the IOPS Limit field, set the maximum QoS IOPS limit for volumes in this group.
Whenever the number of I/O operations per second exceeds the IOPS limit, throttling
occurs. If set, IOPS limit must be between 100 and 100M.
l In the DMM Priority Adjustment menus, select +10 to give the volumes in this group a
higher priority or -10 for a lower priority.
5 Click Create.
To create multiple volume groups:
1 Select Storage > Volumes.
2 In the Volume Groups panel, click the menu icon and select Create... . The Create Volume
Group dialog box appears.
3 Click Create Multiple…. The Create Multiple Volume Groups dialog box appears.
4 Complete the following fields:


l Name: Specify the template used to create the volume group names. Volume group
names cannot consist of all numeric values.
Place the hash (#) symbol where the numeric part of the volume group name should
appear. When Purity//FA creates the volume group names, the hash symbol is
replaced with the volume group number, beginning with the start number specified.
l Start Number: Enter the volume group number used to create the first volume group name.
l Count: Enter the number of volume groups to create.
l Number of Digits: Enter the minimum number of numeric digits of the volume group
number. If the number of digits is greater than the number of digits in the start number,
the volume group number is padded with leading zeros.
l Bandwidth Limit: Optionally set the maximum QoS bandwidth limit. The bandwidth
limit applies to each volume that becomes a member of these groups. Whenever
throughput exceeds the bandwidth limit, throttling occurs. If set, bandwidth limit must
be between 1 MB/s and 512 GB/s.
l IOPS Limit: Optionally set the maximum QoS IOPS limit. The IOPS limit applies to
each volume that becomes a member of these groups. Whenever the number of I/O
operations per second exceeds the IOPS limit, throttling occurs. If set, the IOPS limit
must be between 100 and 100M.
l DMM Priority Adjustment: Optionally select +10 to give the volumes in these groups a
higher priority or -10 for a lower priority.
5 Click Create.

Configuring the Maximum QoS Bandwidth and IOPS Limits of a Volume Group


To configure the maximum QoS bandwidth and IOPS limits of a volume group:
1 Select Storage > Volumes.
2 Select one of the following options to edit the bandwidth and IOPS limits:
l In the Volume Groups panel, click the menu icon for the volume group you want to set
the QoS bandwidth and IOPS limits, and then select Configure QoS....
l In the Volume Groups panel, click the volume group to drill down to its details, and
then click the QoS edit icon in the Details panel.
The Configure QoS dialog box appears.


3 In the Bandwidth Limit field, set the maximum QoS bandwidth limit for the volume group.
Whenever throughput exceeds the bandwidth limit, throttling occurs. If set, the bandwidth
limit must be between 1 MB/s and 512 GB/s.
To give the volume group unlimited throughput, clear the Bandwidth Limit field.
4 In the IOPS Limit field, set the maximum QoS IOPS limit for the volume group. Whenever the
number of I/O operations per second exceeds the IOPS limit, throttling occurs. If set, the
IOPS limit must be between 100 and 100M.
To give the volume group unlimited IOPS, clear the IOPS Limit field.
5 Click Save.

Configuring the DMM Priority Adjustment for a Volume Group


To configure the priority adjustment for all volumes in a volume group:
1 Select Storage > Volumes.
2 Select one of the following options to edit the priority adjustment:
l In the Volume Groups panel, click the menu icon for the volume group, then select
Configure QoS....
l In the Volume Groups panel, click the volume group to drill down to its details, and
then click the QoS edit icon in the Details panel.
The Configure QoS dialog box appears.
3 With the DMM priority adjustment menus, select +10 to give the volumes in the group a
higher priority or -10 for a lower priority.

Renaming a Volume Group


To rename a volume group:
1 Select Storage > Volumes.
2 In the Volume Groups panel, click the rename icon for the volume group you want to rename.
The Rename Volume Group dialog box appears.
3 In the Name field, enter the new name of the volume group.
4 Click Rename.


Destroying and Eradicating Volume Groups


Destroying a Volume Group
A volume group can only be destroyed if it is empty, so before destroying a volume group,
ensure all volumes inside the volume group have been either moved out of the volume group or
destroyed.
To destroy a volume group:
1 Select Storage > Volumes.
2 From the Volume Groups panel, click the garbage icon for the volume group you want to des-
troy. The Destroy Volume Group dialog box appears.
3 Click Destroy.
The destroyed volume group appears in the Destroyed Volume Groups folder and begins
its eradication pending period.
During the eradication pending period, you can recover the volume group or manually eradicate
the destroyed volume group. When the eradication pending period has elapsed, Purity//FA erad-
icates the destroyed volume group. Once a volume group has been eradicated, it can no longer
be recovered.

Recovering a Destroyed Volume Group


To recover a destroyed volume group:
1 Select Storage > Volumes.
2 From the Destroyed Volume Groups panel, click the recover icon for the volume group you
want to recover. The Recover Volume Group dialog box appears.
3 Click Recover. The recovered volume group appears in the list of volume groups.

Eradicating a Destroyed Volume Group


Once a volume group has been eradicated, it can no longer be recovered.
To eradicate a destroyed volume group:
1 Select Storage > Volumes.
2 From the Destroyed Volume Groups panel, click the eradicate (garbage) icon for the volume
group you want to eradicate. The Eradicate Volume Group dialog box appears.
3 Click Eradicate. The volume group is permanently eradicated and can no longer be
recovered.


Manual eradication is not supported when SafeMode retention lock is enabled.

Pods
The Storage > Pods page displays summary information for each pod on the array. See Figure
6-12.
Figure 6-12. Storage > Pods


A pod is a management container containing a group of volumes that can be stretched or linked
between two FlashArrays. A pod serves as a consistency group that is created for truly active-
active synchronous replication (ActiveCluster) or active-passive continuous replication (Act-
iveDR). When a pod is stretched over two FlashArrays, any time there is a failover between the
two FlashArrays, anything that was contained in that pod will be write-order consistent.
For ActiveCluster, Purity supports multiple connections between FlashArrays for a hub-and-
spoke topology for stretched pods. This way, a single FlashArray can participate as a con-
solidator, synchronously replicating the desired volumes for FlashArrays dedicated for specific
workloads. IP supports up to five synchronous connections between FlashArrays. Fibre Channel
supports one synchronous connection.
An array can have multiple pods, and each pod can be stretched and unstretched. When stretch-
ing pods for ActiveCluster synchronous replication, make sure not to exceed the limits for
stretched objects like pods, volumes, volume snapshots, and protection group snapshots. For
information about the ActiveCluster synchronous IP or FC replication limits, see one of the
FlashArray Model Limits articles, as applicable to the given model, on the Knowledge site at
https://support.purestorage.com.
Volumes can be moved into and out of pods, but they cannot be moved into or out of stretched
pods. To move volumes into or out of a stretched pod, unstretch the pod before you move the
volumes. A volume cannot be copied across pods if the source and target pods are both
stretched but on different pairs.
Pods can also contain protection groups with volume, host, or host group members. Addi-
tionally, file systems can be created inside pods and file systems can be moved into and out of
pods. The Storage > Pods > File Systems page allows the creation of file systems within a pod.
See Figure 6-13.
A pod provides a private namespace, so the names of file systems, and volume and protection
groups in pods will not conflict with any volumes or protection groups with the same name on the
root of the array. The fully qualified name of a volume in a pod is POD::VOLUME, with double
colons (::) separating the pod name and volume name. The fully qualified name of a protection
group in a pod is POD::PGROUP, with double colons (::) separating the pod name and pro-
tection group name.


Figure 6-13. Storage > Pods > File Systems

Name the file system to be created in the pod, then click Create to create a new file system in
the pod.


For example, a volume named vol02 in a pod named pod01 will be named pod01::vol02. A
protection group named pgroup01 in a pod named pod01 will be named pod01::pgroup01.
See Figure 6-14.
Figure 6-14. Configuring a Pod (part 1)


Figure 6-15. Configuring a Pod (part 2)

If a protection group in a pod is configured to asynchronously replicate data to a target array, the
fully qualified name of the protection group on the target array is POD:PGROUP, with single
colons (:) separating the pod name and protection group name. For example, if protection group
pod01::pgroup01 on source array array01 asynchronously replicates data to target array
array02, the fully qualified name of the protection group on target array array02 is pod01:p-
group01.
In addition to passive mediation and failover preference, Purity provides the pre-election beha-
vior to further ensure a stretched pod remains online. With pre-election, an array within a
stretched pod is chosen by Purity to keep a pod online when other failures occur in the envir-
onment.
The pre-election behavior elects one array of the stretched pod to remain online in the rare event
that:


l The mediator is inaccessible on both arrays within the stretched pod, preventing the
arrays from racing to the mediator to determine which one keeps the pod online.
...and then later...
l The arrays within the stretched pod become disconnected from each other.
When the mediator becomes inaccessible on both arrays, Purity pre-elects an array per pod to
keep the pod online. Then, if the arrays lose contact with each other, the pre-elected array for
that pod takes over to keep the pod online while its peer array takes the pod offline.
If either array reconnects to the mediator before they lose contact with each other, the pre-elec-
tion result is cancelled. The array with access to the mediator will race to the mediator and keep
the pod online if its peer array fails or the arrays become disconnected from each other.
The pre-election status appears in the form of a heart symbol in the Mediator status column of
the Storage > Pods > Arrays panel; a gray heart means the array was pre-elected by Purity to
keep the pod online, while an empty heart means the array was pre-elected by Purity to take the
pod offline. If a heart does not appear, this means the array is connected to its peer array within
the stretched pod and at least one array in the pod has access to the mediator.
One and only one array within each pod is pre-elected at a given point in time, so while a pre-
elected array is keeping the pod online, the pod on its non-elected peer array remains offline dur-
ing the communication failure.
Users cannot pre-elect arrays. Purity uses various factors, including the following ones (listed in
order of precedence), to determine which array is pre-elected:
l If a pod has a failover preference set, then the array that is preferred will be pre-elec-
ted.
l If one of the arrays has no hosts connected to volumes in the pod, then the other
array will be pre-elected.
l If neither of the above factors applies, one of the arrays is selected by Purity.
If the pre-elected array goes down while pre-election is in effect, the non-elected peer array will
not bring the pod online.
If the non-elected array reconnects to the mediator while it is still disconnected from the pre-elec-
ted array, it is ignored and will still keep the pod offline. If the data in the non-elected pod must
be accessed, clone it to create a point-in-time consistent copy of the pod and its contents, includ-
ing its volumes and snapshot history. After the pod has been cloned, disconnect the hosts from
the original volumes and reconnect the hosts to the volumes within the cloned pod.
If the arrays re-establish contact with each other but the mediator is still inaccessible, the array
that was online throughout the outage starts replicating pod data to its peer array until the pod is
once again in sync and both arrays serve I/O. One array will still be pre-elected (with the appro-
priate heart status still displayed) in case both arrays lose contact with each other again.
When the peer arrays re-establish contact with each other and can access the mediator, the
array that was online throughout the outage starts replicating pod data to its peer array until the
pod is once again in sync and both arrays serve I/O, at which time pod activity returns to normal.

Configuring Failover Preference


1 Log in to the target FlashArray.
2 Select Storage > Pods.
3 In the Pods panel, click the name of the pod to drill down to the associated details.
4 In the Details panel, click the menu icon and select Add Arrays to Failover Preference….
The Add Arrays to Failover Preference dialog box appears.
5 Select the FlashArrays you want to add for failover preference.
6 Click Add.

Automatic Default Protection for Volumes in a Pod


Purity//FA provides automatic protection group membership for all newly created or copied
volumes. When a new pod is created, a new default protection group list is created for the pod
by copying the default protection group list for the root of the array. Each protection group listed
in the default protection group list is automatically created in the pod. When a volume is created
in the pod or is copied to the pod, the volume automatically has membership in each protection
group named in the pod default protection group list.
If the array root default protection group list is an empty list, no pod default protection group list
and no new pod protection groups are created, and volumes later created in or copied to the pod
have no default protection.
After the initial creation of the pod default protection group list and pod protection groups, the
pod default protection group list and pod protection groups are completely independent of the
array root default protection group list and protection groups.
See "Automatic Protection Group Assignment for Volumes" on page 38 for information about
default protection group lists.
See "Default Protection for Volumes" on page 195 to enable or customize default protection.

ActiveDR Replication
ActiveDR is a Purity//FA data protection feature that enables active-passive, continuous replication by linking pods across two FlashArrays, providing a low RPO (Recovery Point Objective).
ActiveDR replication streams pod-to-pod transfer of compressed data from a source FlashArray at the production site to a target FlashArray at the recovery site. If the source FlashArray becomes unavailable due to events such as a disaster or workload migration, you can immediately fail over to the target FlashArray.
A low RPO allows you to recover at the target site with less data loss compared to scheduled snapshot replication. Because ActiveDR replication constantly replicates data to the target FlashArray and does not wait for the write acknowledgment from the target FlashArray, no additional host write latency is incurred when the distance between the two FlashArrays increases.
For information about ActiveDR and how to use ActiveDR replication to provide fast recovery,
see the following topics:
l Key Features
l Setting Up ActiveDR Replication
l Promotion Status of a Pod
l Replica Links
l Adding File Data to the Pod on the Source Array
l Performing a Failover for Fast Recovery
l Performing a Reprotect Process after a Failover
l Performing a Failback Process after a Failover
l Performing a Planned Failover
l Performing a Test Recovery Process

Key Features
ActiveDR replication provides the following key features:
l Pod-based replication - Uses a storage pod as a management container for replication, failover, and consistency. An active pod on a source array can be linked to a passive pod on a target array to form a pod-to-pod replication pair.

l Near-zero Recovery Point Objective (RPO) - Achieves near-zero data loss for rapid disaster recovery at the DR site, enabling you to keep the data on the source and target FlashArrays almost synchronized.

Note: Maintaining a near-zero RPO is dependent on the workloads and the available network bandwidth.

Note: ActiveDR replication for file systems provides up to one hour RPO.
l Test recovery without disrupting replication - Enables failover testing without disrupting data replication to the recovery site to maintain the RPO.
l Pre-configured volume and host connection - Allows hosts to be connected to the
volumes on the target FlashArray at the recovery site before a failover to speed up
and simplify the failover process.
l Bidirectional replication - Allows different pods in the same two FlashArrays to link
and replicate in opposite directions across sites.

Setting Up ActiveDR Replication


As part of a disaster recovery strategy, you can protect data on the source FlashArray at the pro-
duction site by setting up ActiveDR replication to a target FlashArray at the disaster recovery
(DR) site.
Setting up ActiveDR replication involves these steps.
1 Connecting the Source and Target FlashArrays
2 Setting Up a Source Pod
3 Setting Up a Pod on the Target FlashArray
4 Demoting the Pod on the Target FlashArray
5 Creating a Replica Link to Initiate ActiveDR Replication
Figure 6-16 shows ActiveDR replication from the source pod pod1 to the target pod pod2.

Figure 6-16. ActiveDR Replication

Connecting the Source and Target FlashArrays


Before you configure ActiveDR replication, you must connect a source FlashArray at the pro-
duction site to a target FlashArray at the disaster recovery site. For more information about how
to connect two FlashArrays, see "Pods" on page 133.

Setting Up a Source Pod


To set up a source pod for ActiveDR replication, first create a source pod on the source FlashAr-
ray:
1 Select Storage > Pods.
2 In the Pods pane, click the menu icon and select Create....
3 In the Create Pod pane, enter the name of the source pod and click Create.
Then move existing volumes into the source pod:
1 Select Storage > Volumes.
2 In the Volumes pane, click the menu icon and select Move....
3 In the Move Volumes pane, select the check boxes of the existing volumes to move into the
source pod.
4 In the Container field, enter the pod name and click Move.

Note: You cannot move volumes into the source pod after the replica link is created.

Setting Up a Pod on the Target FlashArray


Set up a pod on the target FlashArray at the disaster recovery (DR) site for ActiveDR replication.

To create a pod,
1 Select Storage > Pods.
2 In the Pods pane, click the menu icon and select Create....
3 In the Create Pod pane, enter the name of the pod that you want to set up as the intended tar-
get pod and click Create.

Demoting the Pod on the Target FlashArray


Demote the pod created for ActiveDR replication on the target FlashArray at the disaster recov-
ery (DR) site to allow it to become a target pod.
When you demote a pod,
l The pod promotion status changes.
When you create a pod initially, its promotion status is promoted, which allows the
pod to provide read/write access to the host. When the demotion process is com-
plete, the pod promotion status changes to demoted, which allows read-only access.

l An undo-demote pod is created.


When the pod status changes to demoted, an undo-demote pod is created and placed in an eradication pending period. An undo-demote pod preserves the pod configuration and data in the state before the demotion process. Therefore, you can use the undo-demote pod to retrieve data that is not carried over during the demotion process. An undo-demote pod is automatically eradicated after its eradication pending period has elapsed.
(Configure eradication pending periods in the Settings > System > Eradication Configuration pane. See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.)

Note: If an undo-demote pod already exists, the demotion process fails with an
error.
For more information, see "Promotion Status of a Pod" on page 146.

To demote a pod at the recovery site,


1 Log in to the target FlashArray at the recovery site.
2 Select Storage > Pods.
3 In the Pods panel, click the menu icon of the pod to demote and select Demote....
The Demote Pod dialog box appears.
4 Click Demote.
The promotion status of the pod shows demoting initially and then transitions to demoted
when the demotion process is complete.
You can cancel the demotion process when the pod is in the demoting status by clicking the
menu icon and selecting Promote....

Adding Data to a Pod on a Source FlashArray


Once a source pod is created on the FlashArray, file data or volumes can be added. You can move existing file systems or volumes into the pod, create new file systems or volumes, and create policies and snapshot policies. A volume protected in a protection group cannot be moved. Directory snapshots managed by a snapshot policy become unmanaged when the parent file system is moved to a different pod.

Note: Volumes cannot be added to pods with file systems and file systems cannot be added to pods with volumes.
With the file systems or volumes in place, a replica link can be created to initiate ActiveDR replication. File systems and volumes cannot be moved into a pod after replication has been initiated; they can be moved only after the replica link is deleted. However, you can create a new volume or file system in the pod without deleting or pausing the link.
A replica link can only be created when the Purity//FA version on the target array is the same as on the source array, or newer. Hence, if the source array is being upgraded to a newer version, a link cannot be created, or recreated after deletion, without first upgrading the target array. Only file policy or block features that are supported on both the source and target arrays can be used; unsupported operations will fail. The recommendation is therefore to run the same version of Purity//FA on both the source and target arrays.

Creating a Replica Link to Initiate ActiveDR Replication


To initiate ActiveDR replication, create a replica link from the source pod at the production site to
the demoted pod at the target site.

When you link a source pod with a demoted pod using a replica link, the demoted pod becomes
the target pod of the source pod. The target pod serves as a replica pod to track the changes of
the source pod, including volumes, snapshots, protection groups, and protection group snap-
shots.
To create a replica link,
1 Log in to the source FlashArray at the production site.
2 Select Protection > ActiveDR.
3 In the Replica Links pane, click the menu icon and select Create....
The Create Replica Link dialog box appears.
4 Provide information for the following fields:
5 Click Create.
The local and remote FlashArrays are connected, and the replica link starts the baseline
process between the source and target pods. When the baseline process is complete, the
source pod starts to replicate data to the target pod, changing the replica-link status to rep-
licating.
For more information about replica links, see "Replica Links" on page 149.

Managing Replica Links


Follow these steps to manage a replica link and change the promotion status of the local pod.
1 Choose Protection > ActiveDR.
2 In the Replica Links pane, click the menu icon of the replica link that you want to manage and
select one of the actions presented.
l Delete
Deletes a replica link between a local FlashArray and a remote FlashArray.
l Pause
Pauses ActiveDR replication by stopping the replica-link connection between a local
FlashArray and a remote FlashArray. The write streams continue in the background.
To continue the replication, resume the replica link.
l Resume
Resumes ActiveDR replication after the pause operation.
l Promote Local Pod
Promotes the pod to provide read/write access to the host and presents the content
from the most recent recovery point.

l Demote Local Pod


Demotes the pod to provide read-only access to the host.

Promotion Status of a Pod


The promotion status of a pod indicates whether a pod is promoted or demoted in the pod-to-
pod, ActiveDR replication process. A promoted pod provides read/write access to the host, while
a demoted pod provides read-only access. By default, the promotion status of a pod is pro-
moted when it is initially created.
To see the promotion status of a pod, select Storage > Pods. A pod can have one of the fol-
lowing promotion statuses:
l promoting - The promotion process of the pod is under way.
l promoted - The promotion process is complete, and the pod has been promoted.
This is the default status of a pod when it is initially created.
l demoting - The demotion process of the pod is under way.
l demoted - The demotion process is complete, and the pod has been demoted.
To manage the promotion status of a pod, see
l "Demoting Pods" on page 148
l "Promoting Pods" on page 148
When you demote a pod by clicking the menu icon of the pod and selecting Demote…, the following changes occur:
l The promotion status of the pod changes.
When you create a pod initially, its promotion status is promoted, which allows the
pod to provide read/write access to the host. During the demotion process, the pro-
motion status changes to demoting initially before transitioning to demoted when
the demotion process is complete. A pod with the demoting or demoted status
stops receiving new writes and configuration changes from the host; that is, it allows
read-only access to the host.
l An undo-demote pod is created if the demoted pod has no undo-demote pod.
When the pod status changes to demoted, an undo-demote pod is created and placed in an eradication pending period. An undo-demote pod preserves the pod configuration and data in the state before the demotion process. Therefore, you can use the undo-demote pod to retrieve data that is not carried over during the demotion process. An undo-demote pod is automatically eradicated after its eradication pending period has elapsed.

Note: If an undo-demote pod already exists, the demotion process fails with an
error.
A pod can have only one undo-demote pod named pod_name.undo-demote. You
cannot demote a pod that already has an undo-demote pod. To demote such a pod,
you must first eradicate the undo-demote pod. You cannot rename an undo-demote
pod; however, when you rename a demoted pod, the associated undo-demote pod
automatically inherits the new pod name. For example, renaming a demoted pod
podA to podB automatically changes the undo-demote pod name from podA.undo-
demote to podB.undo-demote.
When you demote a pod that is the source of a replica link, you must restrict the promotion
status transitions by clicking either the Quiesce button or the Skip Quiesce button in the
Demote Pod dialog box.
l The Quiesce setting
Demotes a pod to allow it to become a target pod after the replica-link status changes
to quiesced. Setting this option ensures that all local data has been replicated to the
remote pod before the pod is demoted.
You should set this option when performing a planned failover.
l The Skip Quiesce setting
Demotes a pod to allow it to become a target pod without waiting for the quiesced
status of the replica link. Using this option loses any data that has not been replicated
to the remote pod.
When you promote a pod that was demoted, note the following conditions:
l The promotion status of the pod initially shows the promoting status, indicating the
promotion process is in progress. When the promotion process is complete, the pro-
motion status transitions to promoted.
Note: You must wait for the promotion status to transition to promoted before access-
ing the data in the pod.
l Promoting a pod is restricted if the replica-link status is quiescing.
To override this restriction, select the Abort Quiesce button to force promotion
without waiting for the quiesce operation to complete replicating data from the source.
Using this option loses any data that has not been replicated and reverts the pod to
its most recent recovery point.

Demoting Pods
By default, the promotion status of a pod is promoted when it is initially created. You can
demote a pod to allow it to become a target pod for ActiveDR replication.
To demote a pod,
1 Log in to the FlashArray in which you want to demote the pod.
2 Select Storage > Pods and select the pod to demote.
3 In the Pods panel, click the menu icon of the pod and select Demote….
The Demote Pod dialog box appears.
4 If the pod is the source of a replica link, configure either the Quiesce or the Skip Quiesce setting.
5 Click Demote.
The promotion status of the pod changes to demoted when the demotion process is com-
plete.
For more information, see "Promotion Status of a Pod" on page 146.

Promoting Pods
You can promote a pod that was previously demoted to allow read/write access to the host. If the
pod is the target of a replica link, the pod will be updated with the latest replicated data from the
journal.
To promote a pod,
1 Log in to the FlashArray in which you want to promote the pod.
2 Select Storage > Pods and select the pod to promote.
3 In the Pods panel, click the menu icon of the pod and select Promote….
The Promote Pod dialog box appears.
4 (Optional) Select the Abort Quiesce check box.
Using this setting promotes the pod while the replica-link status is quiescing, without waiting for the quiesce operation to complete.
5 (Optional) Select the Promote From 'pod.undo-demote' check box.
Setting this option promotes the pod using the associated undo-demote pod as the source.
When the promotion process is complete, the pod contains the same configuration and
data as the undo-demote pod. The undo-demote pod will be eradicated.
6 Click Promote.

The promotion status of the pod changes to promoted when the promotion process is com-
plete. You must wait for the promotion status to transition to promoted before accessing
the data in the pod.
For more information, see "Promotion Status of a Pod" on page 146.

Replica Links
When you associate a source pod with a demoted pod by creating a replica link, the demoted pod becomes the target pod of the source pod. The direction of the replica link is from the promoted source pod to the demoted target pod. You can create replica links in either direction between the same two FlashArrays. The target pod of a replica link cannot be on the same FlashArray as the source pod.
The target pod of a replica link tracks the data and configuration changes of the source pod, including changes to volumes, snapshots, protection groups, and protection group snapshots. Changes to the source pod are continuously replicated to the target FlashArray, where they are stored in the background in a journal. While the target pod is demoted, it is updated with the latest changes from the journal every few minutes.
This form of replication does not have an impact on front-end write latency because host writes on the source are not required to wait for acknowledgment from the target FlashArray as they would with ActiveCluster replication. Therefore, writes on the source are not affected by latency on the replication network or the distance between the source and target FlashArrays.
Note the following configuration differences between a source pod and the associated target
pod:
l The replicated volumes in the target pod have different serial numbers from the same
volumes in the source pod.
l The target pod has different hosts and host group connections.
To view more detailed information of replica links, see
l "Displaying Replica Links" on page 151
l "Displaying the Lag and Bandwidth Details of Replica Links" on page 152

Replica-Link Status
Replica-link status includes the following values:
l baselining

Indicates that the source pod is sending the initial data set.

Note: During the baseline process, promoting a target pod in the demoted status is
not allowed.
l idle
Indicates that write streams stop because the source pod is being demoted with the
Skip Quiesce setting.
l paused
Indicates that ActiveDR replication between the source and target pods has been
paused.
For information on how to resume the replication, see "Managing Replica Links" on
page 145.
l quiescing
Indicates that the source pod is not accepting new writes and the most recent writes
to the source pod are currently being replicated to the target pod.
l quiesced
Indicates that the source pod is demoted and all the new writes have been replicated
to the target pod.
l replicating
Indicates that the source pod is replicating data to the target pod.
l unhealthy
Indicates that the current replica link is unhealthy. You should check the connection.
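As a rough summary of how these statuses interact with pod promotion, the sketch below models them in Python. The enum values mirror the statuses listed above; the helper function is hypothetical and encodes only the two documented restrictions (a demoted target pod cannot be promoted during baselining, and promotion during quiescing requires the Abort Quiesce override).

```python
from enum import Enum

class ReplicaLinkStatus(Enum):
    BASELINING = "baselining"
    IDLE = "idle"
    PAUSED = "paused"
    QUIESCING = "quiescing"
    QUIESCED = "quiesced"
    REPLICATING = "replicating"
    UNHEALTHY = "unhealthy"

def promotion_allowed(status: ReplicaLinkStatus, abort_quiesce: bool = False) -> bool:
    """Hypothetical helper that reflects the documented promotion restrictions."""
    if status is ReplicaLinkStatus.BASELINING:
        return False           # a demoted target pod cannot be promoted while baselining
    if status is ReplicaLinkStatus.QUIESCING:
        return abort_quiesce   # promotion requires the Abort Quiesce override
    return True
```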

Lag and Recovery Point


In addition to the replica-link status, you can analyze the replication status by comparing different lags. In the case of a long lag time, you can use the recovery point to determine the most recent snapshot to recover.
l Lag
The amount of time, measured in seconds (CLI) or in milliseconds (REST API), that the replication target is behind the source. This is the time difference between the current time and the recovery point.
l Recovery point
l Timestamp of the most recent changes that have been successfully replicated, in seconds (CLI) or in milliseconds (REST API) since the UNIX epoch.
l The value represents the recovery point if the pod is promoted.
l The value is null if the replica link is baselining.
Note: The lag and recovery point refer to the data that is successfully replicated to the
journal on the target and can be recovered by promoting the pod. The current contents of
the target pod might not reflect the reported recovery point. The reason is that the target
pod is updated periodically only when it is demoted.
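Because the lag is simply the difference between the current time and the recovery point, it can be derived from a recovery-point timestamp as in the minimal sketch below. The epoch units (seconds for the CLI, milliseconds for the REST API) follow the description above; the function name and the example value are illustrative only.

```python
import time

def lag_from_recovery_point(recovery_point_epoch_ms: int) -> float:
    """Return the lag in seconds for a REST-style recovery point (milliseconds since the UNIX epoch)."""
    now_ms = time.time() * 1000
    return max(0.0, (now_ms - recovery_point_epoch_ms) / 1000)

# Example: a recovery point 30 seconds in the past corresponds to a lag of roughly 30 seconds.
example_recovery_point = int(time.time() * 1000) - 30_000
print(f"lag ~ {lag_from_recovery_point(example_recovery_point):.0f} s")
```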

Bandwidth Requirements
ActiveDR replication does not impose a minimum bandwidth requirement to maintain the near-zero RPO. However, when the front-end data transfer rate exceeds the available bandwidth in your environment, the RPO increases and ActiveDR replication automatically transitions to asynchronous mode to minimize lag.

Unlink Operation
ActiveDR replication associates a source pod with a target pod using a replica link. When you
unlink the two pods by deleting the replica link, the data in the target journal is automatically
transferred to an undo-demote pod. You can retrieve the data using the undo-demote pod; there-
fore, the pods may be relinked without transferring a complete baseline of all data. An undo-
demote pod is automatically eradicated after its eradication pending period has elapsed.

Displaying Replica Links


To view the detailed information of all replica links, select Replication > Replica Links.
The Replica Links pane displays information, such as the direction, status, recovery point, band-
width, and lag of all replica links, as shown in Figure 6-17.
Figure 6-17. Replica Links

Displaying the Lag and Bandwidth Details of Replica Links


To view the lag and bandwidth details of specific replica links,
1 Select Replication > Replica Links.
2 In the Replica Links pane, select the check boxes of the replica links.
The Lag and Bandwidth panes display the lag and bandwidth information for the selected replica links for the past one hour. If you do not select any replica link in the Replica Links pane, the charts display no data.
To deselect a replica link from the charts, use one of the following methods:
l Deselect the check box of the replica link in the Replica Links pane.
l Click the X mark next to the replica link in the Selection(n) dropdown menu in the upper-left corner.
l Click Clear all in the Selection(n) dropdown menu in the upper-left corner to clear the replica-link selections.
3 Hover over the graph in the Lag pane or the Bandwidth pane to display the lag and band-
width information in the point-in-time tooltips of the selected replica links.
4 In the Lag pane, display the following lag details:
l Click the graph to show the lag information for a specific time in the Replica Links
pane.
l Click Avg to view the average lag.
l Click Max to view the maximum lag.
5 In the Bandwidth pane, display the following bandwidth details:
l Click the graph to show the bandwidth information for a specific time in the Replica
Links pane.
l Select the To remote check box to view the bandwidth information to the remote
FlashArray.
l Select the From remote check box to view the bandwidth information from the remote
FlashArray.
l Select both the To remote and From remote check boxes to view the total bandwidth
information to and from the remote FlashArray.
See Figure 6-18.

Figure 6-18. Lag and Bandwidth Information of Replica Links

Performing a Failover for Fast Recovery


In the event of a disaster, the production site fails over to the disaster recovery (DR) site to min-
imize the downtime and to mitigate data loss associated with the disaster. To initiate a failover,
promote the target pod at the recovery site to be the new source pod for your production oper-
ations.

Note: Before a failover process, you should configure ActiveDR replication by linking the source FlashArray at the production site with a target FlashArray at the recovery site to protect your mission-critical workloads. For more information, see "Setting Up ActiveDR Replication" on page 141.

To perform a failover process,


1 To speed up the failover process, you may connect the hosts to the volumes in the target pod
at the recovery site before a failover.
For more information, see "Failover Preparation" on the next page.
2 Promote the target pod at the recovery site to be the new source pod.
The promotion status of the target pod changes to promoting while the target pod is
being updated with the latest changes. When the promotion process is complete, the pro-
motion status changes to promoted and the target pod can now allow write access in
addition to read access to the host.
To force an immediate failover while the replica link is in the quiescing state without
waiting for replication to complete, promote the target pod with the Abort Quiesce setting.

For more information, see "Promotion Status of a Pod" on page 146.


3 Shut down the hosts that are associated with the unavailable FlashArray at the original pro-
duction site.
4 Start the hosts that are associated with the target pod at the recovery site.
The recovery site becomes the new production site.
After the unavailable FlashArray is restored, you can perform the following processes:
1 "Performing a Reprotect Process after a Failover" on the next page
2 "Performing a Failback Process after a Failover" on page 158
See Figure 6-19 for an illustration of failover to the recovery site.

Figure 6-19. Failover to the Recovery Site

Failover Preparation
To speed up and simplify a failover process, you can connect the hosts to the volumes in the tar-
get pod at the recovery site before a disaster occurs. After this connection, these replica
volumes provide only read access, while the target pod is in a passive state.
In a disaster event, the source pod fails over to a designated target pod that is promoted to allow read/write access to the host. Before you start a host application, you should remount the file systems through the host OS. This refresh ensures that the host OS and applications do not retain stale or invalid data from the previous state of the volumes.

Note: The ability to access volumes in a read-only state depends on whether the host operating system and applications can mount and read a read-only volume. This capability varies by operating system and version.

Performing a Reprotect Process after a Failover


After a failover, the recovery site becomes the new production site. You can reprotect the data at
the new production site by performing a reprotect process when the unavailable FlashArray at
the original production site is restored. A reprotect process reverses the direction of replication

after a failover so that the original production site becomes the target of replication and the new
production site is protected. See Figure 6-20.
Figure 6-20. Reprotecting Data at the New Production Site after a Failover

To perform a reprotect process,


1 Restore the unavailable FlashArray at the original production site.
2 Select Storage > Pods and select the pod to demote on the restored FlashArray.

3 In the Demote Pod dialog box, demote the pod on the restored FlashArray with the Skip Qui-
esce setting.
Using this setting demotes the pod to allow it to become a target pod without waiting for the
quiescing replica-link status. The replica link automatically reverses its direction. Note
that any data that has not been replicated is preserved for at least 24 hours in the undo-
demote pod.
For more information about demoting a pod, see "Demoting Pods" on page 148.
If the network is disconnected during the demotion process, the promotion status of the pod remains at demoting. The reprotect process completes when the network is restored and the promotion status transitions to demoted.

Performing a Failback Process after a Failover


After a failover, the disaster recovery (DR) site becomes the new production site. You can per-
form a failback process to switch the production operations back to the original production site
when the unavailable FlashArray is restored. A failback and a planned failover are both sched-
uled failover processes. The only difference is that a failback process switches the production
operations from the recovery site back to the original production site for data protection. See Fig-
ure 6-21.
Figure 6-21. Before and After a Failback

To fail back to the original production site,


1 Restore the unavailable FlashArray at the original production site.
2 Quiesce the applications on the hosts of the FlashArray at the new production site (recovery
site).
3 Select Storage > Pods and select the pod to demote on the FlashArray at the new pro-
duction site.
4 Demote the pod at the new production site with the Quiesce setting in the Demote Pod dia-
log box.
The promotion status of the pod changes to demoting, and the replica-link status trans-
itions to quiescing.
Setting the Quiesce option demotes the pod to allow it to become a target pod after the rep-
lica-link status changes to quiesced. Using this setting ensures that all local data has
been replicated to the remote pod before the pod is demoted.
For more information about demoting a pod, see "Demoting Pods" on page 148.
5 Wait for the replication from the new production site to complete by monitoring the replica-
link status.
When no more new writes occur, the replica-link status changes to quiesced and the pro-
motion status of the pod changes to demoted. Alternatively, you can monitor the lag or the
recovery point to determine when the last write occurred.
For more information about replica links, see "Replica Links" on page 149.
6 Promote the source pod at the original production site.
The promotion status of the pod changes to promoting and eventually transitions to pro-
moted. When the promotion status transitions to promoted, the replica-link reverses its
direction.
7 Start your production applications on the new source FlashArray.

Performing a Planned Failover


You can perform a planned failover to transition workloads or applications from one site to
another. A planned failover and a failback are both scheduled failover processes. The only dif-
ference is that a failback process switches the operation from the recovery site back to the ori-
ginal production site after the unavailable FlashArray has been restored.
To perform a planned failover,
1 Quiesce the applications on the hosts of the source FlashArray at the production side.
2 Select Storage > Pods and select the source pod to demote at the production site.
3 In the Demote Pod dialog box, demote the source pod with the Quiesce setting.

The promotion status of the pod changes to demoting, and the replica-link status trans-
itions to quiescing.
Setting the Quiesce option demotes the pod to allow it to become a target pod after the rep-
lica-link status changes to quiesced. Using this setting ensures that all local data has
been replicated to the remote pod before the pod is demoted.
For more information about demoting a pod, see "Demoting Pods" on page 148.
4 Wait for the replication to complete by monitoring the replica-link status.
When no more new writes occur, the replica-link status changes to quiesced and the pro-
motion status of the pod changes to demoted. Alternatively, you can monitor the lag or the
recovery point to determine when the last write occurred.
For more information about replica links, see "Replica Links" on page 149.
5 Promote the target pod to be the new source pod by clicking the menu icon of the pod and
selecting Promote....
The promotion status of the target pod changes to promoting while the target pod is
being updated with the most recent write. When the promotion process is complete, the pro-
motion status changes to promoted and the target pod can now allow write access to the
host. As soon as the status transitions to promoted, the replica-link reverses its direction.
6 Start your production applications on the new source FlashArray.

Recovery Strategies for Planned Failovers


During a planned failover, if the target pod or the source pod goes offline unexpectedly, proceed
with these recovery strategies as appropriate.
During the demotion process of the source pod (status demoting)
l If the target pod goes offline, you can promote the source pod by clicking the menu
icon of the source pod and selecting Promote….
The replica-link status changes to unhealthy because the target pod is unavailable.
When the target pod is back online, the replica-link status changes to
replicating.
l If the source pod goes offline, you can force the promotion of the target pod by clicking the menu icon of the target pod and selecting Promote… with the Abort Quiesce setting.
The replica-link status changes to unhealthy because the source pod is unavail-
able. When the source pod becomes available and re-connects to the target pod, the

promotion status of the source pod transitions from demoting to demoted. The rep-
lica link reverses its direction and the target pod becomes the new source pod.
After the demotion process of the source pod (status demoted)
l The replica-link status is quiesced, but the replica link has not reversed its direction.
If the target pod goes offline, you can promote the source pod by clicking the menu
icon of the source pod and selecting Promote….
The replica-link status changes to unhealthy because the target pod is unavailable.
When the target pod comes back online, the replica-link status transitions to rep-
licating.

Performing a Test Recovery Process


You can run a test recovery process for testing purposes such as evaluating your disaster recov-
ery strategy or checking your applications. Simulating a failover in the test environment allows
you to assess if the recovery procedure achieves the designated RPO and RTO values in the
event of a disaster. This feature allows failover testing and promotion of a target pod without dis-
rupting replication to maintain the RPO. See Figure 6-22.
Figure 6-22. Test Recovery

1 Configure ActiveDR replication by associating the source pod with the target pod for data pro-
tection as described in "Setting Up ActiveDR Replication" on page 141
2 Select Storage > Pods and select the target pod to promote at the test site.

3 Promote the target pod by clicking the menu icon of the pod and selecting Promote....
The promotion status of the target pod changes to promoting. When the promotion pro-
cess is complete, the promotion status changes to promoted. After being promoted, the
target pod can now provide read/write access to the host. The source pod continues rep-
licating data in the background in a journal without periodically updating the promoted tar-
get pod.
4 Bring up the host on the target pod.
The data presented to the host will be the point in time when the last data was replicated
and before the target pod was promoted.
5 Perform your tests on the target pod.
In the meantime, replication continues streaming writes in the background in a journal
without periodically updating the target pod. Therefore, you maintain the RPO without los-
ing any data.
6 When the test is complete, terminate the test recovery process by demoting the target pod.
When you demote the target pod,
l The test data written to the target pod will be discarded. However, the data will be
saved in an undo-demote pod that is placed in an eradication pending period.
l ActiveDR replication resumes streaming writes to the target pod.
During an actual failover, when the source FlashArray is offline, ActiveDR replication is dis-
rupted so that no new writes are replicated from the source pod to the target pod. However, if
both the source and target FlashArrays are still online and connected as in the test recovery pro-
cess, ActiveDR replication streams new writes in the background in a journal without periodically
updating the target pod to maintain the RPO. You can optionally choose to stop ActiveDR rep-
lication from the source pod to the target pod.

File Systems
The Storage > File Systems page displays file systems, managed directories, file exports,
policies, and directory snapshots on the FlashArray. View and manage the storage objects and
the connections between them. Click a file system or a directory to go into its details. See Figure
6-23.
Figure 6-23. Storage > File Systems Page

The File Systems panel, which is available from the File Systems view, displays a list of file sys-
tems on the array. Click on a file system name for further details.

Creating a File System


To create a file system:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the File Systems panel, click the menu icon and select Create, or click the Create File Sys-
tem (plus) icon.
4 In the pop-up window, specify the name of the new file system, and then click the Create but-
ton.
Creating a file system automatically creates a root managed directory named root. This directory can only be destroyed together with the entire file system.

Renaming a File System


Note: Rename a file system to change the name by which Purity//FA identifies the file sys-
tem in administrative operations and displays. Renaming a file system cascades down to
the managed directory names. The new names are effective immediately and the old
names are no longer recognized in CLI, GUI, or REST interactions.
To rename a file system:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the File Systems panel, click the rename icon for the file system you want to rename. The
Rename File System dialog box appears.
4 In the Name field, enter the new name of the file system and click the Rename button.

Destroying a File System


Note: Destroying a file system also destroys all of its directories and directory snapshots.

To destroy a file system:


1 Log in to the array.
2 Select Storage > File Systems.
3 In the File Systems panel, click the menu icon and then select Destroy.
Alternatively, you can select the destroy icon (garbage) for individual file systems that you
want to destroy, and then click the Destroy button to confirm.
The destroyed file system appears in the Destroyed File Systems panel and begins its erad-
ication pending period. During the eradication pending period, you can recover the file system to
bring it back to its previous state, or manually eradicate the destroyed file system to reclaim
physical storage space. When the eradication pending period has elapsed, Purity//FA starts
reclaiming the physical storage occupied by the file system. Once reclamation starts, either
because you have manually eradicated the destroyed file system, or because the eradication
pending period has elapsed, the destroyed file system can no longer be recovered.
(For information about eradication pending periods, see "Eradication Delays" on page 35. To
configure eradication pending periods in the Settings > System > Eradication Configuration
pane, see "Eradication Delay Settings" on page 285.)

Creating a Directory
The Directories panel displays a list of all managed directories on the array or on the selected
file system.
To create a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the Directories panel, click the menu icon and select Create, or click the Create Directory
(plus) icon.
4 In the pop-up window, specify the directory as follows:
l File System: If not pre-selected, select a file system from the drop-down list.
l Name: The name to be used for administration.
l Path: The full path for the new directory. The path for a managed directory can be up
to eight levels deep, seven levels below the root.
Click the Create button and the managed directory is created.
Click on a directory name for further details.

Renaming a Directory
Note: Rename a managed directory to change the name by which Purity//FA identifies
the managed directory in administrative operations and displays. The new directory name
is effective immediately and the old name is no longer recognized in CLI, GUI, or REST
interactions. Note that the root directory cannot be renamed.
To rename a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the Directories panel, click the rename icon for the directory you want to rename. The
Rename Directory dialog box appears.
4 In the Name field, enter the new name of the managed directory, and click the Rename but-
ton.

Creating a File Export


The Directory Exports panel displays a list of all directory exports on the array, for the selected
file system, or the selected directory.
Managed directory exports (i.e., shares) are created by adding export policies to managed dir-
ectories. Export policies are created on the Storage > Policies page. Each policy can be re-
used for multiple exports, each export having its own unique name. For each managed dir-
ectory, there can be one or many exports.
To create a file export:
1 Log in to the array.
2 Select Storage > File Systems.
3 In the Directories panel, select a managed directory by clicking the directory name.
4 In the Directory Exports panel, click the Create Exports (plus) icon.
5 In the pop-up window, specify the export as follows:
l NFS Policy: Select an NFS policy.
l SMB Policy: Select an SMB policy.

l Export Name: The name of the export. This name is used when mounting on the cli-
ent side.
Click the Create button and the export is created.
By selecting policies for both NFS and SMB, two exports are created in one operation, both with
the same name. This is possible since the two exports reside in different namespaces.
To delete a file export: In the Directory Exports panel, click the Delete Exports icon (garbage) for
the export that you want to remove. Then confirm the action by clicking the Delete button.
To create and manage, enable or disable, export policies and rules, see the Storage > Policies
page.

Adding a Policy
Export policies, snapshot policies for scheduled snapshots, and quota policies can be added to
a managed directory. Adding an export policy is equivalent to creating a file export, as described
above.
Export policies and quota policies are created on the Storage > Policies page, and snapshot
policies are created on the Protection > Policies page.
To add a policy to a managed directory:
1 Log in to the array.
2 Select Storage > File Systems.
3 Select a directory by clicking the directory name.
4 In the Policies panel, click the menu icon and select Add Export Policies, Snapshot
Policies, or Quota Policies.
5 In the pop-up window:
l Select one or more policies to be added.
l For exports, set a name that will be used when mounting on the client side.
6 Click the Add button and the policy is added.

Creating a Directory Snapshot


Snapshots can be created for any managed directory. The Directory Snapshots panel displays a
list of snapshots on the selected directory.

To manually create a directory snapshot:


1 Log in to the array.
2 Select Storage > File Systems.
3 Select a directory by clicking the directory name.
4 In the Directory Snapshots panel, click the menu icon and select Create, or click the Create
Snapshots (plus) icon.
5 In the pop-up window, specify the snapshot as follows:
l Client Name: The client visible name for the snapshot, for example manual.
l Keep For: Optionally, the time period for which the snapshot is retained before it is eradicated, specified in the format N[m|h|d|w|y], for example 15m, 1h, 2d, 3w, or 4y. The minimum value is five minutes (5m) and the maximum value is five years (5y). (A conversion sketch follows this procedure.)

l Suffix: Optionally, specify a suffix string to replace the unique number that Purity//FA
creates for the directory snapshot.
Click the Create button and the snapshot is created.
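As a worked example of the Keep For format described above, the sketch below converts a retention value such as 2d into seconds and checks the documented 5m to 5y range. The conversion factors (including the assumption that a year is 365 days) are illustrative only; Purity performs its own validation.

```python
import re

# Seconds per unit; treating a year as 365 days is an assumption for this sketch.
_UNITS = {"m": 60, "h": 3600, "d": 86400, "w": 7 * 86400, "y": 365 * 86400}

def keep_for_seconds(value: str) -> int:
    """Parse a retention value such as '15m', '1h', '2d', '3w', or '4y' into seconds."""
    match = re.fullmatch(r"(\d+)([mhdwy])", value)
    if not match:
        raise ValueError(f"invalid Keep For value: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    seconds = amount * _UNITS[unit]
    if not 5 * 60 <= seconds <= 5 * _UNITS["y"]:
        raise ValueError("Keep For must be between 5 minutes (5m) and 5 years (5y)")
    return seconds

print(keep_for_seconds("2d"))  # 172800
```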
To change snapshot attributes or destroy the snapshot, click the menu icon next to the snapshot
and select Edit, Rename, or Destroy. Only manual snapshots can be renamed.
Protection plans and scheduled snapshots are configured through the Protection > Policies
page.

Directory Details
The Directory Details panel displays additional details for the selected directory. The following
information is available:
l File System: The name of the file system where the directory exists.
l Path: The full path of the directory.
l Created: The date and time when the directory was created.

Policies
The Storage > Policies page displays SMB and NFS export policies, which are used to create
file exports. Directory quota policies are used for creating directory quota limits. See Figure 6-
24.
Figure 6-24. Storage > Policies Page

Note: Predefined export policies may exist, which can be used to create SMB and NFS
exports. These policies should be reviewed and updated according to actual require-
ments before use.
For export policies or quota policies, click a policy name to go into its details:
l Member panel - displays directories that are members of the policy.
l Rule panel - displays rules that are added to the policy.
l Details panel - displays information about the policy and its rules, for example: type
of export, enabled or disabled, and the supported NFS version for NFS exports.

Creating an Export Policy


The Export Policy panel displays a list of available policies for NFS and SMB exports. The ver-
sion of NFS is selected later, when adding rules.
To create an export policy:
1 Log in to the array.
2 Select Storage > Policies.

3 In the Export Policy panel, click the menu icon and select Create, or click the Create Policy
(plus) icon.
4 In the pop-up window, specify the policy as follows:
l Type: SMB or NFS, selected from the drop-down list.
l Name: The name of the policy.
l Enabled: Click the toggle icon to enable (blue) or disable (gray) the policy.
l Access Based Enumeration: SMB only. To enable this feature, click the toggle icon.
l User Mapping Enabled: NFS only. To disable user mapping, click the toggle icon
(gray).
5 Click the Create button and the export policy is created.
The SMB “Access Based Enumeration” option allows directories and files to be hidden for cli-
ents that have less than generic read permissions. When enabled, these objects are omitted
from the response by the FlashArray.
The NFS “User Mapping Enabled” option allows user UID and GID values to be provided by directory services. User mapping is enabled by default. Disabling this option lets you use file services without directory services. Disabling user mapping for existing files or directories might cause accessibility issues.

Adding Rules to an Export Policy


To add a rule to an existing SMB or NFS export policy:
1 Click a policy name to access the Rules panel.
2 In the Rules panel, add one or more rules by clicking the menu icon and select Create. Altern-
atively, click the Create Rule (plus) icon.
In the pop-up window, specify the rule as follows:
l Client: Allow only clients that match the specified hostname, IPv4 address, or IPv6 address. For example, *.cs.foo.edu, 192.168.10.2, 192.168.10.0/24, 2001:db8::7873, or 2001:db8::/32. If omitted, the default value is “*”, which means no clients are restricted.
l For SMB exports:
l Anonymous Access Allowed: Click the Enabled toggle icon to enable
(blue) if you want to allow anonymous access.

l SMB Encryption Required: Click the Enabled toggle icon to enable


(blue) if you want SMB encryption to be required.
l For NFS exports:
l Access: Select root-squash (default), no-root-squash, or all-squash.
l Anonymous UID: Specify the UID (defaults to 65534).
l Anonymous GID: Specify the GID (defaults to 65534).
l Permission: Select rw (read and write) or ro (read only).
l Version: Select NFSv3 or NFSv4 for version 3 or version 4.1 respectively.
This can be changed later, in the Details pane.
3 Click the Create button and the rule is created.
A client's access is granted only if Client satisfies one of the following:
l Matches the IP or IP CIDR, if the item can be converted into an IP (CIDR).
l Matches the full hostname (either with or without wildcard characters * and ?) that the
client IP belongs to, if the item cannot be converted into an IP (CIDR).
Valid IP (CIDR)
A valid IP (CIDR) item is an IPv4 or IPv6 address with or without prefix length. Examples of each
type are shown in this table.
Table 6-1. Example IP Addresses
Address Type Example
IPv4 with prefix 192.168.0.0/16
IPv4 without prefix 192.168.0.1
IPv6 with prefix fd01::1:0/112
IPv6 without prefix fd01::123

Valid Hostname
A valid hostname is one of the following:
l A fully qualified domain name (FQDN), for example mycomputer.mydomain.
l A hostname with wildcard characters, for example mycomputer*, where * matches
zero or more characters, or mycomputer.m?domain, where ? matches one char-
acter.

Table 6-2. Examples of Client Values
l To grant access to all IPs, use: *
l To grant access to all IPv4 addresses (and only IPv4 addresses), use: 0.0.0.0/0
l To grant access to all IPv6 addresses (and only IPv6 addresses), use: ::/0
l To grant access to a single IPv4 address, use the IP by itself or with a prefix length of 32. Examples: 192.168.0.1, 192.168.0.1/32
l To grant access to a single IPv6 address, use the IP by itself or with a prefix length of 128. Examples: fd01::123, fd01::123/128
l To grant access to a range of IPv4 addresses, use an IP with a prefix length (any IP inside the range, but typically the subnet IP). Example for the range of IPs from 192.168.0.0 to 192.168.255.255: 192.168.0.0/16
l To grant access to a range of IPv6 addresses, use an IP with a prefix length (any IP inside the range, but typically the subnet IP). Example for the range of IPs from fd01::1:0 to fd01::1:ffff: fd01::1:0/112
l To grant access to a single host, use the host's FQDN, which must match the DNS information provided to the FlashArray. Example: mycomputer.mydomain
l To grant access to all hosts within a domain, use a wildcard pattern. Example: *.mydomain
l To grant access to a set of hosts within a domain, use a wildcard pattern. Example for all computers with names beginning with pure0 in the domain mydomain: pure0*.mydomain
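To make the matching rules above concrete, the following sketch shows one way a rule's Client value could be tested against a connecting client, using Python's standard ipaddress and fnmatch modules. This only illustrates the documented matching behavior and is not Purity's implementation.

```python
import fnmatch
import ipaddress

def client_matches(rule_client: str, client_ip: str, client_fqdn: str) -> bool:
    """Check a client's IP and FQDN against a single export-rule Client value."""
    if rule_client == "*":
        return True                               # no client restriction
    try:
        network = ipaddress.ip_network(rule_client, strict=False)
    except ValueError:
        # Not an IP or CIDR: treat the value as an FQDN pattern (supports * and ? wildcards).
        return fnmatch.fnmatch(client_fqdn.lower(), rule_client.lower())
    return ipaddress.ip_address(client_ip) in network

print(client_matches("192.168.10.0/24", "192.168.10.2", "host1.mydomain"))  # True
print(client_matches("*.mydomain", "10.0.0.5", "pure01.mydomain"))          # True
```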

The SMB “Anonymous Access Allowed” option allows clients that do not provide credentials to access the export. If the option is disabled, anonymous users are denied access.
With the “SMB Encryption Required” option enabled, data encryption is enabled for the export.
This requires the remote client to use SMB encryption. Clients that do not support encryption will
be denied access. By default, when SMB encryption is enabled, only SMB 3.0 clients are
allowed access. If the option is disabled, negotiation of encryption is enabled but data encryption
is not turned on for this export.
With the NFS “root-squash” option selected, which is the default, client users and groups with
root privileges are prevented from mapping their root privileges to a file system. All users with
UID 0 will have their UID mapped to the anonymous UID (default 65534). All users with GID 0
will have their GID mapped to anonymous GID (default 65534). With the “all-squash” option, all
users are mapped to the anonymous UID/GID. The “no-root-squash” option allows root users
and groups to access the file system with root privileges.
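A minimal sketch of the squash behavior described above, assuming the default anonymous UID and GID of 65534; it only illustrates the mapping rules and is not Purity's implementation.

```python
ANON_UID = 65534  # default anonymous UID
ANON_GID = 65534  # default anonymous GID

def squash_ids(uid: int, gid: int, access: str) -> tuple[int, int]:
    """Map a client's UID/GID according to the export rule's Access setting."""
    if access == "all-squash":
        return ANON_UID, ANON_GID                 # every user maps to the anonymous identity
    if access == "root-squash":                   # the default setting
        return (ANON_UID if uid == 0 else uid,
                ANON_GID if gid == 0 else gid)    # only root is remapped
    return uid, gid                               # no-root-squash: root keeps its privileges

print(squash_ids(0, 0, "root-squash"))      # (65534, 65534)
print(squash_ids(1001, 100, "root-squash")) # (1001, 100)
```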

The NFS “rw” option allows both read and write requests, which is the default. With the “ro”
option, the exports that use the policy provide read-only access and any request that changes
the file system is denied. The file access timestamp will be updated for read-only access as well
as for read and write.

Creating a File Export by Member


Exports can be created for any managed directory. Create file exports by adding members to file
policies as shown here, or by creating exports on the Storage > File Systems page.
To create a file export by adding a member:
1 Log in to the array.
2 Select Storage > Policies.
3 Select an SMB or NFS export policy by clicking the export policy name.
4 In the Members panel, click the menu icon and select Add Member, or click the Add Mem-
ber (plus) icon.
5 In the pop-up window, specify the export as follows:
l Directory: Select a target directory for the export.
l Export Name: The name of the export. This name is used when mounting on the cli-
ent side.
Click the Create button and the export is created.
To delete a file export: In the Members panel, click Remove Member. Then confirm the action
by clicking the Remove button.

Creating a Quota Limit


The Quota Policy panel displays a list of available quota policies.
To create a quota policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Quota Policies panel, click the menu icon and select Create, or click the Create Policy
(plus) icon.
4 In the pop-up window, specify the quota policy as follows:

l Name: The name of the policy.


l Enabled: Click the toggle icon to enable (blue) or disable (gray) the policy.
Click the Create button and the quota policy is created.
Add one or more rules to the policy as follows:
1 Click a quota policy name to access the Members and Rules panels.
2 In the Rules panel, add one or more rules by clicking the menu icon and select Create. Altern-
atively, click the Create Rule (plus) icon.
3 In the pop-up window, specify the rule as follows:
l Quota Limit: The quota limit is specified as an integer. Click the drop-down menu to select one of the suffix letters K, M, G, T, P, representing KiB, MiB, GiB, TiB, and PiB, respectively, where "Ki" denotes 2^10, "Mi" denotes 2^20, and so on. (A conversion sketch follows this procedure.)
l Notifications: Recipients are user or group. If omitted, no notifications are sent.
l Enforced: If the quota limit is exceeded, all future space increasing operations will be
prevented. There can be one enforced limit for each quota policy. Click the Enabled
toggle icon to enable (blue) if you want the quota rule to be enforced.
l Ignore Usage: Adds the rule even if the quota limit is already exceeded.
Click the Add button and the rule is created.
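As a minimal sketch of how the quota limit value and the Enforced option behave, the following converts a limit such as 100G to bytes using the binary units listed above and checks whether a space-increasing operation would be allowed. This is an illustration only; the array performs these checks internally.

UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40, "P": 2**50}

def quota_bytes(limit):
    """Convert a quota limit such as '100G' (meaning 100 GiB) to bytes."""
    value, suffix = int(limit[:-1]), limit[-1].upper()
    return value * UNITS[suffix]

def allow_space_increase(current_usage, requested, limit, enforced):
    """Enforced rules reject operations that would push usage past the limit."""
    if not enforced:
        return True  # a non-enforced rule does not block writes
    return current_usage + requested <= quota_bytes(limit)

print(quota_bytes("100G"))                                        # 107374182400
print(allow_space_increase(99 * 2**30, 2 * 2**30, "100G", True))  # False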
Apply a directory quota policy by adding one or more managed directories as members:
1 Select Storage > Policies.
2 In the Quota Policies panel, select a quota policy by clicking the policy name.
3 In the Members panel, click the menu icon and select Add Member.
4 In the pop-up window, select one or more available directories.
Click the Add button and the quota policy is applied.
Members can be removed from the quota policy by selecting Remove Member in the Members
panel menu. Select directories to be removed and click the Remove button.

Modifying a Quota
Once quota rules are defined, they can be modified or renamed. Members can be added or
removed.
1 Log in to the array.
2 Select Storage > Policies.


3 In the Quota Policies panel, select a quota policy by clicking its name.
To modify a rule, click the edit icon next to the rule. Alternatively, for multiple rules, click the
menu icon and select Edit.... Then, in the pop-up window, specify one or more rules to be mod-
ified, separated by commas.
When modifying or enforcing an existing quota limit, the “Ignore Usage” option can be used to override directory usage scanning and allow the change even if current usage already exceeds the new limit.
Click the Save button and the quota rule is modified.

Editing a Policy
Policies can be temporarily disabled and re-enabled, or selected features can be disabled or
enabled, by editing the policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy
and select Edit....
Click the toggle icon to enable (blue) or disable (gray) each feature and then click the Save but-
ton.

Enabling or Disabling a Policy


Quota policies and export policies can be temporarily disabled and re-enabled. To disable or
enable an export policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy
and select Edit....
Click the toggle icon to enable (blue) the policy or disable (gray) and then click the Save button.


Enabling SMB Access Based Enumeration


Access Based Enumeration (ABE) is an SMB feature that allows an export to hide directories
and files for clients that have less than generic read permissions. When enabled, these objects
are omitted from the response by the FlashArray. Changes affect connected clients only after
they refresh their view or after they reconnect. To disable or enable ABE:
1 Log in to the array.
2 Select Storage > Policies.
3 Select an SMB export policy by clicking the export policy name.
4 In the Details panel, click the menu icon and select Enable....
Click the toggle icon to enable (blue) or disable (gray) and then click the Save button.

Changing NFS Version


1 Log in to the array.
2 Select Storage > Policies.
3 Select an NFS export policy by clicking the export policy name.
4 In the Details panel, click the menu icon and select Edit Version....
Select NFSv3, NFSv4, or both and then click the Save button.

Renaming a Policy
To rename a quota policy or an export policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy you
want to rename and select Rename....
4 In the Name field, enter the new name of the policy and click the Rename button.
The policy is renamed.


Deleting an Export Policy


To delete a quota policy or an export policy:
1 Log in to the array.
2 Select Storage > Policies.
3 In the Export Policies panel or the Quota Policies panel, click the menu icon for the policy you
want to delete and select Delete.
4 Click the Delete button to confirm.
The policy is deleted.

Storage Policy Based Management


Storage Policy Based Management (SPBM) is an interface within vCenter that enables users to
have virtual machine storage automatically placed and configured on the desired storage
resource with the specified features and characteristics of a customer-created policy. SPBM
allows Purity to advertise available features and characteristics to VMware so that customers
can include those in their native VM storage policies. These policies can be assigned at a VM or
virtual disk level. For more information on virtual volumes and the advertised capabilities, including
configuration steps, refer to the Pure Storage Virtual Volume User Guide on the Knowledge
Base site at https://support.purestorage.com.

Chapter 7: Protection
The Protection page displays snapshots, policies, protection groups, ActiveDR replica links, and
ActiveCluster pods that have been promoted but not linked.
The Protection page includes the following tabs:
l Arrays
l Snapshots
l Policies
l Protection Groups
l ActiveDR
l ActiveCluster


Array
The Protection > Array page displays a summary of the protection components on the array, a
list of other arrays that are connected to this array, and a list of offload targets, such as Azure
Blob containers, NFS devices, and S3 buckets, that are connected to this array. See Figure 7-1.
Figure 7-1. Protection – Array


The array summary panel (with the array name in the header bar) contains a series of rectangles
(technically known as hero images) representing the protection components, such as snap-
shots, protection groups, and policies, on the array. The numbers inside each hero image rep-
resent the number of objects created for each of the respective components. Click a rectangle to
jump to the page containing the details for that particular protection component.
Array attributes, such as array name and array time, are configured through the Settings > Sys-
tem page.
The Connected Arrays panel displays a list of arrays that are connected to the current array. A
connection must be established between two arrays in order for array-based data replication to
occur.
Purity//FA offers three types of replication: asynchronous replication, ActiveDR replication, and
ActiveCluster replication.
Asynchronous replication allows data to be replicated from one array to another. When two
arrays are connected for asynchronous replication, the array where data is being transferred
from is called the local (source) array, and the array where data is being transferred to is called
the remote (target) array. Asynchronous replication is configured through protection groups. For
more information about protection groups, refer to the Protection > Protection Groups section.
ActiveDR replication allows pod-to-pod, continuous replication of compressed data from a
source array at the production site to a target array at the recovery site, providing a near-zero
Recovery Point Objective (RPO). For more information about ActiveDR replication, see "Act-
iveDR Replication" on page 140.
ActiveCluster replication allows I/O to be sent into either of two connected arrays and have it
synced up on the other array. ActiveCluster replication is configured through pods. For more
information about pods, refer to Pods.
For information about Purity//FA replication requirements and interoperability details, see the
Purity Replication Requirements and Interoperability Matrix article on the Knowledge site at
https://support.purestorage.com.
Arrays are connected using a connection key, which is supplied from one array and entered into
the other array.
The Connected Arrays panel displays a list of FlashArray arrays that are connected to the cur-
rent array, and the attributes associated with each connection. See Figure 7-2.


Figure 7-2. Array Connections

The Status column displays the connectivity status between the current array and each remote
array. A Status of connected means the current array is connected to the remote array.
Network connection issues or firewall issues can prevent the current array from establishing a
connection to the remote array.
The Type column displays the type of connection that has been established between the two
arrays for asynchronous replication (async-replication) and synchronous replication
(sync-replication) purposes. Array connections set to async-replication support
asynchronous replications only, while array connections set to sync-replication support
both synchronous and asynchronous replications.
The Management Address column displays the virtual IP address or FQDN of the other array.
The Replication Address column displays the IP address or FQDN of the interface(s) on the
other array that have been configured with the replication service. The management and rep-
lication addresses only appear for the arrays from where an array connection was made. If the
array connection was made from its peer array, the Management Address and Replication
Address columns are empty.
The Array Connections panel also allows you to create new connections to other FlashArray
arrays, view and copy the array connection key, and configure network bandwidth throttling lim-
its for asynchronous replications.
The Network bandwidth throttling feature regulates when and how much data should be trans-
ferred between the arrays. Once two arrays are connected, optionally configure network band-
width throttling to set maximum threshold values for outbound traffic.
In the Array Connections panel, the Throttled column indicates whether network bandwidth throt-
tling has been enabled (True) or disabled (False).
Two different network bandwidth limits can be set:
l Set a default maximum network bandwidth threshold for outbound traffic.
and/or


l Set a range (window) of time in which the maximum network bandwidth threshold is in
effect.
If both thresholds are set, the “window” limit overrides the “default” limit.
The limit represents an average data rate, so actual data transfer rates can fluctuate slightly
above the configured limit.
To completely stop the data transfer process, refer to "Managing Replica Links" on page 145
and use the Replica Links pause and resume actions.
In the following example, the current array has been configured to throttle whenever the rate of
data being transferred to array vm-rep exceeds 4 GB/s, except between 10:00am and 3:00pm,
when throttling will occur whenever the data transfer rate exceeds 2 GB/s. See Figure 7-3.
Figure 7-3. Editing Bandwidth Throttling
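The interaction of the two limits in this example can be expressed as a minimal sketch. The values, times, and function below are illustrative only; the array applies the limits internally.

from datetime import time
from typing import Optional

def effective_limit_mbs(now: time,
                        default_limit: Optional[float] = 4096.0,   # about 4 GB/s
                        window_limit: Optional[float] = 2048.0,    # about 2 GB/s
                        window_start: time = time(10, 0),
                        window_end: time = time(15, 0)) -> Optional[float]:
    """Return the outbound bandwidth limit (MB/s) in effect at 'now'.

    The window limit overrides the default limit while the window is active.
    Returns None if no throttling is configured.
    """
    if window_limit is not None and window_start <= now < window_end:
        return window_limit
    return default_limit

print(effective_limit_mbs(time(11, 30)))  # 2048.0, inside the 10:00am-3:00pm window
print(effective_limit_mbs(time(17, 0)))   # 4096.0, default limit applies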

Offload Targets
Note: Offload targets are not supported on FlashArray//C.
The offload target feature enables system administrators to replicate point-in-time volume snap-
shots from the array to an external storage system. Each snapshot is an immutable image of the
volume data at that instance in time. The data is transmitted securely and stored unencrypted on
the storage system.
Before you can connect to, manage, and replicate to an offload target, the respective Purity//FA
app must be installed. For example, to connect to an NFS offload target, the Snap to NFS app
must be installed. To connect to an Azure Blob container or S3 bucket, the Snap to Cloud app


must be installed. To determine if apps are installed on your array, run the pureapp list com-
mand. To install the Snap to NFS or Snap to Cloud app, contact Pure Storage Technical Ser-
vices.
The Offload Targets panel displays a list of all offload targets that are connected to the array.
See Figure 7-4.
Figure 7-4. Offload Targets Panel

Each offload target represents an external storage system such as an Azure Blob container,
NFS device, or S3 bucket to where Purity//FA volume snapshots (generated via protection group
snapshots) can be replicated.
An array can be connected to one offload target at a time, while multiple arrays can be con-
nected to the same offload target.
An offload target can have one of the following statuses:
l Connected: Array is connected to the offload target and is functioning properly.
l Connecting: Connection between the array and offload target is unhealthy, possibly
due to network issues. Check the network connectivity between the interfaces, dis-
connect the array from the offload target, and then reconnect. If the issue persists,
contact Pure Storage Technical Services.
l Not Connected: Offload app is not running. Data cannot be replicated to offload tar-
gets. Contact Pure Storage Technical Services.
l Scanning: A connection has been established between the array and offload target,
and the system is determining the state of the offload target. Once the scan suc-
cessfully completes, the status will change to Connected.
Offload targets that are disconnected from the array do not appear in the list of offload targets.
Whenever an array is disconnected from an offload target, any data transfer processes that are
in progress are suspended. These processes resume when the connection is re-established.
In the Offload Targets panel, click the name of the offload target to view its details.
The Offload Targets detailed view, which is accessed by clicking the name of the offload target
from the Protection > Array > Offload Targets panel, displays a list of protection groups that are


connected to the offload target and the protection group snapshots that have been replicated
and retained on the offload target.
The Protection Groups panel displays a list of all protection groups, both local and remote to the
array, that are connected to the offload target. If the protection group exists on the local array, click
the name of the protection group to drill down to its protection group details; otherwise, hover
over the name of the protection group to view its snapshot retention details.
In the Protection Group Snapshots panel, the details for each snapshot include the snapshot
name, source array and protection group, replication start and end times, amount of data trans-
ferred, and replication progress. The data transferred amount is calculated as the size difference
between the current and previous snapshots after data reduction.


In Figure 7-5, an offload target named nfs-target is connected to array pure-001 and is an
offload target for protection group pgroup01. Twelve protection group snapshots have been
replicated to offload target nfs-target.
Figure 7-5. Connecting an Offload Target to an Array

Click a protection group to further drill down to its details, including the volumes that it protects,
the snapshot and replication schedules, and the offload targets to where the protected volumes
are replicated. In the Protection Group Snapshots panel, the protection group snapshots listed
represent the snapshots that have been taken and retained on the current array in accordance
with the snapshot schedule.
To replicate volume snapshots to an offload target, the array must be able to connect to and
reach the external storage system. Before you configure an offload target on the array, perform
the following steps to verify that the network is set up to support the offload process:


1 Verify that at least one interface with the replication service is configured on the array. Assign
an IP address to the port; this is the interface that will be used to connect to the target
device, such as an Azure Blob container, a NAS/NFS device, an NFS storage system, an S3
bucket, or a generic Linux server. For optimum performance, an Ethernet interface of at least
10GbE is recommended.
2 Prepare the offload target.
l For Azure Blob, create a Microsoft Azure Blob container and set the storage account
to the hot access tier. Grant basic read and write ACL permissions, and verify that the
container contains no blobs. By default, server-side encryption is enabled for the con-
tainer and cannot be disabled.
l For NFS, create the NFS export, granting read and write access to the array for all
users.
l For S3, create an Amazon S3 bucket. Grant basic read and write ACL permissions,
and enable default (server-side) encryption for the bucket. Also verify that the bucket
is empty of all objects and does not have any lifecycle policies.
3 Verify that the array can reach the offload target.
After you have prepared the network connections on the array to support replication to an offload
target, perform the following high-level steps to configure the offload target on the array:
1 Connect the array to the offload target.
l For Azure Blob, creating the connection to the Microsoft Azure Blob container
requires the Azure Blob account name and the secret access key, both of which are
created through the Microsoft Azure storage website.
l For NFS, creating the connection requires the host name or IP address of the server
(such as the NFS server) and the mount point on the server.
l For S3, creating the connection to the Amazon S3 bucket requires the bucket's
access key ID and secret access key, both of which are created through Amazon
Web Services.
2 Define which volumes are to be replicated to the offload target.
3 Create a protection group.
4 Add the volumes to the protection group.
5 Add the offload target to the protection group.
6 To replicate data to the offload target on a scheduled basis, set the replication schedule for
the protection group, and then enable the schedule to begin replicating the volume snapshot


data to the offload target according to the defined schedule. Skip this step if you only want to
replicate data on demand.
Snapshot data can also be replicated on demand. On-demand snapshots represent single snap-
shots that are manually generated and retained on the source array at any point in time. By
default, an on-demand snapshot is retained indefinitely or until it is manually destroyed. When
generating an on-demand snapshot, optionally add a suffix to the snapshot name, apply the
scheduled retention policy to the snapshot, and asynchronously replicate the on-demand snap-
shot to the offload target. See Figure 7-6.
When an on-demand snapshot is replicated, and no retention policy is applied, the snapshot is
retained on both the source and target arrays. If a retention policy is applied, the snapshot will
not be retained on the source after replication, although one snapshot may be kept as a
baseline. To keep the snapshot on the source after replication, take another on-demand snap-
shot without replication.
Figure 7-6. Create Snapshot

The "Optional Suffix" option allows you to add a unique suffix to the on-demand snapshot name.
The suffix name, which can include letters, numbers and dashes (-), replaces the protection
group snapshot number in the protection group snapshot name. Select the “Apply Retention”
option to apply the scheduled snapshot retention policy to the on-demand snapshot. If you do
not enable “Apply Retention”, the on-demand snapshot is saved until you manually destroy it.
Select the “Replicate Now” option to replicate the snapshot to the target arrays.
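As a sketch of how the suffix is applied, the following validates a suffix against the allowed characters and builds a snapshot name. The naming pattern shown (protection group name followed by the suffix or the sequence number) is an assumption for illustration; confirm the exact format on your array.

import re
from typing import Optional

SUFFIX_RE = re.compile(r"^[A-Za-z0-9-]+$")  # letters, numbers, and dashes only

def pgroup_snapshot_name(pgroup: str, suffix: Optional[str], next_number: int) -> str:
    """Build an illustrative protection group snapshot name."""
    if suffix is not None:
        if not SUFFIX_RE.match(suffix):
            raise ValueError("suffix may contain only letters, numbers, and dashes")
        # The suffix replaces the protection group snapshot number.
        return "{}.{}".format(pgroup, suffix)
    return "{}.{}".format(pgroup, next_number)

print(pgroup_snapshot_name("pgroup01", "before-upgrade", 13))  # pgroup01.before-upgrade
print(pgroup_snapshot_name("pgroup01", None, 13))              # pgroup01.13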
Restoring a volume brings the volume back to the state it was in when the snapshot was taken.
Restoring a volume from an offload target involves getting the volume snapshot from the offload
target, and then copying the restored volume snapshot to create a new volume or overwrite an
existing one. Volume snapshots that have been replicated to an offload target can only be
restored through the Purity//FA system.
Any array that is connected to the offload target can get the volume snapshots. However, only
the array that configured the offload target can modify its protection group replication schedule
and destroy, recover, and eradicate the protection group snapshots on the offload target.


Destroying a protection group implicitly destroys all of its protection group snapshots. Des-
troying a protection group snapshot destroys all of its protection group volume snapshots,
thereby reclaiming the physical storage space occupied by its data.
Protection groups and protection group snapshots created for offload targets follow the same
eradication pending behavior as most other FlashArray storage objects.

Connecting Arrays
Connect two arrays to perform asynchronous and synchronous replication.
To connect two arrays:
1 Log in to one of the arrays.
2 Select Protection > Array.
3 In the Connected Arrays panel, click the menu icon and select Get Connection Key. The
Connection Key pop-up window appears.
4 Copy the connection key string.
5 Log in to the other array.
6 Select Protection > Array.
7 Click the menu icon and select Connect Array. The Connect Array pop-up window appears.
8 Set the following connection details:
l In the Management Address field, enter the virtual IP address or FQDN of the other
array.
l In the Type field, select the connection type. Valid connection types include async-
replication for asynchronous replication, and sync-replication for syn-
chronous replication.
Array connections set to async-replication support asynchronous replications
only, while array connections set to sync-replication support both synchronous
and asynchronous replications.

Note: ActiveDR supports both connection types. If ActiveCluster is used, you


should specify the sync-replication type for the array connection. Other-
wise, the async-replication type is sufficient.


l In the Connection Key field, paste the connection key string that you copied from the
other array.
l In the Replication Address field, enter the IP address or FQDN of the interface on the
other array.
9 Click Connect. The array appears in the list of connected arrays and a green check mark
appears in the row, indicating that the two arrays are successfully connected.

Configuring Network Bandwidth Throttling


Once two arrays are connected, optionally configure network bandwidth throttling to set max-
imum threshold values for outbound traffic.
To configure network bandwidth throttling:
1 Log in to the array in which you want to set threshold values for outbound traffic.
2 Select Protection > Array.
3 In the Connected Arrays panel, click the edit icon for the array you want to configure network
bandwidth throttling. The Edit Bandwidth Throttling dialog box appears.
4 Configure the following options:
l To specify a default bandwidth limit, enable (blue) the Default Throttle toggle button
and specify a bandwidth limit for the amount of data transferred to the remote array
per second. The bandwidth limit must be between 1 MB/s and 4 GB/s. To completely
stop the data transfer process, refer to "Managing Replica Links" on page 145 and
use the Replica Links pause and resume actions.
and/or
l To specify a window of time during which network bandwidth throttling takes effect,
enable (blue) the Window Throttle toggle button, select the start and end times, and
specify a bandwidth limit for the amount of data transferred to the remote array during
the time range. The bandwidth limit must be between 1 MB/s and 4 GB/s.
If you set both limits, the “window” limit overrides the “default” limit.
5 Click Save. Bandwidth limit changes take effect immediately.

Getting the Array Connection Key


To get the array connection key:


1 Log in to the array.


2 Select Protection > Array.
3 In the Connected Arrays panel, click the menu icon and select Get Connection Key. The
Connection Key pop-up window appears.
4 Copy the connection key string.

Disconnecting Arrays
For asynchronous replication, disconnecting two arrays suspends any in-progress data transfer
processes. The process resumes when the arrays are reconnected.
For synchronous replication, you cannot disconnect the arrays if any pods are stretched
between the two arrays.
To disconnect two arrays:
1 Log in to one of the arrays.
2 Select Protection > Array.
3 In the Connected Arrays panel, click the disconnect icon (X) for the array you want to dis-
connect.
4 Click Disconnect.

Displaying Offload Targets Connected to the Array


Select Protection > Array. The Offload Targets panel displays a list of offload targets that are
connected to the array. Offload targets that are disconnected from the array do not appear in the
list.

Displaying Protection Group and Volume Snapshot Details for an Offload Target
1 Select Protection > Array.
2 Click the name of the offload target.


The Protection Groups panel displays a list of all protection groups, both local and remote to
the array, that are connected to the offload target. If the protection group exists on the local
array, click the name of the protection group to drill down to its protection group details;
otherwise, hover over the name of the protection group to view its snapshot retention details.
The Protection Group Snapshots panel displays a list of protection group snapshots that
have been replicated to the offload target. To further drill down to see the volume snap-
shots for a protection group snapshot, click the corresponding Get snapshots from offload
targets (download) icon.

Connecting the Array to an Azure Blob Container


Before you connect the array to the Azure Blob container, ensure that you have the Azure Blob
account name, the Azure Blob container name, and the secret access key. If this is the first time
a FlashArray array is connecting to the Azure Blob container, verify that the container is empty.
1 Select Protection > Array.
2 In the Offload Targets panel, click the connect icon. The Connect Offload Target pop-up win-
dow appears.
3 In the Connect Offload Target pop-up window, specify the following details:
l Protocol: Select azure from the drop-down list.
l Name: Type a name for the Azure Blob offload target on the array.
l Account: Type the Microsoft Azure Blob account. The account name is between 3
and 24 characters in length.
l Secret Access Key: Type the secret access key of the Azure Blob account to authen-
ticate requests between the array and Azure Blob container.
l Container: Type the name of the Microsoft Azure Blob container. If not specified, the
default is offload.
l If this is the first time a FlashArray array is connecting to this container, select the
check box next to Initialize container as offload target to prepare the Azure Blob con-
tainer as an offload target. The array will only initialize the Azure Blob container if it is
empty.
If other FlashArray arrays have already connected to this container, do not select the
check box.
4 Click Connect.


Connecting the Array to an NFS Offload Target


Note: Connecting to an NFS offload target is not supported on Cloud Block Store.

Before you connect the array to the NFS offload target, verify you have the hostname or IP
address of the NFS server and the location of the mount point on the NFS server.
1 Select Protection > Array.
2 In the Offload Targets panel, click the connect icon. The Connect Offload Target pop-up win-
dow appears.
3 In the Connect NFS Target pop-up window, specify the following details:
l Protocol: Select nfs from the drop-down list.
l Name: Type a name for the NFS offload target on the array.
l Address: Type the hostname or IP address of the NFS server.
l Mount Point: Type the NFS export path on the NFS server.
l Mount Options: Specify mount options, as applicable. List mount options in comma-
separated value (CSV) format. Supported mount options include port, rsize,
wsize, nfsvers, and tcp or udp, and are common options available to all NFS file
systems (see the example after these steps).
4 Click Connect.
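For example, a Mount Options value of port=2049,rsize=524288,wsize=524288,nfsvers=3,tcp would reach the export on port 2049 with the given read and write transfer sizes over NFSv3 and TCP. These values are illustrative only; use the options appropriate for your NFS server.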

Connecting the Array to an S3 Bucket


Before you connect the array to the S3 bucket, verify you have the name of the S3 bucket and its
access key ID and secret access key. If this is the first time a FlashArray array is connecting to
the S3 bucket, verify that the bucket is empty.
1 Select Protection > Array.
2 In the Offload Targets panel, click the connect icon. The Connect Offload Target pop-up win-
dow appears.
3 In the Connect S3 Target pop-up window, specify the following details:
l Protocol: Select s3 from the drop-down list.
l Name: Type a name for the S3 offload target on the array.


l Access Key ID: Type the access key ID of the AWS account. The access key is 20
characters in length.
l Bucket: Type the name of the Amazon S3 bucket.
l Secret Access Key: Type the secret access key of the AWS account to authenticate
requests between the array and S3 bucket. The secret access key is 40 characters in
length.
l If this is the first time a FlashArray array is connecting to this bucket, select the check
box next to Initialize bucket as offload target to prepare the S3 bucket as an offload
target. The array will only initialize the S3 bucket if it is empty.
If other FlashArray arrays have already connected to this bucket, do not select the
check box.
4 Click Connect.

Disconnecting the Array from an Offload Target


1 Select Protection > Array.
2 In the Offload Targets panel, click the X disconnect icon next to the offload target you want to
disconnect.
The Disconnect Target pop-up window appears.
3 Click Disconnect.

Restoring a Volume Snapshot from an Offload Target to the Array
1 Select Protection > Array.
2 Click the offload target from where you want to restore the volume snapshot.
3 In the Protection Group Snapshots panel, click the Get snapshots from offload target (down-
load) icon.
The Get Volume Snapshots pop-up window appears.
4 In the Existing Snapshots column, select the volume snapshot(s) you want to restore from
the offload target.
5 Click Get to immediately restore the selected volume snapshot.


The Summary panel appears with the list of restored volume snapshots. Click OK. Option-
ally click the Go to Volumes page link to view the restored snapshots in the Volume Snap-
shots panel.
Once a volume snapshot has been restored, it can be copied to create a new volume or over-
write an existing one.

Destroying an Offloaded Protection Group Snapshot


1 Select Protection > Array.
2 Click the offload target from where you want to destroy the offloaded protection group snap-
shot.
3 In the Protection Group Snapshots panel, click the Destroy Snapshot icon. The Destroy Pro-
tection Group Snapshots pop-up window appears.
4 Click Destroy.
The destroyed snapshot appears in the Destroyed Protection Group Snapshots panel and
begins its eradication pending period.
During the eradication pending period, you can recover the protection group snapshot and its
volume snapshots to bring it back to its previous state, or manually eradicate the destroyed pro-
tection group snapshot to reclaim physical storage space occupied by its volume snapshots.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the volume snapshots. Once reclamation starts, either because you have
manually eradicated the destroyed protection group snapshot, or because the eradication
pending period has elapsed, the destroyed protection group snapshots and its volume snap-
shots can no longer be recovered.
(See "Eradication Delays" on page 35 for information about eradication pending periods. Erad-
ication pending periods are configured in the Settings > System > Eradication Configuration
pane. See "Eradication Delay Settings" on page 285.)

Recovering a Destroyed Offloaded Protection Group Snapshot
1 Select Protection > Array.


2 Click the offload target from where you want to recover the destroyed protection group snap-
shot.
3 At the bottom of the Protection Group Snapshots panel, click Destroyed to expand the win-
dow. The Destroyed Protection Group Snapshots panel appears.
4 In the Destroyed Protection Group Snapshots panel, click the Recover Protection Group
Snapshot icon. The Recover Protection Group Snapshot pop-up window appears.
5 Click Recover.

Eradicating a Destroyed Offloaded Protection Group Snapshot
1 Select Protection > Array.
2 Click the offload target from where you want to eradicate the destroyed protection group
snapshot.
3 At the bottom of the Protection Group Snapshots panel, click Destroyed to expand the win-
dow. The Destroyed Protection Group Snapshots panel appears.
4 In the Destroyed Protection Group Snapshots panel, click the Eradicate Protection Group
Snapshot icon. The Eradicate Protection Group Snapshot pop-up window appears.
5 Click Eradicate.

Default Protection for Volumes


The Protection > Array > Default Protection panel lists the default protection groups both for
the root of the array and for each pod on the array. The container name '-' signifies the root of the
array. The default protection group list at the root of the array applies to new or copied volumes
that are not members of a pod. Figure 7-7 shows an array root default protection group list that
contains pg1 and pg2, and also a separate pod default protection group list for each pod.


Figure 7-7. Default Protection

When default protection is not enabled, the panel is empty.


See "Automatic Protection Group Assignment for Volumes" on page 38 for information about
how default protection group lists are used for SafeMode protection.

Customizing a Default Protection Group List


Each list, for either the root of the array or for a pod, is configured separately.
1 Select Protection > Array.
2 In the Default Protection panel, click the Set Default Protection icon on the right of the appro-
priate row, either the '-' row for the root of the array or the row for a pod.
The Set Default Protection dialog appears.
3 The Available Protection Groups column on the left lists protection groups for the root of the
array. In this column, select the protection groups to be added to the default protection group
list. Those protection groups now appear in the Selected Protection Groups column on the
right.
4 Click Set.


Disabling Default Protection


You opt out of default protection separately for each pod and for the root of the array.
Opting out is only possible when protection groups in the default protection group list are empty
and unlocked. Contact Pure Storage Technical Services to opt out when a ratcheted protection
group is involved.
1 Select Protection > Array.
2 In the Default Protection panel, click the Set Default Protection icon on the right of the appro-
priate row, either the '-' row for the root of the array or the row for a pod.
The Set Default Protection dialog appears.
3 In the Selected Protection Groups column on the right, select Clear all.
4 Repeat for other rows as required.
5 Click Set.

Snapshots
The Protection > Snapshots page enables you to manage snapshots and contains panels for
volume snapshots and directory snapshots.
The Directory Snapshots panel only contains locally created snapshots. See Figure 7-8.


Figure 7-8. Protection – Snapshots

The details for each snapshot include the snapshot name, date and time created, and amount of
data transferred. The data transferred amount is calculated as the size difference between the
current and previous snapshots after data reduction.

Destroying a Snapshot
To destroy a volume snapshot or directory snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots or Directory Snapshots panel, select the menu icon and then select
Destroy.
4 Select one or more snapshots from the list and then click the Destroy button.
The destroyed snapshot appears in either the volume snapshots or directory snapshots Des-
troyed panel and begins its eradication pending period.


During the eradication pending period, you can recover the snapshot to bring it back to its pre-
vious state, or manually eradicate the destroyed snapshot to reclaim physical storage space.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the snapshots. Once reclamation starts, either because you have manually
eradicated the destroyed snapshot, or because the eradication pending period has elapsed, the
destroyed snapshot can no longer be recovered.
The length of the eradication pending period typically is different for SafeMode-protected objects
and other objects, and is configured in the Settings > System > Eradication Configuration pane.
See "Eradication Delays" on page 35 and "Eradication Delay Settings" on page 285.
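A minimal sketch of the pending-period behavior, assuming a 24-hour eradication pending period for illustration (the actual value comes from the Eradication Configuration settings):

from datetime import datetime, timedelta

def recoverable_until(destroyed_at, pending_hours=24):
    """A destroyed snapshot can be recovered until the pending period elapses."""
    return destroyed_at + timedelta(hours=pending_hours)

def can_recover(now, destroyed_at, pending_hours=24):
    """Once reclamation starts, the destroyed snapshot can no longer be recovered."""
    return now < recoverable_until(destroyed_at, pending_hours)

destroyed = datetime(2023, 9, 1, 13, 0)
print(can_recover(datetime(2023, 9, 1, 20, 0), destroyed))  # True, still pending
print(can_recover(datetime(2023, 9, 3, 9, 0), destroyed))   # False, already reclaimed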

Recovering a Snapshot
To recover a snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots or Directory Snapshots panel, select the Destroyed drop-down
menu.
4 Select the menu icon, select Recover, and then select one or more snapshots to recover.
Alternatively, you can recover individual snapshots by clicking the recover (clock) icon in the
row of a single snapshot that you want to recover.
5 Click Recover.
The recovered volume snapshots or directory snapshots return to the associated list of existing
snapshots.

Eradicating a Snapshot
Eradicating a snapshot permanently deletes it. During the eradication pending period, you can
manually eradicate destroyed snapshots to reclaim physical storage space that they occupy.
Once eradication starts, the destroyed snapshot can no longer be recovered.
To eradicate a snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.


3 In the Volume Snapshots or Directory Snapshots panel, select the Destroyed drop-down
menu.
4 Select the menu icon, and then click Eradicate.
5 Click the Eradicate button.
The snapshots are completely eradicated from the array.
Manual eradication is not supported when SafeMode retention lock is enabled.

Download a CSV File


To download a CSV file:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots panel, click the menu icon and then select Download CSV.
4 Click the Download button.
A CSV file of snapshots is downloaded.

Copy a Volume Snapshot


To copy a volume snapshot:
1 Log in to the array.
2 Select Protection > Snapshots.
3 In the Volume Snapshots panel, select the menu icon for the volume snapshot that you want
to copy, and then select Copy.
4 In the Container field, specify the root location, pod, or volume group to where the new
volume snapshot will be created. The forward slash (/) represents the root location of the
array.
5 Enter the name that you want to assign to the copy. Optionally, click the Overwrite
toggle icon if you want to overwrite an existing volume of the same name, and then click the
Copy button.
The volume snapshot is copied to the designated location, overwriting the previous version if
specified.


Policies
The Protection > Policies page enables you to create and update policies for your directory
snapshots. You can assign members and rules to policies. Members are managed directories
for which you want snapshots taken. Rules specify the frequency, time taken, time kept, and cli-
ent name for each managed directory snapshot. See Figure 7-9.
Figure 7-9. Protection – Policies

Creating a Snapshot Policy


Note: Predefined snapshot policies may be present on the array. Make sure to review
and update such policies to meet your requirements before using them.
To create a policy:
1 Log in to the array.
2 Select Protection > Policies.
3 Click the Create Policy (plus) icon or the menu options icon and then select Create...
4 Enter the name of the policy, click the Enabled toggle icon to enable (blue) if you want the
policy enabled, and then click the Create button.
The policy is created.


Setting Policy Members and Rules


To set policy members and rules:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, select a policy link.
4 To add or remove members, in the Members panel, click the menu icon.
l To add members, click Add Member..., select the desired directories, and then click
Add.
l To remove members, click Remove Member..., select the desired directories, and
then click Remove.
5 To add a rule, in the Rules panel, click the Create Rule button or the menu icon and then
select Create...
See Figure 7-10.


Figure 7-10. Protection – Policies > Add Rule

In the Add Rule window, configure the rule as follows (a sketch of the value limits follows these steps):


l Create 1 snapshot every – Sets the frequency that snapshots are taken. The min-
imum value is five minutes (5m) and the maximum value is one year (1y).
l At – This optional setting sets the local time that the snapshot is taken. For example,
12am. Note that each policy must specify a different snapshot time.
l And keep for – The length of time the snapshot is kept before it is destroyed. The min-
imum value is five minutes (5m) and the maximum value is five years (5y).
l Client Name – Sets the client-visible name for the snapshots.
l Optional Suffix – The optional suffix can be set on a rule that retains only one snap-
shot (that is, with the same retention period as the snapshot interval). When omitted,
Purity//FA creates a unique number for the directory snapshot.
6 Click the Add button and the rule is added.
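The value limits described in these steps can be expressed as a small validation sketch. This is illustrative only; the GUI performs the equivalent checks. Durations are modeled in minutes, with one year treated as 365 days.

from typing import Optional

MINUTE = 1
HOUR = 60 * MINUTE
DAY = 24 * HOUR
YEAR = 365 * DAY

def validate_rule(every_minutes: int, keep_minutes: int, suffix: Optional[str] = None) -> None:
    """Check a snapshot rule against the documented limits."""
    if not 5 * MINUTE <= every_minutes <= YEAR:
        raise ValueError("snapshot frequency must be between 5 minutes and 1 year")
    if not 5 * MINUTE <= keep_minutes <= 5 * YEAR:
        raise ValueError("retention must be between 5 minutes and 5 years")
    if suffix is not None and keep_minutes != every_minutes:
        # A suffix is only allowed on a rule that retains a single snapshot,
        # that is, when the retention period equals the snapshot interval.
        raise ValueError("a suffix requires retention equal to the snapshot interval")

validate_rule(every_minutes=DAY, keep_minutes=7 * DAY)                 # daily, kept one week
validate_rule(every_minutes=HOUR, keep_minutes=HOUR, suffix="latest")  # single retained snapshot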


Enabling or Disabling a Snapshot Policy


Policies can be temporarily disabled and re-enabled. To disable or enable a snapshot policy:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, click the menu icon for the policy and select Edit....
4 Click the toggle icon to enable (blue) the policy or disable (gray) and then click the Save but-
ton.

Renaming a Snapshot Policy


To rename a snapshot policy:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, click the menu icon for the policy you want to rename and
select Rename....
4 In the Name field, enter the new name of the snapshot policy and click the Rename button.
The policy is renamed.

Deleting a Policy
To delete a policy:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, click the menu icon for the policy you want to delete and
select Delete.
4 Click the Delete button to confirm.
The policy is deleted.


Removing a Member
To remove a member:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, select a policy link.
4 In the Members panel, select the Remove Member icon (X) for the member that you want to
remove and then click the Remove button.
The member is removed.

Removing a Rule
To remove a rule:
1 Log in to the array.
2 Select Protection > Policies.
3 In the Snapshot Policies panel, select a policy link.
4 In the Rules panel, select the Remove Rule icon (garbage) for the rule that you want to
remove and then click the Remove button.
The rule is removed.

Protection Groups
The Protection > Protection Groups page displays source and target protection groups;
enables you to create, rename, destroy, eradicate, and recover source protection groups; and
enables you to allow and disallow target protection groups. See Figure 7-11.


Figure 7-11. Protection – Protection Groups

A protection group represents a collection of members (volumes, hosts, or host groups) on the
FlashArray that are protected together by using snapshots. The members within the protection
group have common data protection requirements and the same snapshot, replication, and
retention schedules.
Creating a protection group snapshot creates snapshots of the volumes within the protection
group, which are then retained on the current array. Protection group snapshots can also be
asynchronously replicated to other arrays and external storage systems, such as Azure Blob
containers, NFS devices, and S3 buckets. When replicating, the array from which a snapshot is
created is called the source array, while the array to which the snapshot is replicated is called
the target.
The Protection > Protection Groups page displays a list of active and destroyed protection
groups and protection group snapshots on the array.


A source protection group represents a protection group that has been created on the current
array to generate and retain snapshots. On the Protection Groups and Snapshots pages, source
protection groups are identified by the protection group name.
A target protection group represents a protection group that has been created on another
(remote) array and has the current array set as one of its replication targets. On the Protection
Groups and Snapshots pages, target protection groups are identified by the remote array name,
followed by a colon (:) and then the protection group name. For example, in Figure 7-11, a pro-
tection group with the name vm-zxia:pg1 represents a protection group named pg1 that has
been created on array vm-zxia. Protection group pg1 has added the current array as a target
array.
Array vm-zxia2 has three protection groups. Two of the protection groups (p and pg-01-12-
01-59) have been created on the current array. A third protection group named pg1 has been
created on remote array vm-zxia.
The Destroyed Groups panel displays a list of destroyed protection groups that are in the erad-
ication pending period.
Click a protection group name to display a detailed view of the protection group.


See Figure 7-12 for a view of protection groups from array vm-zxia. Protection group pg1 was
created on array vm-zxia and has one volume member and one target array named vm-
zxia2. Four protection group snapshots have been created. The snapshot schedule has been
set to create a protection group snapshot once every hour, while the replication schedule has
been set to take a protection group snapshot every four hours and immediately replicate the
snapshot to the specified target array (vm-zxia2).
Figure 7-12. Protection – Protection Group Source


See Figure 7-13 for a view of the same protection group, but from array vm-zxia2. Since the
protection group is created on array vm-zxia, the attributes of protection group pg1 can only be
changed from array vm-zxia.
Figure 7-13. Protection – Protection Group Target

Default Protection Groups


Purity//FA provides a mechanism to ensure that every new volume becomes a member of a pro-
tection group, providing automatic protection group assignment for volumes. This feature is
implemented with configurable lists of one or more default protection groups.
See "Automatic Protection Group Assignment for Volumes" on page 38 for information about
default protection group lists. See "Customizing a Default Protection Group List" on page 196 to
customize default protection for volumes. See "Automatic Default Protection for Volumes in a
Pod" on page 139 to customize default protection for volumes in a pod.


Members
The Members panel displays a list of all storage objects (volumes, hosts, or host groups) that
have been added to the source array. Only members of the same object type can belong to a pro-
tection group. Replication to offload targets is only supported for volumes and not for hosts and
host groups.
If you are viewing member details for a target group, the member name is made up of the array
name and the protection group name.
If you added volumes to the source array, Purity//FA generates snapshots of those specific
volumes. If you added hosts or host groups, Purity//FA generates snapshots of the volumes
within those hosts or host groups. If the same volume appears in multiple hosts or host groups,
only one copy of the volume is kept.
Member volumes are typically named in the Members panel. However, volumes protected
through the SafeMode global volume protection feature are represented by an asterisk in the
Members panel, as shown in Figure 7-14.
Figure 7-14. Protection – SafeMode Protection Group Member

Note: Volumes, hosts, and host groups are managed through the Storage tab.

Targets
The Targets panel lists the target arrays and offload targets that have been added to the source
array. You only need to add targets if you plan to asynchronously replicate snapshots to another
array or to an external storage system. The Allowed column indicates whether a target array has


allowed (true) or disallowed (false) asynchronous replication. By default, a target array allows
protection group snapshots to be asynchronously replicated to it from the source array.

Source Arrays
The Source Arrays panel lists the source arrays of the protection group. The Source Arrays
panel only appears if the protection group is in a pod on a remote array, and the protection group
has added the current array as a target for asynchronous replication.
If the protection group is in a stretched pod, both arrays of the stretched pod should be con-
nected to the target array for high availability and therefore be listed in the Source Arrays panel.
If only one of the arrays is connected to the target array, Purity//FA generates an alert notifying
users of this misconfiguration.

Protection Group Snapshots


The Protection Group Snapshots panel displays a list of all protection group snapshots, both
scheduled and on-demand, that have been taken and retained on the source array or taken on
another array and asynchronously replicated to this array. The list includes the snapshot cre-
ation date and time and the physical storage space occupied by snapshot data. If the snapshot
was replicated to this array, the list also displays the replication start and end times, amount of
data transferred, and replication progress. The data transferred amount is calculated as the size
difference between the current and previous snapshots after data reduction.

On-Demand Snapshots
On-demand snapshots represent single snapshots that are manually generated and retained on
the source array at any point in time. By default, an on-demand snapshot is retained indefinitely
or until it is manually destroyed. When you generate an on-demand snapshot, you can also add
a suffix to the snapshot name, apply the scheduled retention policy to the on-demand snapshot,
and asynchronously replicate the on-demand snapshot to the targets. See Figure 7-15.
When an on-demand snapshot is replicated, and no retention policy is applied, the snapshot is
retained on both the source and target arrays. If a retention policy is applied, the snapshot will
not be retained on the source after replication, although one snapshot may be kept as a
baseline. To keep the snapshot on the source after replication, take another on-demand snap-
shot without replication.


Figure 7-15. Protection on Demand

The "Optional Suffix" option allows you to add a unique suffix to the on-demand snapshot name.
The suffix name, which can include letters, numbers, and dashes (-), replaces the protection
group snapshot number in the protection group snapshot name.
Select the “Apply Retention” option to apply the scheduled snapshot retention policy to the on-
demand snapshot. If you do not enable “Apply Retention,” the on-demand snapshot is saved
until you manually destroy it. Select the “Replicate Now” option to replicate the snapshot to the
targets.

Snapshot and Replication Schedules


The Snapshot Schedule and Replication Schedule panels display the scheduling details for the
protection group. See Figure 7-16.


Figure 7-16. Protection – Snapshot Replication Schedules

Each protection group includes two schedules:


l Snapshot schedule
l Replication schedule
The snapshot schedule is independent of the replication schedule, meaning that you can enable
one schedule without enabling the other. You can also enable or disable both schedules.

Snapshot Schedule
The snapshot schedule displays the snapshot and retention schedule.
Configure the snapshot schedule to determine how often Purity//FA should generate protection
group snapshots and how long Purity//FA should retain the generated snapshots.
For example, a new protection group snapshot schedule may be set to:
l Create a snapshot every hour.
l Retain all snapshots for one day, and then retain four snapshots per day for seven
more days.
This means that Purity//FA generates a snapshot every hour and keeps each generated snap-
shot for 24 hours.
For example, a snapshot that is generated on Saturday at 1:00 p.m. is kept until Sunday 1:00
p.m. See Figure 7-17.


Figure 7-17. Snapshot Schedule

After the one-day retention period, Purity//FA keeps four of the snapshots for an additional
seven days. To determine which four snapshots are retained per day, Purity//FA takes all of the
snapshots generated in the past day and selects the four snapshots that are most evenly spaced
out throughout the day. As the seven-day period for each snapshot elapses, the snapshot is
eradicated.
If the retention schedule is configured to retain one snapshot per day, Purity//FA retains the very
first snapshot taken after the snapshot schedule is enabled, and then retains the next snapshot
taken approximately 24 hours thereafter, and so on.
Once you enable the snapshot schedule, Purity//FA immediately starts the snapshot process.

Replication Schedule
The replication schedule section displays the asynchronous replication and retention schedules.
Configure the replication schedule to determine how often Purity//FA should replicate the pro-
tection group snapshots to the targets and how long Purity//FA should retain the replicated snap-
shots. You can configure a blackout period to specify when replication should not occur.
For example, a new protection group replication schedule may be set as follows:
l Replicate the snapshot every four hours, except between 8:00 a.m. and 5:00 p.m.
l Retain all replicated snapshots for one day, and then retain four snapshots per day
for seven more days.
This means that Purity//FA generates a snapshot on the source array every four hours and
immediately replicates each snapshot to the targets. Purity//FA retains each replicated snapshot
for one day (24 hours).
For example, a snapshot that is generated on the source array and replicated to the targets on
Friday 2:00 a.m. is kept until Saturday 2:00 a.m.


The asynchronous replication process stops during the blackout period between 8:00 a.m. and
5:00 p.m. The start of a blackout period will not impact any snapshot replication sessions that
are already in progress. Instead, Purity//FA will wait until the in-progress snapshot replication is
complete before it observes the blackout period.
Blackout periods apply only to scheduled asynchronous replication. Asynchronous replication
triggered by on-demand snapshots (via Protection > Snapshots > Create Snapshot > Replicate
Now) does not observe the blackout period. See Figure 7-18.
Figure 7-18. Replication Schedule

After the one-day retention period, Purity//FA keeps four of the replicated snapshots for an addi-
tional seven days. The other replicated snapshots are eradicated. To determine which four rep-
licated snapshots are retained per day, Purity//FA takes all of the replicated snapshots
generated in the past day and selects the four that are most evenly spaced out throughout the
day. Purity//FA destroys each replicated snapshot as its seven-day period elapses.
If the retention schedule is configured to retain one replicated snapshot per day, Purity//FA will
retain the very first snapshot taken after the replication schedule is enabled, and then retain the
next snapshot taken approximately 24 hours thereafter, and so on.
Once you enable the replication schedule, Purity//FA immediately starts the asynchronous rep-
lication process, with the following exceptions:
l If you are enabling the replication schedule during the blackout period, Purity//FA
waits for the blackout period to end before it begins the replication process.
l If you are enabling the replication schedule and the "at" time is specified, Purity//FA
starts the replication process at the specified "at" time.


Protection Group Configuration


Configure the snapshot schedule to determine how often Purity//FA should generate protection
group snapshots and how long Purity//FA should retain the generated snapshots.
Configure the replication schedule to determine how often Purity//FA should asynchronously rep-
licate the protection group snapshots to the target arrays and how long Purity//FA should retain
the replicated snapshots.
Since the snapshot schedule is independent of the replication schedule, you can configure
either or both schedules.

Snapshot Schedule Configuration


To configure the snapshot schedule, create and configure the protection group and set its snap-
shot and retention schedule.
Create and Configure the Protection Group
Protection groups are created and configured on the source array. After you create the pro-
tection group, you must add members (volumes, hosts, or host groups) and set the snapshot
schedule. You can perform these steps in any order before you enable the schedule.
You can only add members of the same object type. For example, you cannot add hosts or host
groups to a protection group that contains volumes. If you add hosts or host groups, it is the
volumes within those hosts or host groups that are protected. The members within the protection
group have common data protection requirements and the same snapshot, replication, and
retention schedules.
Set the Snapshot and Retention Schedule

Set the snapshot schedule to specify how often Purity//FA should generate protection group
snapshots and how long Purity//FA should retain the generated snapshots. If the snapshot fre-
quency is set to one or more days, optionally specify the preferred time of day for the snapshot
to occur.
After you have added members and set the snapshot schedule, enable the schedule to start the
snapshot and retention process. You can enable and disable the schedule at any time to manu-
ally start and stop, respectively, the process.


Replication Schedule Configuration


To configure the replication schedule, connect the source and targets, configure the protection
group, and set its replication and retention schedule.
Connect the Source and Targets
The first step in the configuration process is to establish connections between the source array
and its target, whether it be another array for asynchronous replication or an external storage
system for replication to an offload target.
To connect the source array to a target array for asynchronous replication, log in to the target
array to obtain a connection key, and then log in to the source array and use the connection key
to connect to the target array. These steps are performed through the Storage > Array > Array
Connections panel. Once a source and target array are connected, optionally configure network
bandwidth throttling on the source array to set maximum threshold values for outbound traffic.
Network bandwidth throttling is configured through the Storage > Array > Connected Arrays
panel.
To connect the source array to an external storage system, such as an Azure Blob container,
NFS device, or S3 bucket, for replication to an offload target, add the offload target to the array
through the Storage > Array > Offload Targets panel.
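For reference, the same array connection can be made from the Purity//FA command line. The following is a hedged sketch only: the management address and connection key are placeholders, and the exact option names and --type values can vary by Purity//FA release, so confirm them with purearray connect --help before use.

  # On the target array, display the connection key.
  purearray list --connection-key

  # On the source array, connect to the target for asynchronous replication.
  # <target-mgmt-address> and <connection-key> are placeholders for the values
  # obtained from the target array.
  purearray connect --management-address <target-mgmt-address> --type async-replication --connection-key <connection-key>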
Create and Configure the Protection Group
Protection groups are created and configured on the source array. After you create the pro-
tection group, you must add members (volumes, hosts, or host groups), targets, and set the rep-
lication schedule. The members within the protection group must have the same snapshot,
replication, and retention schedules.
For asynchronous replication, add hosts, host groups, or volumes as members to
the protection group. Snapshot data will only be transferred to a target array that has allowed rep-
lication. If you add hosts or host groups, it is the volumes within those hosts or host groups that
are protected. Only members of the same object type can belong to a protection group. For
example, you cannot add hosts or host groups to a protection group that contains volumes.
For replication to an offload target, add volumes as members to the protection group. Hosts and
host groups are not supported for replication to offload targets.
The protection group must include at least one target to which the replicated data is written.
For asynchronous replication, the target is another array. By default, target arrays allow pro-
tection group snapshots to be asynchronously replicated to them from the source array. Admin-
istrators on target arrays can allow and disallow asynchronous replication at any time. Allowing
and disallowing replication on a target array will not impact the replication process between the
source array and other target arrays. If you disallow asynchronous replication while a replication
session is in progress, Purity//FA will wait until the session is complete and then stop any new
replication sessions from being created.
For replication to an offload target, the target is an external storage system, such as an Azure
Blob container, NFS device, or S3 bucket.
Set the Replication and Retention Schedule
Set the replication schedule to specify how often Purity//FA should asynchronously replicate the
protection group snapshots to the targets, and how long Purity//FA should retain the replicated
snapshots. You can configure a blackout period to specify when replication should not occur.
If the replication frequency is set to one or more days, optionally specify the preferred time of
day for the replication to occur.
After you have added the targets and members and set the replication schedule, enable the
schedule to start the replication process. You can enable and disable the schedule at any time to
manually start and stop, respectively, the process.

SafeMode
The SafeMode section indicates whether retention lock is enabled for the protection group.

Retention Lock displays one of the following values:


l Ratcheted - The protection group is ratcheted. If the protection group is not
empty, manual eradication is disabled and retention reduction is disallowed.
l Unlocked - The protection group is not ratcheted.
Enabling SafeMode retention lock disallows the following for a non-empty protection group:
l Destroying the protection group
l Manual eradication of the protection group and its container
l Member and target removal
l Decreasing the eradication delay
l Disabling snapshot or replication schedule
l Decreasing snapshot or replication retention or frequency
l Changing the blackout period (only clearing the blackout period is allowed)
l Disallowing replication on the target side
Once the protection group retention lock is ratcheted, it cannot be unlocked by the user; contact
Pure Storage Technical Services for assistance. Enrollment with at least two administrators and
their PIN codes is required.

Creating a Protection Group


1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the Create Protection Group (plus) icon or the
menu icon and select Create... . The Create Protection Group dialog box appears.
4 In the Container field, select the root location, pod, or volume group where the protection
group will be created.
5 In the Name field, type the name of the new protection group.
6 Click Create.
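Protection groups can also be created from the command line. A hedged sketch follows; pgroup1, pod1, and vgroup1 are example names, and the container-qualified forms should be verified with purepgroup create --help on your release.

  # Create a protection group in the array's root container.
  purepgroup create pgroup1

  # Create a protection group inside a pod or a volume group
  # (example names; naming syntax assumed from Purity conventions).
  purepgroup create pod1::pgroup2
  purepgroup create vgroup1/pgroup3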

Adding a Member (volume, host, or host group) to a Protection Group
For asynchronous replication, you can either add volumes directly to a protection group or add
volumes indirectly to a protection group via the hosts or host groups to which they belong. A pro-
tection group can only include members of one type. A volume, host, or host group can belong to
any number of protection groups.
For replication to an offload target, add volumes as members to the protection group. Hosts and
host groups are not supported for replication to offload targets.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 In the Members panel, click the menu icon and select Add.... The Add Members dialog box
appears.


5 In the Available Members column, click the member you want to add. The member appears
in the Selected Members column.
6 Click Add to confirm the addition of the selected member.
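Members can also be set from the command line. This is a hedged sketch; on some releases the list options shown replace the existing member list rather than appending to it, so check purepgroup setattr --help before using them. All object names are examples.

  # Protect volumes directly (example names).
  purepgroup setattr --vollist vol1,vol2 pgroup1

  # Or protect volumes indirectly through their hosts or host groups.
  purepgroup setattr --hostlist host1,host2 pgroup2
  purepgroup setattr --hgrouplist hgroup1 pgroup3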

Adding a Target to a Protection Group


Before you add a target array or offload target to a protection group, verify that its respective tar-
get array or external storage system is connected to the array.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 In the Targets panel, click the menu icon and select Add.... The Add Targets dialog box
appears.
5 In the Available Targets column, click the target array or offload target you want to add. The
target appears in the Selected Targets column.
6 Click Add to confirm the addition of the selected target.
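A hedged command-line equivalent, where arrayB is an example target name (verify the option name with purepgroup setattr --help):

  # Add a connected array (or a configured offload target) to the
  # protection group's target list.
  purepgroup setattr --targetlist arrayB pgroup1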

Configuring the Snapshot and Retention Schedule for a Protection Group
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 In the Snapshot Schedule panel, click the Edit (pen and paper) icon. The Edit Snapshot
Schedule pop-up window appears.
5 In the Edit Snapshot Schedule panel, click the Enabled toggle icon to enable (blue) the
snapshot schedule.
6 Set the following snapshot and retention details:


l Set the frequency of the snapshot creation. If the snapshot frequency is set to one or
more days, optionally set the 'at' time to specify the preferred hour of each day when
Purity//FA creates the snapshot. For example, if the snapshot schedule is set to
"Create a snapshot every 2 days at 6pm," Purity//FA creates the snapshots every 2
days at or around 6:00 p.m. If the 'at' option is set to dash (-), Purity//FA chooses the
time of day to create the snapshot.
l Set the snapshot retention schedule to keep the specified number of snapshots for
the specified length of time (as minutes, hours, or days) and then to keep the spe-
cified number of snapshots for the specified additional number of days.
7 Click Save to save the snapshot and retention schedule. If the snapshot schedule is
enabled, the array automatically starts generating and retaining snapshots according to the
configured schedule.
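The example schedule described earlier in this chapter (one snapshot per hour, all snapshots kept for one day, then four per day for seven more days) could be expressed from the command line roughly as follows. This is a hedged sketch; the purepgroup schedule and purepgroup retain option names may differ between Purity//FA releases.

  # Take a protection group snapshot every hour (example group name).
  purepgroup schedule --snap-frequency 1h pgroup1

  # Keep all snapshots for 1 day, then keep 4 per day for 7 more days.
  purepgroup retain --all-for 1d --per-day 4 --days 7 pgroup1

  # Enable the snapshot schedule; snapshot generation starts immediately.
  purepgroup enable --snap pgroup1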

Configuring the Replication and Retention Schedule for a Protection Group
Configure the replication and retention schedule to perform asynchronous replication to a target
array or to perform replication to an offload target.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 In the Replication Schedule panel, click the Edit (pen and paper) icon. The Edit Replication
Schedule pop-up window appears.
5 In the Edit Replication Schedule panel, click the Enabled toggle icon to enable (blue) the
replication schedule.
6 Set the following replication and retention details:
l Set the asynchronous replication frequency. If the replication frequency is set to one
or more days, optionally set the 'at' time to specify the preferred hour of each day
when Purity//FA replicates the snapshot. For example, if the replication schedule is
set to "Replicate every 4 days at 6pm," Purity//FA replicates the snapshots every four
days at or around 6:00 p.m. If the 'at' option is set to dash (-), Purity//FA chooses the
time of day to replicate the snapshot.


l Set the blackout period, if any. The asynchronous replication process stops during
the blackout period. When the blackout period starts, replication processes that are
still in progress will not be interrupted. Instead, Purity//FA will wait until the in-pro-
gress snapshot replication is complete before it observes the blackout period.
l Set the retention schedule to keep the specified number of replicated snapshots for
the specified length of time (as minutes, hours, or days) and then to keep the spe-
cified number of snapshots for the specified additional number of days.
7 Click Save to save the replication and retention schedule. If the replication schedule is
enabled, the array automatically starts replicating snapshots and retaining the replicated
snapshots according to the configured schedule.
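The replication example used earlier (replicate every four hours with an 8:00 a.m. to 5:00 p.m. blackout, then retain all replicated snapshots for one day and four per day for seven more days) might look roughly like this from the command line. This is a hedged sketch; the blackout option name and time format in particular vary between releases, so confirm with purepgroup schedule --help.

  # Replicate every 4 hours, with a blackout window from 8am to 5pm
  # (blackout option name and time format are assumptions).
  purepgroup schedule --replicate-frequency 4h --blackout 8am-5pm pgroup1

  # On the targets, keep all replicated snapshots for 1 day, then 4 per day
  # for 7 more days.
  purepgroup retain --target-all-for 1d --target-per-day 4 --target-days 7 pgroup1

  # Enable the replication schedule.
  purepgroup enable --replicate pgroup1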

Enabling the Snapshot and Replication Schedules


1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 Select one or both of the following options:
l To enable the snapshot and retention schedule, in the Snapshot Schedule panel,
click the Edit (pen and paper) icon, and then click the Enabled toggle icon to enable
(blue) the schedule.
l To enable the replication and retention schedule, in the Replication Schedule panel,
click the Edit (pen and paper) icon, and then click the Enabled toggle icon to enable
(blue) the schedule.
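The corresponding hedged CLI commands use purepgroup enable and purepgroup disable with the --snap and --replicate flags (example group name; verify against purepgroup --help):

  # Enable the snapshot and replication schedules.
  purepgroup enable --snap pgroup1
  purepgroup enable --replicate pgroup1

  # Disable either schedule at any time.
  purepgroup disable --replicate pgroup1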

Generating an On-demand Snapshot


1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
4 In the Protection Group Snapshots panel, click the Create Snapshot (plus) icon or the menu
options icon and then select Create... .


5 The Create Snapshot pop-up window appears.


6 Optionally type a unique suffix for the on-demand snapshot. The suffix name can include let-
ters, numbers, and dashes (-).
7 Optionally enable the Apply Retention option to apply the current retention policies to this
snapshot.
8 Optionally enable the Replicate Now option to asynchronously replicate the snapshot to the
protection group's targets. If both Replicate Now and Apply Retention are enabled, the snap-
shot will not be retained on the source after replication, although one snapshot may be kept
as a baseline. To keep the snapshot on the source after replication, take another on-demand
snapshot without replication.
9 Click Create.
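An equivalent hedged CLI sketch, where pgroup1 and the suffix are examples and the option names should be confirmed with purepgroup snap --help:

  # Take an on-demand snapshot with a suffix, apply the scheduled retention
  # policy to it, and replicate it to the targets immediately.
  purepgroup snap --suffix nightly --apply-retention --replicate-now pgroup1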

Disabling the Snapshot and Asynchronous Replication Schedules
1 Log in to the source array.
2 In the Source Protection Groups panel, click the protection group name to drill down to its
details.
3 Select one or both of the following options:
l To disable the snapshot and retention schedule, in the Snapshot Schedule panel,
click the Edit (pen and paper) icon, and then click the Enabled toggle icon to disable
(gray) the schedule.
l To disable the replication and retention schedule, in the Replication Schedule panel,
click the Edit (pen and paper) icon, and then click the Enabled toggle icon to disable
(gray) the schedule.

Copying a Snapshot
1 Log in to the source or target array.
2 Select Protection > Protection Groups.
3 In the Source Protection Group Snapshots or Target Protection Group Snapshots panel,
click the Copy (pages) icon of the protection group snapshot that you want to copy.


4 The Copy Protection Group pop-up window appears.


5 In the Protection Group Name field, type the name of the new protection group.
6 To overwrite an existing protection group, click the Overwrite toggle icon, setting it to blue.
7 Click Create.

Renaming a Protection Group


1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, find the protection group that you want to rename,
click the menu icon at the end of the row, and select Rename....
4 The Rename Protection Group dialog box appears.
5 In the Name field, type the new name of the protection group.
6 Click Rename.

Destroying a Protection Group


Note: You can only destroy protection groups from the source array.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Source Protection Groups panel, find the protection group that you want to destroy,
click the menu icon at the end of the row, and select Destroy....
The Destroy Protection Group dialog box appears.
4 Click Destroy. The destroyed protection group appears in the Destroyed Groups panel and
begins its eradication pending period.
During the eradication pending period, you can recover the protection group to bring the group
and its content back to its previous state, or manually eradicate the destroyed protection group
to reclaim physical storage space occupied by the destroyed protection group snapshots.
When the eradication pending period has elapsed, Purity//FA starts reclaiming the physical stor-
age occupied by the protection group snapshots.


Once reclamation starts, either because you have manually eradicated the destroyed protection
group, or because the eradication pending period has elapsed, the destroyed protection group
and its snapshot data can no longer be recovered.
(See "Eradication Delays" on page 35 for information about eradication pending periods. Erad-
ication pending periods are configured in the Settings > System > Eradication Configuration
pane. See "Eradication Delay Settings" on page 285.)

Recovering a Destroyed Protection Group


Recovering a destroyed protection group brings the group and its content back to its previous
state. You can recover a destroyed protection group during the eradication pending period.
Once the eradication pending period has elapsed, the protection group and its contents are no
longer recoverable.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 In the Destroyed panel, click the Recover (clock) icon or the menu icon and select
Recover....
The Recover Protection Groups dialog box appears.
If you selected the menu icon and then selected Recover... and there are multiple des-
troyed protection groups in the list, select all of the protection groups that you want to
recover.
4 Click Recover. The recovered protection group or groups appear in the Protection Groups
panel.

Eradicating a Destroyed Protection Group


During the eradication pending period, you can manually eradicate the destroyed protection
group to reclaim physical storage space occupied by the destroyed protection group snapshots.
Once reclamation starts, the destroyed protection group and its snapshot data can no longer be
recovered.
1 Log in to the source array.
2 Select Protection > Protection Groups.


3 In the Destroyed Protection Groups panel, click the Eradicate (garbage) icon or the menu
icon and select Eradicate....
The Eradicate Protection Group dialog box appears.
If you selected the menu icon and then selected Eradicate... and there are multiple des-
troyed protection groups in the list, select all of the protection groups that you want to
eradicate.
4 Click Eradicate. Purity//FA immediately starts reclaiming the physical storage occupied by
the protection group snapshot or snapshots.
Manual eradication is not supported when SafeMode retention lock is enabled.
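The destroy, recover, and eradicate operations described in the preceding sections have straightforward CLI counterparts. A hedged sketch with an example group name:

  # Destroy a protection group; it enters its eradication pending period.
  purepgroup destroy pgroup1

  # Recover it while the eradication pending period is still in effect...
  purepgroup recover pgroup1

  # ...or eradicate it to reclaim space immediately. This is irreversible and
  # is not permitted while SafeMode retention lock is ratcheted.
  purepgroup eradicate pgroup1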

Allowing Protection Group Replication


To allow protection group replication:
1 Log in to the target array.
2 Select Protection > Protection Groups.
3 In the Target Protection Groups panel, click the menu icon and select Allow.... The Allow Pro-
tection Groups dialog box appears.
4 In the Target Protection Groups column, click the target protection group you want to allow.
5 Click Allow.

Disallowing Protection Group Replication


By default, target arrays allow protection group snapshots to be asynchronously replicated to tar-
get arrays from the source array.
Allowing and disallowing protection group replication on a target array will not impact the rep-
lication process between the source array and other target arrays.
If you disallow protection group replication while a replication session is already in progress, Pur-
ity//FA will wait until the session is complete and then stop any new replication sessions from
being created.
To disallow protection group replication:
1 Log in to the target array.
2 Select Protection > Protection Groups.
3 In the Target Protection Groups panel, click the menu icon and select Disallow.... The Dis-
allow Protection Groups dialog box appears.
4 In the Target Protection Groups column, click the target protection group you want to dis-
allow.
5 Click Disallow.

Enabling SafeMode
By default, retention lock is unlocked. Enabling the retention lock enables ransomware pro-
tection for the protection group. Once the retention lock is ratcheted, it cannot be unlocked by
the user; contact Pure Storage Technical Services for assistance. Enrollment with at least two
administrators and their PIN codes is required.
1 Log in to the source array.
2 Select Protection > Protection Groups.
3 Click the protection group where you want SafeMode enabled.
4 In the SafeMode pane, if the status is “unlocked”, click the edit icon.
5 In the pop-up dialog box, click the Ratcheted toggle button to enable (blue) the SafeMode fea-
ture.
6 Click Save.


ActiveDR
The Protection > ActiveDR page enables you to view, create, and manage replica links. Replica
link management features include the ability to delete, pause, and resume the connection from a
source-array pod to a target-array pod, and the ability to promote and demote local pods. See
Figure 7-19.
Figure 7-19. Protection – ActiveDR

Creating and managing replica links is part of the ActiveDR configuration process, which is
described in "ActiveDR Replication" on page 140.

Creating a replica link


To create a replica link:
1 Log in to the source array.


2 Select Protection > ActiveDR.


3 In the Replica Links panel, select the Create Replica Link icon (plus) or the menu icon and
then select Create. See Figure 7-20.
Figure 7-20. Creating a Replica Link

4 In the Create Replica Link window, configure the link as follows:


l Local Pod Name – Select a pod from the list or click Create Pod to open the Create
Pod window and configure a new one.
l Remote Array – Select a remote array from the list or click Connect Array to open the
Connect Array window and configure a new one.
l Remote Pod Name – Select a demoted pod on the target array from the list, or click
Create Remote Pod to open the Create Pod window and configure a new one.
5 Click Create.
The replica link is created and appears in the Replica Links panel.
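Replica links can also be created from the command line. The sketch below is an assumption-heavy example: pod1, pod1-dr, and arrayB are placeholder names, and the purepod replica-link option syntax differs between Purity//FA releases, so verify it with purepod replica-link --help.

  # Create the local pod if it does not already exist.
  purepod create pod1

  # Link the local pod to a demoted pod on the connected remote array
  # (option names are assumptions).
  purepod replica-link create --remote arrayB --remote-pod pod1-dr pod1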

ActiveCluster
ActiveCluster replication allows I/O to be sent to either of two connected arrays and syn-
chronized with the other array. ActiveCluster replication is configured through pods. The Pro-
tection > ActiveCluster page enables you to clone, rename, and destroy pods that have been
configured for ActiveCluster replication. Pods configured for ActiveCluster have been promoted
but not linked.


Additional resources for ActiveCluster are available through the Pure Storage support website:
l Requirements and Best Practices
l Quick Start Guide
l Active-Active Asynchronous Replication
l Frequently Asked Questions
For more information about pods, see "Pods" on page 133. Also see Figure 7-21.
Figure 7-21. Protection – ActiveCluster

You can create, destroy, rename, or clone ActiveCluster pods, but you cannot promote or
demote them.

Chapter 8: Analysis
The Analysis page displays historical array data, including storage capacity, consumption or
effective used capacity, and I/O performance trends across all volumes, hosts and host groups,
and replication bandwidth activity across all source and target groups on the array. See Figure
8-1.
Figure 8-1. Analysis

The Analysis page displays a series of rolling graphs consisting of real-time capacity, per-
formance, and replication metrics; the incoming data appear along the right side of each graph
as older numbers drop off the left side.
The curves in each graph are composed of a series of individual data points. Hover over any
part of a graph to display values for a specific point in time. The values that appear in the point-
in-time pop-ups are rounded to two decimal places.


Different graphs display different metrics. Furthermore, specifying all or individual volumes,
volume groups, pods, protection groups, or hosts determines the metrics that appear within a
graph.
The FlashArray maintains a rolling one-year history of data. The granularity of the historical data
increases with age; older data points are spaced further apart in time than more recent ones.
See Figure 8-2 for an example of performance statistics for the five selected volumes on the
array on 7/06/2022 at 14:40:28.
Figure 8-2. Analysis – Volume Performance Statistics

By default, the Analysis charts display data for the past 1 hour. To view historical data over a dif-
ferent time range, click the 1 Hour range button and select the desired time range. To further
zoom into a time range, from anywhere inside the chart, click and drag from the desired start
time to the desired end time. Click Reset Zoom to zoom back to the time range specified.
The charts in the Analysis page are grouped into the following areas: Performance, Capacity,
and Replication.


Performance
The Performance charts display I/O performance metrics in real time. See Figure 8-3.
Figure 8-3. Analysis - Performance

By default, Purity//FA displays the performance details for the entire array.
If a volume is in a pod that is stretched across another array, optionally click the Arrays button to
filter the performance details by array. If none of the arrays are selected (default), the chart dis-
plays the overall performance trends for each selected volume. If one or more arrays are selec-
ted, the chart displays the performance trends by array for each selected volume.
To analyze the performance details of specific volumes, click the Volumes sub-tab along the top
of the Performance page, select Volumes from the drop-down list, and select the volumes you
want to analyze. If a bandwidth limit has been set for the volume, the limit appears when the
volume is selected. You can analyze up to five volumes at one time. In the Selection drop-down
list, select Clear All to clear the volume selections.
To analyze the performance details of volumes within specific volume groups, click the Volumes
sub-tab along the top of the Performance page, select Volume Groups from the drop-down list,
and select the volume groups you want to analyze. You can analyze up to five volume groups at
one time. Click Clear All to clear the volume group selections.
To analyze the performance details of volumes within specific pods, click the Pods sub-tab
along the top of the Performance page and select the pods you want to analyze. You can
analyze up to five pods at a time. In the Selection drop-down list, select Clear All to clear the pod
selections.
If a pod is stretched across another array, optionally click the Arrays button to filter the per-
formance details by array. If none of the arrays are selected (default), the chart displays the over-
all performance trends of all volumes in the selected pod. If one or more arrays are selected, the
chart displays the performance trends by array for each selected volume.
To analyze the performance details of managed directories, click the Directories sub-tab along
the top of the Performance page and select the directories you want to analyze. You can ana-
lyze up to five directories at a time. In the Selection drop-down list, select Clear All to clear the
directory selections.
To analyze the performance details of specific hosts and host groups, click the Hosts sub-tab
along the top of the Performance page and select the hosts or host groups you want to analyze.
Click the menu icon in the upper-right corner of the chart to display or hide mirrored data, or to
display remote hosts and host groups.
The Performance page includes Latency, IOPS, and Bandwidth charts. The point-in-time pop-
ups in each of the performance charts display the following values:
Latency
The Latency chart displays the average latency times for various operations.
l Read Latency (R) - Average arrival-to-completion time, measured in milliseconds, for
a read operation.
l Write Latency (W) - Average arrival-to-completion time, measured in milliseconds,
for a write operation.
l Mirrored Write Latency (MW) - Average arrival-to-completion time, measured in mil-
liseconds, for a write operation. Represents the sum of writes from hosts into the
volume's pod and from remote arrays that synchronously replicate into the volume's
pod.
Latency details are displayed in graphs of one I/O type, such as Read, Write, or Mirrored
Write.
l SAN Time - Average time, measured in milliseconds, required to transfer data
between the initiator and the array.
l QoS Rate Limit Time - Average time, measured in microseconds, that all I/O
requests spend in queue as a result of bandwidth limits reached on one or more
volumes.


l Queue Time - Average time, measured in microseconds, that an I/O request spends
in the array waiting to be served. The time is averaged across all I/Os of the selected
types.
l Service Time - Average time, measured in microseconds, it takes the array to serve a
read, write, or mirrored write I/O request.
l Total Latency - The sum of SAN Time, QoS Rate Limit Time, Queue Time, and Ser-
vice Time, in microseconds.
IOPS
The IOPS (Input/output Operations Per Second) chart displays I/O requests processed
per second by the array. This metric counts requests per second, regardless of how much
or how little data is transferred in each.
l Read IOPS (R) - Number of read requests processed per second.
l Read Average IO Size (R IO Size) - Average read I/O size per request processed.
Calculated as (read bandwidth)/(read IOPS).
l Write IOPS (W) - Number of write requests processed per second.
l Write Average IO Size (W IO Size) - Average write I/O size per request processed.
Calculated as (write bandwidth)/(write IOPS).
l Mirrored Write IOPS (MW) - Number of write requests processed per second. Rep-
resents the sum of writes from hosts into the volume's pod and from remote arrays
that synchronously replicate into the volume's pod.
l Mirrored Write Average IO Size (MW IO Size) - Average mirrored write I/O size per
request processed. Calculated as (mirrored write bandwidth)/(mirrored
write IOPS).
Bandwidth
The Bandwidth chart displays the number of bytes transferred per second to and from all
file systems. The data is counted in its expanded form rather than the reduced form
stored in the array to truly reflect what is transferred over the storage network. Metadata
bandwidth is not included in these numbers.
l Read Bandwidth (R) - Number of bytes read per second.
l Write Bandwidth (W) - Number of bytes written per second.
l Mirrored Write Bandwidth (MW) - Number of bytes written into the volume's pod per
second. Represents the sum of writes from hosts into the volume's pod and from
remote arrays that synchronously replicate into the volume's pod.
l Other Requests (O) - Number of other requests processed per second.


Note about the Performance Charts


The Dashboard and Analysis pages display the same latency, IOPS, and bandwidth per-
formance charts, but the information is presented differently between the two pages.
In the Dashboard page:
l The performance charts are updated once every 30 seconds.
l The performance charts display up to 30 days' worth of historical data.
l The Latency chart displays only internal latency times. SAN times are not included.

In the Analysis page:


l The performance charts are updated once every minute.
l The performance charts display up to one year's worth of historical data.
l The performance charts can be further dissected by I/O type.
l The Latency chart displays both internal latency times and SAN times.

Exporting Array-Wide Performance Metrics


To export array-wide performance metrics:
1 Select Analysis > Performance > Array.
2 Click the menu icon in the upper-right corner of the chart containing the performance metrics
you want to export, and select one of the following options:
l Select Export to PNG to export an image of the chart in PNG format to your local
machine.
l Select Export to CSV to export the performance data to a comma-separated value
(CSV) file on your local machine.
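Point-in-time performance figures can also be sampled from the command line as a quick cross-check of the charts. A hedged sketch (output columns and available options vary by release; the volume name is an example):

  # Show current array-wide latency, IOPS, and bandwidth.
  purearray monitor

  # Show current performance for a specific volume.
  purevol monitor vol1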


Capacity
The Capacity charts display array-wide effective used capacity or space consumption inform-
ation, including physical storage capacity and the amount of storage occupied by data and
metadata. See Figure 8-4 for the Analysis > Capacity tab on a purchased array.
Figure 8-4. Analysis - Capacity

See Figure 8-5 for the Analysis > Capacity tab on subscription storage.
Figure 8-5. Analysis - Capacity on Subscription Storage


The Array Capacity chart displays the amount of usable physical storage on the array and the
amount of storage occupied by data and metadata. The data point fluctuations represent
changes in physical storage consumed by a volume.
For example, a volume may experience a spike in storage consumption when more data is
being written to it or when other volumes with shared data are eradicated. Conversely, a volume
may experience a dip in storage consumption from trimming or from an increased sharing of
deduplicated data with other volumes.
By default, Purity//FA displays the capacity details for the entire array. To analyze the capacity
details of specific volumes, click the Volumes sub-tab along the top of the Capacity page, select
Volumes from the drop-down list, and select the volumes you want to analyze.
To analyze the capacity details of volumes within specific volume groups, click the Volumes
sub-tab along the top of the Capacity page, select Volume Groups from the drop-down list, and
select the volume groups you want to analyze. You can analyze up to five volumes and volume
groups at a time. Click Clear All to clear the volume or volume group selections.
To analyze the capacity details of volumes within specific pods, click the Pods sub-tab along the
top of the Capacity page and select the pods you want to analyze. You can analyze up to five
pods at a time. Click Clear All to clear the pod selections.
To analyze the capacity details of managed directories, click the Directories sub-tab along
the top of the Capacity page and select the directories you want to analyze. You can analyze up
to five directories at a time. Click Clear All to clear the directory selections.
In the Capacity chart on a purchased array, the point-in-time pop-up displays the following met-
rics:
Empty Space
Unused space available for allocation.
System
Physical space occupied by internal array metadata.
Replication Space
Physical system space used to accommodate pod-based replication features, includ-
ing failovers, resync, and disaster recovery testing.
Shared Space
Physical space occupied by deduplicated data, meaning that the space is shared with
other volumes and snapshots as a result of data deduplication.
Snapshots
Physical space occupied by data unique to one or more snapshots.


Unique
Physical space that is occupied by data of both volumes and file systems after data
reduction and deduplication, but excluding metadata and snapshots.
Used
Total physical space occupied by system, shared space, volume, file system, and
snapshot data.
Usable Capacity
Total physical usable space on the array. Replacing a drive may result in a dip in
usable capacity. This is intended behavior. RAID striping splits data across an array
for redundancy purposes, spreading a write across multiple drives. A newly added
drive cannot use its full capacity immediately but must stay in line with the available
space on the other drives as writes are spread across them. As a result, usable capa-
city on the new drive may initially be reported as less than the amount expected
because the array will not be able to write to the unallocatable space. Over time,
usable capacity fluctuations will occur, but as data is written to the drive and spreads
across the array, usable capacity will eventually return to expected levels.
Data Reduction
Ratio of mapped sectors within a volume versus the amount of physical space the
data occupies after data compression and deduplication. The data reduction ratio
does not include thin provisioning savings.
For example, a data reduction ratio of 5:1 means that for every 5 MB the host writes to
the array, 1 MB is stored on the array's flash modules.

On subscription storage, the point-in-time pop-up displays the following metrics based on effect-
ive used capacity:
Shared
Effective used capacity consumed by cloned data, meaning that the space is shared
with cloned volumes and snapshots as a result of data deduplication.
Snapshots
Effective used capacity consumed by data unique to one or more snapshots.
Unique
Effective used capacity consumed by data of both volumes and file systems after
removing clones, excluding metadata and snapshots.
Total
Total effective used capacity containing user data, including Shared, Snapshots, and
Unique storage.
Usable Capacity
Total usable capacity available from a host’s perspective, including both consumed
and unused storage.
The Host Capacity chart displays the provisioned size of all selected volumes. In the Host Capa-
city chart, the point-in-time pop-up displays the Size metric for a purchased array:
Size
Total provisioned size of all volumes. Represents storage capacity reported to hosts.
On subscription storage, the point-in-time pop-up displays the Provisioned metric:
Provisioned
Total provisioned size of all volumes. Represents the effective used capacity reported to
hosts.

Exporting Array-Wide Capacity Metrics


To export array-wide capacity metrics:
1 Select Analysis > Capacity > Array.
2 Click the menu icon in the upper-right corner of the chart containing the capacity metrics you
want to export, and select one of the following options:
l Select Export to PNG to export an image of the chart in PNG format to your local
machine.
l Select Export to CSV to export the capacity data to a comma-separated value (CSV)
file on your local machine.
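Capacity and data reduction figures can also be read from the command line. A hedged sketch (column names vary by release; the volume name is an example):

  # Show array-wide capacity, used space, and data reduction.
  purearray list --space

  # Show space consumption for a specific volume.
  purevol list --space vol1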

Replication
The Replication charts display historical bandwidth information for asynchronous, synchronous
(ActiveCluster), and continuous (ActiveDR) replication activities on the array. The Bandwidth
chart (not to be confused with the performance Bandwidth chart) displays the number of bytes of
replication snapshot data transferred over the storage network per second between this array
and its source arrays, target arrays, and external storage systems (such as Azure Blob con-
tainers, NFS devices, and S3 buckets), at certain points in time. See Figure 8-6.


Figure 8-6. Analysis – Replication

By default, Purity//FA displays bandwidth details for the entire array. In the replication Bandwidth
chart for the array, the point-in-time pop-up displays the following metrics:
l Resync (RX + TX) Number of bytes of replication data transmitted and received per
second as the array actively gets the latest pod data so that it becomes fully syn-
chronized with its peer arrays. This can be due to an initial pod stretch or due to an
array coming back online after an extended offline event.
l Sync (RX + TX) Number of bytes of synchronous replication data transmitted and
received per second across all pods.
l Async (RX + TX) Number of bytes of asynchronous replication snapshot data trans-
mitted and received per second across all protection groups.
l Continuous (RX + TX) Number of bytes of continuous replication data transmitted
and received per second across all pods.
l Total Total number of bytes of replication data transmitted and received per second
across all protection groups.
To analyze the details for a specific protection group, click the Protection Groups sub-tab along
the top of the Replication page, and select the protection groups you want to analyze. You can
select up to five protection groups at one time. The names of the selected protection groups
appear at the top of the details pane. Click Clear All to clear the protection group selection. In
the replication Bandwidth chart for protection groups, the point-in-time pop-up displays the fol-
lowing metrics:
l RX + TX Number of bytes of replication snapshot data transmitted and received per
second across all protection groups.
l RX Number of bytes of replication snapshot data received per second by the targets
for the selected protection groups.
l TX Number of bytes of replication snapshot data transmitted per second from the
source array for the selected protection groups.

Exporting Array-Wide Replication Metrics


To export array-wide replication metrics:
1 Select Analysis > Replication > Array.
2 Click the menu icon in the upper-right corner of the chart containing the replication metrics
you want to export, and select one of the following options:
l Select Export to PNG to export an image of the chart in PNG format to your local
machine.
l Select Export to CSV to export the replication data to a comma-separated value
(CSV) file on your local machine.

Replication Bandwidth
You can display the bandwidth information of continuous, synchronous, and resync replication
for the pods on the array by selecting Replication > Pods. The Replication > Pods page con-
tains the following panes (see Figure 8-7):
l Pods Displays the bandwidth information for each pod and the total bandwidth inform-
ation for all pods for continuous, sync, and resync replication types, including the num-
ber of bytes per second transmitted (to remote), received (from remote), and both
transmitted and received (total).
l Continuous Displays the graphical representation of continuous replication history for
individual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.


l Sync Displays the graphical representation of synchronous replication history for indi-
vidual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.
l Resync Displays the graphical representation of resync replication history for indi-
vidual pods over the selected range of time, annotated with the number of bytes of
replication data transmitted (to remote), received (from remote), and both transmitted
and received (total) per second for the point in time.


Figure 8-7. Replication Bandwidth Based on Pods


Viewing Replication Bandwidth


To view the replication bandwidth information for the pods on the array,
1 Select Replication > Pods. The Pods pane displays the bandwidth information of con-
tinuous, synchronous, and resync replication types for the individual pods on the array.
2 In the Pods pane, click one of the following buttons:
l All Displays the bandwidth information for each pod and the total bandwidth inform-
ation for all pods for continuous, sync, and resync replication types, including the num-
ber of bytes per second transmitted (to remote), received (from remote), and both
transmitted and received (total).
l Continuous Displays the bandwidth information of continuous replication for the indi-
vidual pods on the array, including the number of bytes of replication data transmitted
(to remote), received (from remote), and both transmitted and received (total) per
second.
l Sync Displays the bandwidth information of synchronous replication for the individual
pods on the array, including the number of bytes of replication data transmitted (to
remote), received (from remote), and both transmitted and received (total) per
second.
l Resync Displays the bandwidth information of resync replication for the individual
pods on the array, including the number of bytes of replication data transmitted (to
remote), received (from remote), and both transmitted and received (total) per
second.

Viewing Replication Bandwidth in Graphical Representations


To view the bandwidth information of each replication type in graphical representation,
1 Select Replication > Pods.
2 In the Pods pane, select the check boxes of the pods to display the bandwidth information.
The Continuous, Sync, and Resync panes display the bandwidth charts of continuous, syn-
chronous, and resync replication, respectively, for the selected pods for the past one hour. If
you do not select any pod in the Pods pane, the charts display no data. To remove a pod
from the charts, clear its check box in the Pods pane.
3 Hover over any of the Continuous, Sync, and Resync charts to display the bandwidth inform-
ation for continuous, synchronous, and resync replication, respectively, in the point-in-time
tooltips for the selected pods.
4 To adjust the display settings:


l Select the To remote check box to view the replication bandwidth information to the
remote array.
l Select the From remote check box to view the replication bandwidth information from
the remote array.
l Select both the To remote and From remote check boxes to view the replication
bandwidth information to the remote array, from the remote array, and the total (to
and from the remote array).
l Deselect both the To remote and From remote check boxes to hide the graphs.
5 To view the replication bandwidth information over a different time range, click the 1 Hour
range button to select a predefined time range. By default, the charts display the replication
bandwidth information for the past one hour. For the 1-hour time range, the charts are refreshed
every 30 seconds. For the 3-hour time range, the charts are refreshed every minute.
6 (Optional) In any of the Continuous, Sync, and Resync panes, click the graph to update the
bandwidth information of all replication types in the Pods pane for a specific point in time.
Click the menu icon of a chart to export the image of the chart in PNG or CSV format.

Chapter 9: Health
The Health page displays and manages the state of the array.

Hardware
The Hardware panel graphically displays the status of the FlashArray or Cloud Block Store hard-
ware components. See Figure 9-1 for a schematic representation of a FlashArray with several
component pop-ups displayed.
Figure 9-1. Hardware – FlashArray

See Figure 9-2 for a schematic representation of a Cloud Block Store with one component pop-
up displayed.


Figure 9-2. Hardware – Cloud Block Store

The title bar of the Hardware panel includes the array name, the raw capacity value, and parity
information. The raw capacity value represents the total usable capacity of the array, displayed
in bytes in both base 2 (e.g., 98.50 T for 98.50 tebibytes), and base 10 (e.g., 108.30 TB for
108.30 terabytes) formats. The parity value represents the percentage of data that is fully pro-
tected. The parity value will drop below 100% if the data isn't fully protected, such as when a
module is pulled and the array is rebuilding the data to bring it back to full parity.
The image is a schematic representation of the array with colored indicators of each com-
ponent's status. The colored squares within each hardware component represent the com-
ponent status:
l Green: Healthy and functioning properly at full capacity.
l Yellow: At risk, outside of normal operating range, or unrecognized.
l Red: Failed, installed but not functioning, or not installed (but required).
l Black: Not installed. With FlashArray//M, used for NVRAM bays and storage bays
that are allowed to be empty.
l Gray: Disconnected. Also used for components that are temporarily offline while
undergoing a firmware update.
Hover the mouse over a hardware component to display its status and details. For example,
hover over the Temperature component to display the following details: name of the shelf or con-
troller that is being monitored, physical location of the temperature sensors, and current tem-
perature readings.
Hardware components that can be actively managed from the Purity//FA GUI include buttons
that perform certain functions, such as turning ID lights on and off, and changing shelf ID num-
bers. For example, hover over the Shelf component to display its health status and shelf ID num-
ber. Click the Turn On ID Light button to turn on the LED light on the physical shelf for easy
identification. Click the Change ID button to change the ID number that appears on the physical
shelf.


Hover over a flash module component to display its health status, physical location in the shelf,
and capacity. If the module has been added to the array and is waiting to be admitted, click the
Admit all unadmitted drives button to admit all of the unadmitted modules, including the current
one.

FlashArray Hardware Components


Hardware components and their naming vary by FlashArray series. To see the hardware tech-
nical specifications for each FlashArray model, refer to the Products page at
https://www.purestorage.com.

Hardware Components in FlashArray//XL, FlashArray//X, and


FlashArray//M
The FlashArray chassis, controller and storage shelf names have the form XXm. The names of
other components have the form XXm.YYn or XXm.YYYn, where:
XX
Denotes the type of component:
l CH - //XL, //X, or //M chassis.
l CT - Controller.
l SH - Storage shelf.
m
Identifies the specific controller or storage shelf:
l For an //XL, //X, or //M chassis, m has a value of 0. For example, CH0.
l For controllers, m has a value of 0 or 1. For example, CT0, CT1.
l For storage shelves, m represents the shelf number, starting at 0. For example, SH0,
SH1.
l The assigned number can be changed on the shelf front panel or by running the
purehw setattr --id command.
YY or YYY
Denotes the type of component. For example, FAN for cooling device, FC for Fibre Chan-
nel port.
n
Identifies the specific component by its index (its relative position within the //XL, //X, or
//M chassis, controller, or storage shelf), starting at 0.
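As an illustration of this naming convention, the following hypothetical Python sketch (not part of Purity//FA) splits a component name such as CT0.FC2 or SH1.BAY5 into its type codes and indexes; the regular expression simply encodes the XXm.YYn and XXm.YYYn patterns described above:
import re

# Hypothetical helper (not part of Purity//FA): parse a component name per the
# XXm.YYn / XXm.YYYn convention described above.
COMPONENT_RE = re.compile(
    r"^(?P<type>[A-Z]+)(?P<index>\d+)(?:\.(?P<subtype>[A-Z]+)(?P<subindex>\d+))?$"
)

def parse_component(name: str) -> dict:
    match = COMPONENT_RE.match(name)
    if match is None:
        raise ValueError(f"unrecognized component name: {name}")
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(parse_component("CH0"))        # {'type': 'CH', 'index': '0'}
print(parse_component("CT1.ETH4"))   # {'type': 'CT', 'index': '1', 'subtype': 'ETH', 'subindex': '4'}
print(parse_component("SH2.BAY11"))  # {'type': 'SH', 'index': '2', 'subtype': 'BAY', 'subindex': '11'}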

The following tables list the //XL, //X, and //M hardware components that report status, grouped
by their location on the array. The hardware component names are used throughout Purity//FA,
for instance in the GUI Health > Hardware page, and with CLI commands such as puredrive
and purehw. See Table 9-3 for chassis components, Table 9-4 for controller components, and
Table 9-5 for storage shelf components.
The Identify Light column shows which components have an LED light on the physical com-
ponent that can be turned on and off.
Table 9-3. Chassis (CH0)
Component Name Identify Light Component Type
CH0 Yes Chassis
CH0.BAYn Yes Storage bay
CH0.NVBn Yes NVRAM bay
CH0.PWRn — Power module

Table 9-4. Controller (CTm)


Component Name Identify Light Component Type
CTm Yes Controller
CTm.ETHn — Ethernet port
CTm.FANn — Cooling fan
CTm.FCn — Fibre Channel port
CTm.IBn — InfiniBand port (included only with certain upgrade kits)
CTm.SASn — SAS port
CTm.TMPn — Temperature sensor

Table 9-5. Storage Shelf (SHm)


Component Name Identify Light Component Type
SHm Yes Storage shelf
SHm.BAYn Yes Storage bay
SHm.FANn — Cooling fan
SHm.IOMn — I/O module
SHm.PWRn — Power module
SHm.SASn — SAS port
SHm.TMPn — Temperature sensor

Capacity Upgrade and Drive Admission


Increase storage capacity by adding data packs or individual drives to a shelf.
Data packs can be added to any shelf that has open space.
Individual drives can be added to any unused slot within a shelf. The drives must be DirectFlash
modules, and the shelf must already contain similar-sized DirectFlash modules. The array will
not admit individual drives of other types or sizes. Add each drive to the unused slots from left to
right, starting with the lowest open slot.
Before you begin the upgrade, ensure your shelf has enough open spaces to hold the new packs
or individual drives.
Performing a capacity upgrade is a two-step process: first, add the modules to the array; and
second, admit the newly added modules.
When a module has been added to the array, its status changes from unused to identifying
as Purity works to identify the module. The module transitions to its final unadmitted status
when it has been successfully added to the array.
After all of the modules have been added (connected) to the array, they must be admitted. Admit
all of the newly added modules at once by hovering over any one of the unadmitted modules and
clicking Admit all unadmitted modules. Once a drive has been successfully admitted to the
array, its module status changes from unadmitted to healthy. This completes the drive
admission process.
If issues arise during the drive admission process, the module status changes to unre-
cognized or failed, or reverts to unadmitted. If the module status changes to unre-
cognized or failed, contact Pure Storage Technical Services. If the module returns to the
unadmitted status, try again to admit the modules. If the subsequent “admit” attempt is not suc-
cessful, contact Pure Storage Technical Services.
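The admission flow described above can be summarized as a set of status transitions. The following Python sketch is illustrative only; the status names come from this section, and the helper itself is hypothetical:
# Illustrative sketch (not Purity//FA code) of the module status flow described
# above: unused -> identifying -> unadmitted -> healthy, with unrecognized and
# failed as error states that require contacting Pure Storage Technical Services.
VALID_TRANSITIONS = {
    "unused": {"identifying"},
    "identifying": {"unadmitted"},
    "unadmitted": {"healthy", "unrecognized", "failed", "unadmitted"},
    "healthy": set(),
    "unrecognized": set(),   # contact Pure Storage Technical Services
    "failed": set(),         # contact Pure Storage Technical Services
}

def check_transition(old: str, new: str) -> bool:
    """Return True if the status change matches the documented admission flow."""
    return new in VALID_TRANSITIONS.get(old, set())

assert check_transition("unused", "identifying")
assert check_transition("unadmitted", "healthy")
assert not check_transition("identifying", "healthy")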

Upgrading Array Capacity


1 Verify the shelf has enough space to accommodate the new data packs or individual drives. If you
are adding individual drives, also verify the shelf is at least half full of DirectFlash modules
that are similar in size to the drives being added.
2 Add the data packs or individual drives to the array. New drives should be added to the
unused slots starting at the lowest numbered slot and working up slot by slot.

3 Hover over the newly added modules to verify that they are in unadmitted status, indicating
that the modules have been successfully connected but not yet admitted to the array.
4 Hover over any one of the unadmitted modules and click Admit all unadmitted modules to
admit all modules that have been added (connected) but not yet admitted to the array.
5 Hover over the newly admitted modules to verify that all of the shelves and drives are in
healthy status, indicating that the modules have been successfully admitted and are in use
by the system. This completes the drive admission process.

Alerts
Purity//FA generates an alert when there is a change to the array or to one of its hardware or soft-
ware components.
The Alerts panel displays the list of alerts that have been generated on the array. See Figure 9-
3.
Figure 9-3. Alerts

To conserve space, Purity//FA stores a reasonable number of alert records on the array. Older
entries are deleted from the log as new entries are added. To access the complete list of mes-
sages, contact Pure Storage Technical Services.
Purity//FA assigns a unique numeric ID to each alert as it is created. By default, alerts are sorted
in chronological descending order by "Last Seen" date.
The icons that appear along the left side of each alert in the list output represent the alert sever-
ity level:
l Blue (INFO) icons represent informational messages generated due to a change in
state. INFO messages can be used for reporting and analysis purposes. No action is
required.
l Yellow (WARNING) icons represent important messages warning of an impending
error if action is not taken.
l Red (CRITICAL) icons represent urgent messages that require immediate attention.
Click any of the column headings in the Alerts panel to change the sort order, and click any-
where in an alert row to display additional alert details.
Each alert in the list output includes the following information:
l Flag: Alert that has been flagged by Purity//FA or the user. Purity//FA automatically
flags all warning and critical alerts. An alert remains flagged until you have manually
cleared the flag to indicate that the alert has been addressed. If there are further
changes to the condition that caused the alert (for example, a temperature of a con-
troller or shelf has changed), Purity//FA will set the flag again.
l Sev: Alert severity, categorized as critical, warning, or info.
Critical (red) alerts are typically triggered by service interruptions, major performance
issues, or risk of data loss, and require immediate attention. For example, the array
triggers a critical alert if a module has been removed from the chassis.
Warning (yellow) alerts are of low to medium severity and require attention, though
not as urgently as critical alerts. For example, the array triggers a warning alert if it
detects an unhealthy module.
Informational (blue) alerts inform users of a general behavior change and require no
action. For example, the array triggers an informational alert if the NFS service is
unhealthy.
By default, alerts of all severity levels are displayed. To filter the list to display only
alerts of a certain minimum severity level, click the All Severity Levels drop-down but-
ton and select the desired minimum severity level from the list.

l ID: Unique number assigned by the array to the alert. ID numbers are assigned to
alerts in chronological ascending order.
l Code: Alert code number that Pure Storage uses to identify the type of alert event.
l State: Current state of the alert. Possible states include: open and closed.
An alert goes from open state to closed state when the issue is completely
resolved.
By default, both open and closed alerts are displayed. To filter the list to display only
open alerts, click the Open and Closed drop-down button and select Open Only.
l Created: Date and time the alert was first generated and initial alert email noti-
fications were sent to alert watchers.
By default, all alert records on the array are displayed. To display a list of alerts that
were created within a certain time range, click the All Time drop-down button and
select the desired time range from the list.
l Updated: Most recent date and time the array saw the issue that generated the alert.
Note that alerts that have been updated within the last 24 hours and are still open also
appear in the Dashboard > Recent Alerts panel.
l Category: Group to which the alert belongs. Categories include Array Alerts, Hard-
ware Alerts, Software Alerts.
By default, alerts from all categories are displayed. To filter the list to display only
alerts from a certain category, click the All Categories drop-down button and select
the category from the list.
l Component: Specific array, software, or hardware component that triggered the alert.
l Subject: Alert details.
Alerting also appears in other sections of the Purity//FA GUI. From any page of the Purity//FA
GUI, the alert icons that appear in the upper-right corner of the page display the number of open
alerts for the respective alert severity. For example, a "1" next to a yellow warning icon indicates
one open warning alert.
In the Dashboard page, the Recent Alerts pane displays a list of open alerts that have been
updated within the last 24 hours.

Flagging an Alert Message


To flag an alert message:

1 Select Health > Alerts.


2 In the Alerts panel, click the gray flag next to the alert message you want to flag. The flag
turns blue, indicating that it is flagged.

Clearing an Alert Flag


To clear an alert message flag:
1 Select Health > Alerts.
2 In the Alerts panel, click the blue flag next to the alert message you want to unflag. The flag
turns gray, indicating that it is no longer flagged.

Connections
The Connections page displays connectivity details between the Purity//FA hosts and the array
ports.
The Host Connections panel displays a list of hosts, the connectivity status of each host, and the
number of initiator ports associated with each host. See Figure 9-4.

Figure 9-4. Connections

The Paths column displays the connectivity status between the host and controllers in a highly
available environment, where the colored value indicates one of the following connection health
statuses:
l Green: Fully redundant and highly available. No issues detected.
l Yellow: Not fully redundant. Issues detected that may impact high availability.
l Red: Single controller connectivity only.
l Gray: No connectivity.
Possible connection statuses include:
Redundant
All paths between the host and each of the controllers in a highly available array are con-
nected.
Uneven
The number of paths between the host and each controller is uneven. This may impact
high availability. Make sure that there are the same number of paths from the host to
each controller.

Unused Port
The host has unused initiators. This may impact high availability. Make sure that all of the
initiators have at least one path to the array.
Single Controller
The host has paths to only one of the controllers. No paths exist to the other controller.
This impacts high availability. Make sure that there are redundant paths from the host to
both controllers.
Single Controller - Failover
The host has paths to one controller, but one or more of those paths has failed over.
None
The host is not connected to any of the controllers.
Select the check boxes along the top of the Host Connections list to filter the hosts by con-
nection status.
The Array Ports panel displays the connection mappings between each array port and initiator
port. Each array port includes the following connectivity details: associated iSCSI Qualified
Name (IQN), NVMe Qualified Name (NQN), or Fibre Channel World Wide Name (WWN)
address, communication speed, and failover status. A check mark in the Failover column means
the port has failed over to the corresponding port pair on the primary controller.

Viewing Host Connection Details


The connection map displays connectivity details for all hosts.
To view the host connection details:
Select Health > Connections.
l To view the connection map details for a specific host-volume connection, in the Host
Connections panel, click the host to drill down to its connection map.
l To view the connectivity status between the hosts and the array controller ports, in
the Host Connections pane, click the menu icon, select up to 10 hosts that you want
to analyze, and select Compare.

Viewing Array Port Details


To view the array port details:

Select Health > Connections. The Array Ports pane displays the connection mappings between
each array port and initiator port.
Optionally click the menu icon and select Download CSV to save the ports.csv file to your
local machine.

Network
View network statistics, bandwidth, and errors for the network interfaces on the array by select-
ing Health > Network. See Figure 9-5.
Figure 9-5. Network

The Health > Network page contains the following panes:


l Ports
Displays the network statistics summary of individual interfaces on the array. The
error statistics include the following errors:

l CRC Errors/s (RX): Indicates the number of received packets per second
with incorrect checksums. A cyclic redundancy check (CRC) is an error-
detecting code for data transmission.
l Frame Errors/s (RX): Indicates the number of received packets per
second with misaligned Ethernet frames.
l Carrier Errors/s (TX): Indicates the number of transmitted packets per
second with duplex mismatch or faulty hardware issues.
l Dropped Errors/s (TX): Indicates the number of transmitted packets per
second that were dropped.
l Other Errors/s: Indicates the number of packets per second with all other
types of receive and transmit errors.
l Total Errors/s
Displays the graphical representation of the error history over the selected range of
time, annotated with the number of total errors per second for individual (or the sum of
all) interfaces at a specific point in time.
l Bandwidth
Displays the graphical representation of the bandwidth history over the selected
range of time, annotated with the numbers of the transmitted bytes, received bytes,
and total bytes per second for individual (or the sum of all) interfaces at a specific
point in time.
l Packets/s
Displays the graphical representation of the historical packet information over the
selected range of time, annotated with the numbers of transmitted packets, received
packets, and total packets per second for individual (or the sum of all) interfaces at a
specific point in time.

Viewing Network Statistics


To view the network statistics:
1 Select Health > Network.
The Ports pane displays the network statistics information of individual network interfaces
on the array.
2 In the Ports panel, click either one of the following buttons:

l Summary
Displays the network statistics information. This is the default.
l Errors
Displays the error statistics including CRC, frame, carrier, dropped, and other errors.

Viewing Network Statistics in Graphical Representations

To view the network statistics in graphical representations:
1 Select Health > Network.
2 In the Ports pane, select the check box of the interface to display the network statistics.
The Total Errors/s, Bandwidth, and Packets/s panes display the historical information of
total errors, bandwidth, and packets, respectively, for the selected interface for the past
one hour.
To deselect a network interface from the charts, use one of the following methods:
l Deselect the check box of the interface in the Ports pane.
l Click the X mark next to the interface from the Selection(n) dropdown menu in the
upper-left corner.
l Click Clear all to clear the interface selections from the Selection(n) dropdown menu
in the upper-left corner.
3 Hover over any of the Total Errors/s, Bandwidth, and Packets/s charts to display the total
errors, bandwidth, and packets, respectively, in the point-in-time tooltips for the selected net-
work interfaces.
If you do not select any interface in the Ports pane, the charts display the total network
information of all the interfaces. The values in the tooltips are rounded to two decimal
places.
4 To view historical network information over a different time range, click the 1 Hour range but-
ton to select a time range.
By default, the charts display network statistics for the past hour. For the 1-hour time interval,
the displays refresh every 30 seconds. For the 3-hour time interval, the displays refresh every
minute.
5 To further zoom into a time range, click and drag from a start time to an end time.
Click the Reset Zoom button to zoom back to the time range specified.

6 (Optional) In the Total Errors/s, Bandwidth, or Packets/s pane, click the graph to update the
total errors, bandwidth, and packets information in the Ports pane for a specific point in time.
Click the menu icon of a chart to export the image of the chart in PNG or CSV format.

Chapter 10: Settings
The Settings page displays and manages the general attributes and network settings of an
array.

System
The Settings > System page displays and manages the general attributes of the FlashArray.
See Figure 10-1.

Figure 10-1. Settings - System Page

Array Name
At the top of the System page is the name of the array. The array name is used for various admin-
istration and configuration purposes.
The array name appears in audit and alert messages. The array name also represents the send-
ing account name for Purity//FA email alert messages.
The name is used to identify the array when connecting it to other arrays. For asynchronous rep-
lication, the array name appears as part of the snapshot name when viewing replicated snap-
shots on a target array. For ActiveCluster (synchronous replication), the array name is used to
identify the arrays over which pods are stretched and unstretched.
The array can be renamed at any time, and the name change takes effect immediately. Note that
Purity//FA does not register array names with the DNS, so if you change the array name, you
must re-register the name before the array can be addressed by name in browser address bars,
ICMP ping commands, and so on.

Renaming the Array


1 Select Settings > System.
2 Click the edit icon next to the current array name. The array name becomes an editable box.
3 In the editable box, type the new array name.
4 Click the check mark icon to confirm the change.

Alert Watchers
Purity//FA generates an alert whenever the health of a component degrades or a capacity
threshold is reached. Alerts can also be sent as email notifications to designated alert watchers.
The Alert Watchers panel displays the email addresses of designated alert watchers and the
alert status of each watcher. The sending account name for Purity//FA alert email notifications is
the array name at the configured sender domain.
The list of alert watchers includes the built-in [email protected]
address, which cannot be deleted.
Once added, an alert watcher starts receiving alert email notifications.
Alert watchers can be in enabled or disabled status. Alert watchers who are in enabled status
receive alert email notifications. When an alert watcher is created, its watcher status is
automatically set to enabled status. Alert watchers who are in disabled status do not receive
alert email notifications. Disabling an alert watcher does not delete the recipient's email address
- it only stops the watcher from receiving alert notifications. Alert watchers can be enabled and
disabled at any time. The current alert watcher status is determined by the color of the toggle but-
ton that appears next to the alert watcher email address, where blue represents an enabled alert
watcher and gray represents a disabled alert watcher.
Deleting an alert watcher completely removes the watcher from the list. Once an email address
has been deleted, the corresponding alert watcher will no longer receive alert notifications.

Adding an Alert Watcher


You can designate up to 19 alert recipients.
1 Select Settings > System.
2 In the Alert Watchers panel, type the email address of the alert watcher.
3 Click the Add button to add the email address to the list of alert watchers. Once added, the
alert watcher immediately starts receiving alert email messages.

Enabling and Disabling an Alert Watcher


You cannot disable built-in alert recipient [email protected].
1 Select Settings > System.
2 In the Alert Watchers panel, click the toggle button to enable (blue) and disable (gray) an
alert watcher. Once enabled, an alert watcher starts receiving alert email notifications.

Deleting an Alert Watcher


You cannot delete built-in alert recipient [email protected].
1 Select Settings > System.
2 In the Alert Watchers panel, click the delete icon next to the alert watcher you want to delete.

Alert Routing
The Alert Routing panel displays the ways in which alerts and logs are managed.

Relay Host
The relay host represents the hostname or IP address of the email relay server currently being
used as a forwarding point for alert email notifications generated by the array.
For SMTP servers that require authentication, also specify the username and password. The
username represents the SMTP account name used to authenticate into the relay host SMTP
server. The password represents the SMTP password used to authenticate into the relay host
SMTP server.
If a relay host is not configured, Purity//FA sends all alert email notifications directly to the recip-
ient addresses rather than route them via the relay (mail forwarding) server.

Configuring the SMTP Relay Host


1 Select Settings > System.
2 In the Alert Routing panel, click the edit icon. The Edit SMTP dialog box appears.
3 In the Relay Host field, type the host name or IP address of the email relay server that is to be
used as the forwarding point for alert email notifications generated by the array. If specifying
an IP address, enter the IPv4 or IPv6 address.
For IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a number
ranging from 0 to 255 representing a group of 8 bits. If a port number is also specified,
append it to the end of the address in the form ddd.ddd.ddd.ddd:PORT, where PORT rep-
resents the port number.
For IPv6, specify the IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where xxxx is a hexadecimal num-
ber representing a group of 16 bits. Consecutive fields of zeros can be shortened by repla-
cing the zeros with a double colon (::). If a port number is also specified, enclose the
entire address in square brackets ([]) and append the port number to the end of the
address. For example, [xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]:PORT,
where PORT represents the port number. (See the address validation sketch after this procedure.)
4 In the Username field, type the SMTP account name used to authenticate into the relay host
SMTP server. Only specify the username and password if the SMTP server requires authen-
tication.
5 In the Password field, type the SMTP password used to authenticate into the relay host
SMTP server. Only specify the username and password if the SMTP server requires authen-
tication.
6 Click Save.
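The IPv4 and IPv6 relay host address formats described in step 3 can be sanity-checked before entry. The following Python sketch is a hypothetical validator for the IP address forms only; hostnames are also accepted by the Relay Host field but are not checked here:
import ipaddress

# Hypothetical validator (not Purity//FA code) for the IP address forms
# described in step 3 of "Configuring the SMTP Relay Host".
def parse_relay_host(value):
    port = None
    if value.startswith("["):                 # bracketed IPv6, optionally with :PORT
        host, _, rest = value[1:].partition("]")
        if rest.startswith(":"):
            port = int(rest[1:])
        ipaddress.IPv6Address(host)           # raises ValueError if malformed
        return host, port
    if value.count(":") == 1:                 # IPv4 with :PORT
        host, port_text = value.split(":")
        port = int(port_text)
    elif ":" in value:                        # bare IPv6 address, no port
        ipaddress.IPv6Address(value)
        return value, None
    else:                                     # bare IPv4 address, no port
        host = value
    ipaddress.IPv4Address(host)               # raises ValueError if malformed
    return host, port

print(parse_relay_host("192.0.2.25:587"))     # ('192.0.2.25', 587)
print(parse_relay_host("[2001:db8::25]:587")) # ('2001:db8::25', 587)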

Deleting the SMTP Relay Host


1 Select Settings > System.
2 In the Alert Routing panel, click the edit icon. The Edit SMTP dialog box appears.
3 Delete the relay host name or IP address. Optionally delete the SMTP user name and pass-
word.
4 Click Save.

Sender Domain
The sender domain determines how logs are parsed and treated by Pure Storage Technical Ser-
vices. The domain name is also used in the "from" address of outgoing alert email notifications.
By default, the sender domain is set to the domain name please-configure.me.
It is crucial that you set the sender domain to the correct domain name. If the array is not a Pure
Storage test array, set the sender domain to the actual customer domain name. For example,
mycompany.com.
The email address that Purity//FA uses to send alert messages includes the sender domain
name and is comprised of the following components:
<Array_Name>-<Controller_Name>@<Sender_Domain_Name>.com
For example, [email protected].

Configuring the Sender Domain


1 Select Settings > System.
2 In the Sender Domain section of the Alert Routing panel, click the edit icon. The field
becomes an editable box.
3 In the editable box, type the sender domain name.
The default domain name is please-configure.me. If this is not a Pure Storage test
array, the domain name must be set to your company's domain name. For example,
mycompany.com.

Important: The sender domain determines how Purity//FA logs are parsed and
treated by Pure Storage Technical Services, so it is crucial that you set the sender
domain to the correct domain name.
4 Click the check mark icon to confirm the change.

UI
The UI panel displays and manages general user interface details, including banner text and idle
timeout.

Login Banner
The Login Banner section enables you to create a message that users see on the Purity//FA GUI
login screen when logging into the GUI, and before the password prompt when logging into the CLI.

Creating a Banner Message


To create a banner message for all Purity//FA users to see:
1 Go to the Settings > System > UI panel.
2 In the Login Banner section, click Create. The Edit Login Banner dialog box appears.
3 Click inside the text box and type the banner message. The message can be up to 2000 char-
acters long and accepts ASCII characters.
4 Click Save, and then click Close. Notice that the Create button becomes a View button,
allowing you to view and edit the existing banner message at any time.
5 To verify that the banner message appears, log out of the Purity//FA GUI. The banner mes-
sage appears in the Purity//FA GUI login screen, just above the Pure Storage logo.

GUI Idle Timeout


The GUI Idle Timeout feature displays the length of time, measured in minutes, that the Pur-
ity//FA GUI can be idle before the user is logged out of the session.
The default idle time is 30 minutes.

Setting the Idle Timeout Value


To set the idle timeout value:
1 Go to the Settings > System > UI panel.
2 In the GUI Idle Timeout section, click the edit icon next to the current idle timeout value (in
minutes), and then enter the amount of time in minutes that a Purity//FA GUI session can be
idle before the user is logged out. The idle time can be any length between 5 and 180
minutes. To disable the idle timeout setting, set the idle time to 0 minutes.
3 Click the check mark to confirm the change. The idle timeout setting takes effect the next
time you log in to the Purity//FA GUI.

Disabling the Idle Timeout Setting


To disable the idle timeout setting:
1 Go to the Settings > System > UI panel.
2 In the GUI Idle Timeout section, click the edit icon next to the current idle timeout value (in
minutes), and then enter 0.
3 Click the check mark to confirm the change. The idle timeout setting becomes disabled the
next time you log in to the Purity//FA GUI.

Syslog Servers
The Syslog Servers feature enables you to forward syslog messages to remote servers.
The Purity//FA syslog logging facility generates messages of major events within the FlashArray
and forwards the messages to remote servers. Purity//FA generates syslog messages for three
types of events:
l Alerts (purity.alert)
l Audit Trails (purity.audit)
l Tests (purity.test)
Purity//FA generates alerts when there is a change to the array or to one of the Purity//FA hard-
ware or software components. There are three alert severity levels:
l INFO: Informational messages that are generated due to a change in state. INFO
messages can be used for reporting and analysis purposes. No action is required.
l WARNING: Important messages that warn of an impending error if action is not
taken.
l CRITICAL: Urgent messages that require immediate attention.
Syslog alerts are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.alert <Alert Severity>
<Alert Details>

In Figure 10-2, Purity//FA generated a WARNING alert because space consumption on the array
exceeded 90%:
Figure 10-2. Syslog Server – Alerts

Alerts are also sent via the phone home facility to the Pure Storage Technical Services team. If
configured, alerts can also be sent to designated email recipients and SNMP trap managers.
You can also view alerts through the GUI (Health > Alerts) and CLI (puremessage list com-
mand).
An audit trail represents a chronological history of the GUI, CLI, or REST API operations that a
user has performed to modify the configuration of the array. Each message within an audit trail
includes the name of the Purity//FA user who performed the operation and the Purity//FA oper-
ation that was performed.
Syslog audit trail messages are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.audit <Purity//FA Username>
<Purity//FA Command> <Audit Trail Message Details>
In Figure 10-3, pureuser performed various GUI, CLI, or REST API operations:
Figure 10-3. Syslog Server – Audit Trails

You can also view audit messages through the GUI (Settings > Access) and CLI (pureaudit
list command).
Test messages represent a history of all tests generated by users to verify that the array can
send messages to email recipients. The message does not indicate whether or not the test mes-
sage successfully reached the recipients.
Syslog test messages are broken down into the following format:
<Event Timestamp> <Array IP Address> purity.test <Purity//FA Username>
<Test Message Details>
In Figure 10-4, pureuser performed a test to determine if the array could send messages to email
addresses:
Figure 10-4. Syslog Server – Tests
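The three message templates above can be parsed mechanically. The following Python sketch is illustrative only; it assumes a single-token timestamp and whitespace-separated fields, with the trailing details field left free-form:
# Rough parsing sketch for the purity.alert / purity.audit / purity.test
# templates shown above. Assumes one whitespace-separated token per field.
FIELDS = {
    "purity.alert": ["severity"],
    "purity.audit": ["username", "command"],
    "purity.test": ["username"],
}

def parse_purity_syslog(line: str) -> dict:
    timestamp, array_ip, facility, rest = line.split(maxsplit=3)
    record = {"timestamp": timestamp, "array_ip": array_ip, "facility": facility}
    names = FIELDS[facility]
    parts = rest.split(maxsplit=len(names))
    record.update(zip(names, parts))
    record["details"] = parts[len(names)] if len(parts) > len(names) else ""
    return record

sample = "2023-10-05T12:00:00Z 192.0.2.10 purity.alert WARNING (array_42) space utilization exceeds 90%"
print(parse_purity_syslog(sample))
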

Setting the Syslog Server Output Location


To set the syslog server output location:
1 Select Settings > System.
2 In the Syslog Server panel, enter the URI of the remote syslog server. For example,
tcp://MyHost.com.
Specify the URI in the format PROTOCOL://HOSTNAME:PORT.
PROTOCOL is "tcp", "tls", or "udp".
HOSTNAME is the syslog server hostname or IP address. If specifying an IP address, for
IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a number ran-
ging from 0 to 255 representing a group of 8 bits.
For IPv6, specify the IP address in the form
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx], where xxxx is a hexadecimal
number representing a group of 16 bits. Enclose the entire address in square brackets
([]). Consecutive fields of zeros can be shortened by replacing the zeros with a double
colon (::).
PORT is the port at which the server is listening. Append the port number after the end of
the entire address. If the port is not specified, it defaults to 514. (See the URI parsing sketch after this procedure.)
3 Click the Add button to add the URI to the list of syslog server output locations.
4 Optionally, click Test to test the setting.
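The URI format described in step 2 can be checked with a short sketch. The following Python example is hypothetical (not Purity//FA code) and only mirrors the PROTOCOL://HOSTNAME:PORT rules above, including the default port of 514:
from urllib.parse import urlsplit

# Hypothetical sketch that mirrors the syslog server URI rules in step 2:
# tcp://, tls://, or udp://, a hostname or IP address, and an optional port
# that defaults to 514.
def parse_syslog_uri(uri):
    parts = urlsplit(uri)
    if parts.scheme not in ("tcp", "tls", "udp"):
        raise ValueError(f"unsupported protocol: {parts.scheme!r}")
    if not parts.hostname:
        raise ValueError("missing hostname or IP address")
    return parts.scheme, parts.hostname, parts.port or 514

print(parse_syslog_uri("tcp://MyHost.com"))           # ('tcp', 'myhost.com', 514)
print(parse_syslog_uri("tls://[2001:db8::15]:6514"))  # ('tls', '2001:db8::15', 6514)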

SMI-S
The SMI-S panel manages the Pure Storage Storage Management Initiative Specification (SMI-
S) provider.
Enable the SMI-S provider to administer the array through an SMI-S client. The SMI-S provider
is optional and must be enabled before its first use.
For more information about the SMI-S provider, refer to the Pure Storage SMI-S Provider Guide
on the Knowledge site at https://support.purestorage.com.

Array Time
The Array Time panel displays the array’s current time, and the IP addresses or fully qualified
hostnames of the Network Time Protocol (NTP) servers with which array time is synchronized.
Pure Storage technicians set the array time zone during installation. By default, the array time is
synchronized to an NTP server operated by Pure Storage. Alternate NTP servers can be des-
ignated.

Time
The displayed time is based on the time zone of the array, which is set during the FlashArray
installation.

NTP Servers
The NTP Servers section displays the hostnames or IP addresses of the Network Time Protocol
(NTP) servers that are currently being used by the array to maintain reference time. The install-
ation technician sets the proper time zone for an array when it is installed. During operation,
arrays maintain time synchronization by interacting with the NTP server.
Designating an Alternate NTP Server
The array maintains time synchronization by interacting with the NTP server.
To designate an alternate NTP server:
1 Select Settings > System.
2 In the NTP Servers section of the Time panel, perform one of the following tasks:

l To add an NTP server, in the New NTP Server(s) text box, type the hostname or IP
address of the NTP server used by the array to maintain reference time, and then
click the Add button. You can add up to four NTP servers. Enter multiple servers as
comma-separated values.
If specifying an IP address, for IPv4, specify the IP address in the form ddd.d-
dd.ddd.ddd, where ddd is a number ranging from 0 to 255 representing a group of
8 bits. For IPv6, specify the IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where xxxx is a hexadecimal
number representing a group of 16 bits. When specifying an IPv6 address, con-
secutive fields of zeros can be shortened by replacing the zeros with a double colon
(::).
l To remove an NTP server, select the check box of the server you want to remove,
and then click the delete icon.

Note: Editing NTP servers is not supported on Cloud Block Store.

Cloud Features
Cloud Features displays and manages features associated with cloud applications.

Single Sign-On
The single sign-on (SSO) facility enables users to configure secure access to cloud applications.
To change the setting, click the edit icon and then click the toggle button to switch between
enabled (blue) and disabled (gray) status. Then click Save.

Pure1 Support
The Pure1 Support panel displays and manages the features used to communicate with Pure
Storage Technical Services.

Phone Home
The phone home facility provides a secure direct link between the array and Pure Storage Tech-
nical Services to transmit log and diagnostic information.

This information provides Pure Storage Technical Services with complete recent history about
array performance and significant events in case diagnosis or remedial actions are required.
Alerts are reported immediately when they occur so that timely action can be taken.
The phone home facility can be enabled (blue) or disabled (gray) at any time. By default, the
phone home facility is enabled. Log and diagnostic information is only transmitted when the fea-
ture is enabled. If the phone home facility is disabled, historical log contents are delivered when
the facility is next enabled; Purity will continue to send alerts to designated email recipients and
SNMP trap managers if those features are configured.
Enabling and Disabling Phone Home
Enable the phone home facility to automatically transmit log files on an hourly basis to Pure Stor-
age Technical Services via the phone home channel.

Note: If a proxy host is required for HTTPS transmission, configure the proxy server.


1 Select Settings > System.
2 In the Phone Home section of the Pure1 Support panel, click the toggle button to switch
between enabled (blue) and disabled (gray) status.

Manual Phone Home


Phone home logs can be sent to Pure Storage Technical Services on demand, with options
including Today's Logs, Yesterday's Logs, or All Logs.
Sending Phone Home Logs to Pure Storage Technical Services

Note: If a proxy host is required for HTTPS transmission, configure the proxy server.


To manually send array log files to Pure Storage Technical Services via the phone home chan-
nel:
1 Select Settings > System.
2 In the Manual Phone Home section of the Pure1 Support panel, select one of the following
options from the drop-down list:
l Today's Logs: Sends log information from the current day (in the array’s time zone)
l Yesterday's Logs: Sends log information from the previous day (in the array’s time
zone)

l All Log History: Sends all available log history from the array
3 Click Send Now to send the log files to Pure Storage Technical Services.

Remote Assist
In some cases, the most efficient way for Pure Storage Technical Services to service a FlashArray
or diagnose problems is through direct access to the array. A remote assistance (RA)
session grants Pure Storage Technical Services direct and secure access to the array through a
reverse tunnel which you, the administrator, open. This is a two-way communication.
Opening an RA session gives Pure Storage Technical Services the ability to log into the array,
effectively establishing an administrative session. Once the RA session is successfully estab-
lished, the array returns connection details, including the date and time when the session was
opened, the date and time when the session expires, and the proxy status (true, if configured).
After the Pure Storage Technical Services team has performed all of the necessary diagnostic
or maintenance functions, close the RA session to terminate the connection.
RA sessions can be opened/connected (blue) and closed/disconnected (gray) at any time. By
default, the RA session is closed/disconnected.
Opening and closing a remote assist session does not affect the current administrative session.
An open RA session automatically terminates (disconnects) after two days have elapsed.
Opening and Closing a Remote Assistance (RA) Session
To open and close an RA session:
1 Select Settings > System.
2 In the Remote Assistance section of the Pure1 Support panel, click the toggle button to open
(blue) and close (gray) an RA session. Opening an RA session gives Pure Storage Technical
Services direct and secure access to the array. After the Pure Storage Technical Services
team has performed all of the necessary diagnostic functions, close the RA session.

Support Logs
Purity//FA continuously logs a variety of array activity, including performance metrics, hardware
and software operations, and administrative actions. Array activity is time stamped and organ-
ized in chronological order. The Support Logs panel allows you to download the Purity//FA log
contents of the specified controller to the current administrative workstation.
If Phone Home is enabled, the logs are periodically transmitted to Pure Storage. The logs are
also saved to the array, available for manual download.

When the support logs are manually downloaded, the array generates a password-protected
.zip file containing all of the logs and saves it to your local machine.
Downloading Support Logs
1 Select Settings > System.
2 In the Support Logs section of the Pure1 Support panel, select the time range representing
the approximate array time the activity of interest occurred.
3 In the "Download from" section, click the button corresponding to the controller from which
you want to download the support logs. For example, click CT0 to download the logs for the
primary controller. The password-encrypted .zip file is saved to your local machine. The file
can only be opened by Pure Storage Technical Services.

Event Logs
The Purity//FA event log continuously logs array events and administrative actions with time-
stamped entries. The logging detail level is customizable for audit, security monitoring,
forensics, timeline analysis, troubleshooting, or other purposes.
Complete event log content is not displayed directly through the GUI or CLI. Instead, event logs
are available for manual download through the GUI (Settings > System > Pure1 Support
> Event Logs). A portion of the event log content consists of alert, audit, and session entries that
are displayed (separately from the event log) in the GUI (Health > Alerts, Settings > Access > Audit
Trail, and Settings > Access > Session Log) and by CLI commands (purealert list, pur-
eaudit list, and puresession list).
Use the CLI purelog global setattr --logging-severity command to customize
the event logging level.
The severity of events that are collected in the event log is customizable to the following levels:
l notice. Events that are unusual or require attention, including warnings and errors.
l info. Normal operations that require no action. Default.
l debug. Verbose information useful for debugging and auditing.
The event log retains logs either for 90 days or for 10 GB of logs, whichever occurs first. In addi-
tion, if remote syslog is configured, the contents of the event log are sent to the remote syslog.

Downloading Event Logs


1 Select Settings > System.

2 In the Event Logs section of the Pure1 Support panel, select the time range of logs to down-
load: today's logs or the last 3, 7, 30, or 90 days of logs.
3 Click Download. Navigate to the location to save the file. Optionally rename the file.
The .zip file contains .gz files for the selected time range and an .md5 checksum file for each
.gz file.
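Because each .gz file ships with an .md5 checksum file, the download can be verified after extraction. The following Python sketch is hypothetical (not Purity//FA code) and assumes each checksum file sits alongside its .gz file and holds the hex digest as its first whitespace-separated token; adjust if your files differ:
import hashlib
import pathlib

# Hypothetical helper: check each extracted .gz log file against its .md5
# checksum file, as described above.
def verify_event_logs(directory):
    for gz_file in pathlib.Path(directory).glob("*.gz"):
        md5_file = pathlib.Path(str(gz_file) + ".md5")   # assumed naming
        expected = md5_file.read_text().split()[0]
        actual = hashlib.md5(gz_file.read_bytes()).hexdigest()
        print(f"{gz_file.name}: {'OK' if actual == expected else 'MISMATCH'}")

verify_event_logs("extracted_event_logs")     # hypothetical directory name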

Proxy Server
The Proxy section manages the proxy hostname for https log transmission. The proxy host-
name, if set, represents the server to be used as the HTTP or HTTPS proxy. The format for the
proxy host name is http(s)://hostname:port, where hostname is the name of the proxy
host, and port is the TCP/IP port number used by the proxy host.
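A quick way to confirm a proxy entry matches the expected format is to parse it before saving. The following Python sketch is hypothetical (not Purity//FA code):
from urllib.parse import urlsplit

# Hypothetical check of the proxy entry format described above:
# http(s)://hostname:port, with both a hostname and a port required.
def parse_proxy(value):
    parts = urlsplit(value)
    if parts.scheme not in ("http", "https"):
        raise ValueError("proxy must start with http:// or https://")
    if not parts.hostname or parts.port is None:
        raise ValueError("proxy must include both a hostname and a port")
    return parts.scheme, parts.hostname, parts.port

print(parse_proxy("https://proxy.example.com:8080"))  # ('https', 'proxy.example.com', 8080)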

Configuring the Proxy Host


To configure the proxy host for HTTPS communication for phone home and log transmission:
1 Select Settings > System.
2 In the Proxy Server section of the Pure1 Support panel, click the edit icon. The field becomes
an editable box.
3 In the editable box, type the proxy host name.
The format for the host name is http(s)://hostname:port, where hostname is the
name of the proxy host, and port is the TCP/IP port number used by the proxy host.
4 Click the check mark icon to confirm the change.
Deleting the Proxy Host
1 Select Settings > System.
2 In the Proxy Server section of the Pure1 Support panel, click the edit icon next to the current
proxy host name. The proxy host name becomes an editable box.
3 Delete the proxy host name.
4 Click the check mark icon to confirm the deletion.

SSL Certificate
Purity//FA creates a self-signed certificate and private key when you start the system for the first
time. The SSL Certificate panel allows you to view and change certificate attributes, create a
new self-signed certificate, construct certificate signing requests, import certificates and private
keys, and export certificates.

Self-Signed Certificate
Creating a self-signed certificate replaces the current certificate. When you create a self-signed
certificate, include any attribute changes, specify the validity period of the new certificate, and
optionally generate a new private key. See Figure 10-5.
Figure 10-5. SSL Certificate – Create Self-Signed Certificate

When you create the self-signed certificate, you can generate a private key and specify a dif-
ferent key size. If you do not generate a private key, the new certificate uses the existing key.
You can change the validity period of the new self-signed certificate. By default, self-signed cer-
tificates are valid for 3650 days.

CA-Signed Certificate
Certificate authorities (CA) are third party entities outside the organization that issue certificates.

To obtain a CA certificate, you must first construct a certificate signing request (CSR) on the
array. See Figure 10-6.
Figure 10-6. SSL Certificate – Construct Certificate Signing Request

The CSR represents a block of encrypted data specific to your organization. You can change the
certificate attributes when you construct the CSR; otherwise, Purity//FA will reuse the attributes
of the current certificate (self-signed or imported) to construct the new one. Note that the cer-
tificate attribute changes will only be visible after you import the signed certificate from the CA.
Send the CSR to a certificate authority for signing. The certificate authority returns the SSL cer-
tificate for you to import. Verify that the signed certificate is PEM formatted (Base64 encoded),
includes the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----"
lines, and does not exceed 3000 characters in total length. When you import the certificate,
also import the intermediate certificate if it is not bundled with the CA certificate. See Figure 10-7.

Figure 10-7. SSL Certificate – Import CA Certificate

If the certificate is signed with the CSR that was constructed on the current array and you did not
change the private key, you do not need to import the key. However, if the CSR was not con-
structed on the current array or if the private key has changed since you constructed the CSR,
you must import the private key. If the private key is encrypted, also specify the passphrase.
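The pre-import checks described above (PEM format, the BEGIN and END CERTIFICATE lines, and the 3000-character limit) can be expressed as a short sketch. The following Python example is illustrative only, and the file name is hypothetical:
# Minimal sketch (not Purity//FA code) of the pre-import certificate checks
# described above.
def check_pem_certificate(pem_text: str) -> None:
    if "-----BEGIN CERTIFICATE-----" not in pem_text:
        raise ValueError("missing '-----BEGIN CERTIFICATE-----' line")
    if "-----END CERTIFICATE-----" not in pem_text:
        raise ValueError("missing '-----END CERTIFICATE-----' line")
    if len(pem_text) > 3000:
        raise ValueError(f"certificate is {len(pem_text)} characters; limit is 3000")

with open("signed_cert.pem") as f:     # hypothetical file name
    check_pem_certificate(f.read())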

Certificate Administration
The attributes of a self-signed certificate can only be changed by creating a new certificate. Cer-
tificate attributes include organization-specific information, such as country, state, locality, organ-
ization, organizational unit, common name, and email address.
The export feature allows you to view and export the certificate and intermediate certificates for
backup purposes.

Creating or Changing the Attributes of a Self-Signed Certificate

Note: When you change the certificate attributes, Purity//FA replaces the existing cer-
tificate with the new certificate and its specified attributes.
1 Select Settings > System.
2 In the SSL Certificate panel, click the menu icon and select Create Self-Signed Certificate.
The Create Self-Signed Certificate pop-up window appears.
3 Complete or modify the following fields:

l Generate new key: Click the toggle button to generate (blue) or not generate (gray) a
new private key with the self-signed certificate. If you do not generate a new private
key, the certificate uses the existing key.
l Key Size: If you generate a new private key, specify the key size. The default key size
is 2048 bits. A key size smaller than 2048 is considered insecure.
l Country: Enter the two-letter ISO code for the country where your organization is loc-
ated.
l State/Province: Enter the full name of the state or province where your organization
is located.
l Locality: Enter the full name of the city where your organization is located.
l Organization: Enter the full and exact legal name of your organization. The organ-
ization name should not be abbreviated and should include suffixes such as Inc,
Corp, or LLC.
l Organizational Unit: Enter the department within your organization that is managing
the certificate.
l Common Name: Enter the fully qualified domain name (FQDN) of the current array.
For example, the common name for https://purearray.example.com is pur-
earray.example.com, or *.example.com for a wildcard certificate. The common name
can also be the management IP address of the array or the short name of the current
array. Common names cannot have more than 64 characters.
l Email: Enter the email address used to contact your organization.
l Days: Specify the number of valid days for the self-signed certificate being gen-
erated. If not specified, the self-signed certificate expires after 3650 days.
4 Click Create. Purity//FA restarts the GUI and signs you in using the self-signed certificate.

Constructing a Certificate Signing Request to Obtain a CA Certificate

Note: When you change the certificate attributes, Purity//FA replaces the existing cer-
tificate with the new certificate and its specified attributes.
1 Select Settings > System.
2 In the SSL Certificate panel, click the menu icon and select Construct Certificate Signing
Request. The Construct Certificate Signing Request pop-up window appears.
3 Complete or modify the following fields:

l Country: Enter the two-letter ISO code for the country where your organization is loc-
ated.
l State/Province: Enter the full name of the state or province where your organization
is located.
l Locality: Enter the full name of the city where your organization is located.
l Organization: Enter the full and exact legal name of your organization. The organ-
ization name should not be abbreviated and should include suffixes such as Inc,
Corp, or LLC.
l Organizational Unit: Enter the department within your organization that is managing
the certificate.
l Common Name: Enter the fully qualified domain name (FQDN) of the current array.
For example, the common name for https://purearray.example.com is pur-
earray.example.com, or *.example.com for a wildcard certificate. The common name
can also be the management IP address of the array or the short name of the current
array. Common names cannot have more than 64 characters.
l Email: Enter the email address used to contact your organization.
4 Click Create to construct the CSR. The CSR pop-up window appears, displaying the CSR as
a block of encrypted data.
5 Click Download to download the CSR, which you can send to a certificate authority (CA) for
signing.

Importing a CA Certificate
After you receive the signed certificate from the CA, you are ready to import it to replace the
existing certificate.
1 Verify that the signed certificate is PEM formatted (Base64 encoded), includes the "-----
BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines, and does
not exceed 3000 characters in length.
2 Select Settings > System.
3 In the SSL Certificate panel, click the menu icon and select Import Certificate. The Import
Certificate pop-up window appears.
4 Complete or modify the following fields:
l Intermediate Certificate: If you also received an intermediate certificate from the CA,
click Choose File and select the intermediate certificate.

l Key: If the CSR was not constructed on the current array or the private key has
changed since you constructed the CSR, click Choose File and select the private
key.
l Key Passphrase: If the private key is encrypted with a passphrase, enter the pass-
phrase.
l Certificate: Click Choose File and select the signed certificate you received from the
CA.
5 Click Import.

Viewing and Exporting Certificate Details


1 Select Settings > System.
2 In the SSL Certificate panel, click the menu icon and select Export Certificate or Export
Intermediate Certificate. The Export Certificate pop-up window appears.
3 Click Download to view and export the certificate and its details.

Maintenance Windows
The Maintenance Windows panel displays whether the array is undergoing maintenance. If the
array is being maintained, the Enabled field indicates True, the message "The system is cur-
rently undergoing maintenance" appears in the panel, and the name, time it was created, and
expiration time are listed in a table below that message.

Initiating a Maintenance Window


1 Click the edit icon in the upper right corner of the Maintenance Windows panel.
2 In the Schedule Maintenance window, enter the number of hours (1–24) and click the Sched-
ule button. Alternatively, you can configure the maintenance window for a period of seconds,
minutes, or days, but the maximum period of time is one day.
The array begins a maintenance window for the specified period of time. The message in
the Maintenance Window panel changes to "The system is currently undergoing main-
tenance." The maintenance window can be ended immediately by clicking the delete icon.

Eradication Delay Settings


The Eradication Delay panel displays the current eradication delay settings, which control the
length of the eradication pending period for a destroyed object. The default is eight days for
objects protected by SafeMode and one day for other objects.
l Disabled delay: The eradication delay for SafeMode-protected objects. Known as
the "disabled" eradication delay because manual eradication is disabled on those
objects. Default 8 days.
l Enabled delay: The eradication delay for objects for which eradication is enabled,
that is, objects not protected by SafeMode. Default 1 day.
Delays can be set to 1 to 30 days, whole numbers only. See "Eradication Delays" on page 35 for
more information about eradication delays and eradication pending periods for destroyed
objects.
Figure 10-8 shows the Eradication Configuration pane in the Settings > System tab.
Figure 10-8. Eradication Delay Settings

The Manual Eradication column has the following values:


l all-disabled: SafeMode is enabled for all objects on the array.
l all-enabled: SafeMode is not enabled.
l partially-enabled: At least one protection group on the array is protected by
SafeMode.
The Manual Eradication status cannot be edited.

Figure 10-9 shows the Edit Eradication Configuration dialog.

Figure 10-9. Edit Eradication Configuration

Changing an Eradication Delay Setting


Except for file systems, if the eradication pending period is increased, items already pending
eradication immediately inherit the new pending period, and if the eradication pending period is
decreased, items pending eradication keep their higher pending period. File systems are an
exception, as the eradication pending period for a destroyed file system is not affected by a later
increase or decrease of an enabled delay or disabled delay setting.
1 Click the edit icon in the upper right corner of the Eradication Configuration panel.
2 To change the eradication delay for objects not protected by SafeMode, enter the new num-
ber of days in the Enabled Delay field.
3 To change the eradication delay for objects protected by SafeMode, enter the new number of
days in the Disabled Delay field.
4 Click Save.
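The inheritance behavior described in this procedure's introduction can be illustrated with a short sketch. The following Python helper is hypothetical and not part of Purity//FA:
# Illustrative only: when the delay setting is increased, objects already
# pending eradication (other than file systems) inherit the longer period;
# when it is decreased, they keep their existing, higher period.
def effective_pending_days(current_days: int, new_setting_days: int, is_file_system: bool) -> int:
    if is_file_system:
        return current_days                  # file systems keep their original period
    return max(current_days, new_setting_days)

print(effective_pending_days(1, 8, is_file_system=False))   # 8  (setting increased)
print(effective_pending_days(8, 1, is_file_system=False))   # 8  (setting decreased, keeps higher)
print(effective_pending_days(1, 8, is_file_system=True))    # 1  (file system unaffected)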

Rapid Data Locking


The Rapid Data Locking panel displays the status of the Rapid Data Locking (RDL) feature as
enabled or disabled. The RDL feature is a FlashArray option that adds external security tokens
to enhance the data security of an array.
The RDL feature requires both hardware and software configuration and can only be enabled by
a CLI command. For more information about the Rapid Data Locking (RDL) feature, refer to the


FlashArray Enhanced Data Security Guide on the Knowledge site at https://support.purestorage.com.

Note: The Rapid Data Locking feature is not supported on Cloud Block Store.

SNMP
The Simple Network Management Protocol (SNMP) is used by SNMP agents and SNMP man-
agers to send and retrieve information. FlashArray supports SNMP versions v2c and v3.
The SNMP panel displays the SNMP agent and the list of SNMP managers running in hosts with
which the array communicates.
In the FlashArray, the built-in SNMP agent has local knowledge of the array. The agent collects
and organizes this array information and translates it via SNMP to or from the SNMP managers.
The agent, named localhost, cannot be deleted or renamed. The managers are defined by
creating SNMP manager objects on the array. The managers communicate with the agent via
the standard UDP port 161, and they receive notifications on UDP port 162.
In the FlashArray, the localhost SNMP agent has two functions, namely, responding to GET-
type SNMP requests and transmitting alert messages.
The agent responds to GET-type SNMP requests made by the SNMP managers, returning val-
ues for an information block, such as purePerformance, or individual variables within the block,
depending on the type of request issued. The variables supported are:
l pureArrayReadBandwidth: Current array-to-host data transfer rate
l pureArrayWriteBandwidth: Current host-to-array data transfer rate
l pureArrayReadIOPS: Current read request execution rate
l pureArrayWriteIOPS: Current write request execution rate
l pureArrayReadLatency: Current average read request latency
l pureArrayWriteLatency: Current average write request latency

The FlashArray Management Information Base (MIB) describes the purePerformance variables
and can be downloaded from the array to your local machine.
SNMP managers are added to the array through the creation of SNMP manager objects. When
creating an SNMP manager object, enter the Host, which represents the DNS hostname or IP
address of the computer that hosts the SNMP manager. Also specify the SNMP version from the
Version drop-down list. Valid versions are v2c and v3.


The SNMP agent generates and transmits messages to the SNMP manager as traps or inform
requests (informs), depending on the notification type that is configured on the manager. An
SNMP trap is an unacknowledged SNMP message, meaning the SNMP manager does not
acknowledge receipt of the message. An SNMP inform is an acknowledged trap.
If the SNMP manager notification type is set to trap, the agent sends the SNMP message (trap)
without expecting a response. If the SNMP manager is set to inform, the agent sends the
SNMP message (inform) and waits for a reply from the manager confirming message retrieval. If
the agent does not receive a response within a certain timeframe, it will retry until the inform has
passed through successfully. If the notification type is not set, the manager defaults to trap.
SNMPv2 uses a type of password called a community string to authenticate the messages that
are passed between the agent and manager. The community string is sent in clear text, which is
considered an unsecured form of communication. SNMPv3, on the other hand, supports secure
communication between the agent and manager through the use of authentication and privacy
encryption methods. As such, SNMPv2c and SNMPv3 have different security attributes.
To configure the SNMPv2c agent and managers, set the Community field to the community
string under which the agent is to communicate with the managers. The agent and manager
must belong to the same community; otherwise, the agent will not accept requests from the man-
ager. When setting the community, Purity prompts twice for the community string. To remove the
agent or manager from the community, leave the field blank.
To configure the SNMPv3 agent and managers, in the User field, specify the user ID that Purity
uses to communicate with the SNMP manager. Also set the authentication and privacy encryp-
tion security levels for the agent and managers. SNMPv3 supports the following security levels:
l noAuthNoPriv. Authentication and privacy encryption are not set. Similar to SNMPv2c,
communication between the SNMP agent and managers is not authenticated and not
encrypted. noAuthNoPriv security requires no configuration.
l authNoPriv. Authentication is set, but privacy encryption is not set. Communication
between the SNMP agent and managers is authenticated but not encrypted. Pass-
word authentication is based on MD5 or SHA hash authentication.
To configure authNoPriv security, in the Auth Protocol field, set the authentication pro-
tocol to MD5 or SHA, and in the Auth Passphrase field, enter an authentication pass-
phrase.
l authPriv. Communication between the SNMP agent and managers is authenticated
and encrypted. Password authentication is based on MD5 or SHA hash authentication.
Traffic between the FlashArray and SNMP manager is encrypted using encryption
protocol AES or DES.


To configure authPriv security, in the Auth Protocol field, set the authentication protocol to MD5 or SHA, and in the Auth Passphrase field, enter an authentication passphrase. Also, in the Privacy Protocol field, set the privacy protocol to AES or DES, and in the Privacy Passphrase field, enter a privacy passphrase (a client-side example follows the note below).

Note: Privacy cannot be configured without authentication.
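From the SNMP manager side, the community string (SNMPv2c) or the user, authentication, and privacy settings (SNMPv3) configured above are the credentials an SNMP client presents when querying the agent. The following is a minimal sketch using the third-party pysnmp library; the hostname, community string, user, and passphrases are placeholders, and it assumes the PURESTORAGE-MIB downloaded from the array has been made available to the client.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UsmUserData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity, getCmd,
        usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
    )

    TARGET = UdpTransportTarget(("array.example.com", 161))  # placeholder array address

    # SNMPv2c: the only credential is the community string, sent in clear text.
    v2c_credentials = CommunityData("my-community", mpModel=1)

    # SNMPv3 authPriv: SHA authentication plus AES privacy encryption.
    v3_credentials = UsmUserData(
        "purity-snmp-user",
        authKey="auth-passphrase",
        privKey="privacy-passphrase",
        authProtocol=usmHMACSHAAuthProtocol,
        privProtocol=usmAesCfb128Protocol,
    )

    for credentials in (v2c_credentials, v3_credentials):
        errInd, errStatus, errIndex, varBinds = next(getCmd(
            SnmpEngine(), credentials, TARGET, ContextData(),
            ObjectType(ObjectIdentity("PURESTORAGE-MIB", "pureArrayReadBandwidth", 0)),
        ))
        if errInd or errStatus:
            print("SNMP error:", errInd or errStatus.prettyPrint())
        else:
            for varBind in varBinds:
                print(" = ".join(x.prettyPrint() for x in varBind))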


Once an SNMP manager object is created on the array, the FlashArray immediately starts trans-
mitting SNMP messages and alerts to the manager.

Downloading the Management Information Base (MIB) File


To download the management information base (MIB) file:
1 Select Settings > System.
2 In the SNMP panel, click the menu icon and select Download MIB to download the MIB file
to your local machine. The default filename is PURESTORAGE-MIB.

Specifying the SNMP Community String (Applies to SNMPv2c Only)


Specifying the community string adds the array to the SNMP community. You must specify the
SNMP community string if the SNMP agent is configured to use the SNMPv2c protocol.
To specify the SNMP community string:
1 Select Settings > System.
2 Click the Edit (pencil) icon next to the built-in localhost SNMP agent. The Edit SNMP
Agent dialog box appears.
3 In the Community field, enter the manager community ID under which Purity is to com-
municate with the managers.
4 Click Save.

Creating an SNMP Manager Object


Once an SNMP manager object is created on the array, the FlashArray immediately starts trans-
mitting SNMP messages and alerts to the manager.
To create an SNMP manager object:
1 Select Settings > System.
2 In the SNMP panel, click the menu icon and select Add SNMP Manager. The Add SNMP
Manager dialog box appears.
3 Complete the following fields:


l Name: Name of the SNMP manager.


l Host: DNS hostname or IP address of a computer that hosts an SNMP manager to
which Purity is to send messages when it generates alerts. If specifying an IP
address, enter the IPv4 or IPv6 address.
For IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a num-
ber ranging from 0 to 255 representing a group of 8 bits. If a port number is also spe-
cified, append it to the end of the address in the format ddd.ddd.ddd.ddd:PORT,
where PORT represents the port number.
For IPv6, specify the IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where xxxx is a hexadecimal
number representing a group of 16 bits. Consecutive fields of zeros can be shortened
by replacing the zeros with a double colon (::). If a port number is also specified,
enclose the entire address in square brackets ([]) and append the port number to the
end of the address. For example,
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]:PORT, where PORT rep-
resents the port number.
l SNMP Version: Version of the SNMP protocol to be used by Purity in com-
munications with the specified managers. Valid values are v2c (default) and v3.
l Community: SNMPv2c only. SNMP manager community ID under which Purity is to
communicate with the specified managers.
l User: User ID recognized by the specified SNMP managers that Purity is to use in
communications with them.
l Auth Protocol: SNMPv3 only. Hash algorithm used to validate the authentication
passphrase. Valid values are MD5, SHA, or None.
l Auth Passphrase: SNMPv3 only. Passphrase used by Purity to authenticate the
array with the specified managers. Required if the Auth Protocol option is set to MD5
or SHA.
l Privacy Protocol: SNMPv3 only. Encryption protocol for SNMP messages. Valid val-
ues are AES, DES, or None.
l Privacy Passphrase: SNMPv3 only. Passphrase used to encrypt SNMP messages.
The passphrase must be between 8 and 63 non-spaced ASCII characters.
l Notification: Notification type that determines whether the recipient (remote host)
acknowledges receipt of SNMP messages. Valid options are trap and inform. An
SNMP trap is an unacknowledged (asynchronous) SNMP message, meaning the


recipient does not acknowledge receipt of the message. An SNMP inform request is
an acknowledged trap. If not specified, the notification type defaults to trap.
4 Click Save.

Configuring the SNMP Manager Object


To configure the SNMP manager object:
1 Select Settings > System.
2 In the SNMP panel, click the menu icon for the SNMP manager object and select Edit. The
Edit SNMP Manager dialog box appears.
3 Modify the following fields:
l Name: Name of the SNMP manager.
l Host: DNS hostname or IP address of a computer that hosts an SNMP manager to
which Purity is to send messages when it generates alerts.
l SNMP Version: Version of the SNMP protocol to be used by Purity in com-
munications with the specified managers. Valid values are v2c (default) and v3.
l Community: SNMPv2c only. SNMP manager community ID under which Purity is to
communicate with the specified managers.
l User: User ID recognized by the specified SNMP managers that Purity is to use in
communications with them.
l Auth Protocol: SNMPv3 only. Hash algorithm used to validate the authentication
passphrase. Valid values are MD5, SHA, or None.
l Auth Passphrase: SNMPv3 only. Passphrase used by Purity to authenticate the
array with the specified managers. Required if the Auth Protocol option is set to MD5
or SHA.
l Privacy Protocol: SNMPv3 only. Encryption protocol for SNMP messages. Valid val-
ues are AES, DES, or None.
l Privacy Passphrase: SNMPv3 only. Passphrase used to encrypt SNMP messages.
The passphrase must be between 8 and 63 non-spaced ASCII characters.
l Notification: Notification type that determines whether the recipient (remote host)
acknowledges receipt of SNMP messages. Valid options are trap and inform. An
SNMP trap is an unacknowledged (asynchronous) SNMP message, meaning the
recipient does not acknowledge receipt of the message. An SNMP inform request is
an acknowledged trap. If not specified, the notification type defaults to trap.
4 Click Save.


Deleting an SNMP Manager Object


Deleting an SNMP manager object stops communication with the specified SNMP manager and
deletes the SNMP manager object from Purity.
To delete an SNMP manager:
1 Select Settings > System.
2 In the SNMP panel, click the menu icon for the SNMP manager object and select Delete.
The Delete SNMP Manager dialog box appears.
3 Click Delete.

Sending a Test SNMP Message to a Manager


To send a test SNMP message to a manager:
1 Select Settings > System.
2 In the SNMP panel, click the menu icon for the SNMP manager object and select Send Test
Message.


Network
The Network page displays the network connection attributes of the array. See Figure 10-10.
Figure 10-10. Settings – Network Page

The Network page panels manage the Fibre Channel (physical) and Ethernet (physical) interfaces, as well as the subnets and the virtual, bond, VLAN, and app interfaces, used to connect the array to a network.


Fibre Channel
The Fibre Channel panel manages the Fibre Channel interfaces used to connect the array to a
network. The panel displays the Fibre Channel interfaces on the array along with the following
network connection attributes: interface status (enabled or disabled), World Wide Name (WWN), speed, and network service (scsi-fc, replication, or nvme-fc) that is attached to the interface.
A value of True in the Enabled column indicates that an interface is enabled.

Ethernet
The Ethernet panel manages the Ethernet interfaces used to connect the array to a network.
The panel displays the Ethernet interfaces on the array along with the following network con-
nection attributes: interface status (enabled or disabled), type of connection (physical, bond,
LACP bond, or virtual interface), subnet, IP address, netmask, gateway, maximum transmission
units (MTU), MAC address, speed, network service (file, iscsi, management, nvme-roce, or
nvme-tcp) that is attached to the interface, and subinterfaces.
A value of True in the Enabled column indicates that an interface is enabled. If an interface
belongs to a subnet, the subnet name appears in the Subnet column, and all of its interfaces are
grouped with the subnet. A dash (-) in the Subnet column means the interface does not belong
to a subnet.

Subnets
Note: Subnets can only be configured on Ethernet ports.
Interfaces with common attributes can be organized into subnetworks, or subnets, to enhance
the efficiency of data (file, iSCSI, NVMe-RoCE, or NVMe-TCP), management, and replication
traffic.
In Purity//FA, subnets can include physical, virtual, bond, and VLAN interfaces. Physical, virtual,
and bond interfaces can belong to the same subnet. VLAN interfaces can only belong to subnets
with other VLAN interfaces.
If the subnet is assigned a valid IP address, once it is created, all of its enabled interfaces are
immediately available for connection. The subnet inherits the services from all of its interfaces.


Likewise, the interfaces contained in the subnet inherit the gateway, MTU, and VLAN ID (if
applicable) attributes from the subnet.
Physical, virtual, and bond interfaces in a subnet share common address and MTU attributes.
The subnet can contain a mix of physical, virtual, and bond interfaces, and the interface services
can be of any type, such as file, iSCSI, management, NVMe-RoCE, NVMe-TCP, or replication
services.
Adding physical, virtual, and bond interfaces to a subnet involves the following steps:
1 Create a subnet.
2 Add the physical, virtual, and bond interfaces to the subnet.
A VLAN interface is a dedicated virtual network interface that is designed to be used with an
organization’s virtual local area network (VLAN). Through VLAN interfaces, Purity//FA employs
VLAN tags to ensure the data passing between the array and VLANs is securely isolated and
routed properly.

VLAN Tagging
VLAN tagging allows customers to isolate traffic through multiple virtual local area networks
(VLANs), ensuring data routes to and from the appropriate networks. The array performs the
work of tagging and untagging the data that passes between the VLAN and array.
VLAN tagging is supported for the following service types: file, iSCSI, NVMe-RoCE, and NVMe-
TCP. Before creating a VLAN interface, verify that one or more of these are configured on the
physical interface.
Creating and adding VLAN interfaces to a subnet involves the following steps:
1 Create a subnet, assigning a VLAN ID to the subnet.
2 Add one VLAN interface to the subnet for each corresponding physical network interface to
be associated with the VLAN. All of the VLAN interfaces within the subnet must be in the
same VLAN.
In Purity//FA, VLAN interfaces have the naming structure CTx.ETHy.z, where x denotes the con-
troller (0 or 1), y denotes the Ethernet interface number, and z denotes the VLAN ID number. For example,
ct0.eth1.500.
When VLAN tagging is used for file, VLAN IDs must be mirrored for two controllers. For
example, if a subnet with VLAN ID 50 is assigned to ct0.eth5, the same subnet must be
assigned to ct1.eth5.


Networking – Creating a Subnet with VLAN Interfaces


In the following example, subnet 10.14.224.0/24 is being created. The subnet will be named
sub001 and assigned VLAN ID 1001. See Figure 10-11.
Figure 10-11. Networking – Creating a Subnet with VLAN Interfaces

The new subnet details appear in the Subnets panel. See Figure 10-12.
Click Add interface to add interfaces to the subnet.
Figure 10-12. Network – Subnets Panel

LACP
Link Aggregation Control Protocol (LACP) is an IEEE standard that allows individual Ethernet
links to be aggregated into a single logical Ethernet link. Depending on your scenario, it can be
used to increase bandwidth utilization, increase availability, or simplify network configurations.


In order for LACP to work with the FlashArray, the network switch must be configured for LACP
as well.
LACP (IEEE 802.3ad) is supported on the following FlashArray Ethernet ports:
l iSCSI
l File VIFs
l NVMe-TCP
l Replication (ActiveCluster only)
Prior to configuring LACP on the FlashArray, LACP must be configured on the network switch
according to the network switch vendor’s best practices. LACP can only be configured between
Ethernet ports on the same controller. LACP is not supported on ports across controllers. Subin-
terfaces added to an LACP interface must have the same speed, MTU, and service.

Changing the Attributes of a Network Interface


You can change specified attributes of network interfaces. You can change the IP address, net-
mask, gateway, and MTU attributes of Ethernet interfaces. If the interface belongs to a subnet,
you can change the name, prefix address, VLAN, gateway, and MTU attributes. IPv4 and IPv6
addresses follow the addressing architecture set by the Internet Engineering Task Force.

Note: Fibre Channel interface attributes cannot be changed.


To change the attributes of an Ethernet physical, virtual, or bond interface:
1 Select Settings > Network.
2 On the Ethernet panel, click the Edit Interface icon next to the interface name. The Edit Net-
work Interface dialog box appears.
3 View or complete the following fields:
l Name: Name of the network interface. The interface name cannot be changed.
l Enabled: Indicates whether the network interface is enabled (blue) or disabled (gray).

l Type: Indicates the interface type which is physical for Ethernet interfaces. The
type cannot be changed.
l Address: IP address to be associated with the specified Ethernet interface.


l For IPv4, enter the address in CIDR notation ddd.ddd.ddd.ddd/dd. For example, 10.20.20.210/24. Alternatively, specify the address ddd.ddd.ddd.ddd, and then specify the netmask in the Netmask field. The two forms are equivalent (see the sketch after these steps).
l For IPv6, enter the address and prefix length in the form xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/prefix. For example, 2620:125:9014:3224:14:227:196:0/64. Consecutive fields of zeros can be shortened by replacing the zeros with a double colon (::). Alternatively, specify the address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, and then specify the prefix length in the Netmask field.
l Netmask: Range of IP addresses that make up a group of IP addresses on the same
network.
l For IPv4, if the address entered is not in CIDR notation, enter the subnet
mask in the form ddd.ddd.ddd.ddd. For example, 255.255.255.0.
l For IPv6, if the address entered did not include a prefix length, specify the
prefix length. For example, 64.
l Gateway: IP address of the gateway through which the specified interface is to com-
municate with the network.
l For IPv4, specify the gateway IP address in the form ddd.ddd.ddd.ddd.
l For IPv6, specify the gateway IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx. Consecutive fields
of zeros can be shortened by replacing the zeros with a double colon (::).
l MTU: Maximum transmission unit (MTU) for the interface in bytes. If not specified, the
MTU value defaults to 1500. If you are changing the MTU of a physical interface that
is associated with a VLAN, verify the MTU of the physical interface is greater than or
equal to (>=) the MTU of the VLAN interface. Note that the VLAN interface inherits the
MTU value from its subnet.
l MAC: Unique media access control (MAC) address assigned to the network inter-
face. This field cannot be modified.
l Speed: The speed of the interface in Mbps.
l Service(s): Services attached to the interface. For example, ds, file, iscsi, man-
agement, nvme-roce, nvme-tcp, or replication. This field cannot be modified.
4 Click Save. Purity//FA restarts the GUI and signs you in using the self-signed certificate.
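As noted in the Address and Netmask fields, the CIDR form and the separate address-plus-netmask form describe the same configuration. A quick illustration using Python's standard ipaddress module, with the example addresses from the field descriptions above:

    import ipaddress

    # IPv4: CIDR notation carries the netmask implicitly.
    v4 = ipaddress.ip_interface("10.20.20.210/24")
    print(v4.ip)                 # 10.20.20.210
    print(v4.network.netmask)    # 255.255.255.0 (equivalent Netmask field value)

    # IPv6: the prefix length plays the same role as the netmask.
    v6 = ipaddress.ip_interface("2620:125:9014:3224:14:227:196:0/64")
    print(v6.ip)                 # 2620:125:9014:3224:14:227:196:0
    print(v6.network.prefixlen)  # 64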


Enabling or Disabling a Network Interface


If a bond interface is disabled, all of its child interfaces are also disabled.
To enable or disable a physical, virtual, or bond interface:
1 Select Settings > Network.
2 In the Ethernet panel, click the Edit Interface icon for the interface you want to enable or disable.
The Edit Network Interface dialog box appears.
3 Click the Enabled toggle button to enable (blue) or disable (gray) the network interface.
4 Click Save.

Creating a Subnet
Creating the subnet involves setting the subnet attributes, and then adding the interfaces to the
subnet.
A subnet can contain physical, virtual, and bond interfaces (for non-VLAN tagging purposes) or
VLAN interfaces (for VLAN tagging purposes).
To create a subnet:
1 Select Settings > Network.
2 In the Subnets panel, click the Create Subnet icon in the upper-right corner of the panel. The
Create Subnet dialog box appears.
3 Complete the following fields:
l Name: Name of the subnet.
l Enabled: Indicates whether the subnet is enabled (blue) or disabled (gray).
l Prefix: IP address of the subnet prefix and prefix length (defaults to 24).
l For IPv4, specify the prefix in the form ddd.ddd.ddd.ddd/dd.
l For IPv6, specify the prefix in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xxx. Consecutive
fields of zeros can be shortened by replacing the zeros with a double colon
(::).
l VLAN: For VLAN tagging, specify the VLAN ID, between 1 and 4094, to which the
subnet is associated. If you specify a VLAN ID, Purity//FA filters the list of available physical interfaces to only those set to iSCSI services. The physical interface name with the appended VLAN ID number becomes the VLAN interface name.
If the interface is not part of a VLAN, leave this field blank.
l Gateway: IP address of the gateway through which the specified interface is to com-
municate with the network.
l For IPv4, specify the gateway IP address in the form ddd.ddd.ddd.ddd.
l For IPv6, specify the gateway IP address in the form
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx. Consecutive fields
of zeros can be shortened by replacing the zeros with a double colon (::).
l MTU: Maximum transmission unit (MTU) of the subnet. If not specified, the MTU
value defaults to 1500. Interfaces inherit their MTU values from the subnet. Note that
the MTU of a VLAN interface cannot exceed the MTU of the corresponding physical
interface.
4 Click Create. After the subnet has been created, add interfaces to it.

Enabling or Disabling a Subnet


If a subnet is disabled, all of its interfaces, including ones that are individually enabled, are also
disabled. If a subnet is enabled, only the enabled interfaces in the subnet are reachable; its dis-
abled interfaces remain unreachable.
To enable or disable a subnet:
1 Select Settings > Network.
2 In the Subnets panel, click the Edit Subnet icon for the subnet you want to enable or disable.
The Edit Subnet dialog box appears.
3 Click the Enabled toggle button to enable (blue) or disable (gray) the subnet.
4 Click Save.

Deleting a Subnet
Deleting a subnet automatically removes all of the interfaces for the subnet and deletes the sub-
net. Any current connections through the subnet are disconnected.


To delete a subnet:
1 Select Settings > Network.
2 In the Subnets & Interfaces panel, click the Delete Subnet icon for the subnet you want to
delete.
The Delete Subnet dialog box appears notifying you that all interfaces in the subnet will be
removed and the subnet will be deleted. When Purity//FA removes the interfaces, any cur-
rent connections through the subnet are disconnected.
3 Click Delete. The interfaces are removed from the subnet and the subnet is deleted.

DNS Settings
The DNS Settings panel manages the DNS attributes for an array's administrative and, option-
ally, file services network. DHCP mode is not supported. DNS server settings can be added,
edited, or deleted.

Configuring Domain Name System (DNS) Server IP Addresses


To add DNS settings to the configuration:
1 Select Settings > Network.
2 In the DNS Settings panel, click the menu icon in the upper-right corner of the panel and
select Create.... The Create DNS Setting dialog box appears.
3 Complete the following fields:
l Name: A name for the DNS configuration.
l Domain: The domain suffix to be appended by the array when doing DNS look-ups.
l Servers: Up to three DNS server IP addresses, in comma-separated format, for Purity//FA to use to resolve hostnames to IP addresses.
l Services: Specify the types of services that will leverage this DNS configuration.
l Source: Only for non-management services. Specifies the name of the virtual file net-
work interface used to communicate with the DNS servers. If not specified, the default
network interface resolution will be used.
4 Click Save.


Access
The buttons at the top of the page allow you to switch between the Array page and the File Sys-
tem page. The Array page manages the Purity//FA user accounts and their attributes. This page
also displays user details, such as audit trails and login activity. The File System page manages
the Purity//FA file system local users and groups. See Figure 10-13.
Figure 10-13. Settings – Access Page


Array Accounts
The Array page displays a list of Purity//FA user accounts and their attributes.

Users Panel
In the Array section, the Users panel displays the following types of users:
l pureuser administrative account.
l Users that have been created on the array.
l LDAP users with a public key and/or API token. LDAP users that do not have a public
key or API token do not appear in the list.
The FlashArray array is delivered with a single administrative account named pureuser. The
account is password protected and may alternatively be accessed using a public-private key
pair. The pureuser account is set to the array administrator role, which has array-wide per-
missions. The pureuser account cannot be renamed or deleted.
Users can be added to the array either locally by creating and configuring a local user directly on
the array, or through Lightweight Directory Access Protocol (LDAP) by integrating the array with
a directory service, such as Active Directory or OpenLDAP. For more information about integ-
rating the array with a directory service, refer to the Settings > Access > Directory Service sec-
tion.
Locally, on the array, users can only be created by array administrators. The name of the local
user must be unique. The local user name cannot be the same name as an LDAP user. If an
LDAP user appears with the same name as a local user, the local user always has priority. The
Type column of the Users panel identifies the way in which a user is added to the array as Local
or LDAP.
Role-based access control (RBAC) restricts system access and capabilities to each user based
on their assigned role in the array.
All users in the array, whether created locally or added to the array through LDAP integration,
are assigned one of the following roles in the array:
l Read-Only. Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same oper-
ations as Read Only users plus enable and disable remote assistance sessions. Ops
Admin users cannot alter the state of the array.


l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same
operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
For local users, the role is set during user creation. For LDAP users, the role is set by configuring
groups in the directory that correspond to the FlashArray user roles.
Each local user account on the array is password protected. The password is assigned during
user creation and can be modified by array administrators. All local users can manage their own
passwords, but only array administrators can manage the passwords of other users. Changing a
local user's password requires knowledge of the current password. If the password of a local
user is unknown, delete the account and recreate it with the desired password. Note that delet-
ing a local user's account means deleting any public key associated with the user. If the pass-
word of the pureuser account is unknown, contact Pure Storage Technical Services to reset
the account to the default pureuser password. Passwords of LDAP users are managed in the
directory service.

Note: For arrays with optional multi-factor authentication enabled, passwords are not
used. Instead, a third-party application, such as Microsoft® Active Directory Federation
Services (AD FS) authentication identity management system or RSA SecurID® soft-
ware, manages array authentication. For AD FS, see "Multi-factor Authentication with
SAML2 SSO" on page 316. Multi-factor authentication with RSA SecurID® software is
managed only with the CLI puremultifactor command. The Purity//FA GUI does not
configure or show the status of RSA SecurID® software multi-factor authentication on an
array.
If a public key has been created for the user, it appears masked in the Public Key column. All
users can manage their own public keys, but only array administrators can manage the public
keys associated with other users.
If an API token has been created for the user, it appears masked in the API Token column. API
tokens are used to securely create REST API sessions. After creating an API token, users can
create REST API sessions and start sending requests. For more information about the Pure Stor-
age REST API, refer to the REST API Reference Guide on the Knowledge site at https://support.purestorage.com.


An API token is unique to the Purity//FA user for whom it was created. Once created, an API
token is valid until it is deleted or recreated.
API token management does not affect Purity//FA user names and passwords. For example,
deleting an API token does not invalidate the Purity//FA user name or password that was used to
create the token. Likewise, changing the Purity//FA password does not affect the API token.
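As an illustration of how an API token is used, the sketch below exchanges a token for a REST session using the generic requests library. The array address and token are placeholders, and the endpoint shown follows the REST API 2.x login pattern; confirm the exact path and version in the REST API Reference Guide for your Purity//FA release.

    import requests

    ARRAY = "array.example.com"   # placeholder management address
    API_TOKEN = "<api-token>"     # token created for the user as described above

    # Exchange the long-lived API token for a short-lived session token.
    login = requests.post(
        f"https://{ARRAY}/api/2.4/login",
        headers={"api-token": API_TOKEN},
        verify=False,  # point to a CA bundle instead if the array certificate is CA-signed
    )
    login.raise_for_status()
    session_token = login.headers["x-auth-token"]

    # Use the session token on subsequent REST requests.
    resp = requests.get(
        f"https://{ARRAY}/api/2.4/arrays",
        headers={"x-auth-token": session_token},
        verify=False,
    )
    print(resp.json())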
Single sign-on (SSO) gives LDAP users the ability to navigate seamlessly from Pure1 Manage
to the current array through a single login. If single sign-on is not enabled on an array, users
must manually log in with their credentials each time they navigate from Pure1 Manage to the
array. Enabling and disabling single sign-on takes effect immediately. By default, single sign-on
is not enabled.
Enabling single sign-on is a two-step process: first, configure single sign-on and LDAP integ-
ration through Pure1 Manage, and second, enable single sign-on on the array through Pur-
ity//FA. For more information about SSO and LDAP integration with Pure1 Manage, refer to the
Pure1 Manage - SSO Integration article on the Knowledge site at https://support.purestorage.com.
Creating a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon in the upper-right corner of the panel and select Create
User… The Create User pop-up window appears.
3 In the User field, type the name of the new user. The name must be between 1 and 32 char-
acters (alphanumeric and '-') in length and begin and end with a letter or number. The name
must include at least one letter or '-'. All letters must be in lowercase.
4 In the Role field, select the role for the new user. Options include:
l Read-Only: Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same oper-
ations as Read Only users plus enable and disable remote assistance sessions. Ops
Admin users cannot alter the state of the array.
l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same


operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
5 In the Password field, type a password for the new user. The password must be between 1
and 100 characters in length, and can include any character that can be entered from a US
keyboard.
6 In the Confirm Password field, type the password again.
7 Click Create.
Changing the Login Password of a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Current Password field, type the user's current password.
4 In the New Password field, type the user's new password. The password must be between 1
and 100 characters in length, and can include any character that can be entered from a US
keyboard.
5 In the Confirm New Password field, type the new password again.
6 Click Save. The new password is required the next time the user logs in to Purity//FA.
Changing the Role of a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Role field, select the role.
4 Click Save.
Deleting a User
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Delete
User…. The Delete User pop-up window appears.
3 Click Delete.
Adding a Public Key
1 Select Settings > Access.


2 In the Users panel, click the Edit icon in the panel heading and select Update Public Key….
The Update Public Key pop-up window appears.
3 In the User field, type the name of the local or LDAP user for which you want to create the
public key.
4 If the user does not have an existing public key, enter the public key in the Public Key field. If
the user already has a public key, select Overwrite and enter the public key.
5 Click Save.
Updating a Public Key
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Public Key field, select Overwrite and enter the public key.
4 Click Save.
Deleting a Public Key
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Edit User….
The Edit User pop-up window appears.
3 In the Public Key field, select Remove.
4 Click Save.
5 Click Remove.
Creating an API Token

1 Select Settings > Access.


2 In the Users panel, click the Edit icon in the panel heading and select Create API Token….
The Create API Token pop-up window appears.
3 In the User field, type the name of the local or LDAP user for which you want to create the
API token.
4 To set an expiry date for the API token, in the Expires In field, specify the validity period of the
API token. When an API token expires and is therefore no longer valid, the user cannot
access the REST API until the token is recreated. To create the API token without an expiry
date, leave the Expires In field blank.
5 Click Create.


Recreating an API Token


1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Recreate
API Token…. The Recreate API Token pop-up window appears.
3 Click Recreate.
Removing an API Token
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Remove API
Token…. The Remove API Token pop-up window appears.
3 Click Remove. Once the API token has been deleted, the user can no longer access the
REST API.
Displaying the Details of an API Token
1 Select Settings > Access.
2 In the Users panel, click the Edit icon for the user you want to modify and select Show API
Token…. The details for the API token, including token string, token creation date, and token
expiry date, if any, appear.

API Clients
An API client is an identity created on the array. The API client's user name and identity tokens are used as claims in the JSON Web Token (JWT) that you create to authenticate to the REST API (a sketch follows the steps below).
To create an API client,
1 Select Settings > Access.
2 In the API Clients panel, click the Create API Client icon (+).
3 In the Create API Client window, enter the API client name, OAuth issuer, the maximum role
(Array Admin, Storage Admin, Ops Admin, or Read-Only), the time to live (one day is the
default), and RSA public key in PEM format.
4 Click Create.
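For reference, the JSON Web Token mentioned above is a standard RS256-signed JWT. The sketch below shows the general construction using the third-party PyJWT package; the claim names and values are placeholders, so take the exact claims required for API client authentication from the REST API Reference Guide, and sign with the RSA private key whose public key was registered on the API client.

    import time
    import jwt  # PyJWT (requires the cryptography package for RS256)

    # Private key matching the RSA public key registered with the API client.
    with open("api_client_private_key.pem", "rb") as f:
        private_key = f.read()

    now = int(time.time())
    claims = {
        "iss": "my-oauth-issuer",  # placeholder: the OAuth issuer configured on the API client
        "sub": "pureuser",         # placeholder: the user to authenticate as
        "iat": now,
        "exp": now + 3600,
    }
    signed_jwt = jwt.encode(claims, private_key, algorithm="RS256")
    print(signed_jwt)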

Active Directory Accounts


Active Directory (AD) members for directory services can be created and joined to an array. AD
members can be modified or deleted from the list.


To create an Active Directory member and join an Active Directory account,


1 Select Settings > Access.
2 In the Active Directory Accounts panel, click the Create Active Directory Account icon (+).
3 In the Create Active Directory Account window, enter an account name, the DNS domain
name, the computer name, a user name, and password.
l Join OU: Specifies the organizational unit for the new account, in distinguished
name format. For example, OU=Dev,OU=Sweden,DC=purestorage,DC=com. The
DC=... components of the distinguished name can be optionally omitted. If the option
is omitted, the organizational unit defaults to CN=Computers.
l TLS mode: TLS mode for communication with domain controllers. If not specified,
this option defaults to “required”.
l Required: Forces TLS communication with the domain controller.
l Optional: Allows the use of non-TLS communication. However, TLS is still
preferred.
l Join Existing Account: If enabled, the domain is searched for a pre-existing com-
puter account to join and a new account will not be created within the domain. The
user specified when joining a preexisting account must have “Read all properties"
and "Reset password" permissions for the computer account. The “Join OU” option
cannot be used when joining a pre-existing computer account.
4 Click Create.

Directory Service
The Directory Service panel manages the integration of FlashArray arrays with an existing dir-
ectory service.
The Purity//FA release comes with a single local administrative account named pureuser with
array-wide (array_admin) permissions. The account is password protected and may altern-
atively be accessed using a public-private key pair.
Additional users can be added to the array by creating and configuring local users directly on the
array. For more information about local users, refer to the Settings > Access > Users section.
Users can also be added to the array through Lightweight Directory Access Protocol (LDAP) by
integrating the array with an existing directory service. If a user is not found locally, the directory
servers are queried. OpenLDAP and Microsoft's Active Directory (AD) are two implementations
of LDAP that Purity//FA supports.


With LDAP integration, the array leverages the directory for authentication (validate user's pass-
word) and authorization (determine user's role in the array).
The Directory Service panel displays the settings for the directory service to be used for role-
based access control.
The Configuration section of the Directory Service panel displays the details for the base con-
figuration of the directory service, including its URLs, base DN, bind user name, and bind pass-
word. Configuring and then enabling the directory service allows users in the LDAP directory to
log in to the array. If Check Peer is enabled, server authenticity using the CA certificate is
enforced during the bind and query test. Note that you must set the CA certificate before you can
enable Check Peer.
The Roles section of the Directory Service panel displays the current role-to-group con-
figurations for the directory service. In order to log in to the array, a user must belong to a con-
figured group in the LDAP directory, and that group must be mapped to an RBAC role in the
array. The Group field represents the common name (CN) of the configured group that maps to
the role in the array. The group name excludes the "CN=" specifier. For example, pure-
readonly. The Group Base field represents the common organizational unit (OU) under which
to search for the group. The order of OUs gets smaller in scope from right to left. Multiple OUs
are listed in comma-separated format.
The Test button in the upper-right corner of the Directory Service panel, when clicked, runs a
series of tests to verify that the URIs can be resolved and that the array can bind and query the
tree using the bind user credentials. The test also verifies that the array can find all the con-
figured groups to ensure the common names and group base are correctly configured. The test
can be run at any time.
Users

An LDAP user is an individual in the LDAP directory.


LDAP users log in to the array via ssh by entering the following information, where <user_
name> represents the sAMAccountName for Active Directory or uid for OpenLDAP, and
<array_name> represents the name or the IPv4 or IPv6 address of the FlashArray array:
<user_name>@<array_name>

For IPv4, specify the IP address in the form ddd.ddd.ddd.ddd, where ddd is a number ran-
ging from 0 to 255 representing a group of 8 bits.
For IPv6, specify the IP address in the form
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx], where xxxx is a hexadecimal number


representing a group of 16 bits. Enclose the entire address in square brackets ([]). Consecutive
fields of zeros can be shortened by replacing the zeros with a double colon (::).
For directory service enabled accounts, user passwords to the array are managed through the
directory service, while public keys are configured through Purity//FA.
Accounts with user names that conflict with local accounts will not be authenticated against the
directory. These account names include, but are not limited to: pureuser, os76, root, daemon, sys, man, mail, news, proxy, backup, nobody, syslog, mysql, ntp, avahi, postfix, sshd, snmp.
If an LDAP user has the same name as a locally created user, the locally created user always
has priority.
Users with disabled accounts will not have access to the array.
Groups
A group in the LDAP directory consists of users who share a common purpose.
Each configured group in the directory has a unique distinguished name (DN) representing the
entire path of the object's location in the directory tree. The DN is comprised of the following
attribute-value pairs:
l DC - Domain component base of the DN. For example, DC=mycompany,DC=com.
l OU - Organizational unit base of the group. For example,
OU=PureGroups,OU=SAN,OU=IT.
l CN - Common name of the groups themselves. For example, CN=purereadonly.
For example,
CN=purereadonly,OU=PureGroups,OU=SAN,OU=IT,DC=mycompany,DC=com is the DN
for configured group purereadonly at group base OU=PureGroups,OU=SAN,OU=IT and
with base DN DC=mycompany,DC=com.
The DN can contain multiple DC and OU attributes.
OUs are nested, getting more specific in purpose with each nested OU.
For OpenLDAP, group configurations based on the non-posixAccount class require the group to list the full DN of each member in the member attribute (groupOfNames). Group configurations based on the posixAccount class require the group to list the uid of each member in the memberUid attribute.
When a user who is a member of a configured group logs in to the array, only the CLI actions
that the user has permission to execute will be visible. Similarly, in the GUI, actions the user
does not have permission to execute will be grayed out or disabled.


For Active Directory, two types of groups are supported: security groups and distribution groups.
Distribution groups are used only with email applications to distribute messages to collections of
users. Distribution groups are not security enabled. Security groups assign access to resources
on your network. All groups configured on the array must be security groups.
Role-Based Access Control
Role-based access control (RBAC) restricts the system access and capabilities of each user
based on their assigned role in the array.
All users in the array, whether created locally or added to the array through LDAP integration,
are assigned one of the following roles in the array:
l Read Only. Users with the Read-Only (readonly) role can perform operations that
convey the state of the array. Read Only users cannot alter the state of the array.
l Ops Admin. Users with the Ops Admin (ops_admin) role can perform the same oper-
ations as Read Only users plus enable and disable remote assistance sessions. Ops
Admin users cannot alter the state of the array.
l Storage Admin. Users with the Storage Admin (storage_admin) role can perform
the same operations as Read Only users plus storage related operations, such as
administering volumes, hosts, and host groups. Storage Admin users cannot perform
operations that deal with global and system configurations.
l Array Admin. Users with the Array Admin (array_admin) role can perform the same
operations as Storage Admin users plus array-wide changes dealing with global and
system configurations. In other words, Array Admin users can perform all operations.
For LDAP users, role-based access control is achieved by configuring the groups in the LDAP
directory to correspond to the different roles in the array. For example, a group named "pure-
readonly" in the directory might correspond to the readonly role in the array.
For security purposes, each user should be assigned to only one role in the array. If a user
belongs to multiple configured groups that map to different roles in the array, modify the LDAP
directory to ensure that the user belongs to only one group. If a user has multiple roles, one of
which includes the ops_admin role, the user will be locked out of the system and an alert will be
sent to all alert recipients. Modify the LDAP directory to ensure that the user has only one user
role assigned. If a user has multiple roles, none of which include the ops_admin role, the user
will have privileges corresponding to the least privileged group. For example, a user who has
both the readonly and array_admin roles will have read-only privileges.


Directory Service Configuration


Configuring the Pure Storage directory service requires a URI, a base DN, a bind user, a bind
password, and at least one group within the LDAP directory that corresponds to a role in the
array.
Before you start the configuration process, note the DN of each group within the directory server.
Each component of the DN will be used to configure Pure Storage directory service. If you plan
to enable Check Peer, also have the CA certificate available.
When you configure the array to integrate with a directory service, consider the following:
l If the directory service contains multiple groups, each group must have a common
name (CN).
l All uniform resource identifiers (URIs) must be in the same, single domain.
To configure the Pure Storage directory service:
1 Configure the CA certificate. This is only required if Check Peer is going to be enabled.
2 Configure the base directory service settings, including the URIs, base DN, bind user name,
and bind password. Optionally enable Check Peer.
3 Configure the directory service roles to map each role in the array to the appropriate LDAP
group in the directory tree.
4 Test the directory service settings.
5 Enable the directory service. This allows users in the LDAP directory to log in to the array.
Disable the directory service at any time to stop all users in the directory server from logging in to
the array.
Configuring the Directory Service

1 Select Settings > Access.


2 In the Directory Service panel, click the Configuration edit icon. The Edit Directory Service
Configuration dialog box appears.
3 In the Edit Directory Service Configuration dialog box, complete or modify the following
fields:
Enabled:
Click the toggle button to enable (blue) the directory service. Enable the directory ser-
vice after you have configured the directory service, configured the roles, and tested
the directory service configuration.
URI:


Enter the comma-separated list of up to 30 URIs of the directory servers. Each URI
must include the scheme ldap:// or ldaps:// (for LDAP over SSL), a hostname,
and a domain name or IP address. For example, ldap://ad.company.com con-
figures the directory service with the hostname "ad" in the domain "company.com"
while specifying the unencrypted LDAP protocol.
If specifying a domain name, it should be resolvable by the configured DNS servers.
If specifying an IP address, for IPv4, specify the IP address in the form ddd.d-
dd.ddd.ddd, where ddd is a number ranging from 0 to 255 representing a group of
8 bits.
For IPv6, specify the IP address in the form
[xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx], where xxxx is a hexa-
decimal number representing a group of 16 bits. Enclose the entire address in square
brackets ([]). Consecutive fields of zeros can be shortened by replacing the zeros
with a double colon (::).
If the scheme of the URIs is ldaps://, SSL is enabled. SSL is either enabled or dis-
abled globally, so the scheme of all supplied URIs must be the same. They must also
all have the same domain.
If base DN is not configured and a URI is provided, the base DN will automatically
default to the domain components of the URIs.
Optionally specify a port. Append the port number after the end of the entire address.
Default ports are 389 for ldap, and 636 for ldaps. Non-standard ports can be specified
in the URI if they are in use.
Base DN:
Enter the base distinguished name (DN) of the directory service. The Base DN is built
from the domain and must be in a valid DN syntax. For example, for
ldap://ad.storage.company.com, the Base DN would be “DC=storage,DC=company,DC=com” (a short sketch of this derivation follows these steps).
Bind User:
Enter the username for the account that is used to perform directory lookups.
For Active Directory, enter the user name—often referred to as sAMAccountName or
User Logon Name—for the account that is used to perform directory lookups. The user
name cannot contain the characters " [ ] : ; | = + * ? < > / \, and can-
not exceed 20 characters in length.
For OpenLDAP, enter the full DN of the user. For example, "CN=John,OU=Users,DC=example,DC=com".
The bind account must be configured to allow the array to read the directory. It is good
practice for this account to not be tied to any actual person and to have different pass-
word restrictions, such as "password never expires". The bind account should also not
be a privileged account, since only read access to the directory is required.
Bind Password:


Enter the password for the bind user account. The password appears in masked form.

Check Peer:
Optionally click the toggle button to enable (blue) Check Peer. If Check Peer is
enabled, Purity//FA validates the authenticity of the directory servers using the CA
Certificate. If you enable Check Peer, you must provide a CA Certificate.
4 Click Save.
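The default base DN described above is simply the domain portion of the URI rewritten as domain components. A minimal sketch of that derivation, using the domain from the Base DN example:

    def base_dn_from_domain(domain: str) -> str:
        # "storage.company.com" -> "DC=storage,DC=company,DC=com"
        return ",".join(f"DC={label}" for label in domain.split("."))

    print(base_dn_from_domain("storage.company.com"))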
Configuring the CA Certificate
1 Select Settings > Access.
2 In the Directory Service panel, click Edit next to CA Certificate. The Edit CA Certificate dialog
box appears.
3 In the Edit CA Certificate dialog box, enter the certificate of the issuing certificate authority.
Only one certificate can be configured at a time, so the same certificate authority should be
the issuer of all directory server certificates.
4 The certificate must be PEM formatted (Base64 encoded) and include the "-----BEGIN
CERTIFICATE-----" and "-----END CERTIFICATE-----" lines. The certificate can-
not exceed 3000 characters in total length.
5 Click Save.
Configuring the Directory Service Roles
1 Select Settings > Access.
2 In the Directory Service panel, click the Roles edit icon. The Edit Directory Service Roles dia-
log box appears.
3 In the Edit Directory Service Roles dialog box, complete or modify the following fields:
Group:
Enter the common name (CN) of the configured group that maps to the role in the
array. The group name should be just the common name of the group without the
"CN=" specifier. For example, purereadonly.
Group Base:
Enter the organizational unit (OU) path under which to search for the group. Specify "OU=" for each organizational unit. List multiple OUs in comma-separated format, ordered from the most specific OU on the left to the broadest OU on the right (see the example after this procedure).
4 Click Save.
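As an illustration of the Group and Group Base fields (all names are hypothetical), suppose the read-only group has the full distinguished name:
CN=purereadonly,OU=PureGroups,OU=IT,DC=storage,DC=company,DC=com
With a Base DN of DC=storage,DC=company,DC=com, the Group field would be purereadonly and the Group Base field would be OU=PureGroups,OU=IT.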


Testing the Directory Service Settings


1 Select Settings > Access.
2 In the Directory Service panel, click Test. The Test <Directory Service> Configuration pop-
up window appears, displaying the output of the test.
During the directory service test, Purity//FA tests the directory service configuration to
verify that the URIs can be resolved and that the directory service can successfully bind
and query the tree using the bind user credentials.
If the test passes, enable the directory service.
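The same checks can be approximated manually from a workstation with the standard ldapsearch utility, which binds with the bind user credentials and queries the tree. A minimal sketch (hostname, DNs, and account name are hypothetical):
ldapsearch -H ldaps://ad.storage.company.com:636 -D "CN=svc-array,OU=Service Accounts,DC=storage,DC=company,DC=com" -W -b "DC=storage,DC=company,DC=com" "(sAMAccountName=jsmith)" dn
If the bind succeeds and the query returns the expected entry, the URI, bind user, bind password, and base DN values are consistent.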

Multi-factor Authentication with SAML2 SSO


Overview
Purity supports single sign-on (SSO) integration with the Microsoft® Active Directory Federation
Services (AD FS), Okta, Azure Active Directory (Azure AD), and Duo Security via the SAML2 pro-
tocol. When SAML2 SSO is configured and enabled, all user logins to the Purity//FA man-
agement GUI are redirected for multi-factor authentication.
For example, the AD FS identity provider can optionally be configured to enable multi-factor
authentication (MFA) and supports X.509 certificates, Microsoft Azure™ authentication, and
other authentication methods, in addition to password authentication. This release is verified
with X.509 certificate authentication as the multi-factor authentication method.
Prerequisites
l The SAML2 SSO configuration steps require either administrator access to the AD
FS Management Tool or coordination with an AD FS administrator to create a relying
party trust for the array and to provide the following information about AD FS:
l The identity provider (IdP) entity ID, which specifies the globally unique

name for the identity provider.


l The IdP URL, which specifies the URL of the identity provider.

l The IdP verification certificate.

The IdP metadata file contains this information and having the IdP metadata file URL
is sufficient.
l The directory service used with the array must be the same directory service instance
as used by the identity provider in the relying party trust configuration for this array.
Purity//FA does not support multiple or federated directory services.


l If both Pure1 Manage SSO and FlashArray SAML2 SSO are enabled, both must use
the same directory service.
l The AD FS server must be configured to support TLS 1.2 or 1.3 with strong authen-
tication, if AD FS monitoring is to be configured for the array relying party trust.

Important: Before configuring SSO, create a strong password for the pureuser and
other array administrator accounts and save those passwords according to your organ-
ization’s security policies.

SAML2 SSO Configuration

This list summarizes the steps to configure and enable SAML2 SSO authentication on a FlashAr-
ray. Configurations are required on both the service provider side (Purity//FA) and the identity provider side in order to complete the SSO configuration.
1 In Purity//FA, configure SAML2 SSO on the array.
a Obtain IdP information from AD FS, Okta, Azure AD, or Duo Security, or from an administrator.
b Configure the service provider (SP) with array information and IdP information.
c Test the basic SP Configuration.
2 In the identity provider, set up SSO using SP information from the array.
3 In Purity//FA, run the end-to-end test of the SAML2 SSO configuration.
4 In Purity//FA, enable SAML2 SSO Authentication.

The service provider configuration, basic test, end-to-end test, and enabling SAML2 SSO are
performed in the Settings > Access tab > SAML2 SSO pane.
For directory service configuration, see the Settings > Access tab > Directory Service pane. Use
the AD FS Management Tool on AD FS to configure the AD FS IdP for SSO and optionally MFA
with the array.

Important: Tests are required at two different steps. Do not bypass either test.

Configuration Notes
l The verification certificate (the AD FS primary token-signing certificate) must be an X.509 certificate in PEM format (a sample conversion command follows these notes).


l The array name portion of the URL in the browser used to configure the service pro-
vider must be consistent with the URL entered into the Array URL field in the SAML2
SSO pane. Whether the URLs are based on a FQDN (such as pure01.-
mycompany.com) or on a hostname (such as pure01), the browser URL and the
Array URL field configured in the SAML2 SSO pane must be consistent in the way the
array name is specified.
When there is a mismatch, the browser used to configure SAML2 SSO cannot find
the results of the required end-to-end test.
l Due to a difference in the treatment of IP addresses, the directory service test and the
SAML2 SSO configuration tests may fail on Cloud Block Store arrays.
Contact Pure Storage Technical Services to run these tests on Cloud Block Store
arrays.
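If the token-signing certificate was exported from AD FS in DER (.cer) form, it can be converted to the required PEM format with OpenSSL before it is pasted into the Verification Certificate field. A minimal sketch (file names are hypothetical):
openssl x509 -inform der -in token-signing.cer -out token-signing.pem
The resulting file should begin with the -----BEGIN CERTIFICATE----- line.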
Group to Role Mapping
Group to Role Mapping on Purity//FA

Group to role mapping on Purity//FA is only certified with the AD FS server. To configure the group to role mapping, see the Settings > Access tab > Directory Service pane. To configure the AD FS server to send the user group in the SAML response, see Mapping attributes from AD with AD FS and SAML.
Group to Role Mapping on IdPs

Group to role mapping on IdPs is certified with AD FS, Okta, Duo Security, and Azure AD. Each
IdP has unique configuration steps to map the user group(s) to the Purity//FA roles (array_admin, storage_admin, ops_admin, and readonly). The specified role is then sent in the SAML response with the attribute name purity_roles. Contact Pure Storage Technical Services
to create an SSO integration application on the IdP side and to configure group to role mapping
for the specified IdP.

Note: Directory service configuration is no longer necessary for role mapping on IdPs.

Typical Configuration Run


Configure the Directory Service

This step is required only if the directory service has not yet been configured for use with the
FlashArray or is not the same directory service instance as used by the IdP.
Notes about the directory service:


l The directory service configured with the array must be the same directory service
instance that is configured in the IdP relying party trust for this array.
l The directory service configuration must include groups and roles. (See the "Dir-
ectory Service" on page 309 section, especially "Groups" on page 311 and "Con-
figuring the Directory Service Roles" on page 315.)
l If you configure the directory service, also run its test of array management con-
figuration. The Test button is near the top right of the Settings > Access > Directory
Service pane.
Configure SAML2 SSO in Purity//FA

Figure 10-14 shows a sample Edit SAML2 SSO Configuration page completed for the initial con-
figuration. This page can optionally be completed with only a configuration display name, array
URL, and URL for the IdP metadata file. Purity//FA fills in the shaded SP ID and URL fields
based on the configuration Name and Array URL fields.


Figure 10-14. Edit SAML2 SSO Page

We recommend not using the optional SP credentials or the Sign Request and Encrypt Assertion options until after the initial configuration passes the end-to-end test. Note also that the Enable toggle remains in the off position.


Note: These instructions require IdP information available either from the
AD FS Management Tool or from an AD FS administrator.

1 Open the Settings > Access tab and scroll to the SAML2 SSO pane. Click the Create SAML2
SSO icon.

2 In the Name field, enter a local display name for the SAML2 SSO configuration on the array.
3 Leave the Enabled toggle off. The toggle cannot be turned on at this point in the configuration.
4 Review the Array URL discussion in "Configuration Notes" on page 317.
In the Array URL field, enter the FlashArray URL. The URL must use HTTPS.
5 In the Service Provider (SP) section, Purity//FA fills in the ID and URL fields based on the
configuration Name and Array URL information provided in the previous steps.
The information in these service provider fields is required later to create a relying party for
the array in the AD FS identity provider.

6 Leave the optional signing credential and decryption credential fields empty for the initial con-
figuration.
7 Obtain the IdP information from your IdP administrator or from the AD FS Management Tool.
The IdP Entity ID is found in the AD FS Management Tool under AD FS/Service/Federated Service Properties, and other URLs are under AD FS/Service/Endpoints. The verification certificate is under AD FS/Service/Certificates/Token-signing.
l Enter either the IdP Metadata URL by itself, or enter the IdP Entity ID, IdP URL, and Verification Certificate fields.
If both the IdP Metadata URL and any of the IdP Entity ID, IdP URL, or Verification Certificate are provided, the metadata file is only read for the options that are not provided.
l Do not enable Sign Request and Encrypt Assertion for the initial configuration.
8 Click Test at the bottom of the Edit SAML2 SSO Configuration page. This runs a basic test of
the array URL, connectivity to the IdP, and directory service configuration (but is not a com-
plete end-to-end test). See Figure 10-15 for an example of basic test results.
Figure 10-15. SAML2 SSO Basic Test Results

9 If "Testing directory service correctness" does not pass, click Test on the Directory Service
pane for detailed error messages. Rerun the SAML2 SSO test after correcting the directory
service configuration.
10 Click Close on the test results pop-up.
11 On the Edit SAML2 SSO Configuration page, click Save.
Configure the Active Directory Federation Services IdP

This step registers Purity//FA as a relying party trust on AD FS and requires administrator
access to the AD FS Management Tool.


Optionally use the copy icons to the right of the SP ID and URL fields in the Purity//FA SAML2
SSO pane to copy and paste during the AD FS configuration.
1 On the machine running AD FS, open Control Panel > System and Security > Admin-
istrative Tools and select AD FS Management.
2 In the left panel, select Relying Party Trusts. In the right panel, select Add Relying Party
Trust....
3 The Add Relying Party Trust Wizard opens.
Table 10-6. Sample Configuration in the Add Relying Party Trust Wizard
Wizard Page: Action
Welcome: Ensure Claims aware is selected.
Select Data Source: Select Enter data about the relying party manually.
Specify Display Name: Enter a display name for the new relying party (for convenience, this name could match the SP configuration display name, as set in the Purity//FA Settings > Access > SAML2 SSO pane).
Configure Certificate: No action.
Configure URL: Select Enable support for the SAML 2.0 Web SSO protocol. In the Relying party SAML 2.0 SSO service URL field, enter the Assertion Consumer URL from the Purity SAML2 SSO pane.
Configure Identifiers: In the Relying party trust identifier field, enter the SP entity ID from the Purity SAML2 SSO pane.
Choose Access Control Policy: Select Permit everyone.
Ready to Add Trust: No action.
Finish: Select Configure claims issuance policy for this application.

Note: Access Control Policy Permit everyone corresponds to password authentication.


Multi-factor authentication can be configured later if required. Consider waiting until after
the SSO configuration passes with password authentication before enabling multi-factor
authentication.
4 The new relying party now appears in the Relying Party Trusts table. Select the relying party and in the right panel, select Edit Claim Issuance Policy....
5 The Edit Claim Issuance Policy page opens. Click Add Rule... in the bottom right.
These rules specify the content the IdP returns in an assertion to the SP. Two rules are
required: one for a unique identity for the user, the second for group information.
6 The Add Transform Claim Rule Wizard opens. In the Select Rule Template page, in the
"Claim rule template" field, select Send LDAP Attributes as Claims and then click Next.


7 Click Choose Rule Type. Follow the instructions in the table.

Table 10-7. Actions in the Add Transform Claim Rule Wizard for Users
Field: Action
Claim rule name: Enter a descriptive name for the rule, such as map name id.
Attribute store: Select Active Directory.
Mapping table, LDAP Attribute column: Select SAM-Account-Name.
Mapping table, Outgoing Claim Type column: Select Name ID.
When the rule is complete, click Finish.

The new rule for name mapping appears in the Issuance Transform Rules table on the Edit Claim Issuance Policy page.
a Again click Add Rule..., this time for the group information rule.
b The Add Transform Claim Rule Wizard opens. In the Select Rule Template page, in the
Claim rule template field, select Send LDAP Attributes as Claims.
Table 10-8. Actions in the Add Transform Claim Rule Wizard for Groups
Field: Action
Claim rule name: Enter a descriptive name for the rule, such as pass group info.
Attribute store: Select Active Directory.
Mapping table, LDAP Attribute column: Select Is-Member-Of-DL.
Mapping table, Outgoing Claim Type column: Select Group.
When the rule is complete, click Finish.

The new rule for group mapping appears in the Issuance Transform Rules table on the Edit Claim Issuance Policy page.
Perform the SAML2 SSO End-to-end Test

Do not bypass this test. SAML2 SSO configuration is complicated, requiring correct con-
figuration on both the SP and IdP. All SSO user authentication can fail if SSO is enabled pre-
maturely. Perform the end-to-end test before enabling SSO and also after future configuration
changes.
This test does not affect current user sessions or attempts to log in.
1 Open the Settings > Access tab and scroll to the SAML2 SSO pane.
2 Click Test.
3 The Test SAML2 SSO Configuration dialog opens. The "Basic test results" section reports on
an array URL test, connectivity to the AD FS server and the directory service, and basic


directory service configuration. If any failures appear in the "Basic test results" section,
resolve those issues and redo the test before proceeding.
4 When the basic tests all pass, click E2E Test.
5 The AD FS login page opens in a new browser tab. See Figure 10-16.
As this page is customizable, the page for each organization will have different text and
appearance.
Figure 10-16. AD FS Login Screen

Enter your AD FS credentials and click Sign In.


6 The End To End Test Result opens in a new browser tab with the test status and details.
See Figure 10-17 for an example of successful E2E test results.


Figure 10-17. SAML2 SSO E2E Test Results

If the test reports any error, check that the service provider and AD FS configurations are
consistent and correct, make any corrections if necessary, and rerun the test. See "Perform
the SAML2 SSO End-to-end Test" on page 324.

Note: If you cannot complete the test, because it takes too long or for a different reason,
the SAML2 SSO configuration may be incorrect and may not function properly if enabled.

Note: If the browser does not open the End To End Test Results pop-up, confirm that the
current browser matches (in terms of FQDN or hostname) the URL specified in the Array
URL field. Log into Purity//FA using the URL specified in the Array URL field, and retry the
test.
7 When the E2E test passes, click Close and return to the configuration.
8 The Test SAML2 SSO Configuration dialog opens again but still shows "End-to-end test has not been started yet". Click Check E2E Test Result to update the display. See Figure 10-18 for an example of a successful test.


Figure 10-18. SAML2 SSO Configuration Test Results

9 Click Close.
10 In the SAML2 SSO configuration screen, click Close.
Optionally Enable Multi-factor Authentication

The types of multi-factor authentication available depend on the IdP. The AD FS IdP supports
certificate authentication, authentication with Microsoft® Azure™ software, and others.
If the GUI end-to-end test is successful with password authentication, optionally enable MFA:
1 In the AD FS Management tool Relying Party Trust page, select the relying party you cre-
ated. In the right panel, select Edit Access Control Policy....
2 On the "Choose an access control policy" page, select Permit everyone and require MFA
and then click Next.
3 Next steps, such as configuring a certificate, depend on the type of MFA selected.
Enable SSO Authentication

After the SAML2 SSO configuration is enabled, all new login attempts to the Purity//FA GUI by SAML users are referred to the identity provider for authentication. Existing user sessions are not affected.


Important: Only enable SAML2 SSO authentication if the end-to-end test passes!
Otherwise all SAML users could be locked out of the Purity//FA management GUI. In that case, the only access to the management GUI is through the Local Access link on the login page.
1 Optionally, log in to Purity//FA in a second browser as well, for use if login access is interrupted.
2 Open the Settings > Access tab and scroll to the SAML2 SSO pane. Click the Configuration
edit icon.
3 Click the right side of the Enable toggle to slide the toggle to the right. When enabled, the
toggle is on the right side and changes to blue.
Click Save.
4 The SAML2 SSO pane shows that SSO authentication is enabled. See Figure 10-19.

Figure 10-19. SAML2 SSO Enabled

5 Click Save.

With SAML2 SSO properly configured and enabled, the Purity//FA login screen no longer
prompts for a password, as shown in Figure 10-20.


Figure 10-20. Login Screen with SAML2 SSO

On a user's first login, and also after an SSO session expires, the AD FS login page opens to collect the user's AD FS credentials.

The appearance of the login page varies based on organizational customizations. "GUI Login"
on page 78 describes login steps.

Other Configuration Steps


Links to third-party articles are correct at the time of this writing but may change without notice.
Enable Sign Request

To enable signed SAML authentication requests:


1 Enable the Sign Request toggle by sliding the toggle to the right. When enabled, the toggle
is on the right side and changes to blue.


2 Enter the signing credential in the Service Provider Signing Credential field. This credential
must match the signature verification certificate configured for the relying party in the IdP.
Enable Encrypt Assertion

To enable encrypted assertions:


1 Enable the Encrypt Assertion toggle by sliding the toggle to the right. When enabled, the
toggle is on the right side and changes to blue.
2 Enter the decryption credential in the Service Provider Decryption Credential field. This cre-
dential must match the IdP encryption certificate.
SSO Session Timeout

When SAML2 SSO is enabled, Purity//FA GUI session timeouts are based on AD FS timeouts.
By default, an AD FS SSO session times out after eight hours.
Optionally see the following Microsoft articles for information on customizing the timeout setting.
l AD FS Single Sign-On Settings
l Set-AdfsProperties, which discusses the Get-AdfsProperties and Set-AdfsProperties PowerShell cmdlets and the SsoLifetime property.

TLS 1.2 or 1.3 Support

The FlashArray requires TLS 1.2 or 1.3 with strong authentication.


The AD FS instance must also support TLS 1.2 or 1.3 with strong authentication in order to test the relying party's federation metadata URL.
If TLS 1.2 or 1.3 with strong authentication is not configured, an error is seen when clicking the Test URL button while configuring monitoring in the Relying Party Configuration GUI in the AD FS instance. See Figure 10-21 for the error and Figure 10-22 for the page where it can occur.


Figure 10-21. AD FS Error on Monitoring Page

Figure 10-22. AD FS Error on Monitoring Page

For instructions on how to enable TLS 1.2/1.3 and strong authentication in AD FS, see the sec-
tions Enable and Disable TLS 1.2 and Enabling Strong Authentication for .NET applications
in the Microsoft article Managing SSL/TLS Protocols and Cipher Suites for AD FS.


Runtime Notes
l If a user is deactivated in AD FS but is currently logged in, the current login session is
not affected. The user is denied access at the next login attempt.
l In case the SAML2 SSO service is temporarily unavailable, an array administrator
(such as pureuser) can access the array through the Local Access link on the login
page. This link provides emergency administrator access when GUI logins are
unavailable.
l By default, an SSO session times out after eight hours. A different time out length can
be configured in the identity provider. See "SSO Session Timeout" on page 330.
l When a user logs in through SAML2 SSO, the browser URL field reports its URL
based on what is configured in the Array URL field, regardless of whether the user
entered the array hostname or FQDN when directing the browser to the array.
Limitations
The following considerations apply to this release:
l Only one SAML2 SSO configuration can be created at a time on an array.
l Only one AD FS identity provider instance is supported with an array.
l SSO authentication applies only to GUI logins. SSH logins continue to use their exist-
ing password authentication or other authentication mechanism, including LDAP
authentication or multi-factor authentication with RSA SecurID® software.
SAML2 SSO Troubleshooting
This section lists common error messages and suggestions to resolve the error.
After making any configuration change, rerun the end-to-end configuration test from the SAML2
SSO pane under Settings > Access (see "Perform the SAML2 SSO End-to-end Test" on
page 324).
End-to-end Test Not Completed
Assertion is Missing a Subject
No User Group Information is Provided
Failed to Load Metadata from Metadata URL
No Assertions Found in Response
Failed to Read Signing Credential
Failed to Read Decryption Credential


Invalid Assertion for SAML Response


Invalid Issuer for SAML Response
Failed to Decrypt EncryptedData
User with Multiple Roles is Not Allowed
No Valid User Group Information Found
Failed to Authenticate
SSO Not Available Error

Figure 10-23. End-to-end Test Not Completed

To recover:
1 Expand the Error details link in the AD FS login page, as shown below, or go to the IdP for
more information.


2 Ensure the IdP URL is correct in the SP configuration.


3 Ensure the SP entity ID is added to the Relying party trust identifier field in the relying party
trust on the IdP.
4 Ensure the SP assertion consumer URL is added to the Relying party SAML 2.0 SSO ser-
vice URL field in the relying party trust on the IdP.
5 If the relying party metadata URL is configured on the IdP, ensure that the relying party metadata URL matches the SP metadata URL in the SAML2 SSO pane.
6 Ensure the primary certificate of the certificates under AD FS/Service/Certificates/Token-signing (the verification certificate) on the IdP has not expired.
7 Run the end-to-end test. Check the test results and resolve any remaining configuration
issues. Repeat until the end-to-end test passes.


Figure 10-24. Assertion is Missing a Subject

To recover:
1 Ensure that a claim rule for Name ID is configured in AD FS. See "Configure the Active Dir-
ectory Federation Services IdP" on page 322.
2 After changing the IdP configuration, rerun the end-to-end test. Check the test results and
examine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.

Figure 10-25. No User Group Information is Provided

To recover:


1 Ensure that a claim rule for Group is configured in AD FS. See "Configure the Active Dir-
ectory Federation Services IdP" on page 322.
2 After changing the IdP configuration, rerun the end-to-end test. Check the test results and
examine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.

Figure 10-26. Failed to Load Metadata from Metadata URL

To recover:
1 Confirm that the Metadata URL field in the Purity//FA SAML2 SSO pane matches the metadata URL under AD FS/Service/Endpoints (a sample reachability check appears after this list).
2 After changing the configuration, rerun the end-to-end test. Check the test results and exam-
ine any error messages. Make configuration changes if necessary.
3 Repeat until the end-to-end test passes.
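A quick way to confirm from a workstation that the metadata URL is reachable and returns XML is to fetch it directly. A minimal sketch (the hostname is hypothetical; the path shown is the standard AD FS federation metadata path):
curl -sk https://adfs.company.com/FederationMetadata/2007-06/FederationMetadata.xml | head -c 300
The response should begin with XML describing an EntityDescriptor; an error page or a timeout indicates a URL or connectivity problem. The -k option skips TLS verification and is intended only for this quick check.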


Figure 10-27. No Assertions Found in Response

To recover:
1 If the Sign Request feature is not required, disable Sign Request in the Purity//FA SAML2
SSO pane. Then save the configuration and run the end-to-end test.
2 If the Sign Request feature is required:
a Enable Sign Request in the Purity//FA SAML2 SSO pane and save the configuration.
b Ensure the signature verification certificates on the IdP are correct and not expired.
c If the encryption certificate is configured on the IdP, ensure that certificate is correct and
not expired.
d Rerun the end-to-end test. Check the test results and examine any error messages.
Make configuration changes if necessary. Repeat until the end-to-end test passes.


Figure 10-28. Failed to Read Signing Credential

To recover:
1 Ensure that the signing credential exists on the IdP.
2 After an IdP configuration change, rerun the end-to-end test. Check the test results and
examine any error messages. Make additional configuration changes if necessary.
3 Repeat until the end-to-end test passes.

Figure 10-29. Failed to Read Decryption Credential


To recover:
1 Ensure that the decryption credential exists on the IdP.
2 After an IdP configuration change, rerun the end-to-end test. Check the test results and
examine any error messages. Make additional configuration changes if necessary.
3 Repeat until the end-to-end test passes.

Figure 10-30. Invalid Assertion for SAML Response

To recover:
1 Ensure the verification certificate (primary token-signing certificate) on the IdP has not expired (a sample expiry check appears after this list).
2 Ensure that the verification certificate is correctly entered in the Purity//FA SAML2 SSO
pane.
3 After a configuration change, rerun the end-to-end test. Check the test results and examine
any error messages. Make additional configuration changes if necessary.
4 Repeat until the end-to-end test passes.
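If the certificate is available in PEM form on a workstation, its expiry can be checked with OpenSSL (the file name is hypothetical):
openssl x509 -in token-signing.pem -noout -enddate
Compare the reported notAfter date with the current date; an expired certificate must be renewed in AD FS and the new verification certificate entered in the SAML2 SSO pane.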


Figure 10-31. Invalid Issuer for SAML Response

To recover:
1 Ensure that the IdP EntityID is correctly entered in the Purity//FA SAML2 SSO pane.
2 Rerun the end-to-end test. Check the test results and examine any error messages. Make
additional configuration changes if necessary.
Repeat until the end-to-end test passes.

Figure 10-32. Failed to Decrypt EncryptedData


To recover:
1 If the Encrypt Assertion feature is not required, disable Encrypt Assertion in the Purity//FA
SAML2 SSO pane and remove the encryption certificate on the IdP. Then save the con-
figuration and run the end-to-end test.
2 If the Encrypt Assertion feature is required:
a Enable Encrypt Assertion in the Purity//FA SAML2 SSO pane and save the con-
figuration.
b Ensure the encryption certificate on the IdP is correct and not expired.
c Save the configuration and rerun the end-to-end test. Check the test results and exam-
ine any error messages. Make additional configuration changes if necessary.
Repeat until the end-to-end test passes.

Figure 10-33. User with Multiple Roles is Not Allowed

To recover:
1 For the AD FS credentials entered on the AD FS login page, ensure that the user is a member of only one valid directory service group that is mapped to a role.
2 After correcting the user account in the directory service, have the user retry the login.


Figure 10-34. No Valid User Group Information Found

To recover:
1 For the AD FS credentials entered on the AD FS login page, ensure that the user is added to a valid directory service group that is mapped to a role.
2 After correcting the user account in the directory service, have the user retry the login.

Figure 10-35. Failed to Authenticate

To recover:


1 Use the Local Access link to log into the array.


2 Rerun the end-to-end test. Check the test results and resolve any configuration issues.
3 Repeat the end-to-end test and configuration changes until the test passes.

Figure 10-36. SSO Not Available Error

This error is seen when the AD FS server is not available or when there is an issue with the
SAML2 SSO configuration.
To recover:
1 Confirm with the AD FS administrator whether the AD FS server is operational and reachable
by the array.
2 Use the Local Access link to log into the array and rerun the end-to-end test.
3 Check the test results and resolve any configuration issues.
4 Repeat the end-to-end test and configuration changes until the test passes.

Multi-factor Authentication with RSA


For multi-factor authentication with RSA SecurID® software, see the puremultifactor com-
mand in the Purity//FA CLI Reference Guide. MFA with RSA SecurID is configured and man-
aged only with the CLI puremultifactor command. RSA SecurID® authentication
management configuration is not available in the Purity GUI.


Purity//FA also offers multi-factor authentication (MFA) with SAML2 Single Sign-on (SSO); see "Multi-factor Authentication with SAML2 SSO" on page 316. When SAML2 SSO authentication
is configured for the array, MFA is an option available through the identity provider, such as the
Microsoft® Active Directory Federation Services (AD FS) authentication identity management
system.

Audit and Session Logs


Audit Trail
The audit trail represents a chronological history of the Purity//FA GUI, Purity//FA CLI, or REST
API operations that a user has performed to modify the configuration of the array. Each record
within an audit trail includes the date and time the operation was performed, the name of the Pur-
ity//FA user who performed the operation and the Purity//FA operation that was performed.
Purity//FA creates audit records for operations that modify the configuration of the array (e.g.,
creating, modifying, deleting, or connecting volumes).
By default, all audit records on the array are displayed. To display a list of audit records that
were created within a certain time range, click the All Time drop-down button and select the
desired time range from the list.
Purity//FA does not flag audit records. Users can, however, manually flag audit records for
internal tracking purposes. Each audit record includes the following information: the UTC time of
the operation, the Purity//FA user who performed the operation, the Purity//FA command and
subcommand that was performed, the object name against which the command was performed,
and the arguments that were included in the command.
In addition to the Audit Trail panel, audit records are also logged and transmitted to Pure Stor-
age Technical Services via the phone home facility. If SNMP managers are configured, Purity
also sends alert messages as SNMP traps or informs to designated SNMP managers and as
syslog messages to remote servers.
Session Log
The Session Log panel displays a list of user session events performed through the Pure Stor-
age Purity//FA GUI, Purity//FA CLI, and Pure Storage REST API interfaces.
User session events are divided into two main categories: session login and logout actions, and
session authentication actions.
Login and logout actions include:


l Logging in to the Purity//FA GUI or Purity//FA CLI. This includes remote logins to the
Purity//FA CLI via SSH.
l Logging out of the Purity//FA GUI or Purity//FA CLI
l Opening a Pure Storage REST API session
l Pure Storage REST API session timeouts
Authentication actions include:
l Generating an API token through the REST API
l Submitting a REST API request in a closed REST session
l Attempting to log in to the Purity//FA GUI or Purity//FA CLI using an invalid password
or multi-factor passcode
l Attempting to open a Pure Storage REST API session using an invalid API token
l Attempting to obtain a REST API token using an invalid user name and/or password
The Location column displays the IP address of the user client connecting to the array.
The Method column displays the authentication method by which the user attempted to log in,
log out, or authenticate. Authentication methods include API token, password, public
key, and saml2_sso. saml2_sso indicates a session authenticated by an identity provider
through SAML2 SSO.
By default, all user session events on the array are displayed. To display a list of user session
events that were performed within a certain time range, click the All Time drop-down button and
select the desired time range from the list.
In addition to the Sessions panel, user session messages are also logged and transmitted to
Pure Storage Technical Services via the phone home facility. If configured, Purity//FA can also
send user session messages as syslog messages to remote servers.

Login and Logout Events


When a user logs in to the Purity//FA GUI or Purity//FA CLI, the event appears as a 'login' event
in the Sessions panel, where the Start Time represents the user login time. The End Time value
appears as a dash ("-") symbol until the user logs out.
When the user logs out of the Purity//FA interface, the same login event becomes a closed ses-
sion, where the End Time represents the user logout time. The login event message ID is
replaced with the logout event message ID. To conserve space, Purity//FA stores a limited
number of log entries. Older entries are deleted from the log as new entries are added. If the
matching start time log entry is no longer stored in the log when the user logs out of the Pur-
ity//FA interface, the Start Time value appears as a dash ("-") symbol.


Login and Logout Event Examples


The following examples represent common login and logout events:
Example 1

The pureuser account logs in to the Purity//FA GUI with a valid password. See Figure 10-37.
Figure 10-37. User Session Logs – Login and Logout Example

Example 2

The root user logs in to the Purity//FA CLI with a valid public key and then logs out. See Figure
10-38.
Figure 10-38. User Session Logs - Login and Logout Example

Example 3

The pureuser opens a REST API session with a valid API token and then the session times
out. See Figure 10-39.
Figure 10-39. User Session Logs - Login and Logout Example

Example 4

A user logs into the Purity//FA GUI through SAML2 SSO. See Figure 10-40.


Figure 10-40. User Session Logs - SAML2 SSO Login Example

Authentication Events
Users can log into Purity//FA through various authentication methods, including passwords, pub-
lic keys, and API tokens.
Purity//FA creates a “failed authentication” event when a user performs any of the following
actions: log in to the Purity//FA GUI with an incorrect password, log in to the Purity//FA CLI with
an invalid password or public key, or open a REST API session with an invalid API token.
Purity//FA creates an “API token obtained” event when a user attempts to create an API token
via any of the Purity//FA interfaces.
Purity//FA creates a “request without session” event when a user attempts to submit a REST API
request as an unauthenticated user.
In the Sessions panel, repeated failed authentication attempts are displayed in pre-configured
time periods. By default, failed authentication attempts are displayed in 15-minute time periods.
The Repeat value represents the number of attempts, in addition to the initial attempt, that a user has performed within the 15-minute time period.

File System Local Users and Groups


The File System page manages the Purity//FA file system local users and groups. See Figure
10-41.
Local Users and Groups is a feature that allows a locally stored directory of users and groups in
place of an external authentication solution such as Active Directory (AD) or LDAP. By having
local users and groups, clients are allowed to connect to the FlashArray File domain, through
SMB or NFS, and authenticate with their respective credentials.


Figure 10-41. Settings – Access Page – File System

Local Users Panel


The Local Users panel displays local users. A local user is a user account that includes a user-
name and a password. Each user is a member of one primary group, as displayed in the Primary
Group column. Before creating a user, a primary group for that user must be created if it does
not already exist. A user can also be a member of other groups, denoted as secondary groups,
as displayed in the Groups column.
Note that permissions are managed from the client side, for example through Windows Explorer
or Computer Management, by adding and removing permissions to users or groups.
The following local users are built in. This is indicated with true in the Built In column. These
built-in users cannot be removed or modified:
l Administrator (UID 0, member of the Administrators group)
l Guest (UID 65534, disabled by default, member of the Guests group)
The attributes of a local user account include:
l Name: The local user name must be between 1 and 20 characters in length and can contain alphanumeric US ASCII characters, spaces, or symbols. The name cannot be numbers only, cannot start or end with a dot or space, and cannot contain any control characters or any of the following characters: " / \ [ ] : ; | = , + * ? < > . Names are case-insensitive on input. Purity//FA displays names in the case in which they were specified when created or renamed.
l Primary group: The one primary group that the local user belongs to.
l Group: Optionally, one or more secondary local groups that the user belongs to.


l Enabled: The user account can be enabled (true) or disabled (false). Enabling a user
is only allowed when the password is set.
l Password: A password is only required when the user account is enabled. The pass-
word must be between 1 and 100 characters in length, and can include any character
that can be entered from a US keyboard.
l UID: The unique user ID, automatically or manually set.
l SID: The security identifier of the user is automatically set.
l Email: An optional email address, for example used for quota notifications.
Creating a Local User

1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Users panel, click the plus icon in the upper-right corner of the panel, or click the
menu icon and select Create... The Create Local User window appears. Fill in the following
information:
l Name: Type the name of the new local user.
l Primary Group: Click and then select one of the groups to act as the primary group of
the user.
l Enabled: Toggle button to enable (blue) the user. Note that if enabled, a password is
required.
l Password: Type a password for the new user, only required when the user account is
enabled.
l Confirm Password: Type the password again.
l Uid: Optionally, enter the user ID to override the automatically set UID.
l Email: Optionally, enter the email address for the user.
3 Click Create.
Managing a Local User
A local user can be managed as follows:
1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Users panel, click the menu icon and select one of the following operations:
l Edit... enables you to change the primary group of a user, enable or disable the user,
set a new password, set an optional UID, or set the optional email address.
l Rename... to change the user name.


l Add to local group... to add the user to one or multiple secondary groups.
l Remove from local group... to remove the user from one or more secondary groups.
l Delete... to delete the local user.
3 Confirm the changes.
Deleting Local Users
1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Users panel:
l To delete one local user: Click the menu icon next to the user and select Delete...
l Many users: Click the menu icon in the upper-right corner of the panel and select
Delete... Select the users to delete.
3 Click Delete to confirm.

Local Groups Panel


The Local Groups panel displays local groups, under which one or more users can be gathered
for simplified management of permissions. For example, accounting, development, sales, and
so on. A group can have many members. Only users can be members of a group, not other
groups. Before deleting a group, all members must be removed from the group.
The following groups are built in. This is indicated with true in the Built In column. These built-in
groups cannot be removed or modified:
l Administrators (GID 0)
l Guests (GID 65534)
l Backup Operators (GID 65535)
To add external members, user accounts, or groups that reside on external AD or LDAP servers,
refer to the pureds local command in the Purity//FA CLI Reference Guide.
Note that permissions are managed from the client side, for example through Windows Explorer
or Computer Management, by adding and removing permissions to users or groups.
The attributes of a local group include:
l Name: The local group name must be between 1 and 63 characters in length and can contain alphanumeric US ASCII characters, spaces, or symbols. The name cannot be numbers only, cannot start or end with a dot or space, and cannot contain any control characters or any of the following characters: " / \ [ ] : ; | = , + * ? < > . Names are case-insensitive on input. Purity//FA displays names in the case in which they were specified when created or renamed.


l GID: A unique group ID, automatically or manually set.
l SID: The security identifier of the group is automatically set.
l Email: An optional email address, for example used for quota notifications.
Creating a Local Group
1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Groups panel, click the plus icon in the upper-right corner of the panel, or click
the menu icon and select Create... The Create Local Group window appears. Fill in the fol-
lowing information:
l Name: Type the name of the new local group.
l Email: Optionally, enter the email address for the group.
l Gid: Optionally, enter the group ID to override the automatically set GID.
3 Click Create.
Modifying a Local Group
A local group can be managed as follows:
1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Groups panel, click the menu icon and select one of the following operations:
l Edit... to set or change the optional email address, or manually set or change the
group ID.
l Rename... to change the local group name.
l Delete... to delete the local group.
3 Confirm the changes.
Deleting Local Groups
Before deleting a group, all members must be removed from the group.
1 Select Settings > Access and select the File System section using the File System button.
2 In the Local Groups panel:
l To delete one local group: Click the menu icon next to the group and select Delete...
l For many groups: Click the menu icon in the upper-right corner of the panel and
select Delete... Select the groups to delete.
3 Click Delete to confirm.


Software
The Software page manages software, apps and third party plug-ins associated with the array.
See Figure 10-42.
Figure 10-42. Settings – Software Page

Updates
The Updates panel displays a list of software updates. Software updates add or enhance Purity
features and functionality. Perform periodic software updates to get the most out of your Purity
system.
An interactive software upgrade process is supported with the puresw upgrade CLI command
but is not available in the GUI. This section describes the GUI's one-click non-interactive update.


The Auto Download toggle icon enables (blue) or disables (gray) the Auto Download feature. If
Auto Download is enabled, any software installation files that Pure Storage Technical Services
send to the array will be automatically downloaded and ready to install. If Auto Download is dis-
abled, the software installation files that Pure Storage Technical Services send to the array will
only be downloaded during the software update process. Auto Download is disabled by default.
Note that the Auto Download feature impacts software updates only. The Auto Download feature
does not impact Purity apps or third party plug-ins.
A software version that is available for update will have one of the following statuses:
l available: A software update for this version is available, but the installation files have
not been downloaded to the array. Instead, the files will be downloaded to the array
during the installation process. When scheduling the software update, make sure to
factor enough time for the download process.
l downloaded: The installation files for this software version have been successfully
downloaded to the array.
Click Install to start the software update process. As the software update process progresses,
the following statuses will appear:
l downloading: Purity is downloading the installation files to the array for this software
version.
l installing: Purity is updating the software. Be prepared to be logged out of the soft-
ware during the update process.
During the update process, you will be logged out of the software. After you have been logged
out, log back in to continue monitoring the process. The software update process is complete
when the software update no longer appears in the Updates panel.
If the software update fails, the software reverts to the previous version. If you encounter any
problems during the update process, contact Pure Storage Technical Services.

Enabling and Disabling Auto Download


You cannot change the Auto Download status while a software update is in progress.
To enable or disable Auto Download, select one of the following options:


l Click the Auto Download toggle button to enable (blue) automatic download. When
the software update files are available, they will be automatically downloaded to the
array.
l To disable Auto Download, click the Auto Download toggle button (gray).

Performing a Software Update


1 Select Settings > Software.
2 In the Updates panel, click Install to start the software update process.
You will be logged out of the software during the update process. After you have been
logged out, log back in to continue monitoring the update process.
The update is complete once the progress bar has reached 100%.

vSphere Plugin
The Pure Storage Management Plugin for vSphere extends the vSphere Web Client, enabling
users to manage Pure Storage FlashArray volumes and snapshots in a vCenter context.
The vSphere Plugin panel displays the connection details for the vSphere Web Client. Once a
connection has been established, users can open Purity//FA GUI sessions via the vSphere Web
Client.
For more information about the vSphere plugin, refer to the Pure Storage Management Plugin
for vSphere User Guide on the Knowledge site at https://support.purestorage.com.

App Catalog
The Purity Run platform extends array functionality by integrating add-on services into the Pur-
ity//FA operating system. Each service that runs on the platform is provided by an app.
The App Catalog panel displays a list of apps that are available to be installed on the array,
along with the following attributes for each app:
l Name: App name. The app name is pre-assigned and cannot be changed.
l Version: App version that is ready to be installed on the array.
l Status: Status of the app installation. Possible app statuses include:


l Available: App (new or upgraded version of an existing one) is available to


be installed.
l Downloading: App installation files are being downloaded in preparation
for an installation.
l Downloaded: App installation files have been successfully downloaded.
Installation will begin shortly.
l Installing: App is currently being installed.
l Uninstalling: App is currently being uninstalled.
l Aborted: App installation process has encountered issues. The installation
is rolling back. If the app is no longer available, it will not reappear in the
list; otherwise, it will eventually return to its previous "Available" status. If
the app is available, try the installation again. If you continue to encounter
issues, contact Pure Storage Technical Services.
l Progress: Download progress during the installation process.
l Description: Description of the app.

Note: The App Catalog panel is not supported on Cloud Block Store.
Apps require CPU, memory, network, and storage resources. For this reason, apps by default
are not installed.
To install an app, click the menu icon next to the app and select Install. After an app has been
installed, it appears in the Installed Apps panel.

Installing an App
1 Select Settings > Software.
2 In the App Catalog panel, click the menu icon and select Install. The Install App dialog box
appears.
3 Click Install.

Installed Apps
The Installed Apps panel displays a list of apps that are installed on the array, along with the fol-
lowing attributes for each app:


l Name: App name. The app name is pre-assigned and cannot be changed.
l Enabled: App enable/disable status. An app must be enabled so the array can reach
the app service. Apps are disabled by default.
l Version: App version that is currently installed on the array.
l Status: App status. A status of healthy means the app is running. A status of
unhealthy means the app is not running.
There are various factors that contribute to an unhealthy app. In most cases, the
unhealthy status is temporary, such as when the app is being restarted; upon suc-
cessful restart, the app returns to healthy status. The app might also be unhealthy if,
upon enabling the app, Purity//FA determines that there are insufficient resources to
run it. An accompanying message appears in the Details column stating that there
are insufficient resources to operate the app. Disable any apps that are currently not
in use to free up some resources and try to enable the app again.
If the app is in an unhealthy status for a longer than expected period of time, contact
Pure Storage Technical Services.
l VNC Enabled: Indicates whether VNC access is enabled (true) or disabled (false)
for each installed app. The default is false. When VNC Enabled is true, a port is
open to allow VNC connections.

Note: If an app migrates between controllers, it briefly stops and restarts.

App Volumes
For each app that is installed, a boot volume is created. For some apps, a data volume is also
created. Boot and data volumes are known as app volumes.
Select Storage > Volumes to see a list of volumes, including app volumes.
Boot and data app volume names begin with a distinctive @ symbol. The naming convention for
app volumes is @APP_boot for boot volumes and @APP_data for data volumes, where APP
denotes the app name.
App volumes are connected to their associated app host. For example, the linux boot and data
volumes are connected to the linux app host. From the list of volumes, click an app volume to
see its associated app host.
The boot volume represents a copy of the boot drive of the app. Do not modify or save data to
the boot volume. When an app is upgraded, the boot volume is overwritten, completely des-
troying its contents including any other data that is saved to it. The data volume is used by the
app to store data.


The following example shows that the drives were correctly mounted inside the linux app.
pureuser@linux:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 8198768 0 8198768 0% /dev
tmpfs 1643272 8756 1634516 1% /run
/dev/sda1 15348720 1721392 12824616 12% /
/dev/sdb 17177782208 33608 17177748600 1% /data

Disk device /dev/sdb, which corresponds to the app data volume, is mounted on /data,
meaning the data will be saved to the data volume (and not the boot volume), and disk device
/dev/sda1, which corresponds to the app boot volume, is mounted on /.

App Hosts
Each app has a dedicated host, known as an app host. The app host is connected to the asso-
ciated boot and data volumes. The app host is also used to connect FlashArray volumes to the
app.
Select Storage > Hosts to see a list of hosts, including app hosts.
Unlike regular FlashArray hosts, app hosts cannot be deleted, renamed, or modified in any way.
Furthermore, app hosts cannot be added to host groups or protection groups.
App host names begin with a distinctive @ symbol. The naming convention for app hosts is
@APP, where APP denotes the app name.

Connecting FlashArray Volumes to an App


FlashArray volumes are connected to apps via the app host. The volumes are connected to the
app hosts in the same way that they are connected to regular FlashArray hosts.
A FlashArray volume can only be connected to one app host at a time. Furthermore, the
FlashArray volume cannot be connected to other hosts or host groups while it is connected to an
app host.
After a FlashArray volume has been connected to an app host, rescan the SCSI bus to ensure
the newly-connected volumes are visible from inside the app.
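One common way to trigger the rescan from inside a Linux-based app is to write to the scan file
of each SCSI host adapter. This is a generic Linux sketch rather than an app-specific procedure;
the host adapter numbers, and whether sudo is available inside the app, depend on the environment.
pureuser@linux:~$ ls /sys/class/scsi_host/
host0 host1 host2
pureuser@linux:~$ # "- - -" asks the adapter to rescan all channels, targets, and LUNs
pureuser@linux:~$ echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan
Repeat the command for each host adapter if you are unsure which one presents the FlashArray
volumes.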
The following example displays five FlashArray volumes (and their target LUNs) as SCSI
devices from inside the linux app, ready to be mounted.
pureuser@linux:~$ cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 01 Lun: 03
Vendor: PURE Model: FlashArray Rev: 9999
Type: Direct-Access ANSI SCSI revision: 06
Host: scsi2 Channel: 00 Id: 01 Lun: 04
Vendor: PURE Model: FlashArray Rev: 9999
Type: Direct-Access ANSI SCSI revision: 06
Host: scsi2 Channel: 00 Id: 01 Lun: 05
Vendor: PURE Model: FlashArray Rev: 9999
Type: Direct-Access ANSI SCSI revision: 06
Host: scsi2 Channel: 00 Id: 01 Lun: 06
Vendor: PURE Model: FlashArray Rev: 9999
Type: Direct-Access ANSI SCSI revision: 06
Host: scsi2 Channel: 00 Id: 01 Lun: 07
Vendor: PURE Model: FlashArray Rev: 9999
Type: Direct-Access ANSI SCSI revision: 06

App Interfaces
For each app that is installed, one app management interface is created per array management
interface. An app data interface may also be created for high-speed data transfers.
Select Settings > Network to view and configure app interfaces.
The naming convention for app interfaces is APP.datay for the app data interface, and
APP.mgmty for the app management interface, where APP denotes the app name, and y
denotes the interface number.
Configure an app interface to give pureuser the ability to log into the app or transfer data
through a separate interface. Configuring an app interface involves assigning an IP address to
the interface and then enabling the interface.
Optionally set the gateway. Note that only one of the app interfaces of a particular app can have
a gateway set.
Before you configure an app interface, make sure the corresponding external interface is phys-
ically connected.
Configure one or more of the following app interfaces:
l App Management Interface
Configure the app management interface to give pureuser the ability to log into the
app with the same Purity//FA login credentials. If a public key has been created for the user,
it can be used to log into the app. Purity//FA password changes are automatically applied to
the app.
To configure the app management interface, assign an IP address to one of the app
management interfaces, and then enable the interface.
l App Data Interface
Configure the app data interface to use a separate interface for high-speed data trans-
fers.
To configure the app data interface, assign an IP address to the app data interface,
and then enable the interface.
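For example, the interfaces of a hypothetical app named linux might be configured from the
Purity//FA CLI roughly as follows. The interface names, IP addresses, and exact option syntax
are illustrative and can vary by Purity//FA release; consult the CLI reference for your version.
$ purenetwork setattr linux.mgmt0 --address 10.21.30.51 --netmask 255.255.255.0 --gateway 10.21.30.1
$ purenetwork enable linux.mgmt0
$ purenetwork setattr linux.data0 --address 10.21.40.51 --netmask 255.255.255.0
$ purenetwork enable linux.data0
Only the management interface is given a gateway here, because only one app interface per app
can have a gateway set. Once the management interface is enabled, pureuser can log into the app
at that address (for example, with an SSH client, if the app exposes SSH) using the same
Purity//FA credentials or public key.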

VNC Access for Apps


VNC (Virtual Network Computing) enables you to remotely access an installed app on the array
in graphical mode from anywhere over the network. If an app supports VNC access, you can
access the app through VNC when VNC access is enabled. Enabling VNC access opens a TCP
port; the array management IP address and the TCP port create an endpoint on which the VNC
server listens for connections.
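For example, if the Nodes panel reports a VNC endpoint of 10.21.30.40:5900 (a hypothetical
address and port), a standard VNC viewer can connect to it:
$ vncviewer 10.21.30.40::5900
The double colon is TigerVNC syntax for an explicit TCP port; other viewers may accept the
endpoint as displayed (host:port).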

Nodes of an App
A node of an app is a dedicated instance running the app. Some apps are made up of multiple
nodes. For easy identification, nodes are indexed starting at 0.

Uninstalling an App
1 Select Settings > Software.
2 In the Installed Apps panel, verify the app is disabled.
3 Click the menu icon and select Uninstall. The Uninstall App dialog box appears.
4 Click Uninstall.

Enabling an App
1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Enable.

Disabling an App
1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Disable.


Enabling VNC access for an app


1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Enable VNC.
The VNC Enabled column of the app changes to true.

Disabling VNC access for an app


1 Select Settings > Software.
2 In the Installed Apps panel, click the menu icon and select Disable VNC.
The VNC Enabled column of the app changes to false.

Displaying the Node Details of an App


1 Select Settings > Software.
2 In the Installed Apps panel, click the app hyperlink in the Name column.
The Nodes and Details information panels appear. The Nodes information panel shows
the name, node indexes, version, status, and VNC endpoints in the format IP
address:port, where IP address represents the array management IP address and
port represents the VNC port. The Details information panel shows the app details.

Establishing Connections Between FlashArray Volumes and Apps


FlashArray volumes are connected to apps via the app host. To connect a FlashArray volume to
an app:
1 Select Storage > Hosts.
2 In the Hosts panel, click the app host associated with the app to which you want to connect
the volumes.
3 In the Connected Volumes panel, click an existing volume in the left column to add it to the
Selected Volumes column.
4 Click Connect.
5 Rescan the SCSI bus to ensure that all newly-added FlashArray volumes are visible from
inside the app.
After the SCSI bus rescan, the FlashArray volumes (and their target LUNs) are visible as
SCSI devices from inside the app, ready to be mounted.
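The same connection can also be made from the Purity//FA CLI by connecting the volume to the
app host. The volume name below is hypothetical, and the exact syntax can vary by Purity//FA
release:
$ purevol connect --host @linux VOL1
As with the GUI procedure, rescan the SCSI bus inside the app afterward so that the new LUN
becomes visible.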

Chapter 11:
Cloud Block Store
Pure Cloud Block Store™ is Pure’s state-of-the-art software-defined storage solution running
Purity//FA and delivered natively in the cloud. Pure Cloud Block Store™ provides seamless data
mobility across on-premises and cloud environments with a consistent experience, regardless of
whether your data lives on premises, in the cloud, in a hybrid cloud, or across multiple clouds.
To learn more about Pure Cloud Block Store™, refer to the Knowledge site at Pure Cloud Block
Store.
The following information can be found on the Knowledge site:
l General design, use, and interoperability of Pure Cloud Block Store™.
l Requirements, procurement, and deployment of Pure Cloud Block Store™.
l Operations and capabilities of Pure Cloud Block Store™.
l General troubleshooting information.

Chapter 12:
FlashArray Storage Capacity and
Utilization
The discussion of array administrators in this chapter does not apply to administrators of
Evergreen//One™ subscription storage, as Pure Storage is responsible for managing the physical
capacity of arrays that supply subscription storage.
The two keys to FlashArray cost-effectiveness are highly efficient provisioning and data reduc-
tion. One of an array administrator's primary tasks is understanding and managing physical and
virtual storage capacity. This chapter describes the ways in which physical storage and virtual
capacity are used and measured.

Array Capacity and Storage Consumption


Administrators monitor physical storage consumption and manage it by adding storage capacity
or relocating data sets when available (unallocated) storage becomes dangerously low.

Physical Storage States


In a FlashArray, the physical storage that holds data can be in one of four states: unique,
shared, stale, and unallocated. See Figure 12-1.


Figure 12-1. FlashArray Physical Storage States

l Unique data. Reduced host-written data that is not duplicated elsewhere in the array,
plus descriptive metadata.
l Shared data. Deduplicated data. Data that comprises the contents of two or more
sector addresses in the same or different volumes (FlashArray deduplication is array-
wide).
l Stale data. Overwritten or deleted data. Data representing the contents of virtual sec-
tors that have been overwritten or deleted by a host or by an array administrator.
Such storage is deallocated and made available for future use by the continuous stor-
age reclamation process, but because the process runs asynchronously in the back-
ground, deallocation is not immediate.
l Unallocated storage. Available for storing incoming data.

Reporting Array Capacity and Storage Consumption


Array physical storage capacity and the amount of storage occupied by data and metadata are
displayed through the GUI (Storage > Dashboard) and CLI (purearray list --space). For
example (single line of output displayed over two rows):

$ purearray list --space


Name Capacity Parity Provisioned Size Thin Provisioning Data Reduction
FLASH 48.93T 100% 262995794746K 71% 1.8 to 1
Total Reduction Unique Snapshots Shared System Replication Total
6.4 to 1 38.04T 476.45G 412.97G 0.00 0.00 38.90T


Effective used capacity (EUC), reflecting billable capacity, is displayed through the CLI
(purearray list --effective-used). For example (single line of output displayed over two
rows):

$ purearray list --effective-used


Name Provisioned Size Unique Effective Snapshots Effective Shared Effective
pure02 158722M 0.00 499.05M 0.00
Total
499.05M

Volume and Snapshot Storage Consumption


FlashArrays present disk-like volumes to connected hosts. They also maintain immutable snap-
shots of volume contents. As with conventional disks, a volume's storage capacity is presented
as a set of consecutively numbered 512-byte sectors into which data can be written and from
which it can be read. Hosts read and write data in blocks, which are represented as con-
secutively-numbered sequences of sectors.
Purity//FA allocates (also known as "provisions") storage for data written by hosts, and reduces
the data before storing it.

Provisioning
The provisioned size of a volume is its capacity as reported to hosts. As with conventional disks,
the size presented by a FlashArray volume is nominally fixed, although it can be increased or
decreased by an administrator. To optimize physical storage utilization, however, FlashArray
volumes are thin and micro provisioned.
l Thin provisioning. Like conventional arrays that support thin provisioning, FlashAr-
rays do not allocate physical storage for volume sectors that no host has ever written,
or for trimmed (expressly deallocated by host or array administrator command) sector
addresses.
l Micro provisioning. Unlike conventional thin provisioning arrays, FlashArrays allocate
only the exact amount of physical storage required by each host-written block after
reduction. In FlashArrays, there is no concept of allocating storage in "chunks" of
some fixed size.

Data Reduction
The second key to FlashArray cost effectiveness is data reduction, which is the elimination of
redundant data through pattern elimination, duplicate elimination, and compression.
l Pattern elimination. When Purity//FA detects sequences of incoming sectors whose
contents consist entirely of repeating patterns, it stores a description of the pattern
and the sectors that contain it rather than the data itself. The software treats zero-
filled sectors as if they had been trimmed—no space is allocated for them.
l Duplicate elimination. Purity//FA computes a hash value for each incoming sector
and attempts to determine whether another sector with the same hash value is stored
in the array. If so, the sector is read and compared with the incoming one to avoid the
possibility of aliasing. Instead of storing the incoming sector redundantly, Purity//FA
stores an additional reference to the single data representation. Purity//FA dedu-
plicates data globally (across an entire array), so if an identical sector is stored in an
array, it is a deduplication candidate, regardless of the volume(s) with which it is asso-
ciated.
l Compression. Purity//FA attempts to compress the data in incoming sectors, cursorily
upon entry, and more exhaustively during its continuous storage reclamation
background process.
Purity//FA applies pattern elimination, duplicate elimination, and compression techniques to
data as it enters an array, as well as throughout the data's lifetime.
See Figure 12-2 for a hypothetical example of the cumulative effect of FlashArray data reduction
on physical storage consumption.


Figure 12-2. Data Reduction Example

In the example, hosts have written data to a total of 1,000 unique sector addresses, of which:
l Pattern elimination. 100 blocks contain repeated patterns, for which Purity//FA
stores metadata descriptors rather than the actual data.
l Duplicate elimination. 200 blocks are duplicates of blocks already stored in the
array; Purity//FA stores references to these rather than duplicating stored data.
l Compression. The remaining 700 blocks (70%) compress to half their host-written size;
Purity//FA compresses them before storing them, and compresses them further during
continuous storage reclamation.
Therefore, the net physical storage consumed by host-written data in this example is 35% of the
number of unique volume sector addresses to which hosts have written data.
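Ignoring the (small) metadata overhead, the arithmetic behind the 35% figure is:
100 pattern-eliminated blocks: ~0 blocks stored (descriptors only)
200 deduplicated blocks: ~0 blocks stored (references only)
700 remaining blocks, compressed to half size: 350 blocks stored
Total: 350 of 1,000 host-written blocks, or about 35% of the host-written capacity.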
The data reduction example is hypothetical; each data set reduces differently, and unrelated
data stored in an array can influence reduction. Nevertheless, administrators can use the array
and volume measures reported by Purity//FA to estimate the amount of physical storage likely to
be consumed by data sets similar to those already stored in an array.

Snapshots and Physical Storage


FlashArray snapshots occupy physical storage only in proportion to the number of sectors of
their source volumes that are overwritten by hosts. See Figure 12-3.


Figure 12-3. Snapshot Space Consumption Example

In Figure 12-3, two snapshots of a volume, S1 and S2, are taken at times t1 and t2 (t1 prior to t2).
If a host writes data to the volume after t1 but before t2, Purity//FA preserves the overwritten sec-
tors' original contents and associates them with S1 (i.e., space accounting charges them to S1).
If in the interval between t1 and t2 a host reads sectors from snapshot S1, Purity//FA delivers:
l For sectors not modified since t1, current sector contents associated with the volume.
l For sectors modified since t1, preserved volume sector contents associated with S1.
Similarly, if a host writes volume sectors after t2, Purity//FA preserves the overwritten sectors'
previous contents and associates them with S2 for space accounting purposes. If a host reads
sectors from S2, Purity//FA delivers:
l For sectors not modified since t2, current sector contents associated with the volume.
l For sectors modified since t2, preserved volume sector contents associated with S2.
If, however, a host reads sectors from S1 after t2, Purity//FA delivers:
l For sectors not modified since t1, current sector contents associated with the volume.
l For sectors modified between t1 and t2, preserved volume sector contents associated
with S1.
l For sectors modified since t2, preserved volume sector contents associated with S2.
If S1 is destroyed, storage associated with it is reclaimed because there is no longer a need to
preserve pre-update content for updates made prior to t2.

If S2 is destroyed, however, storage associated with it is preserved and associated with S1
because the data in it represents pre-update content for sectors updated after t1.
To generalize, for volumes with two or more snapshots:
l Destroying the oldest snapshot. Space associated with the destroyed snapshot is
reclaimed after its eradication pending period has elapsed or after an administrator
purposely eradicates the destroyed snapshot.
l Destroying other snapshots. Space associated with the destroyed snapshot is
associated with the next older snapshot. If that space is already reflected in the
next older snapshot (because the same sector was written both after the next older
snapshot and after the destroyed snapshot), it is reclaimed instead.

Reporting Volume and Snapshot Storage Consumption
Because data stored in a FlashArray is virtualized, thin-provisioned, and reduced, volume stor-
age is monitored, managed, and displayed from two viewpoints:
l Host view. Displays the virtual storage capacity (size) and consumption as seen by
the host storage administration tools.
l Array view. Displays the physical storage capacity occupied by data and the
metadata that describes and protects it.
Volume size and physical storage consumption data is displayed through the GUI (Storage >
Volumes) and CLI (purevol list --space).
For example (single line of output displayed over two rows):

$ purevol list --space


Name Size Thin Provisioning Data Reduction Total Reduction
VOL1 3T 50% 3.7 to 1 7.4 to 1
Unique Snapshots Total
484.78G 31.85G 664.64G
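As an illustration of how the sample figures relate (an inference from this example rather than
a formal definition): the Total Reduction value combines data reduction with thin-provisioning
savings. With 50% of VOL1's provisioned sectors unwritten and a 3.7-to-1 reduction of the
written data, 3.7 / (1 - 0.50) = 7.4 to 1, which matches the Total Reduction column.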


FlashArray Data Lifecycle


Data stored in a FlashArray undergoes continuous reorganization to improve physical storage
utilization and reclaim storage occupied by data that has been superseded by host overwrite or
deletion. See Figure 12-4.
Figure 12-4. FlashArray Physical Storage Life Cycle

The steps enumerated in Figure 12-4 are as follows:


Host Write Processing (1)
Data written by hosts undergoes initial processing as it enters an array. The result is data
that has undergone initial reduction, and been placed in write buffers.
Writing to Persistent Storage (2)
As write buffers fill, they are written to segments of persistent flash storage.
Segment Selection (3)
A Purity//FA background process continually monitors storage segments for data that has
been obsoleted by host overwrites, volume destruction or truncation, or trimming (4).
Segments that contain a predominance of obsoleted data become high-priority can-
didates for storage reclamation and further reduction of the live data in them.
Data Reduction (3)
As Purity//FA processes segments, it opportunistically deduplicates and compresses (5)
the live data in them, using more exhaustive algorithms than those used during initial
reduction (1). Reprocessed data is moved to write buffers that are being filled; thus, write
buffers generally contain a combination of new data entering the array and data that has
been moved from segments being vacated to improve utilization.


Storage Reclamation (6)


As live data is moved from segments, they are returned to the pool of storage available
for allocating segments. Purity//FA treats all of an array's flash module storage as a single
homogeneous pool.
Reallocation (7)
Purity//FA allocates segments of storage from the pool of available flash modules as they
are required. Typically, the software fills write buffers for multiple segments concurrently.
This allows the software to consolidate different types of data (e.g., highly-compressible,
highly-duplicated, etc.) so that the most appropriate policies can be applied to them.
Occasionally, continuous data reduction can result in behavior unfamiliar to administrators
experienced with conventional arrays. For example, as Purity//FA detects additional duplication
and compresses block contents more efficiently, a volume's physical storage occupancy may
decrease, even as hosts write more data to it.

Pure Storage, Inc.
Twitter: @purestorage
2555 Augustine Drive
Santa Clara, CA 95054
T: 650-290-6088
F: 650-625-9667
Sales: [email protected]
Support: [email protected]
Media: [email protected]
General: [email protected]
